|
Angel Sappa and Boris X. Vintimilla. 2007. Cost-Based Closed Contour Representations.
|
|
|
Angel Sappa and M.A. Garcia. 2007. Incremental Integration of Multiresolution Range Images.
|
|
|
Jaume Amores, N. Sebe and Petia Radeva. 2007. Context-Based Object-Class Recognition and Retrieval by Generalized Correlograms.
|
|
|
Yu Jie, Jaume Amores, N. Sebe, Petia Radeva and Tian Qi. 2008. Distance Learning for Similarity Estimation.
|
|
|
Joan Serrat, Ferran Diego and Felipe Lumbreras. 2008. Los faros delanteros a través del objetivo [Headlights through the lens].
|
|
|
Carme Julia, Angel Sappa and Felipe Lumbreras. 2008. Aprendiendo a recrear la realidad en 3D [Learning to recreate reality in 3D].
|
|
|
Enrique Cabello, Cristina Conde, Angel Serrano, Licesio Rodriguez and David Vazquez. 2006. Empleo de sistemas biométricos para el reconocimiento de personas en aeropuertos [Use of biometric systems for person recognition in airports]. Instituto Universitario de Investigación sobre Seguridad Interior (IUSI 2006).
Abstract: This project was carried out during 2005, testing a prototype of a facial verification system with images extracted from the video surveillance cameras of Barajas airport. Several experiments were designed, grouped into two classes. In the first type, the system is trained with images obtained under laboratory conditions and then tested with images extracted from the Barajas video surveillance cameras. In the second case, both the training and the test images are extracted from Barajas. A complete system has been developed, including image acquisition and digitization, localization and cropping of the faces in the scene, subject verification and reporting of results. The results show that, in general, an image-based facial verification system can assist an operator who has to monitor large areas.
Keywords: Surveillance; Face detection; Face recognition
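The entry above describes a full detect-crop-verify pipeline. As a purely illustrative sketch (not the authors' system), the following Python snippet wires together OpenCV's stock Haar-cascade face detector with a toy normalized-correlation verification score; the cascade file, crop size, threshold and file names are assumptions added here.

    # Hypothetical sketch of a detect-crop-verify pipeline; not the system from the paper.
    import cv2
    import numpy as np

    # Frontal-face Haar cascade shipped with OpenCV (model choice is an assumption).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def crop_largest_face(gray, size=(64, 64)):
        """Locate the largest face in a grayscale frame and return it resized."""
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return cv2.resize(gray[y:y + h, x:x + w], size)

    def verify(enrolled, probe, threshold=0.6):
        """Toy verification: normalized correlation between two face crops."""
        a = (enrolled - enrolled.mean()) / (enrolled.std() + 1e-8)
        b = (probe - probe.mean()) / (probe.std() + 1e-8)
        score = float((a * b).mean())
        return score, score > threshold

    # Usage (file names are placeholders):
    # gallery = crop_largest_face(cv2.imread("enrolled.png", cv2.IMREAD_GRAYSCALE))
    # probe = crop_largest_face(cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE))
    # if gallery is not None and probe is not None:
    #     print(verify(gallery.astype(np.float32), probe.astype(np.float32)))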
|
|
|
Sergio Vera, Debora Gil, Antonio Lopez and Miguel Angel Gonzalez Ballester. 2012. Multilocal Creaseness Measure.
Abstract: This document describes the implementation using the Insight Toolkit of an algorithm for detecting creases (ridges and valleys) in N-dimensional images, based on the Local Structure Tensor of the image. In addition to the filter used to calculate the creaseness image, a filter for the computation of the structure tensor is also included in this submission.
Keywords: Ridges; Valleys; Creaseness; Structure Tensor; Skeleton
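The abstract above refers to a structure-tensor based creaseness measure. A minimal 2-D NumPy/SciPy sketch of that idea is given below, computing creaseness as the negative divergence of the structure tensor's dominant orientation field (reoriented by the gradient); the scales, the sign convention and all parameter names are assumptions, and the ITK filters of the actual submission are not reproduced.

    # Hedged 2-D sketch of structure-tensor creaseness; not the ITK implementation.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def creaseness_2d(image, sigma_d=1.0, sigma_i=4.0):
        """Creaseness map: positive near ridges, negative near valleys (assumed convention)."""
        img = gaussian_filter(image.astype(float), sigma_d)   # derivative (local) scale
        gy, gx = np.gradient(img)

        # Structure tensor components, averaged at the integration scale sigma_i.
        jxx = gaussian_filter(gx * gx, sigma_i)
        jxy = gaussian_filter(gx * gy, sigma_i)
        jyy = gaussian_filter(gy * gy, sigma_i)

        # Dominant eigenvector (direction of maximal local contrast) per pixel.
        theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)
        wx, wy = np.cos(theta), np.sin(theta)

        # Reorient the eigenvector field to agree with the gradient sign.
        sign = np.sign(wx * gx + wy * gy)
        sign[sign == 0] = 1.0
        wx, wy = wx * sign, wy * sign

        # Creaseness as minus the divergence of the oriented eigenvector field.
        return -(np.gradient(wx, axis=1) + np.gradient(wy, axis=0))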
|
|
|
David Vazquez and 7 others. 2017. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images. JHCE, 2040–2295.
Abstract: Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) that aim to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCNs significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
Keywords: Colonoscopy images; Deep Learning; Semantic Segmentation
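As a hedged illustration of the kind of fully convolutional baseline the abstract mentions, the short PyTorch sketch below defines a tiny FCN for 4-class per-pixel prediction; the architecture, channel widths and input sizes are assumptions made here and do not reproduce the paper's baselines.

    # Hypothetical minimal FCN for endoluminal scene segmentation (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyFCN(nn.Module):
        def __init__(self, num_classes=4):  # 4 endoluminal classes, per the abstract
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.classifier = nn.Conv2d(128, num_classes, 1)  # 1x1 conv -> class scores

        def forward(self, x):
            logits = self.classifier(self.encoder(x))
            # Upsample back to the input resolution for dense, per-pixel prediction.
            return F.interpolate(logits, size=x.shape[2:], mode="bilinear",
                                 align_corners=False)

    # Training step on a dummy batch (shapes are placeholders):
    model = TinyFCN()
    images = torch.randn(2, 3, 224, 224)
    labels = torch.randint(0, 4, (2, 224, 224))
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()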
|
|
|
Gabriel Villalonga and Antonio Lopez. 2020. Co-Training for On-Board Deep Object Detection. ACCESS, 194441–194456.
Abstract: Providing ground-truth supervision to train visual models has been a bottleneck over the years, exacerbated by domain shifts which degrade the performance of such models. This was the case when visual tasks relied on handcrafted features and shallow machine learning and, despite its unprecedented performance gains, the problem remains open within the deep learning paradigm due to its data-hungry nature. The best-performing deep vision-based object detectors are trained in a supervised manner by relying on human-labeled bounding boxes which localize class instances (i.e. objects) within the training images. Thus, object detection is one such task for which human labeling is a major bottleneck. In this article, we assess co-training as a semi-supervised learning method for self-labeling objects in unlabeled images, thus reducing the human-labeling effort for developing deep object detectors. Our study pays special attention to a scenario involving domain shift; in particular, when we have automatically generated virtual-world images with object bounding boxes and we have unlabeled real-world images. Moreover, we are particularly interested in using co-training for deep object detection in the context of driver assistance systems and/or self-driving vehicles. Thus, using well-established datasets and protocols for object detection in these application contexts, we show that co-training is a paradigm worth pursuing to alleviate object labeling, working both alone and together with task-agnostic domain adaptation.
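A generic sketch of the co-training loop studied in the article is given below, using two scikit-learn classifiers on two feature views of synthetic data as stand-ins for the two object detectors; the views, the 0.95 confidence threshold and the number of rounds are illustrative assumptions, not the article's protocol.

    # Generic co-training loop (toy stand-in: linear classifiers on two feature views,
    # not the deep object detectors or virtual/real-world data from the article).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    view_a, view_b = X[:, :10], X[:, 10:]       # two "views" of each sample (assumed split)
    labeled = np.arange(100)                    # small human-labeled pool
    unlabeled = np.arange(100, 2000)            # large unlabeled pool
    pseudo_y = y.copy()                         # only labeled indices are ever read

    model_a = LogisticRegression(max_iter=1000)
    model_b = LogisticRegression(max_iter=1000)

    for round_ in range(5):                     # a few self-labeling rounds
        model_a.fit(view_a[labeled], pseudo_y[labeled])
        model_b.fit(view_b[labeled], pseudo_y[labeled])
        if len(unlabeled) == 0:
            break
        new_ids = []
        for model, view in ((model_a, view_a), (model_b, view_b)):
            conf = model.predict_proba(view[unlabeled]).max(axis=1)
            picked = unlabeled[conf > 0.95]     # keep only confident self-labels
            if len(picked) > 0:
                pseudo_y[picked] = model.predict(view[picked])
                new_ids.append(picked)
        if not new_ids:
            break                               # nothing confident enough; stop early
        new_ids = np.unique(np.concatenate(new_ids))
        labeled = np.concatenate([labeled, new_ids])
        unlabeled = np.setdiff1d(unlabeled, new_ids)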
|
|