|
Patricia Suarez, Angel Sappa and Boris X. Vintimilla. 2017. Learning to Colorize Infrared Images. 15th International Conference on Practical Applications of Agents and Multi-Agent System.
Abstract: This paper focuses on near-infrared (NIR) image colorization using a Generative Adversarial Network (GAN) architecture. The proposed architecture consists of two stages. First, it learns to colorize the given input, producing an RGB image. Then, in the second stage, a discriminative model estimates the probability that the generated image came from the training dataset rather than having been automatically generated. The proposed model starts the learning process from scratch, because our set of images is very different from the datasets used in existing pre-trained models, so transfer learning strategies cannot be applied. Infrared image colorization is an important problem when human perception needs to be considered, e.g., in remote sensing applications. Experimental results with a large set of real images are provided, showing the validity of the proposed approach.
Keywords: CNN in multispectral imaging; Image colorization
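The two-stage adversarial setup described in the abstract can be summarised with the standard GAN objective. The sketch below only evaluates the two loss terms on toy discriminator probabilities; the function name and values are illustrative assumptions, not the paper's implementation.

```python
import math

def gan_losses(d_real, d_fake):
    """Binary cross-entropy GAN losses for one real/generated pair.

    d_real: discriminator output on a real RGB image (probability).
    d_fake: discriminator output on a colorized NIR image.
    """
    # Discriminator wants d_real -> 1 and d_fake -> 0.
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    # Generator (non-saturating form) wants d_fake -> 1.
    g_loss = -math.log(d_fake)
    return d_loss, g_loss
```

In the paper's setting, the generator is the colorization network of the first stage and the discriminator is the second-stage model estimating whether its input came from the training set.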
|
|
|
Gemma Rotger, Francesc Moreno-Noguer, Felipe Lumbreras and Antonio Agudo. 2019. Single View Facial Hair 3D Reconstruction. 9th Iberian Conference on Pattern Recognition and Image Analysis (LNCS), 423–436.
Abstract: In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms in order to adapt the hair parameters that best fit the image detections. The final hairs correspond to the resulting fibers after a post-processing step that encourages further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image without requiring any training data or user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches.
Keywords: 3D Vision; Shape Reconstruction; Facial Hair Modeling
|
|
|
M.J. Yzuel, J. Pladellorens, Joan Serrat and A. Dupuy. 1993. Application of restoration and edge detection techniques in the calculation of left ventricular volumes. Optics in Medicine, Biology and Environmental Research: Selected contributions to the first International Conference on Optics within Life Sciences (OWLS I). Elsevier, 374–375.
|
|
|
Jiaolong Xu, Sebastian Ramos, David Vazquez and Antonio Lopez. 2014. Cost-sensitive Structured SVM for Multi-category Domain Adaptation. 22nd International Conference on Pattern Recognition. IEEE, 3886–3891.
Abstract: Domain adaptation addresses the problem of accuracy drop that a classifier may suffer when the training data (source domain) and the testing data (target domain) are drawn from different distributions. In this work, we focus on domain adaptation for structured SVM (SSVM). We propose a cost-sensitive domain adaptation method for SSVM, namely COSS-SSVM. In particular, during the re-training of an adapted classifier based on target and source data, the idea that we explore consists in introducing a non-zero cost even for correctly classified source-domain samples. Eventually, we aim to learn a more target-oriented classifier by not rewarding (i.e., assigning zero loss to) properly classified source-domain training samples. We assess the effectiveness of COSS-SSVM on multi-category object recognition.
Keywords: Domain Adaptation; Pedestrian Detection
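The core idea of the abstract above, a non-zero loss even for correctly classified source-domain samples, can be sketched as a modified structured hinge loss. This is a toy illustration under stated assumptions (the `source_cost` floor and its value are hypothetical), not the paper's COSS-SSVM objective.

```python
def cost_sensitive_hinge(score_correct, score_wrong, is_source, source_cost=0.1):
    """Structured hinge loss with a floor of `source_cost` for source samples.

    A correctly classified target-domain sample can reach zero loss, while
    a correctly classified source-domain sample still pays `source_cost`,
    biasing the re-trained classifier toward the target domain.
    """
    margin = score_correct - score_wrong
    base = max(0.0, 1.0 - margin)       # standard margin-rescaled hinge
    if is_source:
        return max(base, source_cost)   # source samples are never fully rewarded
    return base
```

During re-training, summing this loss over mixed source and target samples keeps a gradient pressure away from source-only solutions even when the source data is already separated.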
|
|
|
Jiaolong Xu, Sebastian Ramos, Xu Hu, David Vazquez and Antonio Lopez. 2013. Multi-task Bilinear Classifiers for Visual Domain Adaptation. Advances in Neural Information Processing Systems Workshop.
Abstract: We propose a method that aims to lessen the significant accuracy degradation that a discriminative classifier can suffer when it is trained in a specific domain (source domain) and applied in a different one (target domain). The principal reason for this degradation is the discrepancy in the distribution of the features that feed the classifier in the different domains. Therefore, we propose a domain adaptation method that maps the features from the different domains into a common subspace and learns a discriminative domain-invariant classifier within it. Our algorithm combines bilinear classifiers and multi-task learning for domain adaptation. The bilinear classifier encodes the feature transformation and classification parameters by a matrix decomposition. In this way, specific feature transformations for multiple domains and a shared classifier are jointly learned in a multi-task learning framework. Focusing on domain adaptation for visual object detection, we apply this method to the state-of-the-art deformable part-based model for cross-domain pedestrian detection. Experimental results show that our method significantly reduces the domain drift and improves the accuracy when compared to several baselines.
Keywords: Domain Adaptation; Pedestrian Detection; ADAS
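The matrix-decomposition idea in the abstract above can be sketched as follows: the per-domain weight matrix is never stored directly but factored into a domain-specific transformation and a shared classifier part. All names and shapes here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bilinear_score(x, U_d, v):
    """Score a feature matrix x (p x q) for domain d.

    U_d (p x r): domain-specific feature transformation, learned per domain.
    v   (r x q): shared classifier parameters, tied across all domains.
    The effective weight matrix W_d = U_d @ v is formed only implicitly;
    the score is tr(W_d^T x), i.e. an elementwise product and sum.
    """
    return float(np.sum((U_d @ v) * x))
```

Multi-task learning then amounts to fitting one `U_d` per domain while regularising a single `v`, which is what lets the classifier stay domain-invariant.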
|
|
|
David Vazquez, Jiaolong Xu, Sebastian Ramos, Antonio Lopez and Daniel Ponsa. 2013. Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes. CVPR Workshop on Ground Truth – What is a good dataset? IEEE, 706–711.
Abstract: Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing in real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle efficiently rejects the false detections (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.
Keywords: Pedestrian Detection; Domain Adaptation
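The four-step annotation pipeline in the abstract (train on virtual samples, detect in real images, oracle-filter, re-train) can be sketched as plain control flow. The callables `train`, `detect`, and `oracle_accepts` are hypothetical stand-ins for the paper's classifier training, detector, and human oracle.

```python
def weakly_supervised_annotation(virtual_samples, real_images,
                                 train, detect, oracle_accepts):
    """Sketch of the weak-annotation loop from the abstract above.

    1. Train a classifier on automatically labelled virtual-world samples.
    2. Run it as a detector over the real-world images.
    3. Keep only the detections the human oracle accepts (weak annotation).
    4. Re-train a new classifier on the accepted real-world detections.
    """
    clf = train(virtual_samples)
    detections = [d for img in real_images for d in detect(clf, img)]
    accepted = [d for d in detections if oracle_accepts(d)]
    return train(accepted)
```

The point of the design is that the oracle only answers accept/reject questions on proposed boxes, which is far cheaper than drawing bounding boxes by hand.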
|
|
|
David Vazquez, Antonio Lopez and Daniel Ponsa. 2012. Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection. 21st International Conference on Pattern Recognition. Tsukuba Science City, Japan. IEEE, 3492–3495.
Abstract: Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate, and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, the learning of each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoid manual labelling consisted in the use of samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], between virtual and real worlds there may be a dataset shift. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).
Keywords: Pedestrian Detection; Domain Adaptation; Virtual worlds
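The unsupervised adaptation described in the abstract above can be approximated by a self-training loop: start from the labelled virtual-world data, repeatedly pseudo-label the most confident unlabelled real-world samples, and refit. The sketch below uses a nearest-class-mean classifier purely for illustration; the paper itself optimises a transductive SVM objective, not this toy.

```python
import numpy as np

def self_train_adapt(Xs, ys, Xt, n_rounds=3, top_k=5):
    """Toy self-training loop in the spirit of T-SVM adaptation.

    Xs, ys: labelled virtual-world samples (binary labels 0/1).
    Xt:     unlabelled real-world samples to be pseudo-labelled.
    Returns the adapted class means (mu0, mu1).
    """
    X, y, pool = Xs.copy(), ys.copy(), Xt.copy()
    for _ in range(n_rounds):
        if len(pool) == 0:
            break
        mu0 = X[y == 0].mean(axis=0)
        mu1 = X[y == 1].mean(axis=0)
        d0 = np.linalg.norm(pool - mu0, axis=1)
        d1 = np.linalg.norm(pool - mu1, axis=1)
        conf = np.abs(d0 - d1)                    # margin proxy: distance gap
        pick = np.argsort(-conf)[:top_k]          # most confident samples first
        X = np.vstack([X, pool[pick]])
        y = np.concatenate([y, (d1[pick] < d0[pick]).astype(int)])
        pool = np.delete(pool, pick, axis=0)
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
```

No human intervenes at any point, which is the property the paper's unsupervised setting requires.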
|
|
|
David Vazquez, Antonio Lopez, Daniel Ponsa and Javier Marin. 2011. Cool world: domain adaptation of virtual and real worlds for human detection using active learning. NIPS Domain Adaptation Workshop: Theory and Application. Granada, Spain.
Abstract: Image-based human detection is of paramount interest for different applications. The most promising human detectors rely on discriminatively learnt classifiers, i.e., trained with labelled samples. However, labelling is a manually intensive task, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, in Marin et al. we have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, these are not always as good as the classical approach of training and testing with data coming from the same camera and the same type of scenario. Accordingly, in Vazquez et al. we cast the problem as one of supervised domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we use an active learning technique. Thus, ultimately our human model is learnt by the combination of virtual- and real-world labelled samples which, to the best of our knowledge, was not done before. Here, we term such a combined space cool world. In this extended abstract we summarize our proposal, and include quantitative results from Vazquez et al. showing its validity.
Keywords: Pedestrian Detection; Virtual; Domain Adaptation; Active Learning
|
|
|
David Vazquez, Antonio Lopez, Daniel Ponsa and Javier Marin. 2011. Virtual Worlds and Active Learning for Human Detection. 13th International Conference on Multimodal Interaction. New York, NY, USA. ACM, 393–400.
Abstract: Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a manually intensive step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, these are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs equally well, in real-world images, as the one obtained by training on manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt by the combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
Keywords: Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning
|
|
|
David Vazquez and 7 others. 2017. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images. 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery.
Abstract: Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation, significantly outperforming, without any further post-processing, prior results in endoluminal scene segmentation.
Keywords: Deep Learning; Medical Imaging
|
|