Arnau Ramisa, Adriana Tapus, Ramon Lopez de Mantaras and Ricardo Toledo. 2008. Mobile Robot Localization using Panoramic Vision and Combination of Feature Region Detectors. IEEE International Conference on Robotics and Automation, 538–543.
Angel Sappa, Niki Aifanti, Sotiris Malassiotis and Michael G. Strintzis. 2003. Monocular 3D Human Body Reconstruction Towards Depth Augmentation of Television Sequences. IEEE International Conference on Image Processing, Barcelona, Spain, September 2003, 325–328.
Akhil Gurram, Onay Urfalioglu, Ibrahim Halfaoui, Fahd Bouzaraa and Antonio Lopez. 2018. Monocular Depth Estimation by Learning from Heterogeneous Datasets. IEEE Intelligent Vehicles Symposium, 2176–2181.
Abstract: Depth estimation provides essential information to perform autonomous driving and driver assistance. Monocular Depth Estimation is especially interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for the continuous calibration strategies required by stereo-vision approaches. State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which are usually difficult to annotate (e.g., crowded urban images). Moreover, so far it has been common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging the depth and semantic information coming from heterogeneous datasets. To illustrate the benefits of our approach, we combine the KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on Monocular Depth Estimation.
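The heterogeneous-supervision idea in the abstract above (depth labels from one dataset, semantic labels from another) can be sketched as a loss in which each training sample contributes only the term its dataset provides. This is a minimal illustration, not the paper's actual formulation; the function name, the L1/cross-entropy choice, and the weight `lam` are all assumptions.

```python
import numpy as np

def hetero_loss(pred_depth, depth_gt, pred_logits, sem_gt, lam=0.5):
    """Toy mixed loss: a sample with depth ground truth (e.g. from KITTI)
    adds an L1 depth term; a sample with semantic labels (e.g. from
    Cityscapes) adds a cross-entropy term. Each sample carries only one kind
    of ground truth; the other argument is None."""
    loss = 0.0
    if depth_gt is not None:                      # depth-supervised sample
        loss += np.abs(pred_depth - depth_gt).mean()
    if sem_gt is not None:                        # semantics-supervised sample
        # Numerically stable softmax over the class axis
        p = np.exp(pred_logits - pred_logits.max(axis=-1, keepdims=True))
        p /= p.sum(axis=-1, keepdims=True)
        loss += lam * -np.log(p[np.arange(len(sem_gt)), sem_gt]).mean()
    return loss
```

In a training loop, batches from the two datasets would simply alternate, each updating the shared network through whichever term applies.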
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Depth-based Background Estimation. 7th International Conference on Computer Vision Theory and Applications, 323–328.
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects, where the images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do so, we minimize a cost function that penalizes deviations from the following assumptions: the background consists of the objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationarity and motion-boundary-constancy measurements. The cost function is minimized by a graph-cuts method. We demonstrate the applicability of our approach by recovering an occlusion-free background in a set of sequences.
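Ignoring the pairwise smoothness term that the paper minimizes with graph cuts, the per-pixel part of the cost (prefer distant, stationary pixels) can be sketched as follows; the function name and the weight `alpha` are illustrative assumptions, not from the paper.

```python
import numpy as np

def select_background(depth, motion, alpha=1.0):
    """Per pixel, pick the frame whose pixel best matches the paper's
    assumptions: maximal distance to the camera and no motion.
    depth, motion: arrays of shape (n_frames, H, W); larger depth = farther,
    larger motion = more likely foreground. The pairwise smoothness term
    (the graph-cuts part) is omitted in this per-pixel sketch."""
    cost = -depth + alpha * motion      # low cost = far away and stationary
    return np.argmin(cost, axis=0)      # chosen frame index per pixel
```

The background image would then be assembled by gathering, for each pixel, the value from its selected frame.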
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Egomotion Estimation based on Image Matching. 1st International Conference on Pattern Recognition Applications and Methods, 425–430.
Joan Serrat, Jordi Vitria and J. Pladellorens. 1991. Morphological Segmentation of Heart Scintigraphic Image Sequences. Computer Assisted Radiology.
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat and Antonio Lopez. 2007. Motion Segmentation from Feature Trajectories with Missing Data. In J. Marti et al. (Eds.), 3rd Iberian Conference on Pattern Recognition and Image Analysis, 483–490.
Josefina Mauri and 14 others. 2000. Moviment del vas en l'anàlisi d'imatges d'ecografia intracoronària: un model matemàtic [Vessel motion in the analysis of intracoronary ultrasound images: a mathematical model]. Congrés de la Societat Catalana de Cardiologia.
Jiaolong Xu, Sebastian Ramos, Xu Hu, David Vazquez and Antonio Lopez. 2013. Multi-task Bilinear Classifiers for Visual Domain Adaptation. Advances in Neural Information Processing Systems Workshop.
Abstract: We propose a method that aims to lessen the significant accuracy degradation that a discriminative classifier can suffer when it is trained in a specific domain (source domain) and applied in a different one (target domain). The principal reason for this degradation is the discrepancy in the distribution of the features that feed the classifier in different domains. Therefore, we propose a domain adaptation method that maps the features from the different domains into a common subspace and learns a discriminative domain-invariant classifier within it. Our algorithm combines bilinear classifiers and multi-task learning for domain adaptation. The bilinear classifier encodes the feature transformation and classification parameters by a matrix decomposition. In this way, specific feature transformations for multiple domains and a shared classifier are jointly learned in a multi-task learning framework. Focusing on domain adaptation for visual object detection, we apply this method to the state-of-the-art deformable part-based model for cross-domain pedestrian detection. Experimental results show that our method largely avoids the domain drift and improves the accuracy when compared to several baselines.
Keywords: Domain Adaptation; Pedestrian Detection; ADAS
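The bilinear decomposition described above (per-domain feature transformations plus a shared classifier) can be sketched at inference time as the score w^T U_d x. This is a toy illustration with random, untrained parameters; the names `U`, `w`, and `score` are assumptions, and the multi-task learning of these parameters is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
d_feat, d_sub = 20, 5          # feature and shared-subspace dimensions

# Hypothetical per-domain projections U_d and one shared linear classifier w;
# in the paper these are learned jointly, here they are random placeholders.
U = {"source": rng.normal(size=(d_sub, d_feat)),
     "target": rng.normal(size=(d_sub, d_feat))}
w = rng.normal(size=d_sub)

def score(x, domain):
    # Bilinear form w^T (U_d x): project the feature vector into the common
    # subspace with the domain-specific U_d, then apply the shared classifier.
    return float(w @ (U[domain] @ x))
```

The point of the decomposition is that only `U["target"]` must adapt to a new domain, while the classifier `w` is shared across domains.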
Fernando Barrera, Felipe Lumbreras and Angel Sappa. 2010. Multimodal Template Matching based on Gradient and Mutual Information using Scale-Space. 17th IEEE International Conference on Image Processing, 2749–2752.
Abstract: This paper presents the combined use of gradient and mutual information for matching infrared and intensity templates. We propose to jointly address: (i) feature matching in a multiresolution context and (ii) information propagation through scale-space representations. Our method combines mutual information with a gradient-based shape descriptor and propagates them following a coarse-to-fine strategy. The main contributions of this work are: to offer a theoretical formulation towards multimodal stereo matching; to show that gradient and mutual information can be reinforced while they are propagated between consecutive levels; and to show that they are valid cost functions in multimodal template matching. Comparisons are presented showing the improvements and viability of the proposed approach.
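A minimal sketch of the mutual-information half of such a cost function: the MI of two registered patches computed from their joint histogram, which is what makes the measure usable across modalities (infrared vs. intensity). The function name and bin count are assumptions; the gradient descriptor and scale-space propagation are omitted.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally sized image patches,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Template matching would slide the template over the search image and keep the position maximizing this score (combined, in the paper, with the gradient term).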