Jose Manuel Alvarez, Theo Gevers and Antonio Lopez. 2010. Learning photometric invariance for object detection. IJCV, 90(1), 45–61.
Impact factor: 3.508 (the latest available, from JCR 2009 SCI). Position 4/103 in the category Computer Science, Artificial Intelligence (first quartile).
Abstract: Color is a powerful visual cue in many computer vision applications, such as image segmentation and object recognition. However, most existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes, in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant set of color models is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and it outperforms state-of-the-art detection techniques in the fields of object, skin and road recognition. For sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Keywords: road detection
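For readers who want a concrete picture of what a color invariant ensemble looks like, the following Python sketch converts an RGB image into a few standard photometric color models (variants and invariants) and fuses them with hand-picked weights. It is only an illustration of the concept; the paper learns the fusion weights with a multi-view approach rather than fixing them, and the model names and weights below are placeholders, not values from the paper.

import numpy as np

def color_models(rgb):
    # Standard photometric color representations (values roughly in [0, 1]).
    rgb = rgb.astype(np.float64) + 1e-6                 # avoid division by zero
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = r + g + b
    m = rgb.max()
    return {
        "intensity": s / (3.0 * m),                     # photometric variant
        "norm_r": r / s,                                # normalized rgb: invariant
        "norm_g": g / s,                                #   to shading / intensity
        "opp_1": (r - g) / (2.0 * m) + 0.5,             # opponent color axes
        "opp_2": (r + g - 2.0 * b) / (4.0 * m) + 0.5,
    }

def ensemble(rgb, weights):
    # Weighted fusion of the selected color models (weights are placeholders;
    # in the paper they are learned to balance invariance and discriminability).
    models = color_models(rgb)
    acc = np.zeros(rgb.shape[:2])
    for name, w in weights.items():
        acc += w * models[name]
    return acc / sum(weights.values())

# Hypothetical usage on a random image with hand-picked, non-learned weights.
img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
fused = ensemble(img, {"norm_r": 0.4, "norm_g": 0.4, "intensity": 0.2})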
Jose Manuel Alvarez and Antonio Lopez. 2011. Road Detection Based on Illuminant Invariance. TITS, 12(1), 184–193.
Abstract: By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key to vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach lies in combining a shadow-invariant feature space with a model-based classifier. The model is built online to improve the adaptability of the algorithm to the current lighting and to the presence of other vehicles in the scene. The proposed algorithm works on still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.
Keywords: road detection
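As an illustration of the kind of shadow-invariant feature space the paper builds on, the Python sketch below projects log-chromaticity values along a single invariant direction to obtain an illumination-insensitive grayscale image. The projection angle is camera dependent; the value used here is a placeholder, and the online, model-based classification stage of the paper is not included.

import numpy as np

def illuminant_invariant(rgb, theta_deg=45.0):
    # theta_deg is a camera-dependent calibration angle; 45 degrees is only a
    # placeholder for illustration, not a value from the paper.
    rgb = rgb.astype(np.float64) + 1.0               # avoid log(0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    log_rg = np.log(r / g)                           # log-chromaticity coordinates
    log_bg = np.log(b / g)
    theta = np.deg2rad(theta_deg)
    # Illumination and shadow changes move pixels roughly along one direction in
    # log-chromaticity space; projecting onto the orthogonal (invariant)
    # direction suppresses them.
    inv = log_rg * np.cos(theta) + log_bg * np.sin(theta)
    return (inv - inv.min()) / (np.ptp(inv) + 1e-12) # scale to [0, 1]

# A road model would then be built online (e.g., from pixels just ahead of the
# vehicle) in this invariant space and used to classify the remaining pixels.
img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
invariant = illuminant_invariant(img)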
Jaume Amores and Petia Radeva. 2005. Retrieval of IVUS Images Using Contextual Information and Elastic Matching.
Jaume Amores and Petia Radeva. 2005. Registration and Retrieval of Highly Elastic Bodies using Contextual Information. PRL, 26(11), 1720–1731.
Jaume Amores, N. Sebe and Petia Radeva. 2006. Boosting the distance estimation: Application to the K-Nearest Neighbor Classifier. PRL, 27(3), 201–209.
Jaume Amores, N. Sebe and Petia Radeva. 2007. Context-Based Object-Class Recognition and Retrieval by Generalized Correlograms.
Hugo Berti, Angel Sappa and Osvaldo Agamennoni. 2008. Improved Dynamic Window Approach by Using Lyapunov Stability Criteria.
Fadi Dornaika and Angel Sappa. 2007. Rigid and Non-rigid Face Motion Tracking by Aligning Texture Maps and Stereo 3D Models. PRL, 28(15), 2116–2126.
Fadi Dornaika and Angel Sappa. 2008. Evaluation of an Appearance-based 3D Face Tracker using Dense 3D Data.
Fadi Dornaika and Angel Sappa. 2009. Instantaneous 3D motion from image derivatives using the Least Trimmed Square Regression. PRL, 30(5), 535–543.
Abstract: This paper presents a new technique for instantaneous 3D motion estimation. The main contributions are as follows. First, we show that the 3D camera or scene velocity can be retrieved from image derivatives alone, assuming that the scene contains a dominant plane. Second, we propose a new robust algorithm that simultaneously provides the Least Trimmed Squares solution and the percentage of inliers (the non-contaminated data). Experiments on both synthetic and real image sequences demonstrate the effectiveness of the developed method. These experiments show that the new robust approach can outperform classical robust schemes.
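To make the robust estimation step concrete, here is a generic Least Trimmed Squares sketch based on random minimal subsets: each candidate fit is scored by the sum of its h smallest squared residuals, and the best-scoring fit is kept. This illustrates LTS in general under the assumption of a linear model; it is not the paper's algorithm, which additionally estimates the inlier percentage and operates directly on image derivatives.

import numpy as np

def lts_fit(X, y, h, n_trials=500, seed=None):
    # Least Trimmed Squares by random sampling: fit on minimal subsets and keep
    # the model with the smallest sum of the h smallest squared residuals.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    best_cost, best_beta = np.inf, None
    for _ in range(n_trials):
        idx = rng.choice(n, size=d, replace=False)            # minimal subset
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        res2 = (y - X @ beta) ** 2
        cost = np.sort(res2)[:h].sum()                        # trimmed cost
        if cost < best_cost:
            best_cost, best_beta = cost, beta
    return best_beta, best_cost

# Toy usage: recover a line from data with 30% gross outliers.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 200)
X = np.column_stack([x, np.ones_like(x)])
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=200)
y[:60] += rng.uniform(5.0, 20.0, 60)                          # contaminate 30%
beta, _ = lts_fit(X, y, h=int(0.7 * len(y)), seed=1)          # assume >= 70% inliers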