Idoia Ruiz and Joan Serrat. 2020. Rank-based ordinal classification. 25th International Conference on Pattern Recognition, 8069–8076.
Abstract: Unlike the regular classification task, ordinal classification assumes an order among the classes. As a consequence, not all classification errors matter equally: predicting a class close to the ground-truth one is better than predicting a farther-away class. To account for this, most previous works employ loss functions based on the absolute difference between the predicted and ground-truth class labels. We argue that in many ordinal classification problems the label values are arbitrary (for instance 1…C, C being the number of classes), so such loss functions may not be the best choice. We instead propose a network architecture that produces not a single class prediction but an ordered vector, or ranking, of all the possible classes from most to least likely. This is thanks to a loss function that compares the ground-truth and predicted rankings of these class labels, not the labels themselves. Another advantage of this formulation is that we can enforce consistency in the predictions, namely, that predicted rankings come from some unimodal vector of scores with its mode at the ground-truth class. We compare with state-of-the-art ordinal classification methods, showing that ours attains equal or better performance, as measured by common ordinal classification metrics, on three benchmark datasets. Furthermore, it is also suitable for a new task in image aesthetics assessment, i.e. most-voted score prediction. Finally, we also apply it to building damage assessment from satellite images, providing an analysis of its performance depending on the degree of imbalance of the dataset.
Yi Xiao, Felipe Codevilla, Diego Porres and Antonio Lopez. 2023. Scaling Vision-Based End-to-End Autonomous Driving with Multi-View Attention Learning. International Conference on Intelligent Robots and Systems.
Abstract: In end-to-end driving, human driving demonstrations are used to train perception-based driving models by imitation learning. This process is supervised by vehicle signals (e.g., steering angle, acceleration) but does not require extra costly supervision (human labeling of sensor data). As a representative of such vision-based end-to-end driving models, CILRS is commonly used as a baseline against which new driving models are compared. So far, some recent models achieve better performance than CILRS by using expensive sensor suites and/or large amounts of human-labeled data for training. Given this difference in performance, one may think that vision-based pure end-to-end driving is not worth pursuing. However, we argue that this approach still has great value and potential considering cost and maintenance. In this paper, we present CIL++, which improves on CILRS both by processing higher-resolution images using a human-inspired HFOV as an inductive bias and by incorporating a proper attention mechanism. CIL++ achieves competitive performance compared to models that are more costly to develop. We propose to replace CILRS with CIL++ as a strong vision-based pure end-to-end driving baseline supervised only by vehicle signals and trained by conditional imitation learning.
David Geronimo, Antonio Lopez and Angel Sappa. 2007. Computer Vision Approaches for Pedestrian Detection: Visible Spectrum Survey. In J. Marti et al., eds., 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477, 547–554.
Abstract: Pedestrian detection from images of the visible spectrum is a highly relevant area of research given its potential impact on the design of pedestrian protection systems. There are many proposals in the literature, but they lack a comparative viewpoint. Accordingly, in this paper we first propose a common framework into which the different approaches fit, and second we use this framework to provide a comparative view of the details of those approaches, also pointing out the main challenges to be solved in the future. In summary, we expect this survey to be useful for both novice and experienced researchers in the field: for the former, as a clarifying snapshot of the state of the art; for the latter, as a way to unveil trends and to draw conclusions from the comparative study.
Keywords: Pedestrian detection
David Geronimo, Antonio Lopez, Daniel Ponsa and Angel Sappa. 2007. Haar Wavelets and Edge Orientation Histograms for On-Board Pedestrian Detection. In J. Marti et al., eds., 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477, 418–425.
Keywords: Pedestrian detection
P. Ricaurte, C. Chilan, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla and Angel Sappa. 2014. Performance Evaluation of Feature Point Descriptors in the Infrared Domain. 9th International Conference on Computer Vision Theory and Applications, 545–550.
Abstract: This paper presents a comparative evaluation of classical feature point descriptors when used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results on an outdoor image dataset are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered.
Keywords: Infrared Imaging; Feature Point Descriptors
Felipe Lumbreras and 7 others. 2001. Visual Inspection of Safety Belts. International Conference on Quality Control by Artificial Vision, 526–531.
Ferran Diego, G.D. Evangelidis and Joan Serrat. 2012. Night-time outdoor surveillance by mobile cameras. 1st International Conference on Pattern Recognition Applications and Methods, 365–371.
Abstract: This paper addresses the problem of video surveillance by mobile cameras. We present a method that allows online change detection in night-time outdoor surveillance. Because of the camera movement, background frames are not available and must be “localized” in former sequences and registered with the current frames. To this end, we propose a Frame Localization And Registration (FLAR) approach that solves the problem efficiently. Frames of former sequences define a database, which is queried by current frames in turn. To quickly retrieve nearest neighbors, the database is indexed through a visual dictionary method based on the SURF descriptor. Furthermore, frame localization benefits from a temporal filter that exploits the temporal coherence of videos. Next, the recently proposed ECC alignment scheme is used to spatially register the synchronized frames. Finally, change detection methods are applied to the aligned frames in order to mark suspicious areas. Experiments with real night sequences recorded by in-vehicle cameras demonstrate the performance of the proposed method and verify its efficiency and effectiveness against other methods.
X. Orriols, Ricardo Toledo, X. Binefa, Petia Radeva, Jordi Vitria and Juan J. Villanueva. 2000. Probabilistic Saliency Approach for Elongated Structure Detection using Deformable Models. 15th International Conference on Pattern Recognition, 1006–1009.
David Lloret, Joan Serrat, Antonio Lopez, A. Soler and Juan J. Villanueva. 2000. Retinal image registration using creases as anatomical landmarks. 15th International Conference on Pattern Recognition, 207–210.
Abstract: Retinal images are routinely used in ophthalmology to study the optic nerve head and the retina. To objectively assess the evolution of an illness, images taken at different times must be registered. Most methods so far have been designed for a single image modality, such as temporal series or stereo pairs of angiographies, fluorescein angiographies, or scanning laser ophthalmoscope (SLO) images, which makes them prone to fail when conditions vary. In contrast, the method we propose has been shown to be accurate and reliable on all the former modalities. It has been adapted from the 3D registration of CT and MR images to 2D. Relevant features (also known as landmarks) are extracted by means of a robust creaseness operator, and the resulting images are iteratively transformed until a maximum in their correlation is achieved. Our method has succeeded in more than 100 pairs tried so far, in all cases also including the scaling as a parameter to be optimized.
Patricia Marquez, Debora Gil, R. Mester and Aura Hernandez-Sabate. 2014. Local Analysis of Confidence Measures for Optical Flow Quality Evaluation. 9th International Conference on Computer Vision Theory and Applications, 450–457.
Abstract: Optical Flow (OF) techniques able to face the complexity of real sequences have been developed in recent years. Even when using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required by the system. Confidence measures computed from either the input data or the OF output should discard those points where the OF is not accurate enough for further use. It follows that evaluating how well a confidence measure bounds the OF error is as important as the definition of the measure itself. In this paper we analyze different confidence measures and point out their advantages and limitations for use in real-world settings. We also examine how well current evaluation tools assess the performance of confidence measures.
Keywords: Optical Flow; Confidence Measure; Performance Evaluation