Author: Idoia Ruiz; Joan Serrat
Title: Rank-based ordinal classification
Type: Conference Article
Year: 2020
Publication: 25th International Conference on Pattern Recognition
Pages: 8069-8076
Abstract:
Unlike the regular classification task, ordinal classification has an order among the classes. As a consequence, not all classification errors matter the same: predicting a class close to the ground-truth one is better than predicting a class farther away. To account for this, most previous works employ loss functions based on the absolute difference between the predicted and ground-truth class labels. We argue that there are many cases in ordinal classification where label values are arbitrary (for instance 1…C, where C is the number of classes) and thus such loss functions may not be the best choice. We instead propose a network architecture that produces not a single class prediction but an ordered vector, or ranking, of all the possible classes, from most to least likely. This is achieved through a loss function that compares ground-truth and predicted rankings of the class labels, not the labels themselves. Another advantage of this formulation is that we can enforce consistency in the predictions, namely, predicted rankings come from some unimodal vector of scores with mode at the ground-truth class. We compare with state-of-the-art ordinal classification methods, showing that ours attains equal or better performance, as measured by common ordinal classification metrics, on three benchmark datasets. Furthermore, it is also suitable for a new task in image aesthetics assessment, i.e., most-voted score prediction. Finally, we also apply it to building damage assessment from satellite images, providing an analysis of its performance depending on the degree of imbalance of the dataset.
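The key idea above is to compare rankings of class labels rather than the labels themselves. The paper's actual loss is defined there and is differentiable; as a hypothetical illustration of why rank distances capture ordinal structure, the sketch below builds the ground-truth ranking induced by a unimodal preference around the true class and scores predicted rankings with the Spearman footrule (the function names and toy data are invented for the example):

```python
import numpy as np

def groundtruth_ranking(true_class, num_classes):
    """Rank classes by distance to the true class: a unimodal
    preference with its mode at the ground-truth class."""
    distances = np.abs(np.arange(num_classes) - true_class)
    return np.argsort(distances, kind="stable")  # most to least likely

def footrule_distance(rank_a, rank_b):
    """Spearman footrule: sum over classes of |position difference|."""
    pos_a = np.argsort(rank_a)  # position of each class in ranking a
    pos_b = np.argsort(rank_b)
    return int(np.sum(np.abs(pos_a - pos_b)))

# Toy example with C = 5 classes and true class 2.
gt = groundtruth_ranking(2, 5)        # -> [2, 1, 3, 0, 4]
good = np.array([2, 3, 1, 4, 0])      # mode at the right class
bad = np.array([4, 3, 2, 1, 0])       # mode far from the truth
assert footrule_distance(gt, good) < footrule_distance(gt, bad)
```

A ranking whose top class sits next to the truth is penalized less than one whose top class is far away, even though both are "wrong" under a 0/1 loss.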
Address: Virtual; January 2021
Conference: ICPR
Notes: ADAS; 600.118; 600.124
Approved: no
Call Number: Admin @ si @ RuS2020
Serial: 3549
Permanent link to this record

Author: Yi Xiao; Felipe Codevilla; Diego Porres; Antonio Lopez
Title: Scaling Vision-Based End-to-End Autonomous Driving with Multi-View Attention Learning
Type: Conference Article
Year: 2023
Publication: International Conference on Intelligent Robots and Systems
Abstract:
In end-to-end driving, human driving demonstrations are used to train perception-based driving models by imitation learning. This process is supervised by vehicle signals (e.g., steering angle, acceleration) but does not require extra costly supervision (human labeling of sensor data). As a representative of such vision-based end-to-end driving models, CILRS is commonly used as a baseline to compare against new driving models. So far, some recent models achieve better performance than CILRS by using expensive sensor suites and/or large amounts of human-labeled data for training. Given the difference in performance, one may think that it is not worth pursuing vision-based pure end-to-end driving. However, we argue that this approach still has great value and potential considering cost and maintenance. In this paper, we present CIL++, which improves on CILRS by both processing higher-resolution images, using a human-inspired HFOV as an inductive bias, and incorporating a proper attention mechanism. CIL++ achieves competitive performance compared to models that are more costly to develop. We propose to replace CILRS with CIL++ as a strong vision-based pure end-to-end driving baseline supervised only by vehicle signals and trained by conditional imitation learning.
Address: Detroit; USA; October 2023
Conference: IROS
Notes: ADAS
Approved: no
Call Number: Admin @ si @ XCP2023
Serial: 3930
Permanent link to this record

Author: Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
Title: Motion Segmentation from Feature Trajectories with Missing Data
Type: Conference Article
Year: 2007
Publication: 3rd Iberian Conference on Pattern Recognition and Image Analysis
Abbreviated Journal: IbPRIA 2007
Volume: LNCS 4477
Pages: 483-490
Address: Girona (Spain)
Editor: J. Marti et al. (Eds.)
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ JSL2007a
Serial: 814
Permanent link to this record

Author: Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
Title: Factorization with Missing and Noisy Data
Type: Conference Article
Year: 2006
Publication: 6th International Conference on Computational Science
Abbreviated Journal: ICCS'06
Volume: LNCS 3991
Pages: 555-562
Address: Reading (United Kingdom)
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ JSL2006b
Serial: 653
Permanent link to this record

Author: Angel Sappa; David Geronimo; Fadi Dornaika; Antonio Lopez
Title: Real Time Vehicle Pose Using On-Board Stereo Vision System
Type: Conference Article
Year: 2006
Publication: International Conference on Image Analysis and Recognition
Abbreviated Journal: ICIAR
Issue: LNCS 4142
Pages: 205-216
Abstract:
This paper presents a robust technique for real-time estimation of both the camera's position and orientation, referred to as its pose. A commercial stereo vision system is used. Unlike previous approaches, it can be used for either urban or highway scenarios. The proposed technique consists of two stages. Initially, a compact 2D representation of the original 3D data points is computed. Then, a RANSAC-based least-squares approach is used to fit a plane to the road. At the same time, the camera's relative position and orientation are computed. The proposed technique is intended to be used in a driving assistance scheme for applications such as obstacle or pedestrian detection. Experimental results on urban environments with different road geometries are presented.
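The two-stage scheme described above (RANSAC for robust point selection, least squares for the final fit) is a standard pattern. The sketch below is a generic version on synthetic 3D points, not the paper's implementation; the inlier threshold, iteration count, and synthetic road geometry are arbitrary assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_plane_lstsq(pts):
    """Least-squares plane z = a*x + b*y + c through pts (N x 3)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef  # (a, b, c)

def ransac_plane(pts, iters=200, thresh=0.05):
    """RANSAC: fit minimal samples, keep the plane with most inliers,
    then refine by least squares on the full inlier set."""
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        a, b, c = fit_plane_lstsq(sample)
        residuals = np.abs(pts[:, 0] * a + pts[:, 1] * b + c - pts[:, 2])
        inliers = residuals < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane_lstsq(pts[best_inliers]), best_inliers

# Synthetic road plane z = 0.1*x + 1.5, with 30% of points lifted
# off the plane to stand in for obstacles above the road surface.
x = rng.uniform(-5, 5, 300)
y = rng.uniform(0, 30, 300)
z = 0.1 * x + 1.5 + rng.normal(0, 0.01, 300)
z[:90] += rng.uniform(0.5, 3.0, 90)
coef, inliers = ransac_plane(np.c_[x, y, z])
```

Because the least-squares refit only sees the RANSAC inliers, the off-plane points do not bias the recovered road plane, which is what makes the pose estimate robust in cluttered scenes.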
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ SGD2006b
Serial: 671
Permanent link to this record

Author: Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
Title: An Iterative Multiresolution Scheme for SFM
Type: Conference Article
Year: 2006
Publication: International Conference on Image Analysis and Recognition
Abbreviated Journal: ICIAR 2006
Volume: LNCS 4141
Issue: 1
Pages: 804-815
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ JSL2006c
Serial: 704
Permanent link to this record

Author: David Geronimo; Angel Sappa; Antonio Lopez; Daniel Ponsa
Title: Adaptive Image Sampling and Windows Classification for On-board Pedestrian Detection
Type: Conference Article
Year: 2007
Publication: Proceedings of the 5th International Conference on Computer Vision Systems
Abbreviated Journal: ICVS
Keywords: Pedestrian Detection
Abstract:
On-board pedestrian detection is at the frontier of the state of the art, since it implies processing outdoor scenarios from a mobile platform and searching for aspect-changing objects in cluttered urban environments. The most promising approaches include the development of classifiers based on feature selection and machine learning. However, they use a large number of features, which compromises real-time performance. Thus, methods for running the classifiers on only a few image windows must be provided. In this paper we contribute to both aspects, proposing a camera pose estimation method for adaptive sparse image sampling, as well as a classifier for pedestrian detection based on Haar wavelets and edge orientation histograms as features and AdaBoost as the learning machine. Both proposals are compared with relevant approaches in the literature, showing comparable results while reducing processing time by a factor of four for the sampling task and ten for the classification one.
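The classifier side pairs hand-crafted features (Haar wavelets, edge orientation histograms) with AdaBoost. As a minimal illustration of the AdaBoost part only, here is a from-scratch version with decision stumps on toy 2-D features standing in for the real descriptors; the data and hyperparameters are invented for the example:

```python
import numpy as np

def train_stump(X, y, w):
    """Best weighted threshold stump over all features."""
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, error)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, f] - t) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if err < best[3]:
                    best = (f, t, polarity, err)
    return best

def adaboost(X, y, rounds=10):
    """AdaBoost: reweight samples toward the ones current stumps miss."""
    w = np.full(len(y), 1.0 / len(y))
    ensemble = []
    for _ in range(rounds):
        f, t, p, err = train_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(p * (X[:, f] - t) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)  # upweight misclassified samples
        w /= w.sum()
        ensemble.append((f, t, p, alpha))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for f, t, p, alpha in ensemble:
        score += alpha * np.where(p * (X[:, f] - t) >= 0, 1, -1)
    return np.sign(score)

# Toy 2-feature data standing in for Haar / edge-orientation features.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = np.where(X[:, 0] + X[:, 1] > 1, 1, -1)  # diagonal class boundary
model = adaboost(X, y, rounds=20)
acc = np.mean(predict(model, X) == y)
```

Each stump alone handles only an axis-aligned split, but the weighted vote of twenty stumps approximates the diagonal boundary well, which is the same additive-ensemble idea the paper applies to its pedestrian features.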
Address: Bielefeld (Germany)
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ gsl2007a
Serial: 786
Permanent link to this record

Author: R. de Nijs; Sebastian Ramos; Gemma Roig; Xavier Boix; Luc Van Gool; K. Kühnlenz
Title: On-line Semantic Perception Using Uncertainty
Type: Conference Article
Year: 2012
Publication: International Conference on Intelligent Robots and Systems
Abbreviated Journal: IROS
Pages: 4185-4191
Keywords: Semantic Segmentation
Abstract:
Visual perception capabilities are still highly unreliable in unconstrained settings, and solutions might not be accurate in all regions of an image. Awareness of the uncertainty of perception is a fundamental requirement for proper high-level decision making in a robotic system. Yet, the uncertainty measure is often sacrificed to account for dependencies between object/region classifiers. This is the case for Conditional Random Fields (CRFs), whose success stems from their ability to infer the most likely world configuration, but which do not directly allow estimating the uncertainty of the solution. In this paper, we consider the setting of assigning semantic labels to the pixels of an image sequence. Instead of a CRF, we employ a Perturb-and-MAP Random Field, a recently introduced probabilistic model that allows fast approximate sampling from its probability density function. This makes it possible to compute the uncertainty of the solution, indicating the reliability of the most likely labeling in each region of the image. We report results on the CamVid dataset, a standard benchmark for semantic labeling of urban image sequences. In our experiments, we show the benefits of exploiting the uncertainty by putting more computational effort into the regions of the image that are less reliable, and using more efficient techniques for other regions, with little decrease in performance.
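The core of Perturb-and-MAP is to inject Gumbel noise into the model's potentials and solve a MAP problem per sample, so that repeated solves yield approximate samples and, from them, per-pixel uncertainty. The sketch below shows the degenerate case of independent per-pixel unaries, where the MAP step reduces to an argmax (the Gumbel-max trick); in the paper's random field the argmax would be a full MAP inference over pairwise terms as well. All shapes and scores are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_and_map_samples(unaries, num_samples=100):
    """Gumbel-perturb the unary scores and take the MAP labeling.
    With independent pixels the MAP is a per-pixel argmax."""
    samples = np.empty((num_samples,) + unaries.shape[:2], dtype=int)
    for s in range(num_samples):
        gumbel = -np.log(-np.log(rng.uniform(size=unaries.shape)))
        samples[s] = np.argmax(unaries + gumbel, axis=-1)
    return samples

def label_uncertainty(samples, num_classes):
    """Per-pixel entropy of the sampled labelings: high entropy marks
    regions where the most likely labeling is unreliable."""
    freq = np.stack([(samples == c).mean(axis=0) for c in range(num_classes)])
    p = np.clip(freq, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=0)

# Toy 2 x 2 "image", 3 classes: one confident pixel, one ambiguous.
unaries = np.zeros((2, 2, 3))
unaries[0, 0] = [5.0, 0.0, 0.0]   # strongly prefers class 0
unaries[1, 1] = [0.1, 0.0, 0.1]   # nearly uniform scores
samples = perturb_and_map_samples(unaries, num_samples=500)
H = label_uncertainty(samples, 3)
```

The entropy map plays the role the abstract describes: the confident pixel gets low entropy, the ambiguous one high entropy, and a system can route extra computation to the high-entropy regions.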
Conference: IROS
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ NRR2012
Serial: 2378
Permanent link to this record

Author: David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
Title: Cool world: domain adaptation of virtual and real worlds for human detection using active learning
Type: Conference Article
Year: 2011
Publication: NIPS Domain Adaptation Workshop: Theory and Application
Abbreviated Journal: NIPS-DA
Keywords: Pedestrian Detection; Virtual; Domain Adaptation; Active Learning
Abstract:
Image-based human detection is of paramount interest for different applications. The most promising human detectors rely on discriminatively learnt classifiers, i.e., trained with labelled samples. However, labelling is a labour-intensive task, especially in cases like human detection, where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, in Marin et al. we have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera and the same type of scenario. Accordingly, in Vazquez et al. we cast the problem as one of supervised domain adaptation. In doing so, we assume that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we use an active learning technique. Thus, ultimately our human model is learnt from the combination of virtual- and real-world labelled samples, which, to the best of our knowledge, had not been done before. Here, we term such a combined space the cool world. In this extended abstract we summarize our proposal and include quantitative results from Vazquez et al. showing its validity.
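The querying strategy the abstract describes (start from virtual-world labels, then actively request real-world labels only for the most informative samples) can be sketched as a generic uncertainty-sampling loop. The nearest-centroid classifier, margin criterion, and synthetic source/target data below are stand-ins, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y):
    """Trivial classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def margins(centroids, X):
    """Gap between the two nearest class centroids per sample:
    a small gap means an uncertain prediction."""
    d = np.stack([np.linalg.norm(X - m, axis=1) for m in centroids.values()])
    d.sort(axis=0)
    return d[1] - d[0]

# Source domain: "virtual world" samples, labels come for free.
Xs = np.r_[rng.normal(0.0, 1, (100, 2)), rng.normal(3.0, 1, (100, 2))]
ys = np.r_[np.zeros(100), np.ones(100)]
# Target pool: "real world" samples with a feature shift; labels are
# assumed to be obtainable on request from a human oracle.
Xt = np.r_[rng.normal(0.8, 1, (100, 2)), rng.normal(3.8, 1, (100, 2))]
yt = np.r_[np.zeros(100), np.ones(100)]

labeled_X, labeled_y = Xs.copy(), ys.copy()
pool = np.ones(len(Xt), dtype=bool)
for _ in range(5):  # five query rounds of 10 samples each
    model = fit_centroids(labeled_X, labeled_y)
    m = margins(model, Xt)
    m[~pool] = np.inf                        # never re-query a sample
    query = np.argsort(m)[:10]               # most uncertain real samples
    pool[query] = False
    labeled_X = np.r_[labeled_X, Xt[query]]  # "cool world": virtual + real
    labeled_y = np.r_[labeled_y, yt[query]]
```

After the loop, the training set mixes all virtual samples with a small, actively chosen set of real ones, which is the combined "cool world" space the abstract refers to.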
Address: Granada, Spain
Place of Publication: Granada, Spain
Language: English
Summary Language: English
Conference: DA-NIPS
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ VLP2011b
Serial: 1756
Permanent link to this record