|
Records |
Links |
|
Author |
Jose Manuel Alvarez; Antonio Lopez |
|
|
Title |
Novel Index for Objective Evaluation of Road Detection Algorithms |
Type |
Conference Article |
|
Year |
2008 |
Publication |
11th International IEEE Conference on Intelligent Transportation Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
815-820 |
|
|
Keywords |
road detection |
|
|
Abstract |
|
|
|
Address |
Beijing (China) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ITSC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ AlL2008 |
Serial |
1074 |
|
Permanent link to this record |
|
|
|
|
Author |
Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Obstacle mapping module for quadrotors on outdoor Search and Rescue operations |
Type |
Conference Article |
|
Year |
2013 |
Publication |
International Micro Air Vehicle Conference and Flight Competition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
UAV |
|
|
Abstract |
Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAV), due to their limited payload capacity to carry advanced sensors. Unlike larger vehicles, MAV can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse the performance of PADE and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise, and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for enabling MAV to perform safe autonomous navigation in unknown and unstructured environments. |
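The baseline Time To Collision (TTC) that PADE is compared against can be estimated directly from an expanding optical flow field. A minimal sketch, assuming a frontoparallel surface and a focus of expansion at the image origin (function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def ttc_from_flow(xs, ys, us, vs):
    """Estimate time-to-collision (in frames) from an expanding optical
    flow field, assuming a frontoparallel surface and a focus of
    expansion at the image origin: u = x / TTC, v = y / TTC."""
    # Least-squares fit of the expansion rate 1/TTC over all vectors.
    p = np.concatenate([xs, ys])
    f = np.concatenate([us, vs])
    rate = (p @ f) / (p @ p)          # slope of f = rate * p
    return 1.0 / rate

# Synthetic noise-free flow for a surface reached in 25 frames.
xs = np.random.uniform(-100, 100, 500)
ys = np.random.uniform(-100, 100, 500)
true_ttc = 25.0
us, vs = xs / true_ttc, ys / true_ttc
print(round(ttc_from_flow(xs, ys, us, vs), 2))  # → 25.0
```

On real flow the fit degrades with noise, which is exactly the sensitivity the paper measures.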
|
|
Address |
Toulouse; France; September 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IMAV |
|
|
Notes |
ADAS; 600.054; 600.057;IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ NSH2013 |
Serial |
2371 |
|
Permanent link to this record |
|
|
|
|
Author |
Felipe Codevilla; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy |
|
|
Title |
On Offline Evaluation of Vision-based Driving Models |
Type |
Conference Article |
|
Year |
2018 |
Publication |
15th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
11219 |
Issue |
|
Pages |
246-262 |
|
|
Keywords |
Autonomous driving; deep learning |
|
|
Abstract |
Autonomous driving models should ideally be evaluated by deploying
them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and
suitable offline metrics. |
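The kind of offline/online correlation analysis the abstract describes can be reproduced with a Pearson correlation across models. A hedged sketch; the per-model numbers below are invented for illustration, not results from the paper:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    a -= a.mean()
    b -= b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Hypothetical numbers: offline prediction error (MSE) per model
# vs. online driving success rate for the same models.
offline_mse = [0.10, 0.12, 0.12, 0.30, 0.45]
online_success = [0.90, 0.55, 0.88, 0.60, 0.30]
print(round(pearson(offline_mse, online_success), 2))
```

A weak (far from -1) correlation here would mirror the paper's finding that low prediction error does not guarantee good driving.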
|
|
Address |
Munich; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
ADAS; 600.124; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CLK2018 |
Serial |
3162 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |
|
|
Title |
On-Board Monocular Vision System Pose Estimation through a Dense Optical Flow |
Type |
Conference Article |
|
Year |
2010 |
Publication |
7th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
6111 |
Issue |
|
Pages |
230-239 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a robust technique for estimating the pose of an on-board monocular vision system. The proposed approach is based on a dense optical flow that is robust against shadows, reflections and illumination changes. A RANSAC-based scheme is used to cope with outliers in the optical flow. The proposed technique is intended to be used in driver assistance systems for applications such as obstacle or pedestrian detection. Experimental results on different scenarios, from both synthetic and real sequences, show the usefulness of the proposed approach. |
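The RANSAC-based outlier rejection can be illustrated in simplified form. A toy sketch fitting a dominant 2D translation to noisy flow vectors; the actual method estimates camera pitch and roll from dense flow, so this only shows the hypothesize-and-verify loop:

```python
import random
import numpy as np

def ransac_translation(flow, iters=200, thresh=1.0, rng=random.Random(0)):
    """Toy RANSAC in the spirit of the paper's outlier-rejection scheme:
    fit a dominant 2D translation to flow vectors, discarding outliers."""
    best_inliers = np.zeros(len(flow), bool)
    for _ in range(iters):
        sample = flow[rng.randrange(len(flow))]       # minimal sample: one vector
        inliers = np.linalg.norm(flow - sample, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():        # keep largest consensus set
            best_inliers = inliers
    return flow[best_inliers].mean(axis=0)            # refit on the inliers

np.random.seed(0)
flow = np.vstack([np.tile([2.0, 0.5], (80, 1)) + np.random.randn(80, 2) * 0.1,
                  np.random.uniform(-5, 5, (20, 2))])  # 20% gross outliers
print(np.round(ransac_translation(flow), 1))           # ≈ [2.0, 0.5]
```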
|
|
Address |
Povoa de Varzim (Portugal) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-13771-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ OnS2010 |
Serial |
1342 |
|
Permanent link to this record |
|
|
|
|
Author |
R. de Nijs; Sebastian Ramos; Gemma Roig; Xavier Boix; Luc Van Gool; K. Kühnlenz |
|
|
Title |
On-line Semantic Perception Using Uncertainty |
Type |
Conference Article |
|
Year |
2012 |
Publication |
International Conference on Intelligent Robots and Systems |
Abbreviated Journal |
IROS |
|
|
Volume |
|
Issue |
|
Pages |
4185-4191 |
|
|
Keywords |
Semantic Segmentation |
|
|
Abstract |
Visual perception capabilities are still highly unreliable in unconstrained settings, and solutions might not be accurate in all regions of an image. Awareness of the uncertainty of perception is a fundamental requirement for proper high-level decision making in a robotic system. Yet, the uncertainty measure is often sacrificed to account for dependencies between object/region classifiers. This is the case of Conditional Random Fields (CRFs), whose success stems from their ability to infer the most likely world configuration, but which do not directly allow estimating the uncertainty of the solution. In this paper, we consider the setting of assigning semantic labels to the pixels of an image sequence. Instead of using a CRF, we employ a Perturb-and-MAP Random Field, a recently introduced probabilistic model that allows performing fast approximate sampling from its probability density function. This makes it possible to effectively compute the uncertainty of the solution, indicating the reliability of the most likely labeling in each region of the image. We report results on the CamVid dataset, a standard benchmark for semantic labeling of urban image sequences. In our experiments, we show the benefits of exploiting the uncertainty by putting more computational effort on the regions of the image that are less reliable, and using more efficient techniques for other regions, with little decrease in performance. |
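The Perturb-and-MAP idea can be sketched in its simplest form: perturb the label scores with Gumbel noise and take the MAP repeatedly, so that label frequencies across samples estimate per-pixel uncertainty. The sketch below drops the pairwise terms (in which case it reduces to exact softmax sampling) and uses invented unary scores:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_and_map_samples(unaries, n_samples=2000):
    """Perturb-and-MAP-style sampling for independent per-pixel label
    scores: add Gumbel noise, take the argmax (the 'MAP'), and average
    the resulting labelings to get per-pixel label marginals."""
    n_px, n_lab = unaries.shape
    g = rng.gumbel(size=(n_samples, n_px, n_lab))
    labels = (unaries + g).argmax(axis=2)                 # one MAP per sample
    freqs = np.stack([(labels == k).mean(axis=0) for k in range(n_lab)], axis=1)
    return freqs                                          # per-pixel label marginals

unaries = np.array([[2.0, 0.0], [0.1, 0.0]])              # confident vs. ambiguous pixel
freqs = perturb_and_map_samples(unaries)
entropy = -(freqs * np.log(freqs + 1e-12)).sum(axis=1)    # uncertainty per pixel
print(entropy[0] < entropy[1])                            # → True
```

The ambiguous pixel gets higher entropy, which is the signal the paper uses to allocate computational effort.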
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IROS |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ NRR2012 |
Serial |
2378 |
|
Permanent link to this record |
|
|
|
|
Author |
Muhammad Anwer Rao; David Vazquez; Antonio Lopez |
|
|
Title |
Opponent Colors for Human Detection |
Type |
Conference Article |
|
Year |
2011 |
Publication |
5th Iberian Conference on Pattern Recognition and Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
6669 |
Issue |
|
Pages |
363-370 |
|
|
Keywords |
Pedestrian Detection; Color; Part Based Models |
|
|
Abstract |
Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a challenge of intensive research. Finding good features to build human models for further detection is probably one of the most important issues to face. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the use of the RGB color space has become the default choice. The focus has been on developing first- and second-order features on top of RGB space (e.g., HOG and co-occurrence matrices, respectively). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding the OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and linear SVM), we obtain better detection performance than by using the RGB space. This is a relevant result since, to the best of our knowledge, the OPP space has not been previously used for human detection. This suggests that in the future it could be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of the OPP space, i.e., as we have done with HOG in this paper. |
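A commonly used opponent color transform (the exact normalization used in the paper may differ) can be written as:

```python
import numpy as np

def rgb_to_opponent(img):
    """Opponent color transform common in the literature:
      O1 = (R - G) / sqrt(2)        red-green channel
      O2 = (R + G - 2B) / sqrt(6)   yellow-blue channel
      O3 = (R + G + B) / sqrt(3)    intensity channel"""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    return np.stack([o1, o2, o3], axis=-1)

gray = np.full((2, 2, 3), 0.5)          # achromatic pixels
opp = rgb_to_opponent(gray)
print(np.allclose(opp[..., :2], 0))     # → True: no chromatic response
```

HOG would then be computed per opponent channel exactly as it is per RGB channel in the baseline pipeline.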
|
|
Address |
Las Palmas de Gran Canaria. Spain |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
Berlin Heidelberg |
Editor |
J. Vitria; J.M. Sanches; M. Hernandez |
|
|
Language |
English |
Summary Language |
English |
Original Title |
Opponent Colors for Human Detection |
|
|
Series Editor |
|
Series Title |
Lecture Notes on Computer Science |
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-21256-7 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IbPRIA |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ RVL2011a |
Serial |
1666 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Sujay M. Veerabhadrappa; Angel Sappa |
|
|
Title |
Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture |
Type |
Conference Article |
|
Year |
2012 |
Publication |
4th International Conference on Signal and Image Processing |
Abbreviated Journal |
|
|
|
Volume |
221 |
Issue |
|
Pages |
257-267 |
|
|
Keywords |
|
|
|
Abstract |
Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform efficiently in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times and in different seasons. In this work, we study the relationship of optical flow with different roads, that is, by analyzing optical flow accuracy on different road textures. Texture measures such as TeX, TeX and TeX are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework to generate synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow. |
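The specific texture measures were lost in extraction (the `TeX` placeholders above), but one representative measure, per-pixel variance over a local window, can be sketched as follows (illustrative, not necessarily one of the measures the paper evaluates):

```python
import numpy as np

def local_variance(img, k=3):
    """Simple texture measure: per-pixel variance over a k x k window,
    computed as E[x^2] - E[x]^2 with a box filter."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    mean = win.mean(axis=(-2, -1))
    return (win ** 2).mean(axis=(-2, -1)) - mean ** 2

flat = np.zeros((8, 8))                                   # textureless "road"
noisy = np.random.default_rng(0).normal(size=(8, 8))      # highly textured patch
print(local_variance(flat).max() <= local_variance(noisy).mean())  # → True
```

Low values on textureless asphalt are exactly where variational flow needs a larger regularization weight.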
|
|
Address |
Coimbatore, India |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1876-1100 |
ISBN |
978-81-322-0996-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICSIP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ OVS2012 |
Serial |
2356 |
|
Permanent link to this record |
|
|
|
|
Author |
Cristina Cañero; Petia Radeva; Oriol Pujol; Ricardo Toledo; Debora Gil; J. Saludes; Juan J. Villanueva; B. Garcia del Blanco; J. Mauri; E. Fernandez-Nofrerias; J.A. Gomez-Hospital; E. Iraculis; J. Comin; C. Quiles; F. Jara; A. Cequier; E. Esplugas |
|
|
Title |
Optimal Stent Implantation: Three-dimensional Evaluation of the Mutual Position of Stent and Vessel via Intracoronary Ecography |
Type |
Conference Article |
|
Year |
1999 |
Publication |
Proceedings of the International Conference on Computers in Cardiology (CIC'99) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We present a new automatic technique to visualize and quantify the mutual position of the stent and the vessel wall by considering their three-dimensional reconstruction. Two deformable generalized cylinders adapt to the image features in all IVUS planes corresponding to the vessel wall and the stent, in order to reconstruct the boundaries of the stent and the vessel in space. The image features that characterize the stent and the vessel wall are determined in terms of edge and ridge image detectors, taking into account the gray level of the image pixels. We show that the 3D reconstruction by deformable cylinders is accurate and robust due to the spatial data coherence in the considered volumetric IVUS image. The main clinical utility of the stent and vessel reconstruction by deformable cylinders consists of the possibility to visualize and to assess optimal stent implantation. |
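The edge and ridge features the deformable cylinders attach to can be illustrated in 1-D. A toy sketch (not the paper's detectors): edges as gradient-magnitude maxima, ridges as minima of the second derivative, i.e., bright line-like structures such as stent struts:

```python
import numpy as np

def edge_and_ridge_1d(profile):
    """Locate, on a 1-D intensity profile, an edge (gradient-magnitude
    maximum) and a ridge (second-derivative minimum)."""
    grad = np.gradient(profile)
    second = np.gradient(grad)
    edge_idx = int(np.abs(grad).argmax())
    ridge_idx = int(second.argmin())
    return edge_idx, ridge_idx

x = np.linspace(-3, 3, 61)
step = (x > 0).astype(float)            # step edge at x = 0
bump = np.exp(-x ** 2)                  # bright ridge at x = 0
print(edge_and_ridge_1d(bump)[1])       # → 30 (the center sample, x = 0)
```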
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; RV; IAM; ADAS; HuPBA |
Approved |
no |
|
|
Call Number |
IAM @ iam @ CRP1999a |
Serial |
1491 |
|
Permanent link to this record |
|
|
|
|
Author |
Diego Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Pedestrian Candidates Generation using Monocular Cues |
Type |
Conference Article |
|
Year |
2012 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7-12 |
|
|
Keywords |
pedestrian detection |
|
|
Abstract |
Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model which is focused only on objects standing vertically on the ground plane and having a certain height, according to their depths in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application for pedestrian detection, where a considerable reduction in the number of candidate windows is achieved. |
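The geometric constraint being exploited can be illustrated with the pinhole camera model: an object of known real height at a given depth projects to a single expected window size, removing the exhaustive scale search. A sketch with illustrative focal length and pedestrian height (not values from the paper):

```python
def candidate_window_height(depth_m, focal_px=800.0, ped_height_m=1.7):
    """Pinhole-camera sketch of the geometric constraint: a pedestrian
    of real height H standing at depth Z projects to roughly
    f * H / Z pixels, so each depth yields one expected window size."""
    return focal_px * ped_height_m / depth_m

print(round(candidate_window_height(10.0)))  # → 136
```

Combined with the ground-plane constraint on the window's foot position, each image location then yields at most one candidate instead of a full scale pyramid.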
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE Xplore |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1931-0587 |
ISBN |
978-1-4673-2119-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d |
Serial |
2013 |
|
Permanent link to this record |
|
|
|
|
Author |
P. Ricaurte; C. Chilan; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa |
|
|
Title |
Performance Evaluation of Feature Point Descriptors in the Infrared Domain |
Type |
Conference Article |
|
Year |
2014 |
Publication |
9th International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
|
Pages |
545-550 |
|
|
Keywords |
Infrared Imaging; Feature Point Descriptors |
|
|
Abstract |
This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image data set are presented together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered. |
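A typical protocol for this kind of descriptor evaluation matches descriptors between an image and its transformed version by nearest neighbour. A hedged sketch on synthetic descriptors (not the paper's data or evaluation framework):

```python
import numpy as np

def matching_score(desc_a, desc_b):
    """Fraction of descriptors in desc_a whose L2 nearest neighbour in
    desc_b is the correct one, assuming row i of desc_b corresponds to
    row i of desc_a (i.e., a known ground-truth transformation)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return float((d.argmin(axis=1) == np.arange(len(desc_a))).mean())

rng = np.random.default_rng(0)
desc = rng.normal(size=(50, 32))                         # descriptors on image A
noisy = desc + rng.normal(scale=0.05, size=desc.shape)   # same points after mild "blur/noise"
print(matching_score(desc, noisy) > 0.9)                 # → True
```

Repeating this under increasing rotation, scale, blur, or noise levels yields the robustness curves such evaluations report.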
|
|
Address |
Lisboa; Portugal; January 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
ADAS; 600.055; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RCA2014b |
Serial |
2476 |
|
Permanent link to this record |