Author Zhijie Fang; Antonio Lopez
  Title Is the Pedestrian going to Cross? Answering by 2D Pose Estimation Type Conference Article
  Year 2018 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 1271 - 1276  
  Keywords  
  Abstract Our recent work suggests that, thanks to today's powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (the Daimler dataset), i.e. sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has recently appeared, enabling the development of methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering this question. We combine CNN-based pedestrian detection, tracking and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IV  
  Notes ADAS; 600.124; 600.116; 600.118 Approved no  
  Call Number Admin @ si @ FaL2018 Serial 3181  
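The pipeline described in this record combines per-frame 2D pose keypoints over a short pedestrian track to predict the crossing action. The sketch below is only an illustration of that idea, not the authors' implementation: it assumes keypoints are already available from any off-the-shelf 2D pose estimator, and the normalization, track window and classifier choice are assumptions made here.

```python
# Illustrative sketch (not the paper's code): classify "will cross" from
# tracked 2D pose keypoints of a pedestrian.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pose_track_features(keypoints_seq):
    """keypoints_seq: (T, K, 2) array of K 2D joints over T tracked frames."""
    kp = np.asarray(keypoints_seq, dtype=float)
    # Normalize each frame: center on the keypoint centroid and scale by the
    # keypoint bounding-box diagonal, so the feature is invariant to the
    # pedestrian's position and size in the image.
    center = kp.mean(axis=1, keepdims=True)
    scale = np.linalg.norm(kp.max(axis=1) - kp.min(axis=1), axis=-1, keepdims=True)
    kp = (kp - center) / np.maximum(scale[..., None], 1e-6)
    return kp.reshape(-1)  # concatenate all frames into one feature vector

def train_crossing_classifier(track_list, labels):
    """track_list: list of (T, K, 2) keypoint tracks; labels: 1 if the
    pedestrian crosses within the prediction horizon (hypothetical data)."""
    X = np.stack([pose_track_features(t) for t in track_list])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```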
 

 
Author Patricia Marquez; Debora Gil; Aura Hernandez-Sabate
  Title A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth Type Conference Article
  Year 2011 Publication IEEE International Conference on Computer Vision – Workshops Abbreviated Journal  
  Volume Issue Pages 2042-2049  
  Keywords IEEE International Conference on Computer Vision – Workshops  
  Abstract Optical flow is a valuable tool for motion analysis in autonomous navigation systems. A reliable application requires determining the accuracy of the computed optical flow, which is a major challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Barcelona (Spain) Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCVW  
  Notes IAM; ADAS Approved no  
  Call Number IAM @ iam @ MGH2011 Serial 1682  
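The confidence measure described in this record is defined via the numerical stability of the Lucas-Kanade data term. A minimal sketch of that idea follows, assuming a plain unweighted window (the paper's exact weighting and formulation may differ): per pixel, build the 2x2 structure tensor from image gradients and take the ratio of its eigenvalues as a condition number.

```python
# Sketch of a Lucas-Kanade-style condition number map (illustration of the
# idea in the abstract, not the paper's exact formulation).
import numpy as np
from scipy.ndimage import uniform_filter

def lk_condition_number(img, window=5):
    img = img.astype(float)
    Iy, Ix = np.gradient(img)  # gradients along rows (y) and columns (x)
    # Windowed second-moment (structure tensor) entries.
    Ixx = uniform_filter(Ix * Ix, size=window)
    Iyy = uniform_filter(Iy * Iy, size=window)
    Ixy = uniform_filter(Ix * Iy, size=window)
    # Closed-form eigenvalues of [[Ixx, Ixy], [Ixy, Iyy]].
    tr = Ixx + Iyy
    det = Ixx * Iyy - Ixy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam_max = tr / 2.0 + disc
    lam_min = tr / 2.0 - disc
    # Large condition number = ill-conditioned data term = low confidence.
    return lam_max / np.maximum(lam_min, 1e-12)
```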
 

 
Author Naveen Onkarappa; Sujay M. Veerabhadrappa; Angel Sappa
  Title Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture Type Conference Article
  Year 2012 Publication 4th International Conference on Signal and Image Processing Abbreviated Journal  
  Volume 221 Issue Pages 257-267  
  Keywords  
  Abstract Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform efficiently in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times and in different seasons. In this work, we study the relationship between optical flow and road type by analyzing optical flow accuracy on different road textures. Several texture measures are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework to generate synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow.
  Address Coimbatore, India  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1876-1100 ISBN 978-81-322-0996-6 Medium  
  Area Expedition Conference ICSIP  
  Notes ADAS Approved no  
  Call Number Admin @ si @ OVS2012 Serial 2356  
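This record describes an analysis relating optical flow accuracy to scene texture. The sketch below only illustrates that kind of study; the paper's specific texture measures are not reproduced here, and mean gradient magnitude stands in as a generic texture statistic correlated against ground-truth endpoint error.

```python
# Illustrative texture-vs-accuracy analysis (generic stand-in measures).
import numpy as np

def mean_gradient_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def endpoint_error(flow_est, flow_gt):
    """flow_*: (H, W, 2) arrays holding (u, v) per pixel."""
    return float(np.mean(np.linalg.norm(flow_est - flow_gt, axis=-1)))

def texture_vs_error(samples):
    """samples: iterable of (image, flow_est, flow_gt) tuples.
    Returns the Pearson correlation between texture and flow error."""
    samples = list(samples)
    tex = np.array([mean_gradient_magnitude(im) for im, _, _ in samples])
    err = np.array([endpoint_error(fe, fg) for _, fe, fg in samples])
    return float(np.corrcoef(tex, err)[0, 1])
```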
 

 
Author Patricia Marquez; Debora Gil; R.Mester; Aura Hernandez-Sabate
  Title Local Analysis of Confidence Measures for Optical Flow Quality Evaluation Type Conference Article
  Year 2014 Publication 9th International Conference on Computer Vision Theory and Applications Abbreviated Journal  
  Volume 3 Issue Pages 450-457  
  Keywords Optical Flow; Confidence Measure; Performance Evaluation.  
  Abstract Optical Flow (OF) techniques able to face the complexity of real sequences have been developed in recent years. Even using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required by the system. Confidence measures computed from either the input data or the OF output should discard those points where the OF is not accurate enough for further use. It follows that evaluating the capability of a confidence measure to bound the OF error is as important as the definition itself. In this paper we analyze different confidence measures and point out their advantages and limitations for use in real-world settings. We also explore their agreement with current tools for evaluating the performance of confidence measures.
  Address Lisboa; January 2014  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference VISAPP  
  Notes IAM; ADAS; 600.044; 600.060; 600.057; 601.145; 600.076; 600.075 Approved no  
  Call Number Admin @ si @ MGM2014 Serial 2432  
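One standard tool for checking whether a confidence measure bounds optical flow error, consistent with the evaluation discussed in this record though not necessarily the paper's exact protocol, is a sparsification curve: discard the least-confident pixels first and verify that the mean error of the remaining pixels decreases.

```python
# Sparsification-style evaluation sketch of an optical-flow confidence measure.
import numpy as np

def sparsification_curve(error_map, confidence_map, steps=20):
    err = error_map.ravel()
    conf = confidence_map.ravel()
    order = np.argsort(conf)[::-1]           # most confident pixels first
    err_sorted = err[order]
    fractions = np.linspace(0.05, 1.0, steps)
    curve = []
    for f in fractions:
        k = max(1, int(f * err_sorted.size))
        curve.append(err_sorted[:k].mean())  # mean error of the kept fraction
    return fractions, np.array(curve)
```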
 

 
Author David Geronimo; Angel Sappa; Antonio Lopez; Daniel Ponsa
  Title Adaptive Image Sampling and Windows Classification for On-board Pedestrian Detection Type Conference Article
  Year 2007 Publication Proceedings of the 5th International Conference on Computer Vision Systems Abbreviated Journal ICVS  
  Volume Issue Pages  
  Keywords Pedestrian Detection  
  Abstract On-board pedestrian detection is at the frontier of the state of the art, since it implies processing outdoor scenarios from a mobile platform and searching for aspect-changing objects in cluttered urban environments. The most promising approaches include the development of classifiers based on feature selection and machine learning. However, they use a large number of features, which compromises real-time performance. Thus, methods for running the classifiers on only a few image windows must be provided. In this paper we contribute in both aspects, proposing a camera pose estimation method for adaptive sparse image sampling, as well as a classifier for pedestrian detection based on Haar wavelets and edge orientation histograms as features and AdaBoost as the learning machine. Both proposals are compared with relevant approaches in the literature, showing comparable results while reducing processing time by a factor of four for the sampling task and by a factor of ten for classification.
  Address Bielefeld (Germany)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ gsl2007a Serial 786  
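The sketch below illustrates only the classification stage described in this record, with crude stand-ins for the paper's Haar-wavelet and edge-orientation-histogram features and AdaBoost as the learning machine; the camera-pose-based adaptive sampling is not covered, and the feature layout here is an assumption.

```python
# Minimal classification-stage sketch (illustrative feature stand-ins).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def window_features(window, n_orient_bins=9):
    """window: 2D grayscale array of a candidate pedestrian window."""
    w = window.astype(float)
    gy, gx = np.gradient(w)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi
    # Edge-orientation histogram over the whole window (the paper uses
    # spatially localized histograms; this is a simplification).
    eoh, _ = np.histogram(ang, bins=n_orient_bins, range=(0, np.pi), weights=mag)
    eoh = eoh / (eoh.sum() + 1e-6)
    # Crude Haar-like responses: mean-intensity differences between halves.
    h, wdt = w.shape
    haar = np.array([
        w[: h // 2].mean() - w[h // 2:].mean(),            # top vs bottom
        w[:, : wdt // 2].mean() - w[:, wdt // 2:].mean(),  # left vs right
    ])
    return np.concatenate([eoh, haar])

def train_pedestrian_classifier(windows, labels):
    X = np.stack([window_features(w) for w in windows])
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(X, labels)
    return clf
```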
 

 
Author Yi Xiao; Felipe Codevilla; Diego Porres; Antonio Lopez
  Title Scaling Vision-Based End-to-End Autonomous Driving with Multi-View Attention Learning Type Conference Article
  Year 2023 Publication International Conference on Intelligent Robots and Systems Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In end-to-end driving, human driving demonstrations are used to train perception-based driving models by imitation learning. This process is supervised on vehicle signals (e.g., steering angle, acceleration) but does not require extra costly supervision (human labeling of sensor data). As a representative of such vision-based end-to-end driving models, CILRS is commonly used as a baseline against which new driving models are compared. So far, some recent models achieve better performance than CILRS by using expensive sensor suites and/or large amounts of human-labeled data for training. Given the difference in performance, one may think that it is not worth pursuing vision-based pure end-to-end driving. However, we argue that this approach still has great value and potential considering cost and maintenance. In this paper, we present CIL++, which improves on CILRS by processing higher-resolution images using a human-inspired HFOV as an inductive bias and by incorporating a proper attention mechanism. CIL++ achieves competitive performance compared to models that are more costly to develop. We propose to replace CILRS with CIL++ as a strong vision-based pure end-to-end driving baseline supervised only by vehicle signals and trained by conditional imitation learning.
  Address Detroit; USA; October 2023  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IROS  
  Notes ADAS Approved no  
  Call Number Admin @ si @ XCP2023 Serial 3930  
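The supervision described in this record is conditional imitation learning on vehicle signals. The sketch below illustrates that training signal only, with hypothetical dimensions and a per-command output branch; the actual CIL++ multi-view attention backbone and training setup are not reproduced here.

```python
# Conditional-imitation-learning sketch (hypothetical dimensions): predict
# steering and acceleration from an image embedding, conditioned on a discrete
# navigation command, supervised with an L1 loss on demonstrated signals.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalDrivingHead(nn.Module):
    def __init__(self, feat_dim=512, n_commands=4):
        super().__init__()
        # One small output branch per navigation command (e.g. turn left,
        # turn right, go straight, follow lane).
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 2))
             for _ in range(n_commands)]
        )

    def forward(self, features, command):
        # features: (B, feat_dim) image embedding; command: (B,) int64 indices.
        outs = torch.stack([b(features) for b in self.branches], dim=1)  # (B, C, 2)
        idx = command.view(-1, 1, 1).expand(-1, 1, outs.size(-1))
        return outs.gather(1, idx).squeeze(1)  # (B, 2): steering, acceleration

def imitation_loss(pred, target_signals):
    return F.l1_loss(pred, target_signals)
```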
 

 
Author Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez
  Title Obstacle mapping module for quadrotors on outdoor Search and Rescue operations Type Conference Article
  Year 2013 Publication International Micro Air Vehicle Conference and Flight Competition Abbreviated Journal  
  Volume Issue Pages  
  Keywords UAV  
  Abstract Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), due to their limited payload capacity to carry advanced sensors. Unlike larger vehicles, MAVs can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse PADE performance and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for MAVs to perform safe autonomous navigation in unknown and unstructured environments.
  Address Toulouse; France; September 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IMAV  
  Notes ADAS; 600.054; 600.057;IAM Approved no  
  Call Number Admin @ si @ NSH2013 Serial 2371  
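The baseline this record compares against is Time To Collision from optical flow: for a fronto-parallel surface approached along the optical axis, the flow divergence relates to TTC roughly as div ≈ 2/TTC. The sketch below shows only that baseline idea; the authors' PADE method is not reproduced here.

```python
# Time-To-Collision (TTC) baseline sketch from optical flow divergence.
import numpy as np

def ttc_from_flow(flow, eps=1e-6):
    """flow: (H, W, 2) optical flow in pixels/frame; returns TTC in frames."""
    u, v = flow[..., 0], flow[..., 1]
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    divergence = du_dx + dv_dy
    # Positive divergence = local expansion (approaching surface); clamping
    # non-positive values yields a very large (i.e. safe) TTC.
    return 2.0 / np.maximum(divergence, eps)
```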
 

 
Author Arnau Ramisa; David Aldavert; Shrihari Vasudevan; Ricardo Toledo; Ramon Lopez de Mantaras
  Title The IIIA30 Mobile Robot Object Recognition Dataset Type Conference Article
  Year 2011 Publication 11th Portuguese Robotics Open Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Object perception is a key capability for making mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in mobile robotics scenarios, such as low image resolution, motion blur or tight computational budgets, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluate the Viola-Jones cascade of weak classifiers for object detection.
  Address Lisboa  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference Robotica  
  Notes RV;ADAS Approved no  
  Call Number Admin @ si @ RAV2011 Serial 1777  
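The baseline evaluated on this dataset is the Viola-Jones cascade of weak classifiers. Below is a short usage sketch with OpenCV's cascade detector; the cascade file path is a placeholder, and detecting the dataset's own object classes would require cascades trained specifically for them.

```python
# Usage sketch of a Viola-Jones-style cascade detector with OpenCV.
import cv2

def detect_with_cascade(image_path, cascade_path):
    cascade = cv2.CascadeClassifier(cascade_path)  # placeholder cascade file
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Returns an array of (x, y, w, h) bounding boxes.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
```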
 

 
Author Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
  Title Incremental Domain Adaptation of Deformable Part-based Models Type Conference Article
  Year 2014 Publication 25th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords Pedestrian Detection; Part-based models; Domain Adaptation  
  Abstract Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption when learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution over the features used by the classifiers. However, in practice there are different reasons that can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario: incremental domain adaptation for object detection assuming weak labeling. Therefore, our algorithm is ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and it keeps improving as more such data arrives in a continuous fashion. To achieve this, we follow a multiple instance learning (MIL) paradigm that operates incrementally on a per-image basis. As a proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models reach accuracy equal to that of the batch-learned models, while being more flexible for handling continuously arriving target-domain data.
  Address Nottingham; UK; September 2014  
  Corporate Author Thesis  
  Publisher BMVA Press Place of Publication Editor Valstar, Michel and French, Andrew and Pridmore, Tony  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes ADAS; 600.057; 600.054; 600.076 Approved no  
  Call Number XRV2014c; ADAS @ adas @ xrv2014c Serial 2455  
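Below is a structural sketch of the incremental, per-image, weak-label adaptation loop described in this record. The detector interface and helper names are hypothetical; the paper's DPM and multiple-instance-learning machinery is abstracted away behind them.

```python
# Structural sketch of incremental domain adaptation with image-level (weak)
# labels. `detector` is any object exposing detect/update methods; these are
# hypothetical placeholders, not the paper's API.
def adapt_incrementally(detector, target_image_stream):
    for image, contains_pedestrian in target_image_stream:
        detections = detector.detect(image)      # candidate windows with scores
        if not detections:
            continue
        if contains_pedestrian:
            # MIL-style positive bag: treat the top-scoring window as positive.
            best = max(detections, key=lambda d: d.score)
            detector.update(image, positives=[best.box], negatives=[])
        else:
            # Image known to contain no pedestrians: all windows are negatives.
            detector.update(image, positives=[],
                            negatives=[d.box for d in detections])
    return detector
```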
 

 
Author G. Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella
  Title Hierarchical CRF with product label spaces for parts-based Models Type Conference Article
  Year 2011 Publication IEEE Conference on Automatic Face and Gesture Recognition Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRFs) have been successfully embedded into a discriminative part-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference FG  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RBT2011 Serial 1862  
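The core idea in this record is to extend a parent node's label set to the Cartesian product of its children's label sets and then prune it so inference stays tractable. The small sketch below only illustrates building and pruning such a product label space; the scoring and pruning criteria are assumptions, not the paper's method.

```python
# Illustrative product-label-space construction with simple unary pruning.
import itertools

def build_product_labels(child_label_scores, keep=50):
    """child_label_scores: list of 1D sequences, one per child part, giving a
    unary score for each candidate label of that child. Returns the `keep`
    highest-scoring label combinations (tuples of child label indices).
    Enumerates the full product, so it is only meant for small label sets."""
    combos = itertools.product(*[range(len(s)) for s in child_label_scores])
    scored = ((sum(s[i] for s, i in zip(child_label_scores, combo)), combo)
              for combo in combos)
    top = sorted(scored, key=lambda t: t[0], reverse=True)[:keep]
    return [combo for _, combo in top]
```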