Author Katerine Diaz; Aura Hernandez-Sabate; Antonio Lopez
Title A reduced feature set for driver head pose estimation  Type Journal Article
Year 2016  Publication Applied Soft Computing  Abbreviated Journal ASOC
Volume 45  Pages 98-107
  Keywords Head pose estimation; driving performance evaluation; subspace based methods; linear regression  
Abstract Evaluation of driving performance is of utmost importance for reducing the road accident rate. Since driving ability involves visual-spatial and operational attention, among other skills, the driver's head pose is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the tip of the nose. With these geometric features, our method combines two manifold embedding methods with linear regression. In addition, the method has a confidence mechanism to decide whether the classification of a sample is reliable. The approach has been tested on the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to state-of-the-art techniques. The method's low computational cost and robustness make it feasible to integrate into mass consumer devices as a real-time application.
Notes ADAS; 600.085; 600.076 Approved no
  Call Number Admin @ si @ DHL2016 Serial 2760  
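As an illustration of the kind of pipeline this abstract describes (geometric features from three facial keypoints, a low-dimensional embedding, and linear regression), here is a minimal sketch on synthetic data; the feature choices, the PCA/Ridge components, and all names are our own assumptions, not the authors' code:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline

    def geometric_features(left_eye, right_eye, nose):
        # pose-sensitive ratios/angles from the three facial keypoints
        le, re, n = map(np.asarray, (left_eye, right_eye, nose))
        eye_dist = np.linalg.norm(re - le)
        mid = (le + re) / 2.0
        return np.array([
            (n[0] - mid[0]) / eye_dist,                 # horizontal nose offset (yaw cue)
            (n[1] - mid[1]) / eye_dist,                 # vertical nose offset
            np.arctan2(re[1] - le[1], re[0] - le[0]),   # eye-line angle (roll cue)
        ])

    def synth_keypoints(yaw_deg):
        # toy face whose nose shifts horizontally with yaw
        le, re = np.array([-1.0, 0.0]), np.array([1.0, 0.0])
        nose = np.array([np.sin(np.radians(yaw_deg)), 0.6])
        return le, re, nose

    rng = np.random.default_rng(0)
    yaws = rng.uniform(-60.0, 60.0, 200)
    X = np.array([geometric_features(*synth_keypoints(y)) for y in yaws])
    model = make_pipeline(PCA(n_components=2), Ridge(alpha=1.0))
    model.fit(X, yaws)                                   # embedding + regression
    print(model.predict([geometric_features(*synth_keypoints(30.0))]))  # roughly 30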
 
Author Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira
Title Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives  Type Journal Article
Year 2016  Publication Robotics and Autonomous Systems  Abbreviated Journal RAS
Volume 83  Pages 312-325
  Keywords Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives  
Abstract As an autonomous vehicle travels through a scenario, it receives a continuous stream of sensor data. This sensor data arrives asynchronously and often contains overlapping or redundant information, so creating and updating a representation of the environment observed by the vehicle over time is not trivial. This paper presents a novel methodology for computing an incremental 3D representation of a scenario from 3D range measurements. We propose to model the scenario with macro-scale polygonal primitives; that is, the representation of the scene is a list of large-scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms to update the geometric polygonal primitives over time as fresh sensor data is collected. Results show that the approach produces accurate descriptions of the scene and is computationally very efficient compared with other reconstruction techniques.
Publisher Elsevier B.V.
Notes ADAS; 600.086; 600.076 Approved no
Call Number Admin @ si @ OSS2016a Serial 2806
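A rough sketch of the kind of macro-scale polygonal primitive this abstract refers to, extracted here by RANSAC plane fitting plus the 2D hull of the inliers; the paper's incremental update mechanisms are not reproduced, and all parameters are illustrative:

    import numpy as np
    from scipy.spatial import ConvexHull

    def ransac_plane(points, iters=200, thresh=0.05, seed=0):
        # fit one dominant plane: a point p0 and a unit normal n
        rng = np.random.default_rng(seed)
        best, best_inliers = None, None
        for _ in range(iters):
            p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            if np.linalg.norm(n) < 1e-9:
                continue                                  # degenerate sample
            n /= np.linalg.norm(n)
            inliers = np.abs((points - p0) @ n) < thresh
            if best is None or inliers.sum() > best_inliers.sum():
                best, best_inliers = (p0, n), inliers
        return best[0], best[1], points[best_inliers]

    # toy cloud: a noisy ground plane plus random clutter
    rng = np.random.default_rng(1)
    ground = np.c_[rng.uniform(-5, 5, (500, 2)), rng.normal(0, 0.02, 500)]
    clutter = rng.uniform(-5, 5, (100, 3))
    p0, n, inliers = ransac_plane(np.vstack([ground, clutter]))

    # the polygonal primitive: ordered boundary of the inliers on the plane
    u = np.cross(n, [0.0, 0.0, 1.0] if abs(n[2]) < 0.9 else [1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    uv = (inliers - p0) @ np.c_[u, v]                     # plane-local 2D coords
    polygon = uv[ConvexHull(uv).vertices]                 # polygon vertices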
 
Author Angel Sappa; P. Carvajal; Cristhian A. Aguilera-Carrasco; Miguel Oliveira; Dennis Romero; Boris Vintimilla
Title Wavelet based visible and infrared image fusion: a comparative study  Type Journal Article
Year 2016  Publication Sensors  Abbreviated Journal SENS
Volume 16  Issue 6  Pages 1-15
  Keywords Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform  
Abstract This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The evaluated options result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended, and sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
  Notes ADAS; 600.086; 600.076 Approved no  
Call Number Admin @ si @ SCA2016 Serial 2807
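For concreteness, a minimal sketch of one classical setup from the family this paper compares: wavelet decomposition of both registered images, a mean rule on the approximation band, and a max-absolute rule on the detail bands. The wavelet, level count, and rules here are placeholders, not the paper's best configuration:

    import numpy as np
    import pywt  # PyWavelets

    def dwt_fuse(visible, infrared, wavelet="db4", levels=3):
        cv = pywt.wavedec2(visible.astype(float), wavelet, level=levels)
        ci = pywt.wavedec2(infrared.astype(float), wavelet, level=levels)
        fused = [(cv[0] + ci[0]) / 2.0]                   # approximation: mean rule
        for dv, di in zip(cv[1:], ci[1:]):                # (cH, cV, cD) per level
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(dv, di)))  # details: max-abs rule
        return pywt.waverec2(fused, wavelet)

    vis = np.random.rand(128, 128)                        # stand-ins for a
    ir = np.random.rand(128, 128)                         # registered image pair
    fused = dwt_fuse(vis, ir)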
 
Author Angel Sappa; Cristhian A. Aguilera-Carrasco; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris Vintimilla; Ricardo Toledo
Title Monocular visual odometry: A cross-spectral image fusion based approach  Type Journal Article
Year 2016  Publication Robotics and Autonomous Systems  Abbreviated Journal RAS
Volume 85  Pages 26-36
  Keywords Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion  
Abstract This manuscript evaluates the use of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme in which fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using datasets obtained with two different platforms are presented. Additionally, comparisons with a previous approach, as well as with the monocular visible and infrared spectra, are provided, showing the advantages of the proposed scheme.
Publisher Elsevier B.V.
Notes ADAS; 600.086; 600.076 Approved no
Call Number Admin @ si @ SAC2016 Serial 2811
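The abstract mentions a mutual information based metric for choosing the fusion setup; a plain histogram-based version of such a score might look as follows (the bin count and the sum over both source images are our assumptions):

    import numpy as np

    def mutual_information(a, b, bins=64):
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()                            # joint distribution
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)          # marginals
        nz = pxy > 0
        return (pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum()

    def fusion_score(fused, visible, infrared):
        # higher = the fused image preserves more information from both spectra
        return mutual_information(fused, visible) + mutual_information(fused, infrared)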
 
Author Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira
Title Incremental texture mapping for autonomous driving  Type Journal Article
Year 2016  Publication Robotics and Autonomous Systems  Abbreviated Journal RAS
Volume 84  Pages 113-128
  Keywords Scene reconstruction; Autonomous driving; Texture mapping  
Abstract Autonomous vehicles have a large number of on-board sensors, not only to provide coverage all around the vehicle but also to ensure multi-modality in the observation of the scene. Because of this, it is not trivial to come up with a single, unified representation that draws on the data from all these sensors. We propose an algorithm capable of mapping texture collected from vision-based sensors onto a geometric description of the scenario constructed from data provided by 3D sensors. The algorithm uses a constrained Delaunay triangulation to produce a mesh, which is updated using a specially devised sequence of operations. These enforce a partial configuration of the mesh that avoids poor-quality textures and ensures that there are no gaps in the texture. Results show that the algorithm is capable of producing good-quality textures.
  Notes ADAS; 600.086 Approved no  
  Call Number Admin @ si @ OSS2016b Serial 2912  
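A toy sketch of the projection step that underlies texture mapping of this kind (pinhole model assumed; the constrained Delaunay mesh maintenance described in the abstract is not reproduced, and all names are illustrative):

    import numpy as np

    def project_uvs(vertices, K, R, t, img_w, img_h):
        # map 3D mesh vertices to normalized (u, v) texture coordinates
        cam = vertices @ R.T + t                  # world -> camera frame
        pix = cam @ K.T                           # apply camera intrinsics
        pix = pix[:, :2] / pix[:, 2:3]            # perspective divide
        uv = pix / np.array([img_w, img_h])       # normalize to [0, 1]
        in_view = (cam[:, 2] > 0) & (uv >= 0).all(axis=1) & (uv <= 1).all(axis=1)
        return uv, in_view   # texture only triangles whose 3 vertices are in view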
 
Author Jaume Amores
Title MILDE: multiple instance learning by discriminative embedding  Type Journal Article
Year 2015  Publication Knowledge and Information Systems  Abbreviated Journal KAIS
Volume 42  Issue 2  Pages 381-407
  Keywords Multi-instance learning; Codebook; Bag of words  
Abstract While the objective of the standard supervised learning problem is to classify feature vectors, in the multiple instance learning problem the objective is to classify bags, where each bag contains multiple feature vectors. This is a generalization of the standard problem, and the generalization becomes necessary in many real applications such as drug activity prediction, content-based image retrieval, and others. While existing paradigms learn the discriminant information either at the instance level or at the bag level, we propose to incorporate both levels of information. This is done by defining a discriminative embedding of the original space based on the responses of cluster-adapted instance classifiers. Results clearly show the advantage of the proposed method over the state of the art; performance was tested on a variety of well-known databases from real problems, together with an analysis on synthetically generated data.
Publisher Springer London
ISSN 0219-1377
  Notes ADAS; 601.042; 600.057; 600.076 Approved no  
  Call Number Admin @ si @ Amo2015 Serial 2383  
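In the spirit of the embedding this abstract describes (cluster-adapted instance classifiers whose responses become bag coordinates), a condensed sketch; KMeans, LinearSVC, and the max-pooling of responses are our own concrete choices, not necessarily the paper's:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def fit_cluster_classifiers(bags, bag_labels, k=10):
        # instances inherit their bag's label (a common MIL simplification)
        X = np.vstack(bags)
        y = np.concatenate([[l] * len(b) for l, b in zip(bag_labels, bags)])
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        clfs = []
        for c in range(k):
            m = km.labels_ == c
            # skip clusters that contain a single class
            clfs.append(LinearSVC().fit(X[m], y[m]) if len(set(y[m])) > 1 else None)
        return clfs

    def embed_bag(bag, clfs):
        # one coordinate per cluster: max response over the bag's instances
        return np.array([clf.decision_function(bag).max() if clf else 0.0
                         for clf in clfs])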
 
Author Naveen Onkarappa; Angel Sappa
Title Synthetic sequences and ground-truth flow field generation for algorithm validation  Type Journal Article
Year 2015  Publication Multimedia Tools and Applications  Abbreviated Journal MTAP
Volume 74  Issue 9  Pages 3121-3135
  Keywords Ground-truth optical flow; Synthetic sequence; Algorithm validation  
Abstract Research in computer vision advances through the availability of good datasets, which help to improve algorithms, validate results, and obtain comparative analyses. The datasets can be real or synthetic. For some computer vision problems, such as optical flow, no sensor can directly provide highly accurate ground truth in natural outdoor scenarios, although ground-truth data of real scenes can be obtained in a laboratory setup with limited motion. In this difficult situation, computer graphics offers a viable option for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and generate sequences as well as ground-truth flow fields. In particular, we generate a dataset containing sequences of driving scenarios. The sequences vary in the speed of the on-board vision system, the road texture, the complexity of the vehicle's motion, and the presence of independently moving vehicles in the scene. This dataset enables the analysis and adaptation of existing optical flow methods and can lead to new approaches, particularly for driver assistance systems.
Publisher Springer US
ISSN 1380-7501
  Notes ADAS; 600.055; 601.215; 600.076 Approved no  
  Call Number Admin @ si @ OnS2014b Serial 2472  
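The core computation such a framework relies on can be written compactly: with known per-pixel depth and known camera motion, the exact flow field follows from back-projection and re-projection (pinhole model and a static scene assumed; names are illustrative):

    import numpy as np

    def ground_truth_flow(depth, K, R, t):
        # flow induced by camera motion (R, t) given per-pixel depth in frame 1
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
        pts = (pix @ np.linalg.inv(K).T) * depth.reshape(-1, 1)  # back-project
        pts2 = pts @ R.T + t                                     # apply motion
        pix2 = pts2 @ K.T
        pix2 = pix2[:, :2] / pix2[:, 2:3]                        # re-project
        return (pix2 - pix[:, :2]).reshape(h, w, 2)              # (du, dv) field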
 
Author Monica Piñol; Angel Sappa; Ricardo Toledo
Title Adaptive Feature Descriptor Selection based on a Multi-Table Reinforcement Learning Strategy  Type Journal Article
Year 2015  Publication Neurocomputing  Abbreviated Journal NEUCOM
Volume 150  Issue A  Pages 106-115
  Keywords Reinforcement learning; Q-learning; Bag of features; Descriptors  
Abstract This paper presents and evaluates a framework to improve the performance of visual object classification methods that use image feature descriptors as inputs. The goal of the proposed framework is to learn the best descriptor for each image in a given database. This goal is reached by means of a reinforcement learning process using minimum information. The visual classification system used to demonstrate the proposed framework is based on a bag-of-features scheme, and the reinforcement learning technique is implemented through the Q-learning approach. The behavior of reinforcement learning with different state definitions is evaluated, and a method that combines all these states is formulated in order to select the optimal state. Finally, the available actions are drawn from the best image descriptors in the literature: PHOW, SIFT, C-SIFT, SURF, and Spin. Experimental results on two public databases (ETH and COIL) show both the validity of the proposed approach and comparisons with the state of the art. In all cases, the best results are obtained with the proposed approach.
  Notes ADAS; 600.055; 600.076 Approved no  
  Call Number Admin @ si @ PST2015 Serial 2473  
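A minimal tabular Q-learning loop in the spirit of the paper, where the action is the descriptor used for an image and the reward is classification success; the state definition, the constants, and the two placeholder functions are assumptions on our side:

    import numpy as np

    DESCRIPTORS = ["PHOW", "SIFT", "C-SIFT", "SURF", "Spin"]
    N_STATES = 8                         # e.g. coarse image-appearance bins
    Q = np.zeros((N_STATES, len(DESCRIPTORS)))
    alpha, eps = 0.1, 0.1                # learning rate, exploration rate
    rng = np.random.default_rng(0)

    def choose(state):
        if rng.random() < eps:           # epsilon-greedy action selection
            return int(rng.integers(len(DESCRIPTORS)))
        return int(np.argmax(Q[state]))

    def update(state, action, reward):
        # one-step episodes, so no discounted future term
        Q[state, action] += alpha * (reward - Q[state, action])

    # training loop; image_state() and classify_with() stand in for the
    # paper's state extractor and bag-of-features classifier:
    # for img, label in train_set:
    #     s = image_state(img)
    #     a = choose(s)
    #     r = 1.0 if classify_with(DESCRIPTORS[a], img) == label else 0.0
    #     update(s, a, r)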
 
Author Miguel Oliveira; Victor Santos; Angel Sappa
Title Multimodal Inverse Perspective Mapping  Type Journal Article
Year 2015  Publication Information Fusion  Abbreviated Journal IF
Volume 24  Pages 108-121
  Keywords Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles  
Abstract Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free-space estimation. There is, however, a significant limitation in inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.
  Notes ADAS; 600.055; 600.076 Approved no  
  Call Number Admin @ si @ OSS2015c Serial 2532  
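The essence of the proposal can be sketched in a few lines with OpenCV: warp with the ground-plane homography, but first blank out the image regions the laser range finder flagged as obstacles (H and the mask are assumed to be given; names are illustrative):

    import cv2
    import numpy as np

    def multimodal_ipm(image, H, obstacle_mask, out_size=(500, 500)):
        # obstacle_mask: uint8, 255 where the laser detected an obstacle
        road_only = cv2.bitwise_and(image, image,
                                    mask=cv2.bitwise_not(obstacle_mask))
        return cv2.warpPerspective(road_only, H, out_size)  # bird's-eye view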
 
Author T. Mouats; N. Aouf; Angel Sappa; Cristhian A. Aguilera-Carrasco; Ricardo Toledo
Title Multi-Spectral Stereo Odometry  Type Journal Article
Year 2015  Publication IEEE Transactions on Intelligent Transportation Systems  Abbreviated Journal TITS
Volume 16  Issue 3  Pages 1210-1224
  Keywords Egomotion estimation; feature matching; multispectral odometry (MO); optical flow; stereo odometry; thermal imagery  
Abstract In this paper, we investigate the problem of visual odometry for ground vehicles based on the simultaneous use of multispectral cameras. The approach uses a stereo rig composed of an optical (visible) sensor and a thermal sensor. The novelty resides in treating the cameras as a stereo setup rather than as two monocular cameras of different spectra; to the best of our knowledge, this is the first time such a task has been attempted. Log-Gabor wavelets at different orientations and scales are used to extract interest points from both images, which are then described using a combination of frequency and spatial information within the local neighborhood. Matches between the pairs of multimodal images are computed using the cosine similarity function on the descriptors. A pyramidal Lucas–Kanade tracker is also introduced to tackle temporal feature matching within challenging sequences of the datasets. The vehicle egomotion is computed from the triangulated 3-D points corresponding to the matched features. A windowed version of bundle adjustment incorporating Gauss–Newton optimization is used for motion estimation, and an outlier removal scheme is included to deal with outliers. Multispectral datasets corresponding to real outdoor scenarios captured with our multimodal setup were generated and used as a test bed. Finally, detailed results validating the proposed strategy are illustrated.
ISSN 1524-9050
  Notes ADAS; 600.055; 600.076 Approved no  
  Call Number Admin @ si @ MAS2015a Serial 2533  
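As a small illustration of the matching stage described above, cosine similarity between two descriptor sets with a mutual-consistency check (the Log-Gabor description itself is omitted; the threshold is an assumption):

    import numpy as np

    def cosine_match(desc_a, desc_b, min_sim=0.8):
        a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
        b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
        sim = a @ b.T                                   # pairwise cosine similarity
        ab, ba = sim.argmax(axis=1), sim.argmax(axis=0)
        return [(i, j) for i, j in enumerate(ab)        # keep mutual best matches
                if ba[j] == i and sim[i, j] >= min_sim]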