Author J. Poujol; Cristhian A. Aguilera-Carrasco; E. Danos; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa
  Title Visible-Thermal Fusion based Monocular Visual Odometry Type Conference Article
  Year 2015 Publication 2nd Iberian Robotics Conference ROBOT2015 Abbreviated Journal  
  Volume 417 Issue Pages 517-528  
  Keywords Monocular Visual Odometry; LWIR-RGB cross-spectral Imaging; Image Fusion.  
  Abstract The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which come from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectra are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both monocular visible-spectrum and monocular infrared-spectrum approaches are also provided, showing the validity of the proposed approach.
 
  Address Lisboa; Portugal; November 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2194-5357 ISBN 978-3-319-27145-3 Medium  
  Area Expedition Conference ROBOT  
  Notes ADAS; 600.076; 600.086 Approved no  
  Call Number Admin @ si @ PAD2015 Serial 2663  
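The DWT fusion strategy summarized in the record above can be illustrated with a short Python sketch using PyWavelets. The fusion rule here (averaged approximation band, max-abs detail bands) is a common default assumed for illustration, not necessarily the rule used in the paper.

```python
# Minimal sketch of DWT-based visible/thermal fusion. Assumes registered,
# equally sized single-channel images; the fusion rule is an assumption.
import numpy as np
import pywt

def dwt_fuse(visible, thermal, wavelet="haar"):
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible, wavelet)
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal, wavelet)

    # Approximation band: average the low-frequency content of both spectra.
    cA = 0.5 * (cA_v + cA_t)

    # Detail bands: keep the coefficient with the largest magnitude, so the
    # sharpest edges of either spectrum survive in the fused image.
    max_abs = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    details = (max_abs(cH_v, cH_t), max_abs(cV_v, cV_t), max_abs(cD_v, cD_t))

    return pywt.idwt2((cA, details), wavelet)
```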
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa; Javier Marin
  Title Virtual Worlds and Active Learning for Human Detection Type Conference Article
  Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal  
  Volume Issue Pages 393-400  
  Keywords Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning  
  Abstract Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is an intensive manual step, especially in cases like human detection where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as those of the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one trained in the same way from manually labelled real-world samples. To do that, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt from the combination of virtual and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
  Address Alicante, Spain  
  Corporate Author Thesis  
  Publisher ACM DL Place of Publication New York, NY, USA Editor
  Language English Summary Language English Original Title Virtual Worlds and Active Learning for Human Detection  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0641-6 Medium  
  Area Expedition Conference ICMI  
  Notes ADAS Approved yes  
  Call Number ADAS @ adas @ VLP2011a Serial 1683  
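As a rough sketch of the pipeline this abstract describes (train on automatically labelled virtual samples, then add a few actively selected real-world samples), the snippet below uses a linear SVM and plain uncertainty sampling as stand-ins for the paper's detector and its non-standard active learning technique; `label_fn` is a hypothetical oracle for the manual labelling step.

```python
import numpy as np
from sklearn.svm import LinearSVC

def adapt_with_active_learning(X_virtual, y_virtual, X_real_pool,
                               label_fn, n_queries=100):
    # Train an initial model on the automatically labelled virtual samples.
    clf = LinearSVC().fit(X_virtual, y_virtual)

    # Query the real-world samples the virtual-trained model is least sure
    # about (smallest decision margin) and have them labelled manually.
    margins = np.abs(clf.decision_function(X_real_pool))
    query_idx = np.argsort(margins)[:n_queries]
    y_queried = label_fn(query_idx)  # hypothetical human-labelling oracle

    # Retrain on the combination of virtual and real-world labelled samples.
    X = np.vstack([X_virtual, X_real_pool[query_idx]])
    y = np.concatenate([y_virtual, y_queried])
    return clf.fit(X, y)
```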
 

 
Author Jose Carlos Rubio; Joan Serrat; Antonio Lopez
  Title Video Co-segmentation Type Conference Article
  Year 2012 Publication 11th Asian Conference on Computer Vision Abbreviated Journal  
  Volume 7725 Issue Pages 13-24  
  Keywords  
  Abstract Segmentation of a single image is in general a highly underconstrained problem. A frequent approach to solve it is to somehow provide prior knowledge or constraints on what the objects of interest look like (in terms of their shape, size, color, location or structure). Image co-segmentation trades the need for such knowledge for something much easier to obtain, namely, additional images showing the object from other viewpoints. Now the segmentation problem is posed as one of differentiating the similar object regions in all the images from the more varying background. In this paper, for the first time, we extend this approach to video segmentation: given two or more video sequences showing the same object (or objects belonging to the same class) moving in a similar manner, we aim to outline its region in all the frames. In addition, the method works in an unsupervised manner, by learning to segment at testing time. We compare favorably with two state-of-the-art methods on video segmentation and report results on benchmark videos.
  Address Daejeon, Korea  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-37443-2 Medium  
  Area Expedition Conference ACCV  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RSL2012d Serial 2153  
 

 
Author Daniel Ponsa; Antonio Lopez
  Title Vehicle Trajectory Estimation based on Monocular Vision Type Conference Article
  Year 2007 Publication 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477 Abbreviated Journal  
  Volume Issue Pages 587-594  
  Keywords vehicle detection  
  Abstract  
  Address Girona (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ PoL2007a Serial 785  
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title Vehicle geolocalization based on video synchronization Type Conference Article
  Year 2010 Publication 13th Annual International Conference on Intelligent Transportation Systems Abbreviated Journal  
  Volume Issue Pages 1511–1516  
  Keywords video alignment  
  Abstract This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which finds, for each frame recorded by the camera on a second drive through the same track, the corresponding frame in the georeferenced video sequence. Once the corresponding frame is found, its geospatial information is transferred. The key advantages of this method are: 1) the increase of the update rate and the geospatial accuracy with regard to a standard low-cost GPS, and 2) the ability to localize a vehicle even when a GPS is not available or is not reliable enough, as in certain urban areas. Experimental results for urban environments are presented, showing an average relative accuracy of 1.5 meters.
 
  Address Madeira Island (Portugal)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2153-0009 ISBN 978-1-4244-7657-2 Medium  
  Area Expedition Conference ITSC  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DPS2010 Serial 1423  
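The geotag-transfer core of this method can be pictured as a windowed nearest-frame search, sketched below. The frame descriptors, window size and function names are illustrative assumptions; the paper's on-line video synchronization is more elaborate.

```python
import numpy as np

def transfer_geotag(live_desc, ref_descs, ref_gps, prev_match, window=30):
    # Search only a window around the previous match: this keeps the
    # alignment on-line and exploits the temporal ordering of both drives.
    lo = max(prev_match - window, 0)
    hi = min(prev_match + window, len(ref_descs))
    dists = np.linalg.norm(ref_descs[lo:hi] - live_desc, axis=1)
    match = lo + int(np.argmin(dists))

    # Transfer the geospatial tag of the best-matching reference frame.
    return ref_gps[match], match
```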
 

 
Author Alex Goldhoorn; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
  Title Using the Average Landmark Vector Method for Robot Homing Type Conference Article
  Year 2007 Publication Artificial Intelligence Research and Development, Proceedings of the 10th International Conference of the ACIA Abbreviated Journal  
  Volume 163 Issue Pages 331–338  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-58603-798-7 Medium
  Area Expedition Conference CCIA’07  
  Notes RV;ADAS Approved no  
  Call Number Admin @ si @ GRL2007 Serial 899  
 

 
Author Miguel Oliveira; Angel Sappa; V. Santos
  Title Unsupervised Local Color Correction for Coarsely Registered Images Type Conference Article
  Year 2011 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
  Volume Issue Pages 201-208  
  Keywords  
  Abstract The current paper proposes a new parametric local color correction technique. First, several color transfer functions are computed from the output of the mean shift color segmentation algorithm. Second, color influence maps are calculated. Finally, the contributions of the color transfer functions are merged using the weights from the color influence maps. The proposed approach is compared with both global and local color correction approaches. Results show that our method outperforms the technique ranked first in a recent performance evaluation on this topic. Moreover, the proposed approach is computed in about one tenth of the time.
  Address Colorado Springs  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4577-0394-2 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS Approved no  
  Call Number Admin @ si @ OSS2011; ADAS @ adas @ Serial 1766  
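A simplified sketch of the local color correction pipeline this abstract outlines: one colour transfer function per segmented region, merged through colour influence maps. The per-region gain transfer and Gaussian influence weights are simplifying assumptions, not the paper's exact parametric model, and the segment labels are assumed to come from a mean shift segmentation.

```python
import numpy as np

def local_color_correct(src, ref, labels, sigma=20.0):
    # src, ref: coarsely registered single-channel float images (H, W);
    # labels: integer segment ids from a mean shift segmentation of src.
    out = np.zeros_like(src, dtype=np.float64)
    weight_sum = np.zeros_like(out)
    for seg in np.unique(labels):
        mask = labels == seg
        mu = src[mask].mean()
        # One transfer function per region: match the region's mean to ref.
        gain = ref[mask].mean() / max(mu, 1e-6)
        # Influence map: pixels whose intensity is close to the region mean
        # are corrected most strongly by this region's transfer function.
        w = np.exp(-((src - mu) ** 2) / (2 * sigma ** 2))
        out += w * gain * src
        weight_sum += w
    # Merge all transfer functions, weighted by their influence maps.
    return out / np.maximum(weight_sum, 1e-6)
```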
 

 
Author David Vazquez; Antonio Lopez; Daniel Ponsa
  Title Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection Type Conference Article
  Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 3492 - 3495  
  Keywords Pedestrian Detection; Domain Adaptation; Virtual worlds  
  Abstract Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate, and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, the learning of each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoiding manual labelling consisted in using samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).
  Address Tsukuba Science City, Japan  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Tsukuba Science City, Japan Editor
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium  
  Area Expedition Conference ICPR  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ VLP2012 Serial 1981  
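The adaptation idea can be approximated by the self-training loop below, written in the spirit of a transductive SVM: repeatedly pseudo-label the unlabelled real-world samples the current model scores most confidently and retrain on them. scikit-learn ships no T-SVM, so this is a simplified stand-in rather than the authors' exact algorithm; binary {0, 1} labels are assumed.

```python
import numpy as np
from sklearn.svm import LinearSVC

def tsvm_like_adapt(X_virtual, y_virtual, X_real, rounds=5, top_k=200):
    clf = LinearSVC().fit(X_virtual, y_virtual)
    for _ in range(rounds):
        scores = clf.decision_function(X_real)
        # Pseudo-label the real samples with the largest decision margins.
        confident = np.argsort(-np.abs(scores))[:top_k]
        X = np.vstack([X_virtual, X_real[confident]])
        y = np.concatenate([y_virtual, (scores[confident] > 0).astype(int)])
        clf = LinearSVC().fit(X, y)
    return clf
```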
 

 
Author Jose Carlos Rubio; Joan Serrat; Antonio Lopez
  Title Unsupervised co-segmentation through region matching Type Conference Article
  Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 749-756  
  Keywords  
  Abstract Co-segmentation is defined as jointly partitioning multiple images depicting the same or similar object into foreground and background. Our method consists of a multiple-scale multiple-image generative model, which jointly estimates the foreground and background appearance distributions from several images, in an unsupervised manner. In contrast to other co-segmentation methods, our approach does not require the images to have similar foregrounds and different backgrounds to function properly. Region matching is applied to exploit inter-image information by establishing correspondences between the common objects that appear in the scene. Moreover, computing many-to-many associations of regions allows further applications, like recognition of object parts across images. We report results on iCoseg, a challenging dataset that presents extreme variability in camera viewpoint, illumination and object deformations and poses. We also show that our method is robust against large intra-class variability in the MSRC database.
  Address Providence, Rhode Island  
  Corporate Author Thesis  
  Publisher IEEE Xplore Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium  
  Area Expedition Conference CVPR  
  Notes ADAS Approved no  
  Call Number Admin @ si @ RSL2012b; ADAS @ adas @ Serial 2033  
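One way to picture the joint appearance estimation step is to fit a shared foreground and background colour model across all images at once and label pixels by likelihood ratio, as below. The Gaussian mixtures and the rough seed masks are illustrative assumptions; the paper's multi-scale generative model with region matching is considerably richer.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cosegment(images, seed_masks, n_components=5):
    # images: list of (H, W, 3) float arrays; seed_masks: rough boolean
    # foreground guesses used only to initialise the joint colour models.
    pixels = np.concatenate([im.reshape(-1, 3) for im in images])
    seeds = np.concatenate([m.reshape(-1) for m in seed_masks])

    # Joint estimation: one foreground and one background model shared by
    # all images, so the common object reinforces itself across views.
    fg = GaussianMixture(n_components).fit(pixels[seeds])
    bg = GaussianMixture(n_components).fit(pixels[~seeds])

    # Label every pixel of every image by foreground/background likelihood.
    return [(fg.score_samples(im.reshape(-1, 3)) >
             bg.score_samples(im.reshape(-1, 3))).reshape(im.shape[:2])
            for im in images]
```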
 

 
Author Jiaolong Xu; Peng Wang; Heng Yang; Antonio Lopez
  Title Training a Binary Weight Object Detector by Knowledge Transfer for Autonomous Driving Type Conference Article
  Year 2019 Publication IEEE International Conference on Robotics and Automation Abbreviated Journal  
  Volume Issue Pages 2379-2384  
  Keywords  
  Abstract Autonomous driving has harsh requirements of small model size and energy efficiency, in order to enable the embedded system to achieve real-time on-board object detection. Recent deep convolutional neural network based object detectors have achieved state-of-the-art accuracy. However, such models are trained with numerous parameters, and their high computational costs and large storage prohibit deployment to memory- and computation-resource-limited systems. Low-precision neural networks are popular techniques for reducing the computation requirements and memory footprint. Among them, the binary weight neural network (BWN) is the extreme case, which quantizes the floating-point weights into just one bit. BWNs are difficult to train and suffer from accuracy degradation due to the extreme low-bit representation. To address this problem, we propose a knowledge transfer (KT) method to aid the training of BWNs using a full-precision teacher network. We build DarkNet- and MobileNet-based binary weight YOLO-v2 detectors and conduct experiments on the KITTI benchmark for car, pedestrian and cyclist detection. The experimental results show that the proposed method maintains high detection accuracy while reducing the model size of DarkNet-YOLO from 257 MB to 8.8 MB and MobileNet-YOLO from 193 MB to 7.9 MB.
  Address Montreal; Canada; May 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICRA  
  Notes ADAS; 600.124; 600.116; 600.118 Approved no  
  Call Number Admin @ si @ XWY2018 Serial 3182  
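A compact PyTorch sketch of the two ingredients the abstract names: binary weight quantization with a straight-through estimator, and a knowledge-transfer loss pulling the binary-weight student towards the full-precision teacher. The L2 feature-mimic term is a common distillation choice assumed here; the paper's exact KT formulation may differ.

```python
import torch

class BinarizeWeight(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w):
        # BWN: sign of the weights, rescaled by their mean magnitude.
        return w.sign() * w.abs().mean()

    @staticmethod
    def backward(ctx, grad_out):
        # Straight-through estimator: gradients flow to the float weights.
        return grad_out

def kt_loss(student_feat, teacher_feat, det_loss, alpha=0.5):
    # Detection loss plus an L2 term pulling the student's intermediate
    # features towards the full-precision teacher's (the KT signal).
    mimic = torch.nn.functional.mse_loss(student_feat, teacher_feat)
    return det_loss + alpha * mimic

# Illustrative use inside a layer's forward pass:
#   w_b = BinarizeWeight.apply(self.conv.weight)
#   out = torch.nn.functional.conv2d(x, w_b, padding=1)
```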