Author: J. Poujol; Cristhian A. Aguilera-Carrasco; E. Danos; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa
Title: Visible-Thermal Fusion based Monocular Visual Odometry
Type: Conference Article
Year: 2015
Publication: 2nd Iberian Robotics Conference (ROBOT2015)
Volume: 417
Pages: 517-528
Keywords: Monocular visual odometry; LWIR-RGB cross-spectral imaging; image fusion
Abstract: The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which come from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) greater robustness to noisy data; ii) less sensitivity to changes (e.g., lighting); iii) richer descriptive information; among others. In particular, two different image fusion strategies are considered in the current work. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular-visible and the monocular-infrared spectrum are also provided, showing the validity of the proposed approach.
Address: Lisboa; Portugal; November 2015
Publisher: Springer International Publishing
ISSN: 2194-5357
ISBN: 978-3-319-27145-3
Conference: ROBOT
Notes: ADAS; 600.076; 600.086
Approved: no
Call Number: Admin @ si @ PAD2015
Serial: 2663
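The first fusion strategy named in this abstract is DWT-based fusion of the aligned visible and thermal images. The sketch below shows one common DWT fusion rule (average the approximation bands, keep the detail coefficient with the larger magnitude); the paper does not publish its exact rule, so the wavelet and the fusion operators here are assumptions.

    # A minimal sketch, assuming PyWavelets (pywt) and NumPy; the Haar wavelet and
    # the average/max-absolute fusion rule are illustrative choices, not the
    # paper's published configuration.
    import numpy as np
    import pywt

    def dwt_fuse(visible, thermal, wavelet="haar"):
        """Fuse two aligned, equally sized single-channel images."""
        cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visible.astype(float), wavelet)
        cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal.astype(float), wavelet)
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)   # keep the stronger detail
        fused = (0.5 * (cA_v + cA_t),                                # average the low-frequency band
                 (pick(cH_v, cH_t), pick(cV_v, cV_t), pick(cD_v, cD_t)))
        return pywt.idwt2(fused, wavelet)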
 

 
Author: M. Cruz; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa
Title: Cross-spectral image registration and fusion: an evaluation study
Type: Conference Article
Year: 2015
Publication: 2nd International Conference on Machine Vision and Machine Learning
Keywords: Multispectral imaging; image registration; data fusion; infrared and visible spectra
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral imaging. The objective is to evaluate the validity of widely used computer vision approaches when they are applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long wave infrared: LWIR and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
Address: Barcelona; July 2015
Conference: MVML
Notes: ADAS; 600.076
Approved: no
Call Number: Admin @ si @ CAV2015
Serial: 2629
 

 
Author: Marçal Rusiñol; David Aldavert; Ricardo Toledo; Josep Llados
Title: Towards Query-by-Speech Handwritten Keyword Spotting
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 501-505
Abstract: In this paper, we present a new querying paradigm for handwritten keyword spotting. We propose to represent handwritten word images by both visual and audio representations, enabling a query-by-speech keyword spotting system. The two representations are merged together and projected to a common sub-space in the training phase. This transform makes it possible, given a spoken query, to retrieve word instances that were only represented by the visual modality. In addition, the same method can be used backwards at no additional cost to produce a handwritten text-to-speech system. We present our first results on this new querying mechanism using synthetic voices over the George Washington dataset.
Address: Nancy; France; August 2015
Conference: ICDAR
Notes: DAG; 600.084; 600.061; 601.223; 600.077; ADAS
Approved: no
Call Number: Admin @ si @ RAT2015b
Serial: 2682
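The retrieval mechanics described here (pair visual and audio descriptors of training words, learn a shared representation, then answer spoken queries against a visual-only index) can be sketched roughly as follows. The ridge-regression mapping is an illustrative stand-in for whatever common sub-space projection the paper actually learns, and all array names are hypothetical.

    # A minimal sketch, assuming NumPy only. X_aud (N x Da) and X_vis (N x Dv) are
    # paired training descriptors of the same N words; the ridge map is an assumed
    # stand-in for the paper's common sub-space projection.
    import numpy as np

    def fit_audio_to_visual_map(X_aud, X_vis, lam=1e-2):
        """Least-squares map W so that audio @ W approximates the visual descriptor."""
        A = X_aud.T @ X_aud + lam * np.eye(X_aud.shape[1])
        return np.linalg.solve(A, X_aud.T @ X_vis)            # Da x Dv

    def spot_keyword(audio_query, W, X_vis_database, top_k=10):
        """Rank word images (visual-only index) for one spoken query."""
        q = audio_query @ W                                   # project the query into visual space
        q = q / (np.linalg.norm(q) + 1e-12)
        db = X_vis_database / (np.linalg.norm(X_vis_database, axis=1, keepdims=True) + 1e-12)
        return np.argsort(-(db @ q))[:top_k]                  # cosine-similarity ranking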
 

 
Author: Miguel Oliveira; L. Seabra Lopes; G. Hyun Lim; S. Hamidreza Kasaei; Angel Sappa; A. Tom
Title: Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains
Type: Conference Article
Year: 2015
Publication: International Conference on Intelligent Robots and Systems (IROS)
Pages: 2488-2495
Keywords: Visual learning; computer vision; autonomous agents
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach has similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system, with concurrent learning of object categories and codebooks, is capable of learning more categories, requiring fewer examples, and with similar accuracies, compared to the classical Bag of Words approach using offline-constructed codebooks.
Address: Hamburg; Germany; October 2015
Conference: IROS
Notes: ADAS; 600.076
Approved: no
Call Number: Admin @ si @ OSL2015
Serial: 2664
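The two operations the abstract couples together, encoding an object view against a GMM codebook and updating that codebook as new views arrive, can be illustrated as below. The incremental update shown (a running-mean shift of the best-matching component) is a simplification introduced here, not the paper's actual GMM update, and the class layout is hypothetical.

    # A minimal sketch, assuming NumPy and SciPy; the running-mean update is an
    # assumed simplification of the paper's online GMM codebook update.
    import numpy as np
    from scipy.stats import multivariate_normal

    class GMMCodebook:
        def __init__(self, means, covs, counts):
            self.means, self.covs, self.counts = means, covs, counts   # K x D, K x D x D, K

        def encode(self, feats):
            """Soft-assignment Bag-of-Words histogram for one object view (M x D features)."""
            resp = np.stack([multivariate_normal(m, c, allow_singular=True).pdf(feats)
                             for m, c in zip(self.means, self.covs)], axis=1)   # M x K
            resp /= resp.sum(axis=1, keepdims=True) + 1e-12
            return resp.sum(axis=0) / len(feats)

        def update(self, feats):
            """Shift the closest codebook word toward each feature of a new view."""
            for x in feats:
                k = int(np.argmin(np.linalg.norm(self.means - x, axis=1)))
                self.counts[k] += 1
                self.means[k] += (x - self.means[k]) / self.counts[k]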
 

 
Author: Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias
Title: Scene Representations for Autonomous Driving: an approach based on polygonal primitives
Type: Conference Article
Year: 2015
Publication: 2nd Iberian Robotics Conference (ROBOT2015)
Volume: 417
Pages: 503-515
Keywords: Scene reconstruction; point cloud; autonomous vehicles
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Address: Lisboa; Portugal; November 2015
Conference: ROBOT
Notes: ADAS; 600.076; 600.086
Approved: no
Call Number: Admin @ si @ OSS2015a
Serial: 2662
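As a rough illustration of extracting macro-scale polygonal primitives from a point cloud, the sketch below iteratively fits planes with RANSAC and keeps the convex hull of each inlier set as the polygon. Open3D is an assumed tool; the paper's own boundary extraction and primitive refinement steps are not reproduced.

    # A minimal sketch, assuming Open3D; thresholds are illustrative values.
    import open3d as o3d

    def extract_planar_primitives(pcd, max_planes=8, dist_thresh=0.05, min_inliers=500):
        primitives, rest = [], pcd
        for _ in range(max_planes):
            if len(rest.points) < min_inliers:
                break
            plane, inliers = rest.segment_plane(distance_threshold=dist_thresh,
                                                ransac_n=3, num_iterations=1000)
            if len(inliers) < min_inliers:
                break
            patch = rest.select_by_index(inliers)
            hull, _ = patch.compute_convex_hull()        # polygonal outline of the planar patch
            primitives.append((plane, hull))             # plane = [a, b, c, d] coefficients
            rest = rest.select_by_index(inliers, invert=True)
        return primitives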
 

 
Author: Ariel Amato; Felipe Lumbreras; Angel Sappa
Title: A General-purpose Crowdsourcing Platform for Mobile Devices
Type: Conference Article
Year: 2014
Publication: 9th International Conference on Computer Vision Theory and Applications (VISAPP)
Volume: 3
Pages: 211-215
Keywords: Crowdsourcing platform; mobile crowdsourcing
Abstract: This paper presents details of a general-purpose, micro-task, on-demand platform based on the crowdsourcing philosophy. This platform was specifically developed for mobile devices in order to exploit the strengths of such devices, namely: i) massivity, ii) ubiquity and iii) embedded sensors. The combined use of mobile platforms and the crowdsourcing model makes it possible to tackle tasks ranging from the simplest to the most complex. User experience is the highlighted feature of this platform, for both the task proposer and the task solver. Tools appropriate to a specific task are provided to the task solver so that the job can be performed in a simpler, faster and more appealing way. Moreover, a task can be easily submitted by just selecting predefined templates, which cover a wide range of possible applications. Examples of its usage in computer vision and computer games are provided, illustrating the potential of the platform.
Address: Lisboa; Portugal; January 2014
Conference: VISAPP
Notes: ISE; ADAS; 600.054; 600.055; 600.076; 600.078
Approved: no
Call Number: Admin @ si @ ALS2014
Serial: 2478
 

 
Author: Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title: Incremental Domain Adaptation of Deformable Part-based Models
Type: Conference Article
Year: 2014
Publication: 25th British Machine Vision Conference (BMVC)
Keywords: Pedestrian detection; part-based models; domain adaptation
Abstract: Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption for learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution with regard to the features used by the classifiers. However, in practice, there are different reasons that can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario, i.e. incremental domain adaptation for object detection assuming weak labeling. Therefore, our algorithm is ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and keeps improving as more of such data arrives in a continuous fashion. For achieving this, we follow a multiple instance learning (MIL) paradigm that operates on an incremental per-image basis. As proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models obtain accuracy as good as the batch-learned models, while being more flexible for handling continuously arriving target-domain data.
Address: Nottingham; UK; September 2014
Publisher: BMVA Press
Editors: Michel Valstar, Andrew French and Tony Pridmore
Conference: BMVC
Notes: ADAS; 600.057; 600.054; 600.076
Approved: no
Call Number: XRV2014c; ADAS @ adas @ xrv2014c
Serial: 2455
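The incremental, per-image adaptation loop described above can be caricatured as follows. A linear detector with SGD updates stands in for the DPM, the MIL latent step is reduced to picking the highest-scoring window inside each weak annotation, and the data names (X_source, y_source, target_image_stream) are hypothetical; the sketch conveys only the control flow, not the paper's actual learning machinery.

    # A minimal sketch, assuming scikit-learn and NumPy; X_source, y_source and
    # target_image_stream are hypothetical placeholders for real data sources.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    detector = SGDClassifier(loss="hinge", alpha=1e-4)
    detector.partial_fit(X_source, y_source, classes=[0, 1])      # source-trained detector

    for feats_in_box, feats_background in target_image_stream():  # one target image at a time
        # MIL-style latent step: the best-scoring window inside the weak box is the positive.
        pos = feats_in_box[np.argmax(detector.decision_function(feats_in_box))]
        # Highest-scoring background windows act as hard negatives.
        neg = feats_background[np.argsort(detector.decision_function(feats_background))[-5:]]
        X_new = np.vstack([pos[None, :], neg])
        y_new = np.array([1] + [0] * len(neg))
        detector.partial_fit(X_new, y_new)                        # incremental per-image update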
 

 
Author: Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title: Cost-sensitive Structured SVM for Multi-category Domain Adaptation
Type: Conference Article
Year: 2014
Publication: 22nd International Conference on Pattern Recognition (ICPR)
Pages: 3886-3891
Keywords: Domain adaptation; pedestrian detection
Abstract: Domain adaptation addresses the problem of the accuracy drop that a classifier may suffer when the training data (source domain) and the testing data (target domain) are drawn from different distributions. In this work, we focus on domain adaptation for the structured SVM (SSVM). We propose a cost-sensitive domain adaptation method for SSVM, namely COSS-SSVM. In particular, during the re-training of an adapted classifier based on target and source data, the idea that we explore consists in introducing a non-zero cost even for correctly classified source-domain samples. Eventually, we aim to learn a more target-oriented classifier by not rewarding (zero loss) properly classified source-domain training samples. We assess the effectiveness of COSS-SSVM on multi-category object recognition.
Address: Stockholm; Sweden; August 2014
Publisher: IEEE
ISSN: 1051-4651
Conference: ICPR
Notes: ADAS; 600.057; 600.054; 601.217; 600.076
Approved: no
Call Number: ADAS @ adas @ XRV2014a
Serial: 2434
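The central idea, that correctly classified source-domain samples still incur a non-zero cost during re-training, can be conveyed with a scalar hinge-loss caricature. COSS-SSVM itself is defined on the structured SVM margin; the floor value and the binary setting below are assumptions made only for illustration.

    # A minimal sketch, assuming nothing beyond the Python standard library; the
    # source_floor constant is an illustrative assumption, not the paper's cost.
    def cost_sensitive_hinge(score, y, is_source, source_floor=0.1):
        """score: decision value; y in {-1, +1}; is_source: sample comes from the source domain."""
        margin_loss = max(0.0, 1.0 - y * score)      # standard hinge term
        if is_source:
            return max(margin_loss, source_floor)    # never zero, even when correctly classified
        return margin_loss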
 

 
Author: Mohammad Rouhani; E. Boyer; Angel Sappa
Title: Non-Rigid Registration meets Surface Reconstruction
Type: Conference Article
Year: 2014
Publication: International Conference on 3D Vision (3DV)
Pages: 617-624
Abstract: Non-rigid registration is an important task in computer vision with many applications in shape and motion modeling. A fundamental step of the registration is the data association between the source and the target sets. Such association proves difficult in practice, due to the discrete nature of the information and its corruption by various types of noise, e.g. outliers and missing data. In this paper we investigate the benefit of implicit representations for the non-rigid registration of 3D point clouds. First, the target points are described with small quadratic patches that are blended through partition of unity weighting. Then, the discrete association between the source and the target can be replaced by a continuous distance field induced by the interface. By combining this distance field with a proper deformation term, the registration energy can be expressed in a linear least squares form that is easy and fast to solve. This significantly eases the registration by avoiding direct association between points. Moreover, a hierarchical approach can be easily implemented by employing coarse-to-fine representations. Experimental results are provided for point clouds from multi-view data sets. The qualitative and quantitative comparisons show the superior performance and robustness of our framework.
Address: Tokyo; Japan; December 2014
Conference: 3DV
Notes: ADAS; 600.055; 600.076
Approved: no
Call Number: Admin @ si @ RBS2014
Serial: 2534
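A hedged rendering of the energy this abstract describes, in our own notation (the paper's exact formulation may differ): f is the implicit field built by blending quadratic patches around the target points, p_i are source points, d_i their displacements, and R is a deformation regularizer; linearizing f around each p_i turns E into a linear least squares problem in the displacements.

    E(d_1,\dots,d_n) = \sum_i f(p_i + d_i)^2 + \lambda\, R(d_1,\dots,d_n),
    \qquad
    f(p_i + d_i) \approx f(p_i) + \nabla f(p_i)^{\top} d_i .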
 

 
Author: Naveen Onkarappa; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa
Title: Cross-spectral Stereo Correspondence using Dense Flow Fields
Type: Conference Article
Year: 2014
Publication: 9th International Conference on Computer Vision Theory and Applications (VISAPP)
Volume: 3
Pages: 613-617
Keywords: Cross-spectral stereo correspondence; dense optical flow; infrared and visible spectrum
Abstract: This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the usage of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained, showing the validity of the proposed approach.
Address: Lisboa; Portugal; January 2014
Conference: VISAPP
Notes: ADAS; 600.055; 600.076
Approved: no
Call Number: Admin @ si @ OAV2014
Serial: 2477
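The representation change this abstract proposes, matching in flow-field space rather than between raw cross-spectral intensities, can be sketched as below. Farneback flow and a window-based SAD cost are assumed stand-ins for whichever dense flow method and classical cost function the paper actually uses.

    # A minimal sketch, assuming OpenCV and NumPy; the specific flow algorithm and
    # cost function are illustrative assumptions.
    import cv2
    import numpy as np

    def flow_field(prev_gray, next_gray):
        """Dense flow between two consecutive frames of the same spectrum (H x W x 2)."""
        return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

    def sad_cost(flow_left, flow_right, row, col_l, col_r, win=7):
        """Classical SAD matching cost, evaluated on flow vectors instead of intensities."""
        h = win // 2
        a = flow_left[row - h:row + h + 1, col_l - h:col_l + h + 1]
        b = flow_right[row - h:row + h + 1, col_r - h:col_r + h + 1]
        return float(np.abs(a - b).sum())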