Author Aura Hernandez-Sabate; Debora Gil; Jaume Garcia; Enric Marti
Title Image-based Cardiac Phase Retrieval in Intravascular Ultrasound Sequences Type Journal Article
Year 2011 Publication IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control Abbreviated Journal T-UFFC
Volume 58 Issue 1 Pages 60-72
Keywords 3-D exploring; ECG; band-pass filter; cardiac motion; cardiac phase retrieval; coronary arteries; electrocardiogram signal; image intensity local mean evolution; image-based cardiac phase retrieval; in vivo pullbacks acquisition; intravascular ultrasound sequences; longitudinal motion; signal extrema; time 36 ms; band-pass filters; biomedical ultrasonics; cardiovascular system; electrocardiography; image motion analysis; image retrieval; image sequences; medical image processing; ultrasonic imaging
Abstract Longitudinal motion during in vivo pullback acquisition of intravascular ultrasound (IVUS) sequences is a major artifact for 3-D exploration of coronary arteries. Most current techniques obtain a gated pullback without longitudinal motion by relying on the electrocardiogram (ECG), either through specific hardware or through the ECG signal itself. We present an image-based approach for cardiac phase retrieval from coronary IVUS sequences that does not require an ECG signal. A signal reflecting cardiac motion is computed by exploring the evolution of the local mean of the image intensity. The signal is filtered by a band-pass filter centered at the main cardiac frequency, and the phase is retrieved by computing the signal extrema. The average frame processing time with our setup is 36 ms. Comparison to manually sampled sequences encourages a deeper study comparing them to ECG signals.
ISSN 0885-3010
Notes IAM; ADAS Approved no
Call Number IAM @ iam @ HGG2011 Serial 1546
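
Since the abstract above outlines a concrete pipeline (per-frame intensity mean, band-pass filtering around the cardiac frequency, extrema as phase markers), here is a minimal Python sketch of that idea. The frame rate, band edges, function names and the synthetic usage data are illustrative assumptions, not the paper's tuned values.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def cardiac_phase_markers(frames, fps, low_hz=0.5, high_hz=3.0):
    # Signal reflecting cardiac motion: evolution of the mean image intensity.
    mean_signal = frames.reshape(frames.shape[0], -1).mean(axis=1)
    mean_signal = mean_signal - mean_signal.mean()

    # Band-pass filter covering the expected cardiac frequency band.
    nyq = 0.5 * fps
    b, a = butter(2, [low_hz / nyq, high_hz / nyq], btype="band")
    filtered = filtfilt(b, a, mean_signal)

    # Phase retrieval: signal extrema mark one frame per cardiac cycle.
    peaks, _ = find_peaks(filtered)
    return filtered, peaks

# Synthetic usage stand-in for an IVUS pullback at 30 frames/s.
fps = 30.0
t = np.arange(600) / fps
frames = np.random.rand(600, 64, 64) + np.sin(2 * np.pi * 1.2 * t)[:, None, None]
_, gated_frames = cardiac_phase_markers(frames, fps)
print(gated_frames[:5])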
 

 
Author Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Ludmila I. Kuncheva
Title Occlusion handling via random subspace classifiers for human detection Type Journal Article
Year 2014 Publication IEEE Transactions on Systems, Man, and Cybernetics (Part B) Abbreviated Journal TSMCB
Volume 44 Issue 3 Pages 342-354
Keywords Pedestrian Detection; occlusion handling
Abstract This paper describes a general method to address partial occlusions for human detection in still images. The Random Subspace Method (RSM) is chosen for building a classifier ensemble robust against partial occlusions. The component classifiers are chosen on the basis of their individual and combined performance. The main contribution of this work lies in our approach’s capability to improve the detection rate when partial occlusions are present without compromising the detection performance on non-occluded data. In contrast to many recent approaches, we propose a method which does not require manual labelling of body parts, defining any semantic spatial components, or using additional data coming from motion or stereo. Moreover, the method can be easily extended to other object classes. The experiments are performed on three large datasets: the INRIA person dataset, the Daimler Multicue dataset, and a new challenging dataset, called PobleSec, in which a considerable number of targets are partially occluded. The different approaches are evaluated at the classification and detection levels for both partially occluded and non-occluded data. The experimental results show that our detector outperforms state-of-the-art approaches in the presence of partial occlusions, while offering performance and reliability similar to those of the holistic approach on non-occluded data. The datasets used in our experiments have been made publicly available for benchmarking purposes.
ISSN 2168-2267
Notes ADAS; 605.203; 600.057; 600.054; 601.042; 601.187; 600.076 Approved no
Call Number ADAS @ adas @ MVL2014 Serial 2213
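
A rough sketch of the Random Subspace idea referenced in the abstract: each component classifier is trained on a random subset of the feature dimensions, and their margins are averaged at test time. It assumes precomputed feature vectors X and labels y, uses a plain LinearSVC for the components, and omits the paper's component-selection step.

import numpy as np
from sklearn.svm import LinearSVC

class RandomSubspaceEnsemble:
    def __init__(self, n_members=10, subspace_frac=0.5, seed=0):
        self.n_members = n_members
        self.subspace_frac = subspace_frac
        self.rng = np.random.default_rng(seed)
        self.members = []  # pairs (feature index array, fitted classifier)

    def fit(self, X, y):
        self.members = []
        n_features = X.shape[1]
        k = max(1, int(self.subspace_frac * n_features))
        for _ in range(self.n_members):
            idx = self.rng.choice(n_features, size=k, replace=False)
            clf = LinearSVC(C=1.0).fit(X[:, idx], y)
            self.members.append((idx, clf))
        return self

    def decision_function(self, X):
        # Average the margins of the component classifiers; an occluded region
        # only degrades the members whose subspace overlaps it.
        scores = [clf.decision_function(X[:, idx]) for idx, clf in self.members]
        return np.mean(scores, axis=0)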
 

 
Author Jaume Amores; N. Sebe; Petia Radeva
Title Context-Based Object-Class Recognition and Retrieval by Generalized Correlograms Type Journal
Year 2007 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence (ISI IF: 3.81)
Volume 29 Issue 10 Pages 1818-1833
Notes ADAS; MILAB Approved no
Call Number ADAS @ adas @ ASR2007b Serial 922
 

 
Author Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title Domain Adaptation of Deformable Part-Based Models Type Journal Article
Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 36 Issue 12 Pages 2367-2380
Keywords Domain Adaptation; Pedestrian Detection
Abstract The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors.
ISSN 0162-8828
Notes ADAS; 600.057; 600.054; 601.217; 600.076 Approved no
Call Number ADAS @ adas @ XRV2014b Serial 2436
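
A toy illustration of the adaptation principle behind A-SSVM as summarized above: keep the adapted weights close to a pre-learned source-domain detector while fitting a small set of target-domain samples. A plain hinge loss on a linear model stands in for the structural SVM over DPM parts; the function name, the subgradient solver and the hyper-parameters are assumptions.

import numpy as np

def adapt_linear_detector(w_src, X_tgt, y_tgt, C=1.0, lr=1e-3, epochs=200):
    """y_tgt in {-1, +1}; minimizes 0.5*||w - w_src||^2 + C * sum of hinge losses."""
    w = w_src.copy()
    for _ in range(epochs):
        margins = y_tgt * (X_tgt @ w)
        active = margins < 1.0  # target samples that violate the margin
        # Subgradient: pull toward the source weights, push to fix violated samples.
        grad = (w - w_src) - C * (y_tgt[active, None] * X_tgt[active]).sum(axis=0)
        w -= lr * grad
    return w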
 

 
Author David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo
Title Virtual and Real World Adaptation for Pedestrian Detection Type Journal Article
Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 36 Issue 4 Pages 797-809
Keywords Domain Adaptation; Pedestrian Detection
Abstract Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world-based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem as real-world-based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
ISSN 0162-8828
Notes ADAS; 600.057; 600.054; 600.076 Approved no
Call Number ADAS @ adas @ VML2014 Serial 2275
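
The combination step described above (many virtual-world source samples plus a few real-world target samples) could look roughly like the following, with the target examples up-weighted. This only illustrates the data-mixing idea, not the full V-AYLA framework; the weighting value, the classifier choice and the function name are placeholders.

import numpy as np
from sklearn.svm import LinearSVC

def train_domain_adapted(X_src, y_src, X_tgt, y_tgt, tgt_weight=5.0):
    # Stack the abundant source (virtual-world) data with the few target samples.
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    # Up-weight the target-domain examples so they influence the decision boundary.
    w = np.concatenate([np.ones(len(y_src)), tgt_weight * np.ones(len(y_tgt))])
    return LinearSVC(C=1.0).fit(X, y, sample_weight=w)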
 

 
Author Daniel Hernandez; Antonio Espinosa; David Vazquez; Antonio Lopez; Juan C. Moure
Title 3D Perception With Slanted Stixels on GPU Type Journal Article
Year 2021 Publication IEEE Transactions on Parallel and Distributed Systems Abbreviated Journal TPDS
Volume 32 Issue 10 Pages 2434-2447
Keywords Daniel Hernandez-Juarez; Antonio Espinosa; David Vazquez; Antonio M. Lopez; Juan C. Moure
Abstract This article presents a GPU-accelerated software design of the recently proposed model of Slanted Stixels, which represents the geometric and semantic information of a scene in a compact and accurate way. We reformulate the measurement depth model to reduce the computational complexity of the algorithm, relying on the confidence of the depth estimation and the identification of invalid values to handle outliers. The proposed massively parallel scheme and data layout for the irregular computation pattern that corresponds to a Dynamic Programming paradigm is described and carefully analyzed in performance terms. Performance is shown to scale gracefully on current generation embedded GPUs. We assess the proposed methods in terms of semantic and geometric accuracy as well as run-time performance on three publicly available benchmark datasets. Our approach achieves real-time performance with high accuracy for 2048 × 1024 image sizes and 4 × 4 Stixel resolution on the low-power embedded GPU of an NVIDIA Tegra Xavier.
Notes ADAS; 600.124; 600.118 Approved no
Call Number Admin @ si @ HEV2021 Serial 3561
 

 
Author Katerine Diaz; Francesc J. Ferri; W. Diaz
Title Incremental Generalized Discriminative Common Vectors for Image Classification Type Journal Article
Year 2015 Publication IEEE Transactions on Neural Networks and Learning Systems Abbreviated Journal TNNLS
Volume 26 Issue 8 Pages 1761-1775
Abstract Subspace-based methods have become popular due to their ability to appropriately represent complex data in such a way that both dimensionality is reduced and discriminativeness is enhanced. Several recent works have concentrated on the discriminative common vector (DCV) method and other closely related algorithms also based on the concept of null space. In this paper, we present a generalized incremental formulation of the DCV methods, which allows the update of a given model by considering the addition of new examples even from unseen classes. Having efficient incremental formulations of well-behaved batch algorithms allows us to conveniently adapt previously trained classifiers without the need of recomputing them from scratch. The proposed generalized incremental method has been empirically validated in different case studies from different application domains (faces, objects, and handwritten digits) considering several different scenarios in which new data are continuously added at different rates starting from an initial model.
ISSN 2162-237X
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ DFD2015 Serial 2547
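
For context, a batch discriminative common vector (DCV) baseline, which the incremental formulation above generalizes, can be sketched as follows: project samples onto the null space of the within-class scatter, where every sample of a class collapses to a single common vector. The rank tolerance and array shapes are assumptions, and the small-sample-size setting (more features than within-class difference vectors) is required for the null space to be non-trivial.

import numpy as np

def dcv_fit(X, y):
    """X: (n_samples, n_features); y: integer class labels."""
    classes = np.unique(y)
    # Within-class centered samples span the orthogonal complement of null(S_w).
    diffs = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    _, s, vt = np.linalg.svd(diffs, full_matrices=True)
    rank = int(np.sum(s > 1e-10))
    null_basis = vt[rank:]  # rows form an orthonormal basis of null(S_w)
    # The common vector of each class is the projection of its mean onto null(S_w).
    common_vectors = {c: null_basis @ X[y == c].mean(axis=0) for c in classes}
    return null_basis, common_vectors

def dcv_predict(x, null_basis, common_vectors):
    proj = null_basis @ x
    return min(common_vectors, key=lambda c: np.linalg.norm(proj - common_vectors[c]))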
 

 
Author Ferran Diego; Joan Serrat; Antonio Lopez
Title Joint spatio-temporal alignment of sequences Type Journal Article
Year 2013 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM
Volume 15 Issue 6 Pages 1377-1387
Keywords video alignment
Abstract Video alignment is important in different areas of computer vision such as wide-baseline matching, action recognition, change detection, video copy detection and frame dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras or simultaneous acquisition. Therefore, in this paper we propose a joint video alignment for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to formulate the video alignment so as to fold the spatial and temporal alignment into a single framework that simultaneously satisfies a frame-correspondence and a frame-alignment similarity, exploiting the knowledge among neighboring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independent moving cameras that follow a similar trajectory, and it also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to the state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times.
ISSN 1520-9210
Notes ADAS Approved no
Call Number Admin @ si @ DSL2013; ADAS @ adas @ Serial 2228
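
A heavily reduced sketch of the temporal side of the alignment problem described above: a dynamic-programming (DTW-style) search for a monotonic frame correspondence driven by a frame-dissimilarity cost. The paper's joint MRF additionally estimates the spatial mapping; that part is omitted here and frame_distance is a placeholder.

import numpy as np

def frame_distance(fa, fb):
    # Placeholder dissimilarity between two frames (mean absolute difference).
    return float(np.mean(np.abs(fa.astype(np.float32) - fb.astype(np.float32))))

def temporal_alignment(seq_a, seq_b):
    na, nb = len(seq_a), len(seq_b)
    cost = np.array([[frame_distance(a, b) for b in seq_b] for a in seq_a])
    acc = np.full((na, nb), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(na):
        for j in range(nb):
            if i == j == 0:
                continue
            prev = min(acc[i - 1, j] if i else np.inf,
                       acc[i, j - 1] if j else np.inf,
                       acc[i - 1, j - 1] if i and j else np.inf)
            acc[i, j] = cost[i, j] + prev
    # Backtrack to recover the monotonic frame correspondence.
    path, i, j = [], na - 1, nb - 1
    while (i, j) != (0, 0):
        path.append((i, j))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((p for p in steps if p[0] >= 0 and p[1] >= 0),
                   key=lambda p: acc[p])
    path.append((0, 0))
    return path[::-1]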
 

 
Author A.F. Sole; S. Ngan; G. Sapiro; X. Hu; Antonio Lopez
Title Anisotropic 2-D and 3-D Averaging of fMRI Signals Type Journal Article
Year 2001 Publication IEEE Transactions on Medical Imaging (IF: 3.142)
Volume 20 Issue 2 Pages 86-93
Notes ADAS Approved no
Call Number ADAS @ adas @ SNS2001 Serial 165
 

 
Author Angel Sappa; Fadi Dornaika; Daniel Ponsa; David Geronimo; Antonio Lopez
Title An Efficient Approach to Onboard Stereo Vision System Pose Estimation Type Journal Article
Year 2008 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 9 Issue 3 Pages 476-490
Keywords Camera extrinsic parameter estimation; ground plane estimation; onboard stereo vision system
Abstract This paper presents an efficient technique for estimating the pose of an onboard stereo vision system relative to the environment’s dominant surface area, which is assumed to be the road surface. Unlike previous approaches, it can be used for either urban or highway scenarios since it is not based on a specific visual traffic feature extraction but on 3-D raw data points. The whole process is performed in the Euclidean space and consists of two stages. Initially, a compact 2-D representation of the original 3-D data points is computed. Then, a RANdom SAmple Consensus (RANSAC) based least-squares approach is used to fit a plane to the road. Fast RANSAC fitting is obtained by selecting points according to a probability function that takes into account the density of points at a given depth. Finally, stereo camera height and pitch angle are computed relative to the fitted road plane. The proposed technique is intended to be used in driver-assistance systems for applications such as vehicle or pedestrian detection. Experimental results on urban environments, which are the most challenging scenarios (i.e., flat/uphill/downhill driving, speed bumps, and car accelerations), are presented. These results are validated with manually annotated ground truth. Additionally, comparisons with previous works are presented to show the improvements in central processing unit processing time, as well as in the accuracy of the obtained results.
Publisher IEEE
Notes ADAS Approved no
Call Number ADAS @ adas @ SDP2008 Serial 1000
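
The two-stage idea summarized in the abstract (RANSAC least-squares plane fitting on 3-D road points, then camera height and pitch from the fitted plane) can be sketched as below. The depth-dependent sampling probability and the compact 2-D representation of the paper are omitted; the coordinate convention (camera at the origin, y roughly vertical) and all thresholds are assumptions.

import numpy as np

def fit_road_plane_ransac(points, iters=200, inlier_thresh=0.05, seed=0):
    """points: (N, 3) road-candidate 3-D points. Returns (n, d) with n.p + d = 0."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -np.dot(n, p1)
        inliers = np.abs(points @ n + d) < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    # Least-squares refit on the inlier set (plane through its centroid).
    P = points[best_inliers]
    c = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - c)
    n = vt[-1] / np.linalg.norm(vt[-1])
    return n, -np.dot(n, c)

def camera_height_and_pitch(n, d):
    # Camera at the origin: height is its distance to the plane; pitch is the
    # angle between the plane normal and the camera's vertical (y) axis.
    height = abs(d) / np.linalg.norm(n)
    pitch = np.arccos(abs(n[1]) / np.linalg.norm(n))
    return height, pitch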