Author: Albert Gordo; Jose Antonio Rodriguez; Florent Perronnin; Ernest Valveny
Title: Leveraging category-level labels for instance-level image retrieval
Type: Conference Article
Year: 2012
Publication: 25th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 3045-3052
Abstract: In this article, we focus on the problem of large-scale instance-level image retrieval. For efficiency reasons, it is common to represent an image by a fixed-length descriptor which is subsequently encoded into a small number of bits. We note that most encoding techniques include an unsupervised dimensionality reduction step. Our goal in this work is to learn a better subspace in a supervised manner. We especially raise the following question: “can category-level labels be used to learn such a subspace?” To answer this question, we experiment with four learning techniques: the first one is based on a metric learning framework, the second one on attribute representations, the third one on Canonical Correlation Analysis (CCA) and the fourth one on Joint Subspace and Classifier Learning (JSCL). While the first three approaches have been applied in the past to the image retrieval problem, we believe we are the first to show the usefulness of JSCL in this context. In our experiments, we use ImageNet as a source of category-level labels and report retrieval results on two standard datasets: INRIA Holidays and the University of Kentucky benchmark. Our experimental study shows that metric learning and attributes do not lead to any significant improvement in retrieval accuracy, as opposed to CCA and JSCL. As an example, we report on Holidays an increase in accuracy from 39.3% to 48.6% with 32-dimensional representations. Overall JSCL is shown to yield the best results.
Address: Providence, Rhode Island
Publisher: IEEE Xplore
ISSN: 1063-6919
ISBN: 978-1-4673-1226-4
Conference: CVPR
Notes: DAG
Approved: no
Call Number: Admin @ si @ GRP2012
Serial: 2050
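Illustrative sketch: the CCA route described in the abstract above is straightforward to prototype. The code below is a hypothetical toy version (not the authors' implementation) that assumes image descriptors X and one-hot category labels Y, learns a 32-dimensional CCA subspace with scikit-learn, and projects descriptors before cosine-similarity retrieval. All array sizes and names are invented stand-ins.

    # Minimal sketch: supervised subspace learning with CCA for retrieval.
    # X are stand-in image descriptors, y are stand-in category labels;
    # dimensions and data are illustrative, not taken from the paper.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 128))           # stand-in image descriptors
    y = rng.integers(0, 100, size=1000)        # stand-in category labels
    Y = np.eye(100)[y]                         # one-hot label matrix

    cca = CCA(n_components=32, max_iter=1000)  # 32-D subspace, as in the abstract
    cca.fit(X, Y)

    def embed(descriptors):
        # Project descriptors into the learned subspace and L2-normalise.
        Z = cca.transform(descriptors)
        return Z / np.linalg.norm(Z, axis=1, keepdims=True)

    database = embed(X)
    query = embed(X[:5])
    scores = query @ database.T                # cosine similarity for retrieval
    top1 = scores.argsort(axis=1)[:, ::-1][:, 0]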
 

 
Author: Murad Al Haj; Jordi Gonzalez; Larry S. Davis
Title: On Partial Least Squares in Head Pose Estimation: How to simultaneously deal with misalignment
Type: Conference Article
Year: 2012
Publication: 25th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 2602-2609
Abstract: Head pose estimation is a critical problem in many computer vision applications. These include human computer interaction, video surveillance, face and expression recognition. In most prior work on head pose estimation, the positions of the faces on which the pose is to be estimated are specified manually. Therefore, the results are reported without studying the effect of misalignment. We propose a method based on partial least squares (PLS) regression to estimate pose and solve the alignment problem simultaneously. The contributions of this paper are two-fold: 1) we show that the kernel version of PLS (kPLS) achieves better than state-of-the-art results on the estimation problem and 2) we develop a technique to reduce misalignment based on the learned PLS factors.
Address: Providence, Rhode Island
Publisher: IEEE Xplore
ISSN: 1063-6919
ISBN: 978-1-4673-1226-4
Conference: CVPR
Notes: ISE
Approved: no
Call Number: Admin @ si @ HGD2012
Serial: 2029
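Illustrative sketch: the regression step summarised above can be prototyped with scikit-learn's PLS. Note the paper uses kernel PLS (kPLS), while scikit-learn only ships the linear variant, so this is an assumed stand-in rather than the authors' method; features and pose targets are random placeholders.

    # Minimal sketch: head-pose regression with (linear) PLS.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 1024))          # stand-in face-patch features (e.g. HOG)
    y_train = rng.uniform(-90, 90, size=(500, 2))   # yaw and pitch angles in degrees

    pls = PLSRegression(n_components=20)            # number of PLS factors (assumed)
    pls.fit(X_train, y_train)

    X_test = rng.normal(size=(10, 1024))
    pose = pls.predict(X_test)                      # predicted (yaw, pitch) per sample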
 

 
Author: Jose Carlos Rubio; Joan Serrat; Antonio Lopez
Title: Unsupervised co-segmentation through region matching
Type: Conference Article
Year: 2012
Publication: 25th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 749-756
Abstract: Co-segmentation is defined as jointly partitioning multiple images depicting the same or similar object, into foreground and background. Our method consists of a multiple-scale multiple-image generative model, which jointly estimates the foreground and background appearance distributions from several images, in a non-supervised manner. In contrast to other co-segmentation methods, our approach does not require the images to have similar foregrounds and different backgrounds to function properly. Region matching is applied to exploit inter-image information by establishing correspondences between the common objects that appear in the scene. Moreover, computing many-to-many associations of regions allow further applications, like recognition of object parts across images. We report results on iCoseg, a challenging dataset that presents extreme variability in camera viewpoint, illumination and object deformations and poses. We also show that our method is robust against large intra-class variability in the MSRC database.
Address: Providence, Rhode Island
Publisher: IEEE Xplore
ISSN: 1063-6919
ISBN: 978-1-4673-1226-4
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: Admin @ si @ RSL2012b; ADAS @ adas @
Serial: 2033
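Illustrative sketch: the region-matching step mentioned in the abstract (linking regions across images so foreground evidence can be shared) can be shown with a toy many-to-many matcher. Region descriptors, the similarity threshold and region counts below are invented; the paper's generative model is not reproduced.

    # Toy sketch: many-to-many region matching between two segmented images.
    # Regions are summarised by descriptors (random stand-ins here); pairs whose
    # similarity exceeds a threshold are linked, allowing many-to-many matches.
    import numpy as np

    rng = np.random.default_rng(0)
    desc_a = rng.normal(size=(12, 16))     # 12 regions in image A, 16-D descriptors
    desc_b = rng.normal(size=(9, 16))      # 9 regions in image B

    def normalise(d):
        return d / np.linalg.norm(d, axis=1, keepdims=True)

    sim = normalise(desc_a) @ normalise(desc_b).T      # cosine similarity matrix
    matches = np.argwhere(sim > 0.5)                   # many-to-many correspondences

    for a_idx, b_idx in matches:
        print(f"region A{a_idx} <-> region B{b_idx}  (sim={sim[a_idx, b_idx]:.2f})")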
 

 
Author: Antonio Hernandez; Nadezhda Zlateva; Alexander Marinov; Miguel Reyes; Petia Radeva; Dimo Dimov; Sergio Escalera
Title: Graph Cuts Optimization for Multi-Limb Human Segmentation in Depth Maps
Type: Conference Article
Year: 2012
Publication: 25th IEEE Conference on Computer Vision and Pattern Recognition
Pages: 726-732
Abstract: We present a generic framework for object segmentation using depth maps based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs in depth maps. First, from a set of random depth features, Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as unary term in α-β swap Graph-cuts algorithm. Moreover, depth of spatio-temporal neighboring data points are used as boundary potentials. Results on a new multi-label human depth data set show high performance in terms of segmentation overlapping of the novel methodology compared to classical approaches.
Address: Providence, Rhode Island
Publisher: IEEE Xplore
ISSN: 1063-6919
ISBN: 978-1-4673-1226-4
Conference: CVPR
Notes: MILAB; HuPBA
Approved: no
Call Number: Admin @ si @ HZM2012b
Serial: 2046
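Illustrative sketch: the unary-term construction described above can be mocked up as follows. Per-pixel label probabilities from a Random Forest become -log unary potentials, and a Potts-style pairwise cost is what an α-β swap solver would consume. The graph-cut optimisation itself is left to a dedicated library; array layout, feature generation and parameter values are assumptions, not the authors' code.

    # Minimal sketch: Random Forest probabilities as unary terms for graph cuts.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_pixels, n_feats, n_labels = 10000, 16, 5            # e.g. 5 body-part labels
    depth_feats = rng.normal(size=(n_pixels, n_feats))    # stand-in random depth features
    labels = rng.integers(0, n_labels, size=n_pixels)     # stand-in training labels

    rf = RandomForestClassifier(n_estimators=50, max_depth=12, random_state=0)
    rf.fit(depth_feats, labels)

    proba = rf.predict_proba(depth_feats)                 # (n_pixels, n_labels)
    unary = -np.log(np.clip(proba, 1e-6, 1.0))            # unary potentials per pixel/label

    # Potts-style label-compatibility cost, to be weighted per neighbouring pixel
    # pair by a function of their depth difference (boundary potentials).
    pairwise = 1.0 - np.eye(n_labels)

    # 'unary' and 'pairwise' are the inputs an alpha-beta swap graph-cut solver
    # (e.g. the 'gco' or 'maxflow' packages) would take; solving is delegated to it.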
 

 
Author: Marco Pedersoli; Andrea Vedaldi; Jordi Gonzalez
Title: A Coarse-to-fine Approach for fast Deformable Object Detection
Type: Conference Article
Year: 2011
Publication: IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1353-1360
Address: Colorado Springs, USA
Conference: CVPR
Notes: ISE
Approved: no
Call Number: Admin @ si @ PVG2011
Serial: 1764
 

 
Author: Miguel Oliveira; Angel Sappa; V. Santos
Title: Unsupervised Local Color Correction for Coarsely Registered Images
Type: Conference Article
Year: 2011
Publication: IEEE Conference on Computer Vision and Pattern Recognition
Pages: 201-208
Abstract: The current paper proposes a new parametric local color correction technique. Initially, several color transfer functions are computed from the output of the mean shift color segmentation algorithm. Secondly, color influence maps are calculated. Finally, the contribution of every color transfer function is merged using the weights from the color influence maps. The proposed approach is compared with both global and local color correction approaches. Results show that our method outperforms the technique ranked first in a recent performance evaluation on this topic. Moreover, the proposed approach is computed in about one tenth of the time.
Address: Colorado Springs
ISSN: 1063-6919
ISBN: 978-1-4577-0394-2
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: Admin @ si @ OSS2011; ADAS @ adas @
Serial: 1766
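Illustrative sketch: the pipeline in the abstract (per-region color transfer functions merged by color influence maps) can be approximated with a simple multiplicative gain per region and a soft weighting of those gains. Region labels would come from mean-shift segmentation in the paper; here they are assumed given, and the gain/weight formulas are simplifications, not the published model.

    # Minimal sketch of weighted local color correction.
    import numpy as np

    def local_color_correction(src, tgt, regions, softness=10.0):
        """src, tgt: float images (H, W, 3) in [0, 1]; regions: (H, W) int labels."""
        ids = np.unique(regions)
        # One multiplicative transfer function (gain) per region and channel.
        gains = np.stack([tgt[regions == r].mean(0) /
                          (src[regions == r].mean(0) + 1e-6) for r in ids])
        region_means = np.stack([src[regions == r].mean(0) for r in ids])
        # Color influence maps: soft weights from each pixel's similarity to each
        # region's mean color, so corrections blend smoothly across boundaries.
        d = ((src[:, :, None, :] - region_means[None, None]) ** 2).sum(-1)
        w_maps = np.exp(-softness * d)
        w_maps /= w_maps.sum(axis=2, keepdims=True)
        # Merge the per-region corrections with the influence-map weights.
        corrected = (w_maps[..., None] * gains[None, None] * src[:, :, None, :]).sum(2)
        return np.clip(corrected, 0.0, 1.0)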
 

 
Author: Albert Gordo; Florent Perronnin
Title: Asymmetric Distances for Binary Embeddings
Type: Conference Article
Year: 2011
Publication: IEEE Conference on Computer Vision and Pattern Recognition
Pages: 729-736
Abstract: In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes which binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances which are applicable to a wide variety of embedding techniques including Locality Sensitive Hashing (LSH), Locality Sensitive Binary Codes (LSBC), Spectral Hashing (SH) and Semi-Supervised Hashing (SSH). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques. We also propose a novel simple binary embedding technique – PCA Embedding (PCAE) – which is shown to yield competitive results with respect to more complex algorithms such as SH and SSH.
Address: Colorado Springs, CO, USA
ISBN: 978-1-4577-0394-2
Conference: CVPR
Notes: DAG
Approved: no
Call Number: Admin @ si @ GoP2011; IAM @ iam @ GoP2011
Serial: 1817
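Illustrative sketch: the asymmetric idea above is easy to show with the simplest embedding the abstract names, sign-of-projection binarisation (PCAE-style). In the toy below, database vectors are binarised, the query stays real-valued, and the asymmetric distance compares the real-valued query projection against ±1-coded database reconstructions; this is a hypothetical simplification of the paper's estimators, with a random orthogonal projection standing in for a learned PCA basis.

    # Toy illustration of symmetric (Hamming) vs. asymmetric distances
    # for binary embeddings.
    import numpy as np

    rng = np.random.default_rng(0)
    d, bits, n_db = 128, 32, 5000
    database = rng.normal(size=(n_db, d))
    query = database[42] + 0.1 * rng.normal(size=d)    # a noisy copy of item 42

    W, _ = np.linalg.qr(rng.normal(size=(d, bits)))    # stand-in for a PCA basis
    db_proj = database @ W
    q_proj = query @ W

    db_codes = (db_proj > 0).astype(np.uint8)          # binarised database
    q_code = (q_proj > 0).astype(np.uint8)             # binarised query (symmetric case)

    hamming = (db_codes != q_code).sum(axis=1)         # symmetric distance

    # Asymmetric: keep the query real-valued, reconstruct database codes as
    # scaled +/-1 values, and compare in the projected space.
    recon = (2.0 * db_codes - 1.0) * np.abs(db_proj).mean()
    asym = ((recon - q_proj) ** 2).sum(axis=1)

    print("symmetric top-1:", hamming.argmin(), " asymmetric top-1:", asym.argmin())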
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: 3D Scene Priors for Road Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 57–64
Keywords: road detection
Abstract: Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered as weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all other individual cues. Finally, the proposed method provides highest road detection accuracy when compared to state-of-the-art methods.
Address: San Francisco, CA, USA (June 2010)
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2010a
Serial: 1302
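Illustrative sketch: the cue combination described above amounts to fusing several weak per-pixel likelihoods in a Bayesian way. The toy below assumes a naive-Bayes-style fusion (cue independence), which is a substitute for, not a reproduction of, the paper's model; the cue likelihoods are random stand-ins for photometric invariants, horizon/vanishing-point, 3D layout and temporal cues.

    # Minimal sketch: naive-Bayes fusion of weak road cues per pixel.
    import numpy as np

    rng = np.random.default_rng(0)
    h, w, n_cues = 240, 320, 4
    p_pix_given_road = rng.uniform(0.05, 0.95, size=(n_cues, h, w))
    p_pix_given_bg = rng.uniform(0.05, 0.95, size=(n_cues, h, w))
    prior_road = 0.3                                   # assumed scene prior

    # Posterior under a cue-independence assumption:
    road_evidence = prior_road * np.prod(p_pix_given_road, axis=0)
    bg_evidence = (1 - prior_road) * np.prod(p_pix_given_bg, axis=0)
    posterior_road = road_evidence / (road_evidence + bg_evidence)

    road_mask = posterior_road > 0.5                   # final per-pixel decision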
 

 
Author: Javier Marin; David Vazquez; David Geronimo; Antonio Lopez
Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 137–144
Keywords: Pedestrian Detection; Domain Adaptation
Abstract: Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then appearance-based pedestrian classifiers are learnt using HOG and linear SVM. We test such classifiers in a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although virtual samples were not specially selected, both virtual and real based training give rise to classifiers of similar performance.
Address: San Francisco, CA, USA (June 2010)
Language: English
Summary Language: English
Original Title: Learning Appearance in Virtual Scenarios for Pedestrian Detection
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ MVG2010
Serial: 1304
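Illustrative sketch: the training pipeline summarised above (HOG descriptors plus a linear SVM over pedestrian/background crops) can be prototyped in a few lines with scikit-image and scikit-learn. This is an assumed reimplementation, not the authors' code; the virtual-world and real-world crops are stubbed out with random 128x64 arrays.

    # Minimal sketch: HOG + linear SVM pedestrian classifier.
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)

    def random_crops(n):
        return rng.random(size=(n, 128, 64))            # grayscale 128x64 windows

    def describe(crops):
        return np.stack([hog(c, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for c in crops])

    X_train = describe(np.concatenate([random_crops(200), random_crops(200)]))
    y_train = np.array([1] * 200 + [0] * 200)           # pedestrian vs. background

    clf = LinearSVC(C=0.01)
    clf.fit(X_train, y_train)

    X_test = describe(random_crops(10))                 # would be real-world crops
    scores = clf.decision_function(X_test)              # threshold to detect pedestrians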
 

 
Author: David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
Title: Fast and Robust Object Segmentation with the Integral Linear Classifier
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1046–1053
Abstract: We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500ms, with results comparable or better than most state of the art methods. We introduce the Integral Linear Classifier (ILC), that can readily obtain the classification score for any image sub-window with only 6 additions and 1 product by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, that can be very efficiently computed using integral images, and two fast quantization methods: the Hierarchical K-Means, and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and that of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art in the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets.
Address: San Francisco, CA, USA (June 2010)
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: Admin @ si @ ARL2010a
Serial: 1311
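Illustrative sketch: the Integral Linear Classifier idea above is essentially an integral image built over per-pixel linear-score contributions, so any sub-window can be scored with a handful of additions plus the bias. The toy below uses random visual-word assignments and random classifier weights as stand-ins; it illustrates the constant-time window scoring, not the full segmentation pipeline.

    # Toy sketch of the Integral Linear Classifier (ILC) idea.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W, vocab = 240, 320, 512
    words = rng.integers(0, vocab, size=(H, W))     # per-pixel visual-word labels
    w = rng.normal(size=vocab)                      # linear classifier weights
    bias = -0.1                                     # assumed bias term

    contrib = w[words]                              # per-pixel score contributions
    I = np.zeros((H + 1, W + 1))
    I[1:, 1:] = contrib.cumsum(0).cumsum(1)         # integral image (zero-padded)

    def window_score(y0, x0, y1, x1):
        """Linear score of the window [y0:y1, x0:x1) via 4 lookups + bias."""
        s = I[y1, x1] - I[y0, x1] - I[y1, x0] + I[y0, x0]
        return s + bias

    print(window_score(50, 60, 150, 180))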
 

 
Author: Mohammad Rouhani; Angel Sappa
Title: Relaxing the 3L Algorithm for an Accurate Implicit Polynomial Fitting
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 3066-3072
Abstract: This paper presents a novel method to increase the accuracy of linear fitting of implicit polynomials. The proposed method is based on the 3L algorithm philosophy. The novelty lies on the relaxation of the additional constraints, already imposed by the 3L algorithm. Hence, the accuracy of the final solution is increased due to the proper adjustment of the expected values in the aforementioned additional constraints. Although iterative, the proposed approach solves the fitting problem within a linear framework, which is independent of the threshold tuning. Experimental results, both in 2D and 3D, showing improvements in the accuracy of the fitting are presented. Comparisons with both state of the art algorithms and a geometric based one (non-linear fitting), which is used as a ground truth, are provided.
Address: San Francisco, CA, USA (June 2010)
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: ADAS @ adas @ RoS2010a
Serial: 1303
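Illustrative sketch: the baseline 3L construction that the abstract builds on fits an implicit polynomial by linear least squares over three "layers": the data points (target value 0) and two offset layers along the normals (targets -c and +c). The toy below shows that baseline in 2D with fixed offsets and targets; the paper's contribution, relaxing and adjusting those expected values, is not reproduced here.

    # Minimal sketch of 3L-style implicit polynomial fitting in 2D.
    import numpy as np

    def monomials(pts, degree=4):
        x, y = pts[:, 0], pts[:, 1]
        cols = [x**i * y**j for i in range(degree + 1)
                            for j in range(degree + 1 - i)]
        return np.stack(cols, axis=1)

    # Toy data: noisy points on a circle, with outward normals known analytically.
    rng = np.random.default_rng(0)
    t = rng.uniform(0, 2 * np.pi, 400)
    pts = np.stack([np.cos(t), np.sin(t)], 1) + 0.01 * rng.normal(size=(400, 2))
    normals = pts / np.linalg.norm(pts, axis=1, keepdims=True)

    eps, c = 0.05, 1.0                                 # layer offset and expected values
    layers = np.concatenate([pts, pts + eps * normals, pts - eps * normals])
    targets = np.concatenate([np.zeros(len(pts)), c * np.ones(len(pts)),
                              -c * np.ones(len(pts))])

    M = monomials(layers)
    coeffs, *_ = np.linalg.lstsq(M, targets, rcond=None)   # implicit polynomial f = 0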
 

 
Author: Mario Rojas; David Masip; A. Todorov; Jordi Vitria
Title: Automatic Point-based Facial Trait Judgments Evaluation
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 2715–2720
Abstract: Humans constantly evaluate the personalities of other people using their faces. Facial trait judgments have been studied in the psychological field, and have been determined to influence important social outcomes of our lives, such as elections outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Further, behavioral studies suggest that two orthogonal dimensions, valence and dominance, can describe the basis of the human judgments from faces. In this paper, we used a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers, and used later in automatic evaluations of new facial images. The experiments performed using local point-based descriptors show promising results in the evaluation of the main traits.
Address: San Francisco, CA, USA
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RMT2010
Serial: 1282
 

 
Author: Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title: Learning Photometric Invariance from Diversified Color Model Ensembles
Type: Conference Article
Year: 2009
Publication: 22nd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 565–572
Keywords: road detection
Abstract: Color is a powerful visual cue for many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions affecting negatively the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, those reflection models might be too restricted to model real-world scenes in which different reflectance mechanisms may hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrical orthogonal and non-redundant color model set is taken on input composed of both color variants and invariants. Then, the proposed method combines and weights these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, the fusion method uses a multi-view approach to minimize the estimation error. In this way, the method is robust to data uncertainty and produces properly diversified color invariant ensembles. Experiments are conducted on three different image datasets to validate the method. From the theoretical and experimental results, it is concluded that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning. Further, the method outperforms state-of-the-art detection techniques in the field of object, skin and road recognition.
Address: Miami, USA
ISSN: 1063-6919
ISBN: 978-1-4244-3992-8
Conference: CVPR
Notes: ADAS; ISE
Approved: no
Call Number: ADAS @ adas @ AGL2009
Serial: 1169
 

 
Author: Agata Lapedriza; David Masip; Jordi Vitria
Title: On the Use of Independent Tasks for Face Recognition
Type: Conference Article
Year: 2008
Publication: IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Pages: 1–6
Conference: CVPR
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ LMV2008b
Serial: 1043
 

 
Author: Paula Fritzsche; C. Roig; Ana Ripoll; Emilio Luque; Aura Hernandez-Sabate
Title: A Performance Prediction Methodology for Data-dependent Parallel Applications
Type: Conference Article
Year: 2006
Publication: Proceedings of the IEEE International Conference on Cluster Computing
Pages: 1-8
Abstract: The increase in the use of parallel distributed architectures in order to solve large-scale scientific problems has generated the need for performance prediction for both deterministic applications and non-deterministic applications. In particular, the performance prediction of data dependent programs is an extremely challenging problem because for a specific issue the input datasets may cause different execution times. Generally, a parallel application is characterized as a collection of tasks and their interrelations. If the application is time-critical it is not enough to work with only one value per task, and consequently knowledge of the distribution of task execution times is crucial. The development of a new prediction methodology to estimate the performance of data-dependent parallel applications is the primary target of this study. This approach makes it possible to evaluate the parallel performance of an application without the need of implementation. A real data-dependent arterial structure detection application model is used to apply the methodology proposed. The predicted times obtained using the new methodology for genuine datasets are compared with predicted times that arise from using only one execution value per task. Finally, the experimental study shows that the new methodology generates more precise predictions.
Notes: IAM
Approved: no
Call Number: IAM @ iam @ FRR2006
Serial: 1497
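Illustrative sketch: the core point of the abstract, working with per-task execution-time distributions rather than a single value per task, can be shown with a tiny Monte Carlo experiment. The task graph, distributions and sample counts below are invented for illustration and are not the paper's methodology.

    # Toy sketch: predicting a parallel runtime distribution from per-task
    # execution-time distributions instead of single point estimates.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simple fork-join task graph: task 0, then tasks 1..4 in parallel, then task 5.
    def sample_times():
        return {
            0: rng.normal(2.0, 0.2),
            **{i: rng.lognormal(mean=1.0, sigma=0.5) for i in range(1, 5)},
            5: rng.normal(1.0, 0.1),
        }

    makespans = []
    for _ in range(10000):
        t = sample_times()
        makespans.append(t[0] + max(t[i] for i in range(1, 5)) + t[5])
    makespans = np.array(makespans)

    print("predicted mean runtime:", makespans.mean())
    print("95th percentile:", np.percentile(makespans, 95))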