Author Esteve Cervantes; Long Long Yu; Andrew Bagdanov; Marc Masana; Joost Van de Weijer
Title Hierarchical Part Detection with Deep Neural Networks Type Conference Article
Year 2016 Publication 23rd IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages
Keywords Object Recognition; Part Detection; Convolutional Neural Networks
Abstract Part detection is an important aspect of object recognition. Most approaches apply object proposals to generate hundreds of possible part bounding box candidates, which are then evaluated by part classifiers. Recently, several methods have investigated directly regressing to a limited set of bounding boxes from a deep neural network representation. However, for object parts such methods may be infeasible due to their relatively small size with respect to the image. We propose a hierarchical method for object and part detection. In a single network we first detect the object and then regress to part location proposals based only on the feature representation inside the object. Experiments show that our hierarchical approach outperforms a network which directly regresses the part locations. We also show that our approach obtains part detection accuracy comparable to or better than the state of the art on the CUB-200 bird and Fashionista clothing item datasets with only a fraction of the number of part proposals.
Address Phoenix; Arizona; USA; September 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes LAMP; 600.106 Approved no
Call Number Admin @ si @ CLB2016 Serial 2762
Permanent link to this record
 

 
Author Olivier Lefebvre; Pau Riba; Charles Fournier; Alicia Fornes; Josep Llados; Rejean Plamondon; Jules Gagnon-Marchand
Title Monitoring neuromotricity on-line: a cloud computing approach Type Conference Article
Year 2015 Publication 17th Conference of the International Graphonomics Society IGS2015 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The goal of our experiment is to develop a useful and accessible tool that can be used to evaluate a patient's health by analyzing handwritten strokes. We use a cloud computing approach in which stroke data are sampled on a commercial tablet running the Android platform and sent to a distant server that performs complex calculations using the Delta and Sigma lognormal algorithms. A Google Drive account is used to store the data and to ease the development of the project. The communication between the tablet, the cloud and the server is encrypted to ensure the confidentiality of biomedical information. Highly parameterized biomedical tests are implemented on the tablet, as well as a free drawing test to evaluate the validity of the data acquired by the first test compared to the second one. A blurred shape model descriptor pattern recognition algorithm is used to classify the data obtained by the free drawing test. The functions presented in this paper are still under development, and further improvements are needed before releasing the application to the public.
Address Pointe-à-Pitre; Guadeloupe; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IGS
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ LRF2015 Serial 2617
Permanent link to this record
 

 
Author Ivan Huerta; Ariel Amato; Jordi Gonzalez; Juan J. Villanueva
Title Fusing Edge Cues to Handle Colour Problems in Image Segmentation Type Book Chapter
Year 2008 Publication Articulated Motion and Deformable Objects, 5th International Conference Abbreviated Journal
Volume 5098 Issue Pages 279–288
Keywords
Abstract
Address Port d'Andratx (Mallorca)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AMDO
Notes ISE Approved no
Call Number ISE @ ise @ HAG2008 Serial 973
Permanent link to this record
 

 
Author Pau Baiget; Xavier Roca; Jordi Gonzalez
Title Autonomous Virtual Agents for Performance Evaluation of Tracking Algorithms Type Book Chapter
Year 2008 Publication Articulated Motion and Deformable Objects, 5th International Conference AMDO 2008, Abbreviated Journal
Volume 5098 Issue Pages 299-308
Keywords
Abstract
Address Port d'Andratx (Mallorca)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number ISE @ ise @ BRG2008 Serial 974
Permanent link to this record
 

 
Author Bhaskar Chakraborty; Marco Pedersoli; Jordi Gonzalez
Title View-Invariant Human Action Detection using Component-Wise HMM of Body Parts Type Book Chapter
Year 2008 Publication Articulated Motion and Deformable Objects, 5th International Conference Abbreviated Journal
Volume 5098 Issue Pages 208–217
Keywords
Abstract
Address Port d'Andratx (Mallorca)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AMDO
Notes ISE Approved no
Call Number ISE @ ise @ CPG2008 Serial 975
Permanent link to this record
 

 
Author Ivo Everts; Jan van Gemert; Theo Gevers
Title Evaluation of Color STIPs for Human Action Recognition Type Conference Article
Year 2013 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2850-2857
Keywords
Abstract This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that Color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN Medium
Area Expedition Conference CVPR
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ EGG2013 Serial 2364
Permanent link to this record
 

 
Author Antonio Hernandez; Nadezhda Zlateva; Alexander Marinov; Miguel Reyes; Petia Radeva; Dimo Dimov; Sergio Escalera
Title Graph Cuts Optimization for Multi-Limb Human Segmentation in Depth Maps Type Conference Article
Year 2012 Publication 25th IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 726-732
Keywords
Abstract We present a generic framework for object segmentation using depth maps based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs in depth maps. First, from a set of random depth features, Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as the unary term in the α-β swap Graph-cuts algorithm. Moreover, the depths of spatio-temporally neighboring data points are used as boundary potentials. Results on a new multi-label human depth dataset show that the novel methodology achieves high performance in terms of segmentation overlap compared to classical approaches.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher IEEE Xplore Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4673-1226-4 Medium
Area Expedition Conference CVPR
Notes MILAB;HuPBA Approved no
Call Number Admin @ si @ HZM2012b Serial 2046
Permanent link to this record
 

 
Author Rahat Khan; Joost Van de Weijer; Fahad Shahbaz Khan; Damien Muselet; Christophe Ducottet; Cecile Barat
Title Discriminative Color Descriptors Type Conference Article
Year 2013 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2866 - 2873
Keywords
Abstract Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN Medium
Area Expedition Conference CVPR
Notes CIC; 600.048 Approved no
Call Number Admin @ si @ KWK2013a Serial 2262
Permanent link to this record
 

 
Author Andreas Møgelmose; Chris Bahnsen; Thomas B. Moeslund; Albert Clapes; Sergio Escalera
Title Tri-modal Person Re-identification with RGB, Depth and Thermal Features Type Conference Article
Year 2013 Publication 9th IEEE Workshop on Perception Beyond the Visible Spectrum, Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 301-307
Keywords
Abstract Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data, but in this work we present, for the first time, a system that combines RGB, depth, and thermal data for re-identification purposes. First, we obtain particular features from each of the three modalities: from RGB data, we model color information from different regions of the body; from depth data, we compute different soft body biometrics; and from thermal data, we extract local structural information. Then, the three information types are combined in a joint classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-0-7695-4990-3 Medium
Area Expedition Conference CVPRW
Notes HUPBA;MILAB Approved no
Call Number Admin @ si @ MBM2013 Serial 2253
Permanent link to this record
 

 
Author David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes Type Conference Article
Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal
Volume Issue Pages 706 - 711
Keywords Pedestrian Detection; Domain Adaptation
Abstract Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing in real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle rejects the false detections efficiently (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher IEEE Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.054; 600.057; 601.217 Approved no
Call Number ADAS @ adas @ VXR2013a Serial 2219
Permanent link to this record
 

 
Author Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
Title Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers Type Conference Article
Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal
Volume Issue Pages 688 - 693
Keywords Pedestrian Detection; Domain Adaptation
Abstract Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique to automatically collect training examples with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, experiencing a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e. detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, it is required to adapt the detector trained with synthetic data to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from the virtual to the real world, avoiding drops in average precision of over 15%.
Address Portland; Oregon; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language English Summary Language English Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes ADAS; 600.054; 600.057; 601.217 Approved yes
Call Number XVR2013; ADAS @ adas @ xvr2013a Serial 2220
Permanent link to this record
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Out-of-Sample Embedding for Manifold Learning Applied to Face Recognition Type Conference Article
Year 2013 Publication IEEE International Workshop on Analysis and Modeling of Faces and Gestures Abbreviated Journal
Volume Issue Pages 862-868
Keywords
Abstract Manifold learning techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data (the out-of-sample problem). For the first aspect, the proposed schemes were heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects were heavily parametric in the sense that optimal performance is only reached for a suitable parameter choice that should be known in advance. In this paper, we demonstrate that sparse coding theory not only serves for automatic graph reconstruction, as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Taking the Laplacian Eigenmaps as a case study, we applied our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted using the k-nearest neighbor (KNN) and Kernel Support Vector Machine (KSVM) classifiers on four public face databases. The experimental results show that the proposed model is able to achieve high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch mode.
Address Portland; USA; June 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes OR; 600.046;MV Approved no
Call Number Admin @ si @ DoR2013 Serial 2236
Permanent link to this record
 

 
Author Bojana Gajic; Eduard Vazquez; Ramon Baldrich
Title Evaluation of Deep Image Descriptors for Texture Retrieval Type Conference Article
Year 2017 Publication Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) Abbreviated Journal
Volume Issue Pages 251-257
Keywords Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation
Abstract The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification. The topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best to describe a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers show the best performance, exhibiting a good combination of basic filters, as in the primary visual cortex, and also a degree of higher-level information to describe more complex textures.
Address Porto; Portugal; 27 February – 1 March 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISIGRAPP
Notes CIC; 600.087 Approved no
Call Number Admin @ si @ Serial 3710
Permanent link to this record
 

 
Author Carles Sanchez; Antonio Esteban Lansaque; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil
Title Towards a Videobronchoscopy Localization System from Airway Centre Tracking Type Conference Article
Year 2017 Publication 12th International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume Issue Pages 352-359
Keywords Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation
Abstract Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an imaging technique based on X-rays, the risk of developmental problems and cancer is increased in subjects exposed to it, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system based on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations.
Address Porto; Portugal; February 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes IAM; 600.096; 600.075; 600.145 Approved no
Call Number Admin @ si @ SEB2017 Serial 2943
Permanent link to this record
 

 
Author Cristhian Aguilera; Xavier Soria; Angel Sappa; Ricardo Toledo
Title RGBN Multispectral Images: a Novel Color Restoration Approach Type Conference Article
Year 2017 Publication 15th International Conference on Practical Applications of Agents and Multi-Agent System Abbreviated Journal
Volume Issue Pages
Keywords Multispectral Imaging; Free Sensor Model; Neural Network
Abstract This paper describes a color restoration technique used to remove NIR information from single-sensor cameras where color and near-infrared images are simultaneously acquired, referred to in the literature as RGBN images. The proposed approach is based on a neural network architecture that learns the NIR information contained in the RGBN images. The proposed approach is evaluated on real images obtained by using a pair of RGBN cameras. Additionally, qualitative comparisons with a naïve color correction technique based on mean square error minimization are provided.
Address Porto; Portugal; June 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference PAAMS
Notes ADAS; MSIAU; 600.118; 600.122 Approved no
Call Number Admin @ si @ ASS2017 Serial 2918
Permanent link to this record