Author Bogdan Raducanu; Jordi Vitria
Title Online Learning for Human-Robot Interaction Type Conference Article
Year 2007 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshop on Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Minneapolis (USA)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes OR; MV Approved no
Call Number BCNPCL @ bcnpcl @ RaV2007a Serial 791
Permanent link to this record
 

 
Author Sergio Escalera; Petia Radeva; Oriol Pujol
Title Complex Salient Regions for Computer Vision Problems Type Conference Article
Year 2007 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshop on Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Minneapolis (USA)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes MILAB;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ ERP2007 Serial 908
Permanent link to this record
 

 
Author George A. Triantafyllidis; Nikolaos Thomos; Cristina Cañero; P. Vieyres; Michael G. Strintzis
Title A User Interface for Mobile Robotized Tele-Echography Type Miscellaneous
Year 2005 Publication 3rd International Conference on Imaging Technologies in Biomedical Sciences (ITBS 2005) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Milos Island (Greece)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number Admin @ si @ TTC2005 Serial 587
Permanent link to this record
 

 
Author M. Campos-Taberner; Adriana Romero; Carlo Gatta; Gustavo Camps-Valls
Title Shared feature representations of LiDAR and optical images: Trading sparsity for semantic discrimination Type Conference Article
Year 2015 Publication IEEE International Geoscience and Remote Sensing Symposium IGARSS2015 Abbreviated Journal
Volume Issue Pages 4169 - 4172
Keywords
Abstract This paper studies the level of complementary information conveyed by extremely high resolution LiDAR and optical images. We pursue this goal following an indirect approach via unsupervised spatial-spectral feature extraction. We used a recently presented unsupervised convolutional neural network trained to enforce both population and lifetime sparsity in the feature representation. We derived independent and joint feature representations, and analyzed the sparsity scores and the discriminative power. Interestingly, the obtained results revealed that the RGB+LiDAR representation is no longer sparse, and the derived basis functions merge color and elevation, yielding a set of more expressive colored edge filters. The joint feature representation is also more discriminative when used for clustering and topological data visualization.
Address Milan; Italy; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IGARSS
Notes LAMP; 600.079;MILAB Approved no
Call Number Admin @ si @ CRG2015 Serial 2724
Permanent link to this record
 

 
Author Maria Vanrell; Naila Murray; Robert Benavente; C. Alejandro Parraga; Xavier Otazu; Ramon Baldrich
Title Perception Based Representations for Computational Colour Type Conference Article
Year 2011 Publication 3rd International Workshop on Computational Color Imaging Abbreviated Journal
Volume 6626 Issue Pages 16-30
Keywords colour perception, induction, naming, psychophysical data, saliency, segmentation
Abstract The perceived colour of a stimulus depends on multiple factors stemming either from the context of the stimulus or from idiosyncrasies of the observer. The complexity involved in combining these multiple effects is the main reason for the gap between the classical calibrated colour spaces of colour science and the colour representations used in computer vision, where colour is just one more visual cue immersed in a digital image in which surfaces, shadows and illuminants interact seemingly out of control. With the aim of advancing a few steps towards bridging this gap, we present some results on computational representations of colour for computer vision. They have been developed by introducing perceptual considerations derived from the interaction of the colour of a point with its context. We show some techniques to represent the colour of a point influenced by assimilation and contrast effects due to the image surround, and we show some results on how colour saliency can be derived in real images. We outline a model for automatic assignment of colour names to image points trained directly on psychophysical data. We show how colour segments can be perceptually grouped in the image by imposing shading coherence in the colour space.
Address Milan, Italy
Corporate Author Thesis
Publisher Springer-Verlag Place of Publication Editor Raimondo Schettini, Shoji Tominaga, Alain Trémeau
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-642-20403-6 Medium
Area Expedition Conference CCIW
Notes CIC Approved no
Call Number Admin @ si @ VMB2011 Serial 1733
Permanent link to this record
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Physics-based Edge Evaluation for Improved Color Constancy Type Conference Article
Year 2009 Publication 22nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 581 – 588
Keywords
Abstract Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation.
Address Miami, USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4244-3992-8 Medium
Area Expedition Conference CVPR
Notes CAT;ISE Approved no
Call Number CAT @ cat @ GGW2009 Serial 1197
Permanent link to this record
 

 
Author Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera
Title Dominance Detection in Face-to-face Conversations Type Conference Article
Year 2009 Publication 2nd IEEE Workshop on CVPR for Human communicative Behavior analysis Abbreviated Journal
Volume Issue Pages 97–102
Keywords
Abstract Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers by categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, the considered indicators are automatically extracted from video sequences and learned using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences.
Address Miami, USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
Area Expedition Conference CVPR
Notes HuPBA; OR; MILAB;MV Approved no
Call Number BCNPCL @ bcnpcl @ EMV2009 Serial 1227
Permanent link to this record
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
Title Learning Photometric Invariance from Diversified Color Model Ensembles Type Conference Article
Year 2009 Publication 22nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 565–572
Keywords road detection
Abstract Color is a powerful visual cue for many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, negatively affecting the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, those reflection models might be too restricted to model real-world scenes, in which different reflectance mechanisms may hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is taken as input, composed of both color variants and invariants. Then, the proposed method combines and weights these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, the fusion method uses a multi-view approach to minimize the estimation error. In this way, the method is robust to data uncertainty and produces properly diversified color invariant ensembles. Experiments are conducted on three different image datasets to validate the method. From the theoretical and experimental results, it is concluded that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning. Further, the method outperforms state-of-the-art detection techniques in the field of object, skin and road recognition.
Address Miami (USA)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4244-3992-8 Medium
Area Expedition Conference CVPR
Notes ADAS;ISE Approved no
Call Number ADAS @ adas @ AGL2009 Serial 1169
Permanent link to this record
 

 
Author Sergio Escalera; Eloi Puertas; Petia Radeva; Oriol Pujol
Title Multimodal laughter recognition in video conversations Type Conference Article
Year 2009 Publication 2nd IEEE Workshop on CVPR for Human communicative Behavior analysis Abbreviated Journal
Volume Issue Pages 110–115
Keywords
Abstract Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are combined in a sequential classifier. Results on videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology significantly outperforms the results obtained by an AdaBoost classifier.
Address Miami (USA)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
Area Expedition Conference CVPR
Notes MILAB;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ EPR2009c Serial 1188
Permanent link to this record
 

 
Author Sounak Dey; Anguelos Nicolaou; Josep Llados; Umapada Pal
Title Local Binary Pattern for Word Spotting in Handwritten Historical Document Type Conference Article
Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal
Volume Issue Pages 574-583
Keywords Local binary patterns; Spatial sampling; Learning-free; Word spotting; Handwritten; Historical document analysis; Large-scale data
Abstract Digital libraries store images that can be highly degraded, and to index this kind of image we resort to word spotting as our information retrieval system. Information retrieval for handwritten document images is especially challenging due to the difficulties of complex layout analysis, large variations in writing style, and the degradation or low quality of historical manuscripts. This paper presents a simple, innovative, learning-free method for word spotting in large-scale historical documents that combines Local Binary Patterns (LBP) and spatial sampling. This method offers three advantages: first, it operates in a completely learning-free paradigm, which is very different from unsupervised learning methods; second, the computational time is significantly low because the LBP features are very fast to compute; and third, the method can be used in scenarios where annotations are not available. Finally, we compare the results of our proposed retrieval method with other methods in the literature and obtain the best results in the learning-free paradigm.
Address Merida; Mexico; December 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference S+SSPR
Notes DAG; 600.097; 602.006; 603.053 Approved no
Call Number Admin @ si @ DNL2016 Serial 2876
Permanent link to this record
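For readers unfamiliar with the descriptor the record above relies on, here is a minimal sketch of 8-neighbour LBP codes combined with grid-based spatial sampling. The function names and the 3x3 grid are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern codes for a 2-D grayscale array."""
    # Offsets of the 8 neighbours, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:h - 1, 1:w - 1]
    for bit, (dy, dx) in enumerate(offsets):
        # Shifted view of the image aligned with the interior (center) pixels.
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Set one bit per neighbour that is at least as bright as the center.
        codes |= (neighbour >= center).astype(np.uint8) << bit
    return codes

def lbp_histogram(gray, grid=(3, 3)):
    """Concatenated per-cell LBP histograms (the 'spatial sampling' step)."""
    codes = lbp_image(gray)
    gy, gx = grid
    h, w = codes.shape
    feats = []
    for i in range(gy):
        for j in range(gx):
            cell = codes[i * h // gy:(i + 1) * h // gy,
                         j * w // gx:(j + 1) * w // gx]
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            feats.append(hist / max(hist.sum(), 1))  # normalise each cell
    return np.concatenate(feats)
```

Concatenating per-cell histograms preserves coarse spatial layout, which is what allows word images to be matched without any training.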
 

 
Author Juan Ignacio Toledo; Sebastian Sudholt; Alicia Fornes; Jordi Cucurull; A. Fink; Josep Llados
Title Handwritten Word Image Categorization with Convolutional Neural Networks and Spatial Pyramid Pooling Type Conference Article
Year 2016 Publication Joint IAPR International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR) Abbreviated Journal
Volume 10029 Issue Pages 543-552
Keywords Document image analysis; Word image categorization; Convolutional neural networks; Named entity detection
Abstract The extraction of relevant information from historical document collections is one of the key steps in making these documents available for access and search. The usual approach combines transcription and grammars in order to extract semantically meaningful entities. In this paper, we describe a new method to obtain word categories directly from non-preprocessed handwritten word images. The method can be used to extract information directly, providing an alternative to transcription, and can thus serve as a first step in any kind of syntactic analysis. The approach is based on Convolutional Neural Networks with a Spatial Pyramid Pooling layer to deal with the different shapes of the input images. We performed the experiments on a historical marriage record dataset, obtaining promising results.
Address Merida; Mexico; December 2016
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-319-49054-0 Medium
Area Expedition Conference S+SSPR
Notes DAG; 600.097; 602.006 Approved no
Call Number Admin @ si @ TSF2016 Serial 2877
Permanent link to this record
 

 
Author Rahat Khan; Joost Van de Weijer; Dimosthenis Karatzas; Damien Muselet
Title Towards multispectral data acquisition with hand-held devices Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 2053 - 2057
Keywords Multispectral; mobile devices; color measurements
Abstract We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values, which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves by 30-40% over spectral reconstruction based on a single illuminant. Furthermore, we propose to compute sensor-illuminant aware linear bases by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly over basis functions that are independent of the sensor and illuminant. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles, and laptops, opening up applications which are currently considered to be unrealistic.
Address Melbourne; Australia; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes CIC; DAG; 600.048 Approved no
Call Number Admin @ si @ KWK2013b Serial 2265
Permanent link to this record
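The nine-response acquisition scheme described in the abstract above can be sketched as a linear estimation problem. All spectra below are random stand-ins and every name is a hypothetical assumption; the point is only the shape of the computation (3 display primaries x 3 camera channels yield 9 responses, solved for basis coefficients by least squares):

```python
import numpy as np

# Hypothetical spectra sampled at 31 wavelengths (400-700 nm, 10 nm steps).
rng = np.random.default_rng(42)
n_wl, n_basis = 31, 6
illuminants = rng.random((3, n_wl))       # display R, G, B primaries
sensitivities = rng.random((3, n_wl))     # camera R, G, B sensitivities
# Orthonormal reflectance basis (stand-in for a learned linear basis).
basis = np.linalg.qr(rng.random((n_wl, n_basis)))[0]

def responses(reflectance):
    """Forward model: 3 illuminants x 3 channels -> 9 responses per surface."""
    return np.array([sensitivities[c] @ (illuminants[k] * reflectance)
                     for k in range(3) for c in range(3)])

# System matrix mapping basis coefficients to the nine responses.
M = np.stack([responses(basis[:, j]) for j in range(n_basis)], axis=1)

def estimate_reflectance(r9):
    """Least-squares basis coefficients recovered from the nine responses."""
    w, *_ = np.linalg.lstsq(M, r9, rcond=None)
    return basis @ w
```

With nine measurements and six basis coefficients the system is overdetermined, which is what makes the multi-illuminant setup more accurate than estimation under a single illuminant.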
 

 
Author Shida Beigpour; Marc Serra; Joost Van de Weijer; Robert Benavente; Maria Vanrell; Olivier Penacchio; Dimitris Samaras
Title Intrinsic Image Evaluation On Synthetic Complex Scenes Type Conference Article
Year 2013 Publication 20th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 285 - 289
Keywords
Abstract Scene decomposition into its illuminant, shading, and reflectance intrinsic images is an essential step for scene understanding. Collecting intrinsic image ground-truth data is a laborious task. The assumptions on which the ground-truth procedures are based limit their application to simple scenes with a single object taken in the absence of indirect lighting and interreflections. We investigate synthetic data for intrinsic image research, since the extraction of ground truth is straightforward and it allows for scenes in more realistic situations (e.g., multiple illuminants and interreflections). With this dataset we aim to motivate researchers to further explore intrinsic image decomposition in complex scenes.
Address Melbourne; Australia; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICIP
Notes CIC; 600.048; 600.052; 600.051 Approved no
Call Number Admin @ si @ BSW2013 Serial 2264
Permanent link to this record
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
Title Combining Models from Multiple Sources for RGB-D Scene Recognition Type Conference Article
Year 2017 Publication 26th International Joint Conference on Artificial Intelligence Abbreviated Journal
Volume Issue Pages 4523-4529
Keywords Robotics and Vision; Vision and Perception
Abstract Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they use only low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are combined only at high levels and rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets and depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
Address Melbourne; Australia; August 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IJCAI
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ SJH2017b Serial 2966
Permanent link to this record
 

 
Author Miquel Ferrer; Ernest Valveny; F. Serratosa; I. Bardaji; Horst Bunke
Title Graph-based k-means clustering: A comparison of the set versus the generalized median graph Type Conference Article
Year 2009 Publication 13th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 5702 Issue Pages 342–350
Keywords
Abstract In this paper we propose the application of the generalized median graph in a graph-based k-means clustering algorithm. In the graph-based k-means algorithm, the centers of the clusters have traditionally been represented using the set median graph. We propose an approximate method for computing the generalized median graph that allows it to be used to represent the cluster centers. Experiments on three databases show that using the generalized median graph as the cluster representative yields better results than the set median graph.
Address Münster, Germany
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-03766-5 Medium
Area Expedition Conference CAIP
Notes DAG Approved no
Call Number DAG @ dag @ FVS2009d Serial 1219
Permanent link to this record
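As context for the comparison in the last record: the set median graph is simply the collection member minimizing the total distance to all others, whereas the generalized median is searched over the whole graph space. A toy sketch, using a symmetric-difference edge-set distance as a hypothetical stand-in for graph edit distance:

```python
def edge_set_distance(g1, g2):
    """Toy graph distance: size of the symmetric difference of edge sets."""
    return len(set(g1) ^ set(g2))

def set_median(graphs, dist=edge_set_distance):
    """Set median: the member of the collection that minimizes the sum of
    distances to all other members (the classical k-means cluster center)."""
    return min(graphs, key=lambda g: sum(dist(g, h) for h in graphs))
```

The generalized median relaxes the `min` over collection members to a search over arbitrary candidate graphs, which is the (NP-hard) computation the paper approximates.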