Records
Author Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij
Title Texture Affects Color Emotion Type Journal Article
Year 2011 Publication Color Research & Applications Abbreviated Journal CRA
Volume 36 Issue 6 Pages 426–436
Keywords color;texture;color emotion;observer variability;ranking
Abstract Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm–cool, masculine–feminine, hard–soft, and heavy–light. Three sample types of increasing visual complexity are used: UC, grayscale textures, and color textures (CTs). To assess the intraobserver variability, the experiment is repeated after 1 week. Our results show that texture fully determines the responses on the hard–soft scale, and plays a role of decreasing weight for the masculine–feminine, heavy–light, and warm–cool scales. Using some 25,000 observer responses, we derive color emotion functions that predict the group-averaged scale responses from the samples' color and texture parameters. For UC samples, the accuracy of our functions is significantly higher (average R2 = 0.88) than that of previously reported functions applied to our data. The functions derived for CT samples have an accuracy of R2 = 0.80. We conclude that when textured samples are used in color emotion studies, the psychological responses may be strongly affected by texture. © 2010 Wiley Periodicals, Inc. Col Res Appl, 2010
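The color emotion functions mentioned in the abstract map a sample's color (and texture) parameters to a group-averaged scale response and are scored with R2. As an illustration only, here is a minimal least-squares sketch of fitting such a function and reporting R2; the predictor variables and data are hypothetical stand-ins, not the parameters or coefficients reported in the paper.

import numpy as np

def fit_color_emotion_function(features, responses):
    # Least-squares fit of a linear color-emotion function and its R^2.
    # features : (n_samples, n_params) color/texture descriptors
    #            (hypothetical predictors; the paper defines its own).
    # responses: (n_samples,) group-averaged scale responses (e.g. warm-cool).
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add intercept
    coeffs, *_ = np.linalg.lstsq(X, responses, rcond=None)
    predictions = X @ coeffs
    ss_res = np.sum((responses - predictions) ** 2)
    ss_tot = np.sum((responses - responses.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# Toy usage with synthetic data standing in for the measured scale responses.
rng = np.random.default_rng(0)
features = rng.uniform(size=(60, 3))          # e.g. lightness, chroma, hue term
responses = features @ np.array([0.2, -0.5, 0.8]) + rng.normal(0, 0.05, 60)
_, r2 = fit_color_emotion_function(features, responses)
print(f"R^2 of the fitted emotion function: {r2:.2f}")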
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ LGG2011 Serial 1844
Permanent link to this record
 

 
Author G.D. Evangelidis; Ferran Diego; Joan Serrat; Antonio Lopez
Title Slice Matching for Accurate Spatio-Temporal Alignment Type Conference Article
Year 2011 Publication ICCV Workshop on Visual Surveillance Abbreviated Journal
Volume Issue Pages
Keywords video alignment
Abstract Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly-moving, or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily targets visual surveillance, but the method can be adopted as is by other related video applications, such as object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, and then develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare it with related previous work.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VS
Notes ADAS Approved no
Call Number Admin @ si @ EDS2011; ADAS @ adas @ eds2011a Serial 1861
Permanent link to this record
 

 
Author Gemma Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella
Title Hierarchical CRF with product label spaces for parts-based models Type Conference Article
Year 2011 Publication IEEE Conference on Automatic Face and Gesture Recognition Abbreviated Journal
Volume Issue Pages 657-664
Keywords Shape; Computational modeling; Principal component analysis; Random variables; Color; Upper bound; Facial features
Abstract Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction, and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRF) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
Address Santa Barbara, CA, USA, 2011
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference FG
Notes ADAS Approved no
Call Number Admin @ si @ RBT2011 Serial 1862
Permanent link to this record
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell
Title Portmanteau Vocabularies for Multi-Cue Image Representation Type Conference Article
Year 2011 Publication 25th Annual Conference on Neural Information Processing Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
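As a rough illustration of the compound-word idea only: the sketch below pairs independently learned shape-word and color-word assignments into a product vocabulary and accumulates a joint histogram per image. The information-theoretic compression of this large product space into the compact portmanteau vocabulary, which is the paper's actual contribution, is not reproduced, and the vocabulary sizes are arbitrary.

import numpy as np

def product_vocabulary_histogram(shape_words, color_words, n_shape, n_color):
    # Histogram over compound (shape, color) words for one image.
    # shape_words, color_words : per-feature visual-word indices from two
    # independently learned vocabularies. Returns a normalized histogram of
    # length n_shape * n_color (the product space that is later compressed).
    compound = np.asarray(shape_words) * n_color + np.asarray(color_words)
    hist = np.bincount(compound, minlength=n_shape * n_color).astype(float)
    return hist / max(hist.sum(), 1.0)

# Toy usage: 500 local features quantized into 100 shape and 50 color words.
rng = np.random.default_rng(1)
shape_w = rng.integers(0, 100, 500)
color_w = rng.integers(0, 50, 500)
h = product_vocabulary_histogram(shape_w, color_w, n_shape=100, n_color=50)
print(h.shape)   # (5000,) compound words before compression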
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference NIPS
Notes CIC Approved no
Call Number Admin @ si @ KWB2011 Serial 1865
Permanent link to this record
 

 
Author Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin
Title Towards Automatic Concept Transfer Type Conference Article
Year 2011 Publication Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering Abbreviated Journal
Volume Issue Pages 167-176
Keywords chromatic modeling, color concepts, color transfer, concept transfer
Abstract This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
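The mapping between the input image's chromatic model and the target concept model is computed with the Earth Mover's Distance. The paper's models are multi-dimensional signatures obtained by convex clustering, which is not reproduced here; as a much simplified, one-dimensional illustration of the EMD ingredient alone, with hypothetical cluster positions and weights:

import numpy as np
from scipy.stats import wasserstein_distance

# 1-D stand-in for two chromatic models: cluster centers (e.g. hue values)
# with associated weights. The paper's models are multi-dimensional, and the
# EMD flow is what drives the actual color mapping; only the distance is shown.
image_centers   = np.array([0.10, 0.35, 0.70])
image_weights   = np.array([0.50, 0.30, 0.20])
concept_centers = np.array([0.05, 0.55, 0.90])
concept_weights = np.array([0.40, 0.40, 0.20])

emd = wasserstein_distance(image_centers, concept_centers,
                           u_weights=image_weights, v_weights=concept_weights)
print(f"1-D EMD between the two chromatic signatures: {emd:.3f}")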
Address
Corporate Author Thesis
Publisher ACM Press Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0907-3 Medium
Area Expedition Conference NPAR
Notes CIC Approved no
Call Number Admin @ si @ MSM2011 Serial 1866
Permanent link to this record
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Categorical Focal Colours are Structurally Invariant Under Illuminant Changes Type Conference Article
Year 2011 Publication European Conference on Visual Perception Abbreviated Journal
Volume Issue Pages 196
Keywords
Abstract The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations are under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of the colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adapted state as a 3D interconnected structure (graph) in Lab space. The graph nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants. We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Perception 40 Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes CIC Approved no
Call Number Admin @ si @ RPV2011 Serial 1867
Permanent link to this record
 

 
Author Sergio Escalera; Ana Puig; Oscar Amoros; Maria Salamo
Title Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes Type Journal Article
Year 2011 Publication Computer Graphics Forum Abbreviated Journal CGF
Volume 30 Issue 7 Pages 2107-2115
Keywords
Abstract Impact factor (JCR 2010): 1.455; rank 25/99.
In volume visualization, the definition of the regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires a lot of expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label on-demand multiple regions of interest. These methods can be split into a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the machine-learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of Adaboost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples and the outputs are combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism, we implemented the testing stage in GPU-OpenCL. The empirical results on different data sets for several volume structures show high computational performance and classification accuracy.
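The two-stage scheme above follows the standard ECOC recipe: in pre-processing, one AdaBoost binary classifier is trained per column of a code matrix; at test time, each sample's vector of binary outputs is compared against the class codewords. The following CPU-only sketch illustrates that train/decode logic with scikit-learn; the paper's contributions (the alternative classifier representation and the GPU-OpenCL parallel testing stage) are not reproduced, and the code matrix and data are toy stand-ins.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_ecoc(X, y, code_matrix):
    # Pre-processing stage: one AdaBoost binary classifier per ECOC column.
    # code_matrix has shape (n_classes, n_columns) with +1/-1 entries.
    classifiers = []
    for col in range(code_matrix.shape[1]):
        binary_targets = code_matrix[y, col]      # relabel each sample as +1/-1
        clf = AdaBoostClassifier(n_estimators=50)
        clf.fit(X, binary_targets)
        classifiers.append(clf)
    return classifiers

def predict_ecoc(X, classifiers, code_matrix):
    # Testing stage (sequential here): Hamming decoding against the codewords.
    outputs = np.column_stack([clf.predict(X) for clf in classifiers])
    distances = (outputs[:, None, :] != code_matrix[None, :, :]).sum(axis=2)
    return distances.argmin(axis=1)

# Toy usage: exhaustive code for 4 classes (7 dichotomizers), synthetic data.
code = np.array([[ 1,  1,  1,  1,  1,  1,  1],
                 [-1, -1, -1, -1,  1,  1,  1],
                 [-1, -1,  1,  1, -1, -1,  1],
                 [-1,  1, -1,  1, -1,  1, -1]])
rng = np.random.default_rng(0)
y = rng.integers(0, 4, 200)
X = rng.normal(size=(200, 5)) + y[:, None]
classifiers = train_ecoc(X, y, code)
print("training accuracy:", (predict_ecoc(X, classifiers, code) == y).mean())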
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; HuPBA Approved no
Call Number Admin @ si @ EPA2011 Serial 1881
Permanent link to this record
 

 
Author Laura Igual; Joan Carles Soliva; Antonio Hernandez; Sergio Escalera; Xavier Jimenez ; Oscar Vilarroya; Petia Radeva
Title A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder Type Journal Article
Year 2011 Publication BioMedical Engineering Online Abbreviated Journal BEO
Volume 10 Issue 105 Pages 1-23
Keywords Brain caudate nucleus; segmentation; MRI; atlas-based strategy; Graph Cut framework
Abstract Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations.

Method
We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy function data and boundary potentials. In particular, we exploit information concerning the intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure.

Results
We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.

Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1475-925X ISBN Medium
Area Expedition Conference
Notes MILAB;HuPBA Approved no
Call Number Admin @ si @ ISH2011 Serial 1882
Permanent link to this record
 

 
Author Mario Rojas; David Masip; A. Todorov; Jordi Vitria
Title Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models Type Journal Article
Year 2011 Publication PLoS ONE Abbreviated Journal Plos
Volume 6 Issue 8 Pages e23323
Keywords
Abstract JCR Impact Factor 2010: 4.411
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to make interaction more natural and to improve their performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
Address
Corporate Author Thesis
Publisher Public Library of Science Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number Admin @ si @ RMT2011 Serial 1883
Permanent link to this record
 

 
Author Miguel Angel Bautista; Sergio Escalera; Xavier Baro; Oriol Pujol; Jordi Vitria; Petia Radeva
Title On the Design of Low Redundancy Error-Correcting Output Codes Type Book Chapter
Year 2011 Publication Ensembles in Machine Learning Applications Abbreviated Journal
Volume 373 Issue 2 Pages 21-38
Keywords
Abstract The classification of a large number of object categories is a challenging trend in the Pattern Recognition field. In the literature, this is often addressed using an ensemble of classifiers. In this scope, the Error-Correcting Output Codes framework has proven to be a powerful tool for combining classifiers. However, most of the state-of-the-art ECOC approaches use a linear or exponential number of classifiers, making the discrimination of a large number of classes unfeasible. In this paper, we explore and propose a compact design of ECOC in terms of the number of classifiers. Evolutionary computation is used for tuning the parameters of the classifiers and looking for the best compact ECOC code configuration. The results over several public UCI data sets and different multi-class Computer Vision problems show that the proposed methodology obtains comparable (or even better) results than the state-of-the-art ECOC methodologies with a far smaller number of dichotomizers.
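The compact design searched for above is a code matrix with far fewer columns (dichotomizers) than one-vs-one or one-vs-all codings while keeping the class codewords well separated. The paper couples this search with evolutionary tuning of the classifier parameters; the sketch below is only a simplified stand-in that random-searches compact +1/-1 codes for good minimum codeword separation, ignoring the data-dependent and classifier-tuning aspects.

import numpy as np

def min_pairwise_hamming(code):
    # Smallest Hamming distance between any two class codewords.
    best = code.shape[1]
    for i in range(code.shape[0]):
        for j in range(i + 1, code.shape[0]):
            best = min(best, int((code[i] != code[j]).sum()))
    return best

def search_compact_code(n_classes, n_columns, n_trials=2000, seed=0):
    # Random search for a compact +1/-1 code matrix (stand-in for the paper's
    # evolutionary optimization; only codeword separation is optimized here).
    rng = np.random.default_rng(seed)
    best_code, best_score = None, -1
    for _ in range(n_trials):
        code = np.where(rng.random((n_classes, n_columns)) > 0.5, 1, -1)
        # discard degenerate codes with duplicated or constant columns
        if len({tuple(col) for col in code.T}) < n_columns:
            continue
        if any(abs(col.sum()) == n_classes for col in code.T):
            continue
        score = min_pairwise_hamming(code)
        if score > best_score:
            best_code, best_score = code, score
    return best_code, best_score

code, sep = search_compact_code(n_classes=10, n_columns=8)
print(sep)   # minimum Hamming separation achieved with only 8 dichotomizers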
Address
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1860-949X ISBN 978-3-642-22909-1 Medium
Area Expedition Conference
Notes MILAB; OR;HuPBA;MV Approved no
Call Number Admin @ si @ BEB2011b Serial 1886
Permanent link to this record
 

 
Author Miguel Angel Bautista; Oriol Pujol; Xavier Baro; Sergio Escalera
Title Introducing the Separability Matrix for Error Correcting Output Codes Coding Type Conference Article
Year 2011 Publication 10th International Conference on Multiple Classifier Systems Abbreviated Journal
Volume 6713 Issue Pages 227-236
Keywords
Abstract Error Correcting Output Codes (ECOC) have proven to be a powerful tool for treating multi-class problems. Nevertheless, predefined ECOC designs may not benefit from error-correcting principles for particular multi-class data. In this paper, we introduce the Separability matrix as a tool to study and enhance designs for ECOC coding. In addition, a novel problem-dependent coding design based on the Separability matrix is tested over a wide set of challenging multi-class problems, obtaining very satisfactory results.
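One simple reading of the Separability matrix is as the matrix of pairwise distances between class codewords, which exposes the class pairs a given ECOC design separates weakly; the exact definition used in the paper may differ. A minimal sketch for a one-vs-all design, where every pair of codewords differs in only two positions:

import numpy as np

def separability_matrix(code):
    # Pairwise Hamming distances between ECOC class codewords.
    # code : (n_classes, n_dichotomizers) matrix with +1/-1 entries.
    # Low off-diagonal values flag class pairs with weak error correction.
    diff = code[:, None, :] != code[None, :, :]
    return diff.sum(axis=2)

# One-vs-all code for 5 classes: uniformly low separability (distance 2).
one_vs_all = -np.ones((5, 5), dtype=int)
np.fill_diagonal(one_vs_all, 1)
print(separability_matrix(one_vs_all))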
Address Naples, Italy
Corporate Author Thesis
Publisher Springer-Verlag Berlin, Heidelberg Place of Publication Editor Carlo Sansone; Josef Kittler; Fabio Roli
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-21556-8 Medium
Area Expedition Conference MCS
Notes MILAB; OR;HuPBA;MV Approved no
Call Number Admin @ si @ BPB2011b Serial 1887
Permanent link to this record
 

 
Author Ruth Aylett; Ginevra Castellano; Bogdan Raducanu; Ana Paiva; Marc Hanheide
Title Long-term socially perceptive and interactive robot companions: challenges and future perspectives Type Conference Article
Year 2011 Publication 13th International Conference on Multimodal Interaction Abbreviated Journal
Volume Issue Pages 323-326
Keywords human-robot interaction, multimodal interaction, social robotics
Abstract This paper gives a brief overview of the challenges for multimodal perception and generation applied to robot companions located in human social environments. It reviews the current position in both perception and generation, outlines the immediate technical challenges, and goes on to consider the extra issues raised by embodiment and social context. Finally, it briefly discusses the impact of systems that must function continually over months rather than just for a few hours.
Address Alicante
Corporate Author Thesis
Publisher ACM Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0641-6 Medium
Area Expedition Conference ICMI
Notes OR;MV Approved no
Call Number Admin @ si @ ACR2011 Serial 1888
Permanent link to this record
 

 
Author Antonio Hernandez; Carlos Primo; Sergio Escalera
Title Automatic user interaction correction via Multi-label Graph cuts Type Conference Article
Year 2011 Publication 1st IEEE International Workshop on Human Interaction in Computer Vision (HICV), ICCV 2011 Abbreviated Journal
Volume Issue Pages 1276-1281
Keywords
Abstract Most applications in image segmentation require user interaction in order to achieve accurate results. However, the user wants to achieve the desired segmentation accuracy while reducing the effort of manual labelling. In this work, we extend the standard multi-label α-expansion Graph Cut algorithm so that it analyzes the user's interaction in order to modify the object model and improve the final segmentation of objects. The approach is inspired by the fact that fast user interactions may introduce some pixel errors, confusing object and background. Our results with different degrees of user interaction and input errors show the high performance of the proposed approach on a multi-label human limb segmentation problem compared with the classical α-expansion algorithm.
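The premise above is that hurried scribbles can mislabel pixels, so the object (label) models should be re-estimated while discounting scribble pixels that are inconsistent with their own label before the final α-expansion pass. The α-expansion Graph Cut itself is not reproduced here; the following sketch only illustrates one plausible, simplified version of that model-correction step (per-label mean-color models refit while dropping self-inconsistent seeds), which is an assumption and not necessarily the paper's exact update rule.

import numpy as np

def refit_label_models(pixels, labels, n_labels, n_iters=3):
    # Robustly refit per-label mean-color models from possibly noisy scribbles.
    # pixels : (n, 3) scribbled pixel colors; labels : (n,) user-given labels.
    # Each iteration drops scribble pixels better explained by another label's
    # model, then refits the per-label means (a simplified object-model update).
    keep = np.ones(len(labels), dtype=bool)
    for _ in range(n_iters):
        means = [pixels[keep & (labels == k)].mean(axis=0) for k in range(n_labels)]
        dists = np.stack([((pixels - m) ** 2).sum(axis=1) for m in means], axis=1)
        keep = dists.argmin(axis=1) == labels     # keep only self-consistent seeds
    return [pixels[keep & (labels == k)].mean(axis=0) for k in range(n_labels)]

# Toy usage: two labels with 10% of the scribbled pixels accidentally swapped.
rng = np.random.default_rng(3)
fg = rng.normal([0.8, 0.2, 0.2], 0.05, (100, 3))
bg = rng.normal([0.2, 0.2, 0.8], 0.05, (100, 3))
pixels = np.vstack([fg, bg])
labels = np.array([0] * 100 + [1] * 100)
labels[rng.choice(200, 20, replace=False)] ^= 1   # simulated interaction errors
print(refit_label_models(pixels, labels, n_labels=2))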
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4673-0062-9 Medium
Area Expedition Conference HICV
Notes MILAB; HuPBA Approved no
Call Number Admin @ si @ HPE2011 Serial 1892
Permanent link to this record
 

 
Author Miguel Reyes; Gabriel Dominguez; Sergio Escalera
Title Feature Weighting in Dynamic Time Warping for Gesture Recognition in Depth Data Type Conference Article
Year 2011 Publication 1st IEEE Workshop on Consumer Depth Cameras for Computer Vision Abbreviated Journal
Volume Issue Pages 1182-1188
Keywords
Abstract We present a gesture recognition approach for depth video data based on a novel Feature Weighting approach within the Dynamic Time Warping framework. Depth features from human joints are compared through video sequences using Dynamic Time Warping, and weights are assigned to features based on inter-intra class gesture variability. Feature Weighting in Dynamic Time Warping is then applied for recognizing the begin-end of gestures in data sequences. The obtained results recognizing several gestures in depth data show high performance compared with the classical Dynamic Time Warping approach.
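The core computation above is a DTW alignment whose frame-to-frame distance weights each joint feature individually. A compact numpy sketch of weighted DTW between two sequences of per-frame joint-feature vectors follows; the inter-/intra-class learning of the weights and the begin-end gesture spotting are not reproduced, and the weights shown are hypothetical.

import numpy as np

def weighted_dtw(seq_a, seq_b, weights):
    # Dynamic Time Warping with per-feature weights.
    # seq_a : (Ta, d), seq_b : (Tb, d) joint-feature vectors per frame.
    # weights : (d,) non-negative feature weights (learned from inter-/intra-
    # class gesture variability in the paper; given here).
    ta, tb = len(seq_a), len(seq_b)
    cost = np.full((ta + 1, tb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            d = np.sqrt(np.sum(weights * (seq_a[i - 1] - seq_b[j - 1]) ** 2))
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[ta, tb]

# Toy usage: two noisy, differently sampled versions of the same "gesture".
rng = np.random.default_rng(4)
t = np.linspace(0, 1, 40)
seq_a = np.stack([np.sin(2 * np.pi * t), t, t ** 2], axis=1)
seq_b = seq_a[::2] + rng.normal(0, 0.01, (20, 3))
print(weighted_dtw(seq_a, seq_b, weights=np.array([0.6, 0.3, 0.1])))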
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4673-0062-9 Medium
Area Expedition Conference CDC4CV
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ RDE2011 Serial 1893
Permanent link to this record
 

 
Author Michal Drozdzal; Santiago Segui; Petia Radeva; Jordi Vitria; Laura Igual
Title System and Method for Displaying Motility Events in an in Vivo Image Stream Type Patent
Year 2011 Publication US 61/592,786 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Given Imaging
Corporate Author US Patent Office Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR;MV Approved no
Call Number Admin @ si @ DSR2011 Serial 1897
Permanent link to this record