Records
Author Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij
Title Texture Affects Color Emotion Type Journal Article
Year 2011 Publication Color Research & Applications Abbreviated Journal CRA
Volume 36 Issue 6 Pages 426–436
Keywords color;texture;color emotion;observer variability;ranking
Abstract Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm–cool, masculine–feminine, hard–soft, and heavy–light. Three sample types of increasing visual complexity are used: UC, grayscale textures, and color textures (CTs). To assess the intraobserver variability, the experiment is repeated after 1 week. Our results show that texture fully determines the responses on the hard–soft scale, and plays a role of decreasing weight for the masculine–feminine, heavy–light, and warm–cool scales. Using some 25,000 observer responses, we derive color emotion functions that predict the group-averaged scale responses from the samples' color and texture parameters. For UC samples, the accuracy of our functions is significantly higher (average R² = 0.88) than that of previously reported functions applied to our data. The functions derived for CT samples have an accuracy of R² = 0.80. We conclude that when textured samples are used in color emotion studies, the psychological responses may be strongly affected by texture. © 2010 Wiley Periodicals, Inc. Col Res Appl, 2010
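To make the modelling step above concrete, here is a minimal sketch of fitting one such color emotion function by least squares. The features (CIELAB coordinates plus a scalar texture-energy term), the data, and the linear form are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

# Illustrative samples: CIELAB color plus a scalar texture-energy term,
# with a made-up group-averaged response on one scale (e.g. warm-cool).
X = np.array([
    [55.0,  40.0,  30.0, 0.10],   # L*, a*, b*, texture energy
    [70.0, -20.0,  15.0, 0.50],
    [30.0,  10.0, -40.0, 0.85],
    [62.0,   5.0,  55.0, 0.25],
    [45.0, -35.0, -10.0, 0.60],
    [80.0,  25.0,   5.0, 0.05],
])
y = np.array([0.8, -0.3, -0.9, 0.6, -0.5, 0.4])

# Fit a linear color emotion function by least squares.
A = np.hstack([X, np.ones((len(X), 1))])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# R^2, the accuracy measure quoted in the abstract.
pred = A @ coef
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(coef.round(3), round(r2, 3))
```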
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ LGG2011 Serial 1844
 

 
Author Albert Ali Salah; E. Pauwels; R. Tavenard; Theo Gevers
Title T-Patterns Revisited: Mining for Temporal Patterns in Sensor Data Type Journal Article
Year 2010 Publication Sensors Abbreviated Journal SENS
Volume 10 Issue 8 Pages 7496-7513
Keywords sensor networks; temporal pattern extraction; T-patterns; Lempel-Ziv; Gaussian mixture model; MERL motion data
Abstract The trend towards using large numbers of simple sensors, as opposed to a few complex ones, to monitor places and systems creates a need for temporal pattern mining algorithms that work on such data. Existing methods that try to discover reusable and interpretable patterns in temporal event data have several shortcomings. We contrast several recent approaches to the problem and extend the T-pattern algorithm, which was previously applied for the detection of sequential patterns in the behavioural sciences. The temporal complexity of the T-pattern approach is prohibitive in the scenarios we consider. We remedy this with a statistical model, obtaining a fast and robust algorithm for finding patterns in temporal data. We test our algorithm on a recent database, collected with passive infrared sensors, that contains millions of events.
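As a rough illustration of the T-pattern idea, the sketch below tests whether events of type B follow events of type A within a critical interval more often than a uniform-rate null model predicts. The event stream, the interval bounds, and the z-score test are made-up assumptions, not the paper's statistical model:

```python
import numpy as np

def tpattern_score(events, a, b, d1, d2, horizon):
    """Z-score for 'B follows A within [d1, d2]' against a
    uniform-rate null model. events: iterable of (time, label)."""
    ta = np.array([t for t, l in events if l == a])
    tb = np.array([t for t, l in events if l == b])
    # Observed: number of A occurrences followed by a B in the window.
    hits = sum(np.any((tb >= t + d1) & (tb <= t + d2)) for t in ta)
    # Null: B events arrive uniformly at rate len(tb)/horizon, so a window
    # of length (d2 - d1) contains at least one B with probability:
    p0 = 1.0 - np.exp(-len(tb) / horizon * (d2 - d1))
    mean, var = len(ta) * p0, len(ta) * p0 * (1 - p0)
    return (hits - mean) / np.sqrt(var) if var > 0 else 0.0

# Toy event stream: B reliably follows A after ~2 time units.
rng = np.random.default_rng(0)
stream = []
for t in rng.uniform(0, 1000, 200):
    stream.append((t, "A"))
    stream.append((t + 2 + rng.normal(0, 0.1), "B"))
print(tpattern_score(sorted(stream), "A", "B", 1.5, 2.5, horizon=1000))
```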
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ SPT2010 Serial 1845
 

 
Author K.E.A. van de Sande; Theo Gevers; C.G.M. Snoek
Title Evaluating Color Descriptors for Object and Scene Recognition Type Journal Article
Year 2010 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 32 Issue 9 Pages 1582-1596
Keywords
Abstract Impact factor: 5.308
Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
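One of the transformations underlying the descriptors evaluated here is the opponent color space used by OpponentSIFT; a minimal sketch of that transformation, applied to a random stand-in image, is:

```python
import numpy as np

def to_opponent(rgb):
    """Map an H x W x 3 RGB image to the opponent color space:
    O1 = (R - G)/sqrt(2)        (red-green channel)
    O2 = (R + G - 2B)/sqrt(6)   (yellow-blue channel)
    O3 = (R + G + B)/sqrt(3)    (intensity channel)"""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    return np.stack([o1, o2, o3], axis=-1)

# OpponentSIFT then computes a SIFT descriptor on each of the three
# channels; O1 and O2 are shift-invariant to light intensity changes.
img = np.random.rand(8, 8, 3)  # stand-in for a real image patch
print(to_opponent(img).shape)  # (8, 8, 3)
```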
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ SGS2010 Serial 1846
 

 
Author J. Stöttinger; A. Hanbury; N. Sebe; Theo Gevers
Title Sparse Color Interest Points for Image Retrieval and Object Categorization Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 5 Pages 2681-2692
Keywords
Abstract Impact factor 2010: 2.92
IF 2011/2012?: 3.32
Interest point detection is an important research area in the field of image processing and computer vision. In particular, image retrieval and object categorization heavily rely on interest point detection from which local image descriptors are computed for image matching. In general, interest points are based on luminance, and color has been largely ignored. However, the use of color increases the distinctiveness of interest points. The use of color may therefore provide selective search reducing the total number of interest points used for image matching. This paper proposes color interest points for sparse image representation. To reduce the sensitivity to varying imaging conditions, light-invariant interest points are introduced. Color statistics based on occurrence probability lead to color boosted points, which are obtained through saliency-based feature selection. Furthermore, a principal component analysis-based scale selection method is proposed, which gives a robust scale estimation per interest point. From large-scale experiments, it is shown that the proposed color interest point detector has higher repeatability than a luminance-based one. Furthermore, in the context of image retrieval, a reduced and predictable number of color features show an increase in performance compared to state-of-the-art interest points. Finally, in the context of object recognition, for the Pascal VOC 2007 challenge, our method gives comparable performance to state-of-the-art methods using only a small fraction of the features, reducing the computing time considerably.
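Color interest points of this kind are commonly built on a Harris-style second-moment matrix accumulated over color channels. The sketch below is a simplified stand-in for such a detector; it implements neither the paper's light-invariant points, nor its color boosting, nor the PCA-based scale selection:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def color_harris(img, sigma=2.0, k=0.04):
    """Harris corner energy with gradients summed over color channels."""
    jxx = jyy = jxy = 0.0
    for c in range(img.shape[2]):
        ix = sobel(img[..., c], axis=1)
        iy = sobel(img[..., c], axis=0)
        jxx = jxx + gaussian_filter(ix * ix, sigma)
        jyy = jyy + gaussian_filter(iy * iy, sigma)
        jxy = jxy + gaussian_filter(ix * iy, sigma)
    det = jxx * jyy - jxy ** 2
    trace = jxx + jyy
    return det - k * trace ** 2  # local maxima = candidate interest points

img = np.random.rand(64, 64, 3)  # stand-in for a real image
print(color_harris(img).shape)
```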
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ SHS2012 Serial 1847
 

 
Author R. Valenti; N. Sebe; Theo Gevers
Title What are you looking at? Improving Visual Gaze Estimation by Saliency Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 98 Issue 3 Pages 324-334
Keywords
Abstract Impact factor 2010: 5.15
Impact factor 2011/12?: 5.36
In this paper we present a novel mechanism to obtain enhanced gaze estimation for subjects looking at a scene or an image. The system makes use of prior knowledge about the scene (e.g. an image on a computer screen), to define a probability map of the scene the subject is gazing at, in order to find the most probable location. The proposed system helps in correcting the fixations which are erroneously estimated by the gaze estimation device by employing a saliency framework to adjust the resulting gaze point vector. The system is tested on three scenarios: using eye tracking data, enhancing a low accuracy webcam based eye tracker, and using a head pose tracker. The correlation between the subjects in the commercial eye tracking data is improved by an average of 13.91%. The correlation on the low accuracy eye gaze tracker is improved by 59.85%, and for the head pose tracker we obtain an improvement of 10.23%. These results show the potential of the system as a way to enhance and self-calibrate different visual gaze estimation systems.
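The correction step can be pictured as snapping a noisy gaze estimate to the most probable location in a saliency map around it. A minimal sketch under that assumption (the window radius and the simple argmax rule are illustrative, not the paper's probabilistic framework):

```python
import numpy as np

def correct_gaze(gaze_xy, saliency, radius=20):
    """Move a gaze estimate to the saliency maximum in a local window."""
    x, y = gaze_xy
    h, w = saliency.shape
    x0, x1 = max(0, x - radius), min(w, x + radius + 1)
    y0, y1 = max(0, y - radius), min(h, y + radius + 1)
    window = saliency[y0:y1, x0:x1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return (x0 + dx, y0 + dy)

saliency = np.zeros((480, 640))
saliency[200, 300] = 1.0                   # a single salient spot
print(correct_gaze((310, 205), saliency))  # -> (300, 200)
```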
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VSG2012 Serial 1848
 

 
Author R. Valenti; Theo Gevers
Title Accurate Eye Center Location through Invariant Isocentric Patterns Type Journal Article
Year 2012 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 34 Issue 9 Pages 1785-1798
Keywords
Abstract Impact factor 2010: 5.308
Impact factor 2011/12?: 5.96
Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and impossible to use on standard (i.e., visible wavelength), low-resolution images of eyes. Systems based solely on appearance are proposed in the literature, but their accuracy does not allow us to accurately locate and distinguish eye center movements in these low-resolution settings. Our aim is to bridge this gap by locating the center of the eye within the area of the pupil on low-resolution images taken from a webcam or a similar device. The proposed method makes use of isophote properties to gain invariance to linear lighting changes (contrast and brightness), to achieve in-plane rotational invariance, and to keep computational costs low. To further gain scale invariance, the approach is applied to a scale-space pyramid. In this paper, we extensively test our approach for its robustness to changes in illumination, head pose, scale, occlusion, and eye rotation. We demonstrate that our system can achieve a significant improvement in accuracy over state-of-the-art techniques for eye center location in standard low-resolution imagery.
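The isophote machinery can be sketched as a voting scheme: each pixel casts a vote at the estimated centre of its local isophote via the displacement D = -∇I (Ix² + Iy²) / (Iy²Ixx - 2IxIxyIy + Ix²Iyy), and the accumulator peak is taken as the eye centre. The simplified version below (gradient-magnitude weighting, no scale-space pyramid or post-processing) is an illustration, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def isocenter_votes(img, sigma=1.0):
    """Accumulate isophote-centre votes; the peak estimates the eye centre."""
    img = gaussian_filter(img.astype(float), sigma)
    iy, ix = np.gradient(img)
    ixx = np.gradient(ix, axis=1)
    ixy = np.gradient(ix, axis=0)
    iyy = np.gradient(iy, axis=0)
    mag2 = ix**2 + iy**2
    denom = iy**2 * ixx - 2 * ix * ixy * iy + ix**2 * iyy
    valid = np.abs(denom) > 1e-8
    safe = np.where(valid, denom, 1.0)
    dx = np.where(valid, -ix * mag2 / safe, 0.0)  # displacement to isophote centre
    dy = np.where(valid, -iy * mag2 / safe, 0.0)
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    cx = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
    cy = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
    acc = np.zeros_like(img)
    np.add.at(acc, (cy[valid], cx[valid]), np.sqrt(mag2[valid]))
    return acc

# Toy test: a dark disc ("pupil") on a bright background.
ys, xs = np.mgrid[0:40, 0:40]
pupil = ((xs - 20) ** 2 + (ys - 20) ** 2 > 36).astype(float)
votes = isocenter_votes(pupil)
print(np.unravel_index(votes.argmax(), votes.shape))  # expected near (20, 20)
```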
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VaG 2012a Serial 1849
 

 
Author R. Valenti; Theo Gevers
Title Combining Head Pose and Eye Location Information for Gaze Estimation Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 802-815
Keywords
Abstract Impact factor 2010: 2.92
Impact factor 2011/12?: 3.32
Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VaG 2012b Serial 1851
 

 
Author Arjan Gijsenij; R. Lu; Theo Gevers; De Xu
Title Color Constancy for Multiple Light Sources Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 697-707
Keywords
Abstract Impact factor 2010: 2.92
Impact factor 2011/2012?: 3.32
Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across scenes. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with an error reduction of up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to global color constancy methods.
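A minimal sketch of the local strategy described above, with gray-world as a stand-in illuminant estimator on a fixed patch grid and a per-patch von Kries (diagonal) correction; the robust combination of local estimates described in the paper is omitted:

```python
import numpy as np

def local_gray_world(img, grid=4):
    """Gray-world color constancy applied per patch rather than globally:
    estimate an illuminant for each patch, then correct that patch with
    a diagonal (von Kries) transform."""
    h, w, _ = img.shape
    out = np.empty_like(img, dtype=float)
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            patch = img[ys, xs].astype(float)
            ill = patch.reshape(-1, 3).mean(axis=0)  # gray-world estimate
            ill = ill / ill.mean()                   # normalize intensity out
            out[ys, xs] = patch / ill                # diagonal correction
    return np.clip(out, 0.0, 1.0)

img = np.random.rand(64, 64, 3)  # stand-in for a two-illuminant scene
print(local_gray_world(img).shape)
```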
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ GLG2012a Serial 1852
 

 
Author Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers
Title A Statistical Method for 2D Facial Landmarking Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 844-858
Keywords
Abstract IF = 3.32
Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to correctly function. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on the average, which improves the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beard and mustache. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
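The approach scores candidate locations by the likelihood of local Gabor responses under a mixture model; a compact sketch of that idea follows. The filter-bank parameters and the use of scikit-learn's GaussianMixture are illustrative assumptions, and the paper's shape prior and coarse-to-fine search are omitted:

```python
import numpy as np
from scipy.ndimage import convolve
from sklearn.mixture import GaussianMixture

def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
    """A single real Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, n_orient=4):
    """Stack Gabor responses into a per-pixel feature vector."""
    resp = [convolve(img, gabor_kernel(t)) for t in
            np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.stack(resp, axis=-1).reshape(-1, n_orient)

# Fit a mixture model on features sampled around a landmark in training
# data, then evaluate the log-likelihood map on a test image.
train_img = np.random.rand(32, 32)  # stand-in for training patches
gmm = GaussianMixture(n_components=3, random_state=0)
gmm.fit(gabor_features(train_img))

test_img = np.random.rand(32, 32)
loglik = gmm.score_samples(gabor_features(test_img)).reshape(32, 32)
print(np.unravel_index(loglik.argmax(), loglik.shape))  # best candidate
```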
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ DSG 2012 Serial 1853
 

 
Author Francesc Tanarro Marquez; Pau Gratacos Marti; F. Javier Sanchez; Joan Ramon Jimenez Minguell; Coen Antens; Enric Sala i Esteva
Title A device for monitoring condition of a railway supply Type Patent
Year 2012 Publication EP 2 404 777 A1 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The device is intended to monitor the condition of a railway supply line when the supply line is in contact with the head of a pantograph of a vehicle, in order to power said vehicle. The device includes a camera for monitoring parameters indicative of the operating capability of said supply line, and a reflective element comprising a pattern, intended to be arranged on the pantograph head. The camera is intended to be arranged on the vehicle (10) so as to register the position of the pattern with respect to the vertical direction.
Address
Corporate Author ALSTOM Transport SA Thesis
Publisher European Patent Office Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV Approved no
Call Number IAM @ iam @ MMS2012 Serial 1854
 

 
Author Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Sergio Escalera; Xavier Baro; Oriol Pujol; Cecilio Angulo
Title Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D Type Journal Article
Year 2014 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 50 Issue 1 Pages 112-121
Keywords RGB-D; Bag-of-Words; Dynamic Time Warping; Human Gesture Recognition
Abstract PATREC5825
We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined via late fusion. The method is integrated into a Human Gesture Recognition pipeline, together with a novel probability-based Dynamic Time Warping (PDTW) algorithm, which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian-Mixture-Model-driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline on a public data set show better performance in comparison to both the standard BoVW model and the standard DTW approach.
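For reference, the classic DTW recurrence that the probability-based variant builds on can be sketched as follows. The per-frame cost here is plain Euclidean distance; PDTW, as described above, would instead derive the cost from a GMM fitted to samples of the gesture class:

```python
import numpy as np

def dtw(seq_a, seq_b, cost=lambda a, b: np.linalg.norm(a - b)):
    """Standard dynamic time warping between two feature sequences."""
    n, m = len(seq_a), len(seq_b)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = cost(seq_a[i - 1], seq_b[j - 1])
            d[i, j] = c + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m]

# Toy gesture templates: similar trajectories at different speeds.
a = np.array([[0.0], [1.0], [2.0], [3.0]])
b = np.array([[0.0], [0.5], [1.0], [2.0], [3.0]])
print(dtw(a, b))  # small distance despite different lengths
```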
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MV; 605.203 Approved no
Call Number Admin @ si @ HBP2014 Serial 2353
 

 
Author G.D. Evangelidis; Ferran Diego; Joan Serrat; Antonio Lopez
Title Slice Matching for Accurate Spatio-Temporal Alignment Type Conference Article
Year 2011 Publication In ICCV Workshop on Visual Surveillance Abbreviated Journal
Volume Issue Pages
Keywords video alignment
Abstract Video synchronization and alignment is a relatively recent topic in computer vision, usually dealing with the problem of aligning sequences recorded simultaneously by static, jointly moving, or independently moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise, thus allowing their differences to be highlighted automatically. This primarily targets visual surveillance, but the method can be adopted as-is by related video applications, such as object transfer (augmented reality) or high-dynamic-range video. We build upon a slice matching scheme to first synchronize the sequences, and we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous works.
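The synchronization stage can be pictured as matching compact per-frame descriptors (e.g. a column slice of each frame) across the two sequences to recover a temporal offset. The sketch below assumes a constant offset and correlation-based matching, which is a simplification of the paper's scheme:

```python
import numpy as np

def sync_offset(desc_a, desc_b, max_shift=30):
    """Find the temporal shift that best aligns two sequences of
    per-frame slice descriptors (rows = frames)."""
    best, best_shift = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(len(desc_a), len(desc_b) + s)
        if hi - lo < 2:
            continue
        a = desc_a[lo:hi].ravel()
        b = desc_b[lo - s:hi - s].ravel()
        score = np.corrcoef(a, b)[0, 1]  # normalized cross-correlation
        if score > best:
            best, best_shift = score, s
    return best_shift

# Toy sequences: b lags a by 7 frames.
rng = np.random.default_rng(1)
a = rng.random((100, 16))  # 100 frames, 16-D slice descriptor each
b = a[7:90]                # delayed, shorter copy
print(sync_offset(a, b))   # -> 7
```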
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VS
Notes ADAS Approved no
Call Number Admin @ si @ EDS2011; ADAS @ adas @ eds2011a Serial 1861
 

 
Author G. Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella
Title Hierarchical CRF with Product Label Spaces for Parts-based Models Type Conference Article
Year 2011 Publication IEEE Conference on Automatic Face and Gesture Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications, such as image search, surveillance, human-computer interaction, or image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRF) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g. facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference FG
Notes ADAS Approved no
Call Number Admin @ si @ RBT2011 Serial 1862
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Modulating Shape Features by Color Attention for Object Recognition Type Journal Article
Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV
Volume 98 Issue 1 Pages 49-64
Keywords
Abstract Bag-of-words based image representation is a successful approach for object recognition. Generally, the successive stages of the process (feature detection, feature description, vocabulary construction, and image representation) are performed independently of the object classes to be detected. In such a framework, the combination of different image cues, such as shape and color, has often been found to yield below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by processing the shape and color cues separately and combining them by modulating the shape features with category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, these color attention maps are used to modulate the weights of the shape features: in regions with higher attention, shape features are given more weight than in regions with low attention. We compare our approach with existing methods that combine color and shape cues on five data sets in which the importance of the two cues varies: Soccer (color predominance), Flower (color and shape parity), PASCAL VOC 2007 and 2009 (shape predominance), and Caltech-101 (color co-interference). The experiments clearly demonstrate that on all five data sets our proposed framework significantly outperforms existing methods for combining color and shape information.
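The modulation step itself is easy to picture: each shape feature's contribution to the bag-of-words histogram is weighted by the color attention value at its image position. A small sketch on made-up data:

```python
import numpy as np

def attention_bow(word_ids, positions, attention, vocab_size):
    """Bag-of-words histogram where each quantized shape feature is
    weighted by the color attention map value at its position."""
    hist = np.zeros(vocab_size)
    for w, (y, x) in zip(word_ids, positions):
        hist[w] += attention[y, x]  # more weight in attended regions
    s = hist.sum()
    return hist / s if s > 0 else hist

attention = np.random.rand(48, 48)         # category-specific attention map
word_ids = np.random.randint(0, 100, 50)   # quantized shape features
positions = np.random.randint(0, 48, (50, 2))
print(attention_bow(word_ids, positions, attention, vocab_size=100).sum())  # 1.0
```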
Address
Corporate Author Thesis
Publisher Springer Netherlands Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0920-5691 ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ KWV2012 Serial 1864
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell
Title Portmanteau Vocabularies for Multi-Cue Image Representation Type Conference Article
Year 2011 Publication 25th Annual Conference on Neural Information Processing Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent, joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference NIPS
Notes CIC Approved no
Call Number Admin @ si @ KWB2011 Serial 1865