Xavier Boix. (2009). Learning Conditional Random Fields for Stereo (Vol. 136). Master's thesis, Bellaterra, Barcelona.
|
Shida Beigpour. (2009). Physics-based Reflectance Estimation Applied to Recoloring (Vol. 137). Master's thesis, Bellaterra, Barcelona.
|
Jaume Gibert. (2009). Learning Structural Representations and Graph Matching Paradigms in the Context of Object Recognition (Vol. 143). Master's thesis.
|
Jose Carlos Rubio. (2009). Graph Matching Based on Graphical Models with Application to Vehicle Tracking and Classification at Night (Vol. 144). Master's thesis, Bellaterra, Barcelona.
|
Farshad Nourbakhsh. (2009). Colour Logo Recognition (Vol. 145). Master's thesis, Bellaterra, Barcelona.
|
Enric Sala. (2009). Off-line Person-dependent Signature Verification (Vol. 146). Master's thesis, Bellaterra, Barcelona.
|
Wenjuan Gong. (2009). Action Priors for Human Pose Tracking by Particle Filter. Master's thesis, Bellaterra, Barcelona.
|
Diego Alejandro Cheda. (2009). Monocular Egomotion Estimation for ADAS Application (Vol. 148). Ph.D. thesis, Bellaterra, Barcelona.
|
Javier Marin. (2009). Virtual Learning for Real Testing (Vol. 150). Master's thesis, Bellaterra, Barcelona.
|
Fernando Vilariño, Stephan Ameling, Gerard Lacey, Stephen Patchett, & Hugh Mulcahy. (2009). Eye Tracking Search Patterns in Expert and Trainee Colonoscopists: A Novel Method of Assessing Endoscopic Competency? Gastrointestinal Endoscopy, 69(5), 370.
|
Mirko Arnold, Anarta Ghosh, Gerard Lacey, Stephen Patchett, & Hugh Mulcahy. (2009). Indistinct Frame Detection in Colonoscopy Videos. In Machine Vision and Image Processing Conference (pp. 47–52).
|
Stefan Ameling, Stephan Wirth, Dietrich Paulus, Gerard Lacey, & Fernando Vilariño. (2009). Texture-based Polyp Detection in Colonoscopy. In Proc. Bildverarbeitung für die Medizin.
|
Fernando Vilariño, & Gerard Lacey. (2009). Quality Assessment in Colonoscopy: New Challenges through Computer Vision-based Systems. In Proc. 3rd International Conference on Biomedical Electronics and Devices.
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2009). Top-Down Color Attention for Object Recognition. In 12th International Conference on Computer Vision (pp. 979–986).
Abstract: Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process — feature detection, feature description, vocabulary construction and image representation — are performed independently of the intended object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cue. Color is used to guide attention by means of a top-down category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
|
Joost Van de Weijer, Cordelia Schmid, Jakob Verbeek, & Diane Larlus. (2009). Learning Color Names for Real-World Applications. IEEE Transactions on Image Processing, 18(7), 1512–1524.
Abstract: Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand-labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation.
|