Records
Author Robert Benavente; C. Alejandro Parraga; Maria Vanrell
Title Colour categories boundaries are better defined in contextual conditions Type Journal Article
Year 2009 Publication Perception Abbreviated Journal PER
Volume 38 Issue Pages 36
Keywords
Abstract In a previous experiment [Parraga et al, 2009 Journal of Imaging Science and Technology 53(3)] the boundaries between basic colour categories were measured by asking subjects to categorize colour samples presented in isolation (ie on a dark background) using a YES/NO paradigm. Results showed that some boundaries (eg green – blue) were very diffuse and the subjects' answers showed bimodal distributions, which were attributed to the emergence of non-basic categories in those regions (eg turquoise). To confirm these results, we performed a new experiment focussed on the boundaries where bimodal distributions were more evident. In this new experiment, rectangular colour samples were presented surrounded by random colour patches to simulate contextual conditions on a calibrated CRT monitor. The names of two neighbouring colours were shown at the bottom of the screen and subjects selected the boundary between these colours by controlling the chromaticity of the central patch, sliding it across these categories' frontier. Results show that in this new experimental paradigm, the formerly uncertain inter-colour category boundaries are better defined and the dispersions (ie the bimodal distributions) that occurred in the previous experiment disappear. These results may provide further support for Berlin and Kay's basic colour terms theory.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ BPV2009 Serial 1192
Permanent link to this record
 

 
Author C. Alejandro Parraga; Javier Vazquez; Maria Vanrell
Title A new cone activation-based natural images dataset Type Journal Article
Year 2009 Publication Perception Abbreviated Journal PER
Volume 36 Issue Pages 180
Keywords
Abstract We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which makes it possible to compute (and remove, if necessary) the illuminant's colour and intensity. The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally-narrowband colours. The camera-to-LMS transformation can be recalculated to consider other non-human visual systems. The dataset is available to download from our website.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number CAT @ cat @ PVV2009 Serial 1193
Permanent link to this record
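The record above maps camera RGB to LMS cone activations via a polynomial transformation optimised over Munsell chip reflectances. Below is a minimal sketch of that kind of fit, assuming paired camera responses and Smith–Pokorny LMS values are already available; the function names and the second-order polynomial expansion are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB values (N x 3 -> N x 10)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b,
                     r * g, r * b, g * b], axis=1)

def fit_rgb_to_lms(rgb_samples, lms_samples):
    """Least-squares fit of a polynomial map from camera RGB to LMS.

    rgb_samples: (N, 3) camera responses for a set of reflectances
                 (e.g. Munsell chips under a known illuminant).
    lms_samples: (N, 3) corresponding cone activations.
    Returns a (10, 3) coefficient matrix W such that poly_features(rgb) @ W ~ lms.
    """
    X = poly_features(rgb_samples)
    W, *_ = np.linalg.lstsq(X, lms_samples, rcond=None)
    return W

def rgb_to_lms(rgb, W):
    """Apply the fitted polynomial transform to new RGB data."""
    return poly_features(np.atleast_2d(rgb)) @ W

# Hypothetical usage with random stand-in data:
rgb = np.random.rand(1269, 3)     # camera responses for 1269 chips
lms = np.random.rand(1269, 3)     # corresponding LMS activations
W = fit_rgb_to_lms(rgb, lms)
print(rgb_to_lms(rgb[:5], W))

Refitting W against the cone fundamentals of another species would correspond to the "recalculated for non-human visual systems" remark in the abstract.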
 

 
Author Joost Van de Weijer; Cordelia Schmid; Jakob Verbeek; Diane Larlus
Title Learning Color Names for Real-World Applications Type Journal Article
Year 2009 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 18 Issue 7 Pages 1512–1524
Keywords
Abstract Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes Approved no
Call Number CAT @ cat @ WSV2009 Serial 1195
Permanent link to this record
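The abstract above learns color names from noisy, weakly labelled web images with variants of PLSA. As a hedged illustration of the underlying latent-topic machinery only, here is plain PLSA trained with EM on a document-word count matrix; it is not the paper's adapted variants, and all names and sizes are stand-ins.

import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Plain PLSA via EM on a document-word count matrix.

    counts: (D, W) word counts per document (e.g. quantised colour-feature
            occurrences per image).
    Returns p_w_z (W, Z) word-given-topic and p_z_d (D, Z) topic-given-document.
    """
    rng = np.random.default_rng(seed)
    D, W = counts.shape
    p_w_z = rng.random((W, n_topics)); p_w_z /= p_w_z.sum(axis=0, keepdims=True)
    p_z_d = rng.random((D, n_topics)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (D, W, Z)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        weighted = counts[:, :, None] * joint          # n(d, w) * P(z | d, w)
        # M-step: re-estimate the two conditional distributions
        p_w_z = weighted.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d

# Hypothetical usage: 50 "images", 64 colour words, 11 colour-name topics.
counts = np.random.poisson(1.0, size=(50, 64))
p_w_z, p_z_d = plsa(counts, n_topics=11, n_iter=50)
print(p_z_d[0])   # topic (colour-name) distribution for the first image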
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title Top-Down Color Attention for Object Recognition Type Conference Article
Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 979 - 986
Keywords
Abstract Generally, the bag-of-words based image representation follows a bottom-up paradigm. The subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories when using multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down category-specific attention map. The color attention map is then further deployed to modulate the shape features by taking more features from regions within an image that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets containing varied importance of both cues, namely, Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that in all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
Address Kyoto, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium
Area Expedition Conference ICCV
Notes CIC Approved no
Call Number CAT @ cat @ SWV2009 Serial 1196
Permanent link to this record
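The abstract above modulates shape features with a class-specific colour attention map to build one histogram per category. A minimal sketch of that weighting idea follows; the attention model, variable names and sizes are illustrative assumptions, not the exact formulation in the paper.

import numpy as np

def color_attention_histograms(shape_words, color_words,
                               p_class_given_colorword, n_shape_words):
    """Build one class-specific, attention-weighted shape histogram per class.

    shape_words: (N,) shape visual-word index of each local feature.
    color_words: (N,) color visual-word index of each local feature.
    p_class_given_colorword: (C, K_color) top-down attention, i.e. how strongly
        each color word suggests each class (assumed learned from training data).
    Returns an (C, n_shape_words) array of normalised histograms.
    """
    n_classes = p_class_given_colorword.shape[0]
    hists = np.zeros((n_classes, n_shape_words))
    attention = p_class_given_colorword[:, color_words]   # (C, N) per-feature weights
    for c in range(n_classes):
        np.add.at(hists[c], shape_words, attention[c])    # weighted vote per shape word
    hists /= hists.sum(axis=1, keepdims=True) + 1e-12
    return hists

# Hypothetical usage with random stand-in features:
N, K_shape, K_color, C = 500, 200, 32, 3
shape_w = np.random.randint(0, K_shape, N)
color_w = np.random.randint(0, K_color, N)
attention_model = np.random.dirichlet(np.ones(C), size=K_color).T   # (C, K_color)
H = color_attention_histograms(shape_w, color_w, attention_model, K_shape)
print(H.shape)   # (3, 200): one attention-modulated histogram per class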
 

 
Author Arjan Gijsenij; Theo Gevers; Joost Van de Weijer
Title Physics-based Edge Evaluation for Improved Color Constancy Type Conference Article
Year 2009 Publication 22nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 581 – 588
Keywords
Abstract Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation.
Address Miami, USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1063-6919 ISBN 978-1-4244-3992-8 Medium
Area Expedition Conference CVPR
Notes CAT;ISE Approved no
Call Number CAT @ cat @ GGW2009 Serial 1197
Permanent link to this record
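The abstract above concerns edge-based color constancy, where image derivatives are used to estimate the illuminant. As a hedged illustration of that family of estimators only (a basic grey-edge style estimate with a diagonal correction, not the paper's edge-type evaluation; the Minkowski order and normalisation are illustrative choices):

import numpy as np

def grey_edge_illuminant(image, minkowski_p=6):
    """Basic grey-edge style illuminant estimate from first-order derivatives.

    image: (H, W, 3) float RGB image in [0, 1].
    Returns a unit-norm 3-vector estimating the illuminant colour.
    """
    est = np.zeros(3)
    for c in range(3):
        gy, gx = np.gradient(image[:, :, c])
        mag = np.sqrt(gx ** 2 + gy ** 2)
        # Minkowski norm over this channel's derivative magnitudes
        est[c] = np.mean(mag ** minkowski_p) ** (1.0 / minkowski_p)
    return est / (np.linalg.norm(est) + 1e-12)

def correct_white_balance(image, illuminant):
    """Von Kries style diagonal correction using the estimated illuminant."""
    corrected = image / (illuminant * np.sqrt(3) + 1e-12)
    return np.clip(corrected, 0.0, 1.0)

# Hypothetical usage with a random stand-in image:
img = np.random.rand(120, 160, 3)
ill = grey_edge_illuminant(img)
balanced = correct_white_balance(img, ill)
print(ill)

The paper's point is that shadow, geometry, material and highlight edges contribute differently to such an estimate, so weighting or selecting edge types before pooling can improve it.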
 

 
Author Jose Manuel Alvarez; Ferran Diego; Joan Serrat; Antonio Lopez
Title Automatic Ground-truthing using video registration for on-board detection algorithms Type Conference Article
Year 2009 Publication 16th IEEE International Conference on Image Processing Abbreviated Journal
Volume Issue Pages 4389 - 4392
Keywords
Abstract Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but they support this with experiments that are not quantitatively assessed against any ground-truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for the case of video sequences recorded from a moving camera. The key idea is transferring existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under a different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transformed ground-truth, which proves that our approach is not only feasible but also quite accurate.
Address Cairo, Egypt
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1522-4880 ISBN 978-1-4244-5653-6 Medium
Area Expedition Conference ICIP
Notes ADAS Approved no
Call Number ADAS @ adas @ ADS2009 Serial 1201
Permanent link to this record
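The record above transfers frame segmentations from an annotated reference sequence to a newly recorded sequence of the same route. The sketch below shows only the label-transfer step, under the simplifying assumption that corresponding frames are already paired and that their alignment can be approximated by a single homography (the paper's video registration is more involved); file names and parameters are hypothetical.

import numpy as np
import cv2

def estimate_homography(ref_gray, new_gray):
    """Estimate a homography between two roughly corresponding frames via ORB matches."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(new_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def transfer_labels(ref_mask, H, out_shape):
    """Warp a reference ground-truth mask into the new frame's coordinates."""
    h, w = out_shape
    return cv2.warpPerspective(ref_mask, H, (w, h),
                               flags=cv2.INTER_NEAREST)   # keep labels discrete

# Hypothetical usage; ref_frame.png / new_frame.png / ref_mask.png are stand-ins.
# ref = cv2.imread("ref_frame.png", cv2.IMREAD_GRAYSCALE)
# new = cv2.imread("new_frame.png", cv2.IMREAD_GRAYSCALE)
# mask = cv2.imread("ref_mask.png", cv2.IMREAD_GRAYSCALE)
# H = estimate_homography(ref, new)
# new_mask = transfer_labels(mask, H, new.shape[:2])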
 

 
Author Enric Marti; Jaume Rocarias; Ricardo Toledo; Aura Hernandez-Sabate
Title Caronte: plataforma Moodle con gestion flexible de grupos. Primeras experiencias en asignaturas de Ingenieria Informatica Type Miscellaneous
Year 2009 Publication 15th Jornadas de Enseñanza Universitaria de la Informatica Abbreviated Journal
Volume Issue Pages 461–468
Keywords
Abstract This article presents Caronte, an LMS (Learning Management System) environment based on Moodle. An important feature of the environment is its flexible management of groups within a course. By group we mean a set of students who carry out a piece of work, one of whom submits the proposed activity (assignment, survey, etc.) on behalf of the group. We have worked on how these groups are formed, implementing a password-based enrolment system.
Caronte offers a set of activities built on this notion of group: surveys, tasks (submission of coursework or assignments), self-assessment surveys and questionnaires, among others.
Based on our survey activity, we have defined a Control activity, which provides a degree of electronic feedback from the lecturer on the students' work.
Finally, we present a summary of the experience of using Caronte in Computer Engineering courses during the 2007-08 academic year.
Address Barcelona, Spain
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-692-2758-9 Medium
Area Expedition Conference JENUI
Notes IAM;RV;ADAS Approved no
Call Number IAM @ iam @ MRT2009 Serial 1202
Permanent link to this record
 

 
Author Francesco Ciompi; Oriol Pujol; Oriol Rodriguez-Leor; Angel Serrano; J. Mauri; Petia Radeva
Title On in-vitro and in-vivo IVUS data fusion Type Conference Article
Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 202 Issue Pages 147-156
Keywords
Abstract The design and the validation of an automatic plaque characterization technique based on Intravascular Ultrasound (IVUS) usually require a data ground-truth. The histological analysis of post-mortem coronary arteries is commonly regarded as the state-of-the-art process for the extraction of a reliable data-set of atherosclerotic plaques. Unfortunately, the amount of data provided by this technique is usually small, due to the difficulty of collecting post-mortem cases and to tissue spoiling during histological analysis. In this paper we tackle the process of fusing in-vivo and in-vitro IVUS data, starting with an analysis of recently proposed approaches for the creation of an enhanced IVUS data-set; furthermore, we propose a new approach, named pLDS, based on semi-supervised learning with a data selection criterion. The enhanced data-set obtained with each of the analyzed approaches is used to train a classifier for tissue characterization purposes. Finally, the discriminative power of each classifier is quantitatively assessed and compared by classifying a data-set of validated in-vitro IVUS data.
Address Cardona (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-60750-061-2 Medium
Area Expedition Conference CCIA
Notes MILAB;HuPBA Approved no
Call Number BCNPCL @ bcnpcl @ CPR2009d Serial 1204
Permanent link to this record
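The abstract above enlarges a small in-vitro ground-truth with in-vivo data via semi-supervised learning and a data selection criterion. Below is a minimal sketch of generic confidence-based self-training, not the paper's pLDS method; the classifier, confidence threshold and feature dimensions are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_train(X_labeled, y_labeled, X_unlabeled, confidence=0.95, rounds=5):
    """Iteratively add confidently classified unlabeled samples to the training set."""
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        clf.fit(X, y)
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= confidence        # data selection criterion
        if not keep.any():
            break
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pseudo])
        pool = pool[~keep]
    clf.fit(X, y)
    return clf, X, y

# Hypothetical usage: in-vitro (labeled) and in-vivo (unlabeled) feature vectors.
X_vitro = np.random.rand(100, 20); y_vitro = np.random.randint(0, 3, 100)
X_vivo = np.random.rand(400, 20)
model, X_aug, y_aug = self_train(X_vitro, y_vitro, X_vivo)
print(len(X_aug), "training samples after fusion")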
 

 
Author Nicola Bellotto; Eric Sommerlade; Ben Benfold; Charles Bibby; I. Reid; Daniel Roth; Luc Van Gool; Carles Fernandez; Jordi Gonzalez
Title A Distributed Camera System for Multi-Resolution Surveillance Type Conference Article
Year 2009 Publication 3rd ACM/IEEE International Conference on Distributed Smart Cameras Abbreviated Journal
Volume Issue Pages
Keywords DOI: 10.1109/ICDSC.2009.5289413
Abstract We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this is effected by writing demands into another database table. We show results from a real implementation of the system comprising one static camera overlooking the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance.
Address Como, Italy
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDSC
Notes Approved no
Call Number ISE @ ise @ BSB2009 Serial 1205
Permanent link to this record
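The abstract above coordinates cameras through a central SQL repository: tracking processes insert observations, and a supervisor writes dispatch demands for PTZ cameras into another table. A minimal sketch of that pattern with SQLite follows; the schema, table and column names are illustrative assumptions, not the system's actual database design.

import sqlite3, time

db = sqlite3.connect("surveillance.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS detections (
    ts REAL, camera_id TEXT, target_id INTEGER, x REAL, y REAL, w REAL, h REAL
);
CREATE TABLE IF NOT EXISTS ptz_demands (
    ts REAL, camera_id TEXT, target_id INTEGER, status TEXT
);
""")

def report_detection(camera_id, target_id, bbox):
    """Called by each static-camera tracking process to archive an observation."""
    db.execute("INSERT INTO detections VALUES (?, ?, ?, ?, ?, ?, ?)",
               (time.time(), camera_id, target_id, *bbox))
    db.commit()

def supervisor_dispatch():
    """Supervisor: pick the most recent target and demand a PTZ close-up."""
    row = db.execute(
        "SELECT target_id FROM detections ORDER BY ts DESC LIMIT 1").fetchone()
    if row is not None:
        db.execute("INSERT INTO ptz_demands VALUES (?, ?, ?, ?)",
                   (time.time(), "ptz_1", row[0], "pending"))
        db.commit()

# Hypothetical usage:
report_detection("static_1", target_id=7, bbox=(120.0, 80.0, 40.0, 90.0))
supervisor_dispatch()
print(db.execute("SELECT * FROM ptz_demands").fetchall())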
 

 
Author Mikhail Mozerov; Ariel Amato; Xavier Roca
Title Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image Type Conference Article
Year 2009 Publication 19th International Conference on Computer Graphics and Vision Abbreviated Journal
Volume Issue Pages 69–73
Keywords
Abstract In this paper we propose a method that improves occlusion handling in stereo matching by using trinocular stereo. The main idea is based on the assumption that any region occluded in a matched stereo pair (middle-left images) is generally not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can then be merged into one composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing the high performance of the proposed algorithm, especially in occluded regions. One of the top positions in the Middlebury website ranking confirms that the performance of our method is competitive with the best stereo matching algorithms.
Address Moscow (Russia)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-5-317-02975-3 Medium
Area Expedition Conference GRAPHICON
Notes ISE Approved no
Call Number ISE @ ise @ MAR2009b Serial 1207
Permanent link to this record
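The abstract above merges the disparity space images of the middle-left and middle-right pairs so that pixels occluded in one pair can rely on the other. A minimal sketch of building two cost volumes and combining them per pixel and disparity follows; the absolute-difference cost and the minimum-cost merge rule are illustrative choices rather than the paper's exact composition, and a real system would follow this with global optimisation.

import numpy as np

def absdiff_dsi(ref, target, max_disp, sign):
    """Per-pixel matching cost volume (H, W, D) between ref and a target image.

    sign=+1 shifts the target one way along the scanline, sign=-1 the other,
    matching the left and right neighbours of the middle view respectively.
    """
    H, W = ref.shape
    dsi = np.empty((H, W, max_disp))
    for d in range(max_disp):
        shifted = np.roll(target, sign * d, axis=1)
        dsi[:, :, d] = np.abs(ref - shifted)
    return dsi

def composite_dsi(middle, left, right, max_disp=32):
    """Combine middle-left and middle-right DSIs into one composite volume."""
    dsi_ml = absdiff_dsi(middle, left, max_disp, sign=+1)
    dsi_mr = absdiff_dsi(middle, right, max_disp, sign=-1)
    # Where one pair is occluded its cost is typically high; keep the cheaper match.
    return np.minimum(dsi_ml, dsi_mr)

# Hypothetical usage with random stand-in grayscale images:
m, l, r = (np.random.rand(60, 80) for _ in range(3))
dsi = composite_dsi(m, l, r)
disparity = dsi.argmin(axis=2)     # winner-take-all, for illustration only
print(disparity.shape)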
 

 
Author Jordi Gonzalez; Dani Rowe; Javier Varona; Xavier Roca
Title Understanding Dynamic Scenes based on Human Sequence Evaluation Type Journal Article
Year 2009 Publication Image and Vision Computing Abbreviated Journal IMAVIS
Volume 27 Issue 10 Pages 1433–1444
Keywords Image Sequence Evaluation; High-level processing of monitored scenes; Segmentation and tracking in complex scenes; Event recognition in dynamic scenes; Human motion understanding; Human behaviour interpretation; Natural-language text generation; Realistic demonstrators
Abstract In this paper, a Cognitive Vision System (CVS) is presented, which explains the human behaviour of monitored scenes using natural-language texts. This cognitive analysis of human movements recorded in image sequences is here referred to as Human Sequence Evaluation (HSE) which defines a set of transformation modules involved in the automatic generation of semantic descriptions from pixel values. In essence, the trajectories of human agents are obtained to generate textual interpretations of their motion, and also to infer the conceptual relationships of each agent w.r.t. its environment. For this purpose, a human behaviour model based on Situation Graph Trees (SGTs) is considered, which permits both bottom-up (hypothesis generation) and top-down (hypothesis refinement) analysis of dynamic scenes. The resulting system prototype interprets different kinds of behaviour and reports textual descriptions in multiple languages.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number ISE @ ise @ GRV2009 Serial 1211
Permanent link to this record
 

 
Author Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez
Title Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios Type Conference Article
Year 2009 Publication 12th International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 1499 - 1506
Keywords
Abstract Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows. Consequently, umbra shadows are usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a-priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach in different shadowed materials and illumination conditions.
Address Kyoto, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN 978-1-4244-4420-5 Medium
Area Expedition Conference ICCV
Notes Approved no
Call Number ISE @ ise @ HHM2009 Serial 1213
Permanent link to this record
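The abstract above separates chromatic shadows from moving objects using, among other cues, brightness and chrominance-angle distortions relative to the background. The sketch below computes just those two per-pixel cues against a background image, in the spirit of the distortion measures the abstract mentions; it is not the paper's full gradient-and-colour-cone method, and the thresholds are illustrative assumptions.

import numpy as np

def shadow_cues(frame, background, eps=1e-6):
    """Per-pixel brightness distortion and chrominance angle w.r.t. the background.

    frame, background: (H, W, 3) float RGB images.
    Returns (alpha, angle): alpha < 1 indicates darkening (a shadow candidate),
    and a small angle means the chromaticity barely changed.
    """
    f = frame.reshape(-1, 3)
    b = background.reshape(-1, 3)
    # Brightness distortion: scale that best aligns the pixel with the background colour.
    alpha = (f * b).sum(axis=1) / ((b * b).sum(axis=1) + eps)
    # Chrominance angle between the pixel and background colour vectors.
    cos = (f * b).sum(axis=1) / (np.linalg.norm(f, axis=1) * np.linalg.norm(b, axis=1) + eps)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))
    shape = frame.shape[:2]
    return alpha.reshape(shape), angle.reshape(shape)

def shadow_candidates(frame, background, alpha_range=(0.3, 0.95), max_angle=0.1):
    """Mark pixels that are darker than the background but keep its chromaticity."""
    alpha, angle = shadow_cues(frame, background)
    return (alpha > alpha_range[0]) & (alpha < alpha_range[1]) & (angle < max_angle)

# Hypothetical usage with random stand-in frames:
bg = np.random.rand(90, 120, 3)
fr = bg * 0.6                      # a uniformly darkened copy: pure "shadow"
print(shadow_candidates(fr, bg).mean())   # close to 1.0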
 

 
Author D. Jayagopi; Bogdan Raducanu; D. Gatica-Perez
Title Characterizing conversational group dynamics using nonverbal behaviour Type Conference Article
Year 2009 Publication 10th IEEE International Conference on Multimedia and Expo Abbreviated Journal
Volume Issue Pages 370–373
Keywords
Abstract This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that, depending on the objectives of a group, its dynamics are different. For example, a competitive meeting has a different objective from that of a collaborative meeting. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one being cooperative-type and the other being competitive-type). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
Address New York, USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1945-7871 ISBN 978-1-4244-4290-4 Medium
Area Expedition Conference ICME
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ JRG2009 Serial 1217
Permanent link to this record
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application Type Journal Article
Year 2009 Publication IEEE Transactions on Systems, Man, and Cybernetics, Part B Abbreviated Journal TSMCB
Volume 39 Issue 4 Pages 935–944
Keywords
Abstract Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods (initialization and tracking) to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ DoR2009a Serial 1218
Permanent link to this record
 

 
Author Oriol Ramos Terrades; Ernest Valveny; Salvatore Tabbone
Title Optimal Classifier Fusion in a Non-Bayesian Probabilistic Framework Type Journal Article
Year 2009 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 31 Issue 9 Pages 1630–1644
Keywords
Abstract The combination of the output of classifiers has been one of the strategies used to improve classification rates in general purpose classification systems. Some of the most common approaches can be explained using Bayes' formula. In this paper, we tackle the problem of the combination of classifiers using a non-Bayesian probabilistic framework. This approach permits us to derive two linear combination rules that minimize misclassification rates under some constraints on the distribution of classifiers. In order to show the validity of this approach, we have compared it with other popular combination rules from a theoretical viewpoint using a synthetic data set, and experimentally using two standard databases: the MNIST handwritten digit database and the GREC symbol database. Results on the synthetic data set show the validity of the theoretical approach. Indeed, results on real data show that the proposed methods outperform other common combination schemes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number DAG @ dag @ RVT2009 Serial 1220
Permanent link to this record
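The abstract above derives linear combination rules for fusing classifier outputs. As a hedged illustration of the general form such a rule takes (a weighted sum of per-classifier posterior estimates followed by an argmax), the sketch below uses validation accuracies as weights, which is an illustrative choice rather than the optimal weights derived in the paper.

import numpy as np

def linear_fusion(posteriors, weights):
    """Fuse classifier outputs with a linear combination rule.

    posteriors: list of (N, C) arrays, one per classifier, rows summing to 1.
    weights: (L,) non-negative weights, one per classifier.
    Returns fused (N, C) scores and the predicted class index per sample.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / (weights.sum() + 1e-12)
    fused = sum(w * p for w, p in zip(weights, posteriors))
    return fused, fused.argmax(axis=1)

# Hypothetical usage: three classifiers, 5 samples, 4 classes.
rng = np.random.default_rng(0)
posts = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]
val_acc = [0.82, 0.77, 0.90]       # stand-in validation accuracies as weights
scores, pred = linear_fusion(posts, val_acc)
print(pred)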