|
Javier Vazquez, C. Alejandro Parraga, & Maria Vanrell. (2009). Ordinal pairwise method for natural images comparison. PER - Perception, 38(Suppl.), 180. (ECVP Abstract Supplement)
Abstract: We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the more natural of the two. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing on a 6-button IR box, which allowed observers not only to choose the more natural image but also to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (eg, much more, definitely more, slightly more), which gave us valuable extra information about the selection process. The results were analysed considering the selection both as a binary choice (using Thurstone's law of comparative judgement) and using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used in colour constancy algorithm comparisons, its uses are much wider, eg to compare algorithms for image compression, rendering, recolouring, etc.
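Both analyses mentioned above reduce to scaling items from pairwise preferences. As a minimal illustration of the Bradley-Terry side, the sketch below fits strengths to a hypothetical win-count matrix using the standard MM updates; the data, variable names and three "algorithms" are invented for illustration and are not from the study.

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i, j] = number of times item i was preferred over item j.
    Uses the classic MM update p_i <- W_i / sum_j n_ij / (p_i + p_j).
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / denom
        p /= p.sum()  # strengths are only defined up to scale
    return p

# Hypothetical preference counts for three imaginary algorithms A, B, C
wins = np.array([[0., 8., 9.],
                 [2., 0., 6.],
                 [1., 4., 0.]])
strengths = bradley_terry(wins)
```

The ordinal ratings ("slightly more", "much more") can be folded in by weighting the counts, which is where the two analyses in the abstract diverge.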
|
|
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2009). Colour categories boundaries are better defined in contextual conditions. PER - Perception, 38, 36.
Abstract: In a previous experiment [Parraga et al, 2009 Journal of Imaging Science and Technology 53(3)] the boundaries between basic colour categories were measured by asking subjects to categorize colour samples presented in isolation (ie on a dark background) using a YES/NO paradigm. Results showed that some boundaries (eg green – blue) were very diffuse and the subjects' answers presented bimodal distributions, which were attributed to the emergence of non-basic categories in those regions (eg turquoise). To confirm these results we performed a new experiment focussed on the boundaries where bimodal distributions were more evident. In this new experiment rectangular colour samples were presented surrounded by random colour patches to simulate contextual conditions on a calibrated CRT monitor. The names of two neighbouring colours were shown at the bottom of the screen and subjects selected the boundary between these colours by controlling the chromaticity of the central patch, sliding it across these categories' frontier. Results show that in this new experimental paradigm, the formerly uncertain inter-colour category boundaries are better defined and the dispersions (ie the bimodal distributions) that occurred in the previous experiment disappear. These results may provide further support to Berlin and Kay's basic colour terms theory.
|
|
|
C. Alejandro Parraga, Javier Vazquez, & Maria Vanrell. (2009). A new cone activation-based natural images dataset. PER - Perception, 38, 180.
Abstract: We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which allows the illuminant's colour and intensity to be computed (and removed, if necessary). The camera (Sigma Foveon SD10) was calibrated by measuring its sensors' spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to suit other, non-human visual systems. The dataset is available to download from our website.
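A polynomial camera-to-LMS mapping of the kind described can be sketched as a least-squares fit over second-order RGB terms, with the plain 3 × 3 matrix as the special case that keeps only the linear terms. The basis, the toy training data and all names below are assumptions for illustration, not the calibration actually used.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of camera RGB triplets."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r*r, g*g, b*b, r*g, r*b, g*b,
                     np.ones_like(r)], axis=1)

def fit_rgb_to_lms(rgb_train, lms_train):
    """Least-squares polynomial mapping from camera RGB to LMS.

    The extra square and cross terms let the fit follow broadband
    (non-narrowband) spectra better than a linear 3x3 transform.
    """
    M, *_ = np.linalg.lstsq(poly_features(rgb_train), lms_train, rcond=None)
    return M

def rgb_to_lms(rgb, M):
    return poly_features(rgb) @ M

# Toy training pairs (stand-ins for the Munsell-chip calibration set)
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 1, (200, 3))
true_map = rng.uniform(0, 1, (10, 3))
lms = poly_features(rgb) @ true_map
M = fit_rgb_to_lms(rgb, lms)
err = np.abs(rgb_to_lms(rgb, M) - lms).max()
```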
|
|
|
Joost Van de Weijer, Cordelia Schmid, Jakob Verbeek, & Diane Larlus. (2009). Learning Color Names for Real-World Applications. TIP - IEEE Transactions on Image Processing, 18(7), 1512–1524.
Abstract: Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand-labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation.
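For readers unfamiliar with PLSA, a vanilla EM implementation on a document-word count matrix is sketched below; the paper's variants for noisy Google-labelled data add constraints beyond this plain model, so treat this only as background for the base technique.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Vanilla PLSA fitted with EM on a document-word count matrix.

    counts[d, w] = occurrences of word w in document d.
    Returns p(w|z), shape (n_topics, n_words), and p(z|d),
    shape (n_docs, n_topics).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.dirichlet(np.ones(n_words), n_topics)   # p(w|z)
    p_z_d = rng.dirichlet(np.ones(n_topics), n_docs)    # p(z|d)
    for _ in range(n_iter):
        # E-step: responsibilities p(z|d,w), shape (docs, topics, words)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        expected = counts[:, None, :] * joint           # expected counts
        # M-step: re-estimate the two conditional distributions
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d

# Toy corpus: two "documents" with mostly disjoint word usage
counts = np.array([[5., 0., 1.],
                   [0., 6., 1.]])
p_w_z, p_z_d = plsa(counts, n_topics=2)
```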
|
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2009). Top-Down Color Attention for Object Recognition. In 12th International Conference on Computer Vision (pp. 979–986).
Abstract: Generally, the bag-of-words image representation follows a bottom-up paradigm: the successive stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the object classes to be detected. In such a framework, combining multiple cues such as shape and color often provides below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then deployed to modulate the shape features, taking more features from image regions that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets in which the two cues have varied importance: Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
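The modulation step described can be sketched as weighting each local shape word by a category-specific attention value attached to the colour word observed at the same location, yielding one histogram per category. The exact form used in the paper may differ; all names and the toy data below are illustrative.

```python
import numpy as np

def attention_histograms(shape_words, color_words, attention, n_shape_words):
    """Per-class shape histograms modulated by top-down colour attention.

    shape_words, color_words : per-feature visual-word indices (same length)
    attention[c, k]          : weight of colour word k for category c
    Features whose local colour is likely for a category contribute
    more to that category's shape histogram.
    """
    n_classes = attention.shape[0]
    hists = np.zeros((n_classes, n_shape_words))
    for s, k in zip(shape_words, color_words):
        hists[:, s] += attention[:, k]
    sums = hists.sum(axis=1, keepdims=True)
    return hists / np.where(sums == 0, 1, sums)  # normalise each histogram

# Toy example: 2 categories, 3 colour words, 4 shape words
attention = np.array([[0.9, 0.1, 0.0],    # category 0 attends to colour 0
                      [0.1, 0.8, 0.1]])   # category 1 attends to colour 1
h = attention_histograms([0, 1, 2, 0], [0, 1, 0, 2], attention, 4)
```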
|
|
|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2009). Physics-based Edge Evaluation for Improved Color Constancy. In 22nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 581–588).
Abstract: Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation.
|
|
|
Jose Manuel Alvarez, Ferran Diego, Joan Serrat, & Antonio Lopez. (2009). Automatic Ground-truthing using video registration for on-board detection algorithms. In 16th IEEE International Conference on Image Processing (pp. 4389–4392).
Abstract: Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but support the claim with experiments that are not quantitatively assessed against any ground-truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for video sequences recorded from a moving camera. The key idea is to transfer existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transferred ground-truth, which proves that our approach is not only feasible but also quite accurate.
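Once the registration step has aligned a target frame with its reference frame, transferring point-based ground-truth amounts to mapping label coordinates through the estimated transform. A minimal sketch under the assumption of a planar homography (the paper's actual motion model may differ, and here H is an assumed input rather than the output of the registration):

```python
import numpy as np

def transfer_labels(points, H):
    """Map ground-truth point labels from a reference frame into the
    corresponding frame of another sequence via a 3x3 homography H
    (assumed here to come from the video registration step)."""
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]                 # dehomogenise

# Example with a pure-translation homography (illustrative only)
H = np.array([[1., 0., 5.],
              [0., 1., -2.],
              [0., 0., 1.]])
warped = transfer_labels(np.array([[0., 0.], [1., 1.]]), H)
```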
|
|
|
Enric Marti, Jaume Rocarias, Ricardo Toledo, & Aura Hernandez-Sabate. (2009). Caronte: plataforma Moodle con gestión flexible de grupos. Primeras experiencias en asignaturas de Ingeniería Informática.
Abstract: This article presents Caronte, an LMS (Learning Management System) environment based on Moodle. An important feature of the environment is its flexible management of groups within a course. By group we mean a set of students who carry out a piece of work, one of whom submits the proposed activity (practical assignment, survey, etc.) on behalf of the group. We have worked on the formation of these groups, implementing a password-based enrolment system.
Caronte offers a set of activities built on this notion of group: surveys, tasks (submission of assignments or practical work), self-assessment surveys and questionnaires, among others.
Building on our survey activity, we have defined a Control activity, which gives the lecturer a degree of electronic feedback on the students' work.
Finally, we summarise our experience of using Caronte in Computer Engineering courses during the 2007-08 academic year.
|
|
|
Francesco Ciompi, Oriol Pujol, Oriol Rodriguez-Leor, Angel Serrano, J. Mauri, & Petia Radeva. (2009). On in-vitro and in-vivo IVUS data fusion. In 12th International Conference of the Catalan Association for Artificial Intelligence (Vol. 202, pp. 147–156).
Abstract: The design and validation of an automatic plaque characterization technique based on Intravascular Ultrasound (IVUS) usually requires a data ground-truth. The histological analysis of post-mortem coronary arteries is commonly assumed to be the state-of-the-art process for extracting a reliable data-set of atherosclerotic plaques. Unfortunately, the amount of data provided by this technique is usually small, due to the difficulty of collecting post-mortem cases and to tissue spoiling during histological analysis. In this paper we tackle the process of fusing in-vivo and in-vitro IVUS data, starting with an analysis of recently proposed approaches for the creation of an enhanced IVUS data-set; furthermore, we propose a new approach, named pLDS, based on semi-supervised learning with a data selection criterion. The enhanced data-set obtained by each of the analyzed approaches is used to train a classifier for tissue characterization purposes. Finally, the discriminative power of each classifier is quantitatively assessed and compared by classifying a data-set of validated in-vitro IVUS data.
|
|
|
Nicola Bellotto, Eric Sommerlade, Ben Benfold, Charles Bibby, I. Reid, Daniel Roth, et al. (2009). A Distributed Camera System for Multi-Resolution Surveillance. In 3rd ACM/IEEE International Conference on Distributed Smart Cameras.
Abstract: We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.
doi: 10.1109/ICDSC.2009.5289413
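The repository-mediated communication described above can be sketched with an in-memory SQLite database: trackers insert observation rows, and the supervisor dispatches a PTZ camera by writing a demand row for the PTZ process to poll. All table and column names here are invented for illustration, not those of the actual system.

```python
import sqlite3

# In-memory stand-in for the central SQL repository
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE observations (
                frame INTEGER, camera TEXT, target INTEGER,
                x REAL, y REAL)""")
db.execute("""CREATE TABLE demands (
                camera TEXT, target INTEGER, issued INTEGER)""")

# A static-camera tracker process posts its detections.
db.execute("INSERT INTO observations VALUES (1, 'static0', 7, 120.0, 80.0)")
db.commit()

# The supervisor polls the repository and dispatches a PTZ camera by
# writing a demand row, which the PTZ process in turn polls for.
row = db.execute("SELECT frame, target, x, y FROM observations "
                 "ORDER BY frame DESC LIMIT 1").fetchone()
if row is not None:
    frame, target, x, y = row
    db.execute("INSERT INTO demands VALUES ('ptz0', ?, ?)", (target, frame))
    db.commit()

demand = db.execute("SELECT camera, target FROM demands").fetchone()
```

Because every exchange is a table write followed by a poll, the processes stay fully asynchronous, which is the design point the abstract emphasises.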
|
|
|
Mikhail Mozerov, Ariel Amato, & Xavier Roca. (2009). Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image. In 19th International Conference on Computer Graphics and Vision (pp. 69–73).
Abstract: In this paper we propose a method that improves occlusion handling in stereo matching by using trinocular stereo. The main idea is based on the assumption that a region occluded in one matched stereo pair (middle-left images) is in general not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can then be merged into one composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing high performance of the proposed algorithm, especially in the occluded regions. One of the top positions in the Middlebury website ranking confirms that our method is competitive with the best stereo matching algorithms.
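One plausible reading of the merging step — not necessarily the exact rule used in the paper — is to fill each occluded cell of one DSI from the opposite pair, keeping the smaller matching cost where both pairs are visible:

```python
import numpy as np

def composite_dsi(dsi_left, dsi_right, occ_left, occ_right):
    """Merge middle-left and middle-right disparity space images.

    Rather than summing costs cumulatively, take each cost from
    whichever pair is not occluded at that cell; where both are
    visible, keep the smaller cost. Masks are boolean, True = occluded.
    """
    out = np.minimum(dsi_left, dsi_right)
    out = np.where(occ_left & ~occ_right, dsi_right, out)
    out = np.where(occ_right & ~occ_left, dsi_left, out)
    return out

# Tiny example: the first cell is occluded in the left pair only
left = np.array([[1., 5.]])
right = np.array([[2., 3.]])
merged = composite_dsi(left, right,
                       occ_left=np.array([[True, False]]),
                       occ_right=np.array([[False, False]]))
```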
|
|
|
Mikhail Mozerov, Ignasi Rius, Xavier Roca, & Jordi Gonzalez. (2010). Nonlinear synchronization for automatic learning of 3D pose variability in human motion sequences. EURASIPJ - EURASIP Journal on Advances in Signal Processing, Article ID 507247.
Abstract: A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on the minimization of an MRF energy and solves the problem using dynamic programming. Additionally, an optimal sequence is automatically selected from the input dataset to serve as a time-scale pattern for all other sequences. The paper utilizes an action-specific model which automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained using the public CMU motion capture dataset for the walking action, and a mean walking performance is automatically learnt. Additionally, statistics about the observed variability of the postures and motion direction are computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
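The dynamic-programming synchronization can be illustrated with a standard DTW-style alignment of two pose sequences; the paper's MRF energy is richer than the plain frame-distance cost used in this sketch, so treat it only as the skeleton of the idea.

```python
import numpy as np

def synchronize(seq_a, seq_b):
    """DTW-style dynamic-programming alignment of two pose sequences.

    seq_a, seq_b are arrays of per-frame pose vectors; returns the
    matched frame pairs minimising the summed frame-to-frame distance,
    absorbing differences in speed and acceleration.
    """
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i-1, j], cost[i, j-1], cost[i-1, j-1])
    # Backtrack from the end to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i-1, j-1], cost[i-1, j], cost[i, j-1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# A short motion and a slowed-down copy align along a stretched diagonal
a = np.array([[0.], [1.], [2.], [3.]])
b = np.array([[0.], [0.], [1.], [2.], [3.], [3.]])
path = synchronize(a, b)
```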
|
|
|
Jordi Gonzalez, Dani Rowe, Javier Varona, & Xavier Roca. (2009). Understanding Dynamic Scenes based on Human Sequence Evaluation. IMAVIS - Image and Vision Computing, 27(10), 1433–1444.
Abstract: In this paper, a Cognitive Vision System (CVS) is presented, which explains the human behaviour of monitored scenes using natural-language texts. This cognitive analysis of human movements recorded in image sequences is here referred to as Human Sequence Evaluation (HSE) which defines a set of transformation modules involved in the automatic generation of semantic descriptions from pixel values. In essence, the trajectories of human agents are obtained to generate textual interpretations of their motion, and also to infer the conceptual relationships of each agent w.r.t. its environment. For this purpose, a human behaviour model based on Situation Graph Trees (SGTs) is considered, which permits both bottom-up (hypothesis generation) and top-down (hypothesis refinement) analysis of dynamic scenes. The resulting system prototype interprets different kinds of behaviour and reports textual descriptions in multiple languages.
Keywords: Image Sequence Evaluation; High-level processing of monitored scenes; Segmentation and tracking in complex scenes; Event recognition in dynamic scenes; Human motion understanding; Human behaviour interpretation; Natural-language text generation; Realistic demonstrators
|
|
|
Ivan Huerta, Michael Holte, Thomas B. Moeslund, & Jordi Gonzalez. (2009). Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios. In 12th International Conference on Computer Vision (pp. 1499–1506).
Abstract: Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows. Consequently, umbra shadows are usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a-priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach in different shadowed materials and illumination conditions.
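The colour cone model mentioned above belongs to the family of brightness/chromaticity-distortion background models; a minimal sketch of those two distortions for a single pixel is given below as background for the base technique, not as the paper's full invariant model.

```python
import numpy as np

def distortions(pixel, bg_mean):
    """Brightness distortion alpha and chromaticity distortion cd of an
    RGB pixel against the mean background colour (the cone axis).

    alpha < 1 with small cd is the classic signature of a cast shadow:
    same chromaticity as the background, reduced brightness.
    """
    alpha = pixel @ bg_mean / (bg_mean @ bg_mean)   # best scale along axis
    cd = np.linalg.norm(pixel - alpha * bg_mean)    # distance from axis
    return alpha, cd

# A pixel at half the background brightness, identical chromaticity
alpha, cd = distortions(np.array([50., 50., 50.]),
                        np.array([100., 100., 100.]))
```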
|
|
|
D. Jayagopi, Bogdan Raducanu, & D. Gatica-Perez. (2009). Characterizing conversational group dynamics using nonverbal behaviour. In 10th IEEE International Conference on Multimedia and Expo (pp. 370–373).
Abstract: This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that group dynamics differ depending on the objectives of the group: a competitive meeting, for example, has a different objective from a collaborative one. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative and the other competitive). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
|
|