Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2009). Top-Down Color Attention for Object Recognition. In 12th International Conference on Computer Vision (pp. 979–986).
Abstract: Generally the bag-of-words image representation follows a bottom-up paradigm: the successive stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the object classes to be detected. In such a framework, combining multiple cues such as shape and color often yields below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then deployed to modulate the shape features, taking more features from image regions that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets in which the two cues have varied importance: Soccer (color predominance), Flower (color and shape parity), and PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information.
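The modulation step described in the abstract can be sketched as follows. This is a hypothetical simplification in pure Python, not the authors' implementation: all names (`attention_histograms`, `color_probs`) and the toy probabilities are illustrative, and the attention weight here is just a per-color-word probability rather than a full spatial attention map.

```python
# Hypothetical sketch of top-down color attention: shape visual words
# are accumulated into one histogram per category, with each feature
# weighted by how likely its color is under that category's color model.

def attention_histograms(features, color_probs, n_shape_words, categories):
    """features: list of (shape_word, color_word) pairs for one image.
    color_probs[c][color_word]: attention weight of that color for category c.
    Returns one category-specific shape-word histogram per category."""
    hists = {c: [0.0] * n_shape_words for c in categories}
    for shape_word, color_word in features:
        for c in categories:
            # Modulate the shape cue by the top-down color attention.
            hists[c][shape_word] += color_probs[c].get(color_word, 0.0)
    return hists

# Toy usage: two categories, three shape words, two color words.
feats = [(0, "red"), (2, "red"), (1, "green")]
probs = {"flower": {"red": 0.9, "green": 0.1},
         "car":    {"red": 0.2, "green": 0.8}}
h = attention_histograms(feats, probs, 3, ["flower", "car"])
print(h["flower"])  # red-dominated shape words weighted strongly: [0.9, 0.1, 0.9]
```

Because the weighting differs per category, each category gets its own image histogram, which is the category-specific representation the abstract refers to.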
|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2009). Physics-based Edge Evaluation for Improved Color Constancy. In 22nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 581–588).
Abstract: Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images, such as shadow, geometry, material and highlight edges, and these different edge types may have a distinct influence on the performance of illuminant estimation.
|
Jose Manuel Alvarez, Ferran Diego, Joan Serrat, & Antonio Lopez. (2009). Automatic Ground-truthing using video registration for on-board detection algorithms. In 16th IEEE International Conference on Image Processing (pp. 4389–4392).
Abstract: Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust, but support this with experiments that are not quantitatively assessed against any ground-truth. This is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for video sequences recorded from a moving camera. The key idea is transferring existing frame segmentations from a reference sequence into another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transferred ground-truth, which proves that our approach is not only feasible but also quite accurate.
|
Enric Marti, Jaume Rocarias, Ricardo Toledo, & Aura Hernandez-Sabate. (2009). Caronte: plataforma Moodle con gestión flexible de grupos. Primeras experiencias en asignaturas de Ingeniería Informática.
Abstract: This article presents Caronte, an LMS (Learning Management System) environment based on Moodle. An important feature of the environment is the flexible management of groups within a course. By group we mean a set of students who carry out a piece of work, with one of them submitting the proposed activity (assignment, survey, etc.) on behalf of the group. We have worked on the formation of these groups, implementing a password-based enrolment system.
Caronte offers a set of activities built on this notion of group: surveys, tasks (submission of work or assignments), self-assessment surveys and questionnaires, among others.
Building on our survey activity, we have defined a Control activity, which provides a degree of electronic feedback from the lecturer on the students' work.
Finally, we summarize our experience using Caronte in Computer Engineering courses during the 2007-08 academic year.
|
Francesco Ciompi, Oriol Pujol, Oriol Rodriguez-Leor, Angel Serrano, J. Mauri, & Petia Radeva. (2009). On in-vitro and in-vivo IVUS data fusion. In 12th International Conference of the Catalan Association for Artificial Intelligence (Vol. 202, pp. 147–156).
Abstract: The design and validation of an automatic plaque characterization technique based on Intravascular Ultrasound (IVUS) usually requires a data ground-truth. The histological analysis of post-mortem coronary arteries is commonly assumed to be the state-of-the-art process for extracting a reliable data-set of atherosclerotic plaques. Unfortunately, the amount of data provided by this technique is usually small, due to the difficulty of collecting post-mortem cases and to tissue spoiling during histological analysis. In this paper we tackle the process of fusing in-vivo and in-vitro IVUS data, starting with the analysis of recently proposed approaches for the creation of an enhanced IVUS data-set; furthermore, we propose a new approach, named pLDS, based on semi-supervised learning with a data selection criterion. The enhanced data-set obtained by each of the analyzed approaches is used to train a classifier for tissue characterization purposes. Finally, the discriminative power of each classifier is quantitatively assessed and compared by classifying a data-set of validated in-vitro IVUS data.
|
Nicola Bellotto, Eric Sommerlade, Ben Benfold, Charles Bibby, I. Reid, Daniel Roth, et al. (2009). A Distributed Camera System for Multi-Resolution Surveillance. In 3rd ACM/IEEE International Conference on Distributed Smart Cameras.
Abstract: We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.
DOI: 10.1109/ICDSC.2009.5289413
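The central-repository pattern this abstract describes can be sketched minimally. This is an illustrative sketch only, using SQLite in place of the paper's SQL server; the table and column names are hypothetical, not taken from the system.

```python
# Minimal sketch (hypothetical schema) of the central-repository pattern:
# trackers write observations into one table, and the supervisor
# dispatches PTZ cameras by writing demands into another table,
# which PTZ client processes poll asynchronously.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE tracks (
    camera_id INTEGER, frame INTEGER, target_id INTEGER,
    x REAL, y REAL)""")
db.execute("""CREATE TABLE demands (
    ptz_id INTEGER, target_id INTEGER, status TEXT)""")

# A static-camera tracker stores visual tracking data dynamically...
db.execute("INSERT INTO tracks VALUES (1, 100, 7, 320.0, 240.0)")

# ...and the supervisor dispatches PTZ camera 2 to observe target 7.
db.execute("INSERT INTO demands VALUES (2, 7, 'pending')")

# The PTZ client polls its own demand queue.
row = db.execute(
    "SELECT target_id, status FROM demands WHERE ptz_id = 2").fetchone()
print(row)  # (7, 'pending')
```

Using the database as the sole communication channel is what makes the interprocess communication asynchronous and the data archiving free, as the abstract notes.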
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2009). Face-to-face social activity detection using data collected with a wearable device. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 56–63). LNCS. Springer Berlin Heidelberg.
Abstract: In this work the feasibility of building a socially aware badge that learns from user activities is explored. A wearable multisensor device has been prototyped for collecting data about user movements and photos of the environment where the user acts. Using motion data, speaking and other activities have been classified. Images have been analysed to complement the motion data and to help detect social behaviours. A face detector and an activity classifier are both used to detect whether users engage in social activity while wearing the device. Good results encourage the improvement of the system at both the hardware and software levels.
|
Mikhail Mozerov, Ariel Amato, & Xavier Roca. (2009). Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image. In 19th International Conference on Computer Graphics and Vision (pp. 69–73).
Abstract: In this paper we propose a method that effectively improves occlusion handling in stereo matching using trinocular stereo. The main idea is based on the assumption that a region occluded in one matched stereo pair (middle-left images) is in general not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can then be merged into one composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing high performance of the proposed algorithm, especially in occluded regions. A top position in the Middlebury website ranking confirms that our method is competitive with the best stereo matching algorithms.
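The occlusion-handling intuition above can be illustrated with a toy merge rule. This is a hypothetical sketch, not the paper's exact composite-DSI construction (which, as the abstract notes, differs from a plain cumulative cost): keeping the lower matching cost per (pixel, disparity) simply lets the pair in which the pixel is visible dominate.

```python
# Illustrative (not the authors') merge of two disparity space images:
# a pixel occluded in the middle-left pair is usually visible in the
# middle-right pair, so per-entry minimum keeps the reliable costs.

def composite_dsi(dsi_left, dsi_right):
    """Each DSI is a list over pixels of per-disparity matching costs."""
    return [[min(a, b) for a, b in zip(pl, pr)]
            for pl, pr in zip(dsi_left, dsi_right)]

# Toy example: pixel 0 is occluded in the left pair (uniformly high
# costs there), so the composite keeps the right pair's costs for it.
left  = [[9.0, 9.0, 9.0], [1.0, 5.0, 7.0]]
right = [[2.0, 1.0, 6.0], [9.0, 9.0, 9.0]]
print(composite_dsi(left, right))  # [[2.0, 1.0, 6.0], [1.0, 5.0, 7.0]]
```

A global optimizer run on the merged volume then sees meaningful costs even in regions occluded in one of the pairs.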
|
Pau Baiget. (2009). Modeling Human Behavior for Image Sequence Understanding and Generation (Jordi Gonzalez, & Xavier Roca, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The comprehension of animal behavior, especially human behavior, is one of the most ancient and studied problems since the beginning of civilization. The large number of factors that interact to determine a person's actions requires the collaboration of different disciplines, such as psychology, biology, or sociology. In recent years the analysis of human behavior has also received great attention from the computer vision community, given the latest advances in the acquisition of human motion data from image sequences.
Despite the increasing availability of such data, there still exists a gap between the obtained observations and a conceptual representation of them. Human behavior analysis is based on a qualitative interpretation of results, and therefore the assignment of concepts to quantitative data carries a certain ambiguity.
This Thesis tackles the problem of obtaining a proper representation of human behavior in the contexts of computer vision and animation. On the one hand, a good behavior model should permit the recognition and explanation of the observed activity in image sequences. On the other hand, such a model must allow the generation of new synthetic instances, which model the behavior of virtual agents.
First, we propose methods to automatically learn the models from observations. Given a set of quantitative results output by a vision system, a normal behavior model is learnt. These results provide a tool to determine the normality or abnormality of future observations. However, machine learning methods are unable to provide a richer description of the observations. We confront this problem by means of a new method that incorporates prior knowledge about the environment and about the expected behaviors. This framework, formed by the reasoning engine FMTL and the modeling tool SGT, allows the generation of conceptual descriptions of activity in new image sequences. Finally, we demonstrate the suitability of the proposed framework to simulate the behavior of virtual agents, which are introduced into real image sequences and interact with observed real agents, thereby easing the generation of augmented reality sequences.
The set of approaches presented in this Thesis has a growing set of potential applications. The analysis and description of behavior in image sequences has its principal application in the domain of smart video surveillance, in order to detect suspicious or dangerous behaviors. Other applications include automatic sport commentaries, elderly monitoring, road traffic analysis, and the development of semantic video search engines. Alternatively, behavioral virtual agents allow the accurate simulation of real situations, such as fires or crowds. Moreover, the inclusion of virtual agents into real image sequences has been widely deployed in the games and cinema industries.
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2009). Exploiting Natural Language Generation in Scene Interpretation. In Human–Centric Interfaces for Ambient Intelligence (Vol. 4, pp. 71–93). Elsevier Science and Tech.
|
Ivan Huerta, Michael Holte, Thomas B. Moeslund, & Jordi Gonzalez. (2009). Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios. In 12th International Conference on Computer Vision (pp. 1499–1506).
Abstract: Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows, which are consequently usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering "a bluish effect" and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach on different shadowed materials and under varying illumination conditions.
|
Marco Pedersoli, Jordi Gonzalez, & Juan J. Villanueva. (2009). High-Speed Human Detection Using a Multiresolution Cascade of Histograms of Oriented Gradients. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents a new method for human detection based on a multiresolution cascade of Histograms of Oriented Gradients (HOG) that greatly reduces the computational cost of the detection search without affecting accuracy. The method consists of a cascade of sliding window detectors. Each detector is a Support Vector Machine (SVM) composed of features at a different resolution, from coarse at the first level to fine at the last.
Since the spatial stride of the sliding window search is tied to the HOG feature size, and unlike previous methods based on AdaBoost cascades, we can adopt a spatial stride inversely proportional to the feature resolution. As a result, the speed-up of the cascade comes not only from the small number of features that need to be computed at the first levels, but also from the lower number of detection windows that need to be evaluated.
Experimental results show that our method achieves a detection rate comparable with the state of the art, while speeding up the detection search by 10-20 times, depending on the cascade configuration.
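The stride argument above can be made concrete with a small sketch. The numbers and function name below are illustrative assumptions, not taken from the paper: the point is only that coupling the stride to the HOG cell size makes the coarse first level enumerate far fewer windows than the fine last level.

```python
# Illustrative sketch (hypothetical sizes): the sliding-window stride
# equals the HOG cell size, so coarse levels take larger spatial steps
# and therefore evaluate far fewer candidate windows.

def window_positions(image_w, image_h, win_w, win_h, cell_size):
    """Top-left corners of all detection windows at one cascade level.
    Stride grows with cell size, i.e. it is inversely proportional
    to the feature resolution of the level."""
    stride = cell_size
    return [(x, y)
            for y in range(0, image_h - win_h + 1, stride)
            for x in range(0, image_w - win_w + 1, stride)]

coarse = window_positions(640, 480, 64, 128, 32)  # first, coarse level
fine   = window_positions(640, 480, 64, 128, 8)   # last, fine level
print(len(coarse), len(fine))  # 228 3285
```

With these toy numbers the coarse level scans roughly 14 times fewer windows than the fine one, in line with the 10-20x speed-up range the abstract reports for the full cascade.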
|
Bhaskar Chakraborty, Andrew Bagdanov, & Jordi Gonzalez. (2009). Towards Real-Time Human Action Recognition. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This work presents a novel approach to real-time action recognition based on human detection. To realize this goal, our method first detects humans in different poses using a correlation-based approach. Recognition of actions is then performed based on the change of the angular values subtended by various body parts. Real-time human detection and action recognition are very challenging, and most state-of-the-art approaches employ complex feature extraction and classification techniques, which ultimately become a handicap for real-time recognition. Our correlation-based method, on the other hand, is computationally efficient and uses very simple gradient-based features. For action recognition, angular features of body parts are extracted using a skeleton technique. Results for action recognition are comparable with the present state of the art.
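The angular body-part features mentioned above can be illustrated with a minimal computation. This is only a hypothetical sketch (the joint coordinates and function name are invented): given two skeleton joint positions, the angle the connecting limb segment subtends follows from `atan2`.

```python
# Hypothetical illustration of an angular body-part feature:
# the angle a limb segment subtends, from two skeleton joints.
import math

def limb_angle(joint_a, joint_b):
    """Angle, in degrees, of the segment from joint_a to joint_b
    (image coordinates: x grows right, y grows down)."""
    dx = joint_b[0] - joint_a[0]
    dy = joint_b[1] - joint_a[1]
    return math.degrees(math.atan2(dy, dx))

# e.g. a forearm pointing straight up in image coordinates:
print(limb_angle((100, 200), (100, 150)))  # about -90 degrees
```

Tracking how such angles change over time is the kind of lightweight signal the abstract relies on for recognizing actions at frame rate.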
|
Murad Al Haj, Andrew Bagdanov, Jordi Gonzalez, & Xavier Roca. (2009). Robust and Efficient Multipose Face Detection Using Skin Color Segmentation. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we describe an efficient technique for detecting faces in arbitrary images and video sequences. The approach is based on segmentation of images or video frames into skin-colored blobs using a pixel-based heuristic. Scale and translation invariant features are then computed from these segmented blobs which are used to perform statistical discrimination between face and non-face classes. We train and evaluate our method on a standard, publicly available database of face images and analyze its performance over a range of statistical pattern classifiers. The generalization of our approach is illustrated by testing on an independent sequence of frames containing many faces and non-faces. These experiments indicate that our proposed approach obtains false positive rates comparable to more complex, state-of-the-art techniques, and that it generalizes better to new data. Furthermore, the use of skin blobs and invariant features requires fewer training samples since significantly fewer non-face candidate regions must be considered when compared to AdaBoost-based approaches.
|
D. Jayagopi, Bogdan Raducanu, & D. Gatica-Perez. (2009). Characterizing conversational group dynamics using nonverbal behaviour. In 10th IEEE International Conference on Multimedia and Expo (pp. 370–373).
Abstract: This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that the dynamics of a group differ depending on its objectives. For example, a competitive meeting has a different objective from a collaborative one. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative, the other competitive). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
|