|
Nataliya Shapovalova, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2011). Semantics of Human Behavior in Image Sequences. In Albert Ali Salah, & Theo Gevers (Eds.), Computer Analysis of Human Behavior (pp. 151–182). Springer London.
Abstract: Human behavior is contextualized, and understanding the scene in which an action takes place is crucial for assigning proper semantics to behavior. In this chapter we present a novel approach for scene understanding. The emphasis of this work is on the particular case of Human Event Understanding. We introduce a new taxonomy to organize the different semantic levels of the proposed Human Event Understanding framework. Such a framework particularly contributes to the scene understanding domain by (i) extracting behavioral patterns from the integrative analysis of spatial, temporal, and contextual evidence and (ii) integrating bottom-up and top-down approaches in Human Event Understanding. We explore how information about interactions between humans and their environment influences the performance of activity recognition, and how this can be extrapolated to the temporal domain in order to draw higher-level inferences from human events observed in sequences of images.
|
|
|
Murad Al Haj, Carles Fernandez, Zhanwu Xiong, Ivan Huerta, Jordi Gonzalez, & Xavier Roca. (2011). Beyond the Static Camera: Issues and Trends in Active Vision. In Th.B. Moeslund, A. Hilton, V. Krüger, & L. Sigal (Eds.), Visual Analysis of Humans: Looking at People (pp. 11–30). Springer London.
Abstract: Maximizing both the area coverage and the resolution per target is highly desirable in many applications of computer vision. However, with a limited number of cameras viewing a scene, the two objectives are contradictory. This chapter is dedicated to active vision systems that try to achieve a trade-off between these two aims, and it examines the use of high-level reasoning in such scenarios. The chapter starts by introducing different approaches to active camera configurations. Later, a single active camera system to track a moving object is developed, offering the reader first-hand understanding of the issues involved. Another section discusses practical considerations in building an active vision platform, taking as an example a multi-camera system developed for a European project. The last section of the chapter reflects upon the future trends of using semantic factors to drive smartly coordinated active systems.
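Illustration (not part of the original abstract): a minimal sketch of the kind of control loop a single active camera needs to keep a tracked target centred. The image size, field of view, gain, and the detector output are all assumed for illustration; this is not the system described in the chapter.

    # Minimal sketch: proportional pan/tilt correction that re-centres a tracked target.
    IMG_W, IMG_H = 640, 480      # assumed image resolution (pixels)
    FOV_X, FOV_Y = 60.0, 45.0    # assumed horizontal/vertical field of view (degrees)
    KP = 0.5                     # proportional gain (tuning assumption)

    def pan_tilt_correction(target_x, target_y):
        """Map the target's pixel offset from the image centre to pan/tilt
        corrections in degrees (positive pan = right, positive tilt = up)."""
        dx = target_x - IMG_W / 2.0
        dy = target_y - IMG_H / 2.0
        pan = KP * dx * FOV_X / IMG_W
        tilt = -KP * dy * FOV_Y / IMG_H   # image y grows downwards
        return pan, tilt

    # Example: a detector reports the target at pixel (500, 150)
    print(pan_tilt_correction(500, 150))  # the camera should pan right and tilt up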
|
|
|
Juan J. Villanueva. (2008). Visualization, Imaging, and Image Processing.
|
|
|
Angel Sappa, & George A. Triantafyllidis. (2012). Computer Graphics and Imaging.
|
|
|
Antonio Lopez, Atsushi Imiya, Tomas Pajdla, & Jose Manuel Alvarez. (2017). Computer Vision in Vehicle Technology: Land, Sea & Air. John Wiley & Sons, Ltd.
Abstract: This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and for higher-level ones such as place recognition.
|
|
|
Antonio Lopez, Atsushi Imiya, Tomas Pajdla, & Jose Manuel Alvarez. (2017). Computer Vision in Vehicle Technology: Land, Sea & Air. John Wiley & Sons, Ltd.
Abstract: A unified view of the use of computer vision technology for different types of vehicles.
Computer Vision in Vehicle Technology focuses on computer vision as on-board technology, bringing together the fields of research that computer vision is progressively penetrating: the automotive sector, unmanned aerial vehicles, and underwater vehicles. It also serves as a reference on current developments and challenges in vehicle-related applications of computer vision, such as advanced driver assistance (pedestrian detection, lane departure warning, traffic sign recognition), autonomous driving and robot navigation (with visual simultaneous localization and mapping), and unmanned aerial vehicles (obstacle avoidance, landscape classification and mapping, fire risk assessment).
The overall role of computer vision for the navigation of different vehicles, as well as technology to address on-board applications, is analysed.
|
|
|
Bogdan Raducanu, Jordi Vitria, & D. Gatica-Perez. (2009). You are Fired! Nonverbal Role Analysis in Competitive Meetings. In IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 1949–1952).
Abstract: This paper addresses the problem of social interaction analysis in competitive meetings, using nonverbal cues. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status and predicting the fired candidates. The current study was carried out using nonverbal audio cues. Results obtained from the analysis of a full season of the show, representing around 90 minutes of audio data, are very promising (up to 85.7% accuracy in the first case and up to 92.8% in the second case). Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words.
|
|
|
Jose Manuel Alvarez, Theo Gevers, & Antonio Lopez. (2009). Learning Photometric Invariance from Diversified Color Model Ensembles. In 22nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 565–572).
Abstract: Color is a powerful visual cue for many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, negatively affecting the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, those reflection models might be too restricted to model real-world scenes in which different reflectance mechanisms may hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set, composed of both color variants and invariants, is taken as input. Then, the proposed method combines and weights these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, the fusion method uses a multi-view approach to minimize the estimation error. In this way, the method is robust to data uncertainty and produces properly diversified color invariant ensembles. Experiments are conducted on three different image datasets to validate the method. From the theoretical and experimental results, it is concluded that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning. Further, the method outperforms state-of-the-art detection techniques in the fields of object, skin and road recognition.
Keywords: road detection
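Illustration (not part of the original abstract): a minimal sketch of combining several standard color models into a weighted ensemble. The specific models and weights below are placeholders; in the paper the weights are learned through a multi-view fusion that minimizes the estimation error.

    # Illustrative sketch: build several color representations of an RGB image
    # and fuse them, with weights, into a single "diversified" feature stack.
    import numpy as np

    def color_models(rgb):
        """rgb: HxWx3 float array in [0, 1]. Returns a dict of color models."""
        R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        s = R + G + B + 1e-6
        return {
            "rgb": rgb,                                      # variant (no invariance)
            "normalized_rg": np.stack([R / s, G / s], -1),   # intensity-invariant chromaticity
            "opponent": np.stack([(R - G) / np.sqrt(2),
                                  (R + G - 2 * B) / np.sqrt(6)], -1),
        }

    def weighted_ensemble(models, weights):
        """Concatenate weighted color models along the channel axis."""
        return np.concatenate([weights[k] * m for k, m in models.items()], axis=-1)

    img = np.random.rand(4, 4, 3)          # toy image
    ens = weighted_ensemble(color_models(img),
                            {"rgb": 0.2, "normalized_rg": 0.5, "opponent": 0.3})
    print(ens.shape)                       # (4, 4, 7)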
|
|
|
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2009). Physics-based Edge Evaluation for Improved Color Constancy. In 22nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 581–588).
Abstract: Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation.
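Illustration (not part of the original abstract): for background, a minimal sketch of a generic edge-based (gray-edge style) illuminant estimate, the kind of estimator whose sensitivity to different edge types the paper evaluates. The Minkowski norm p and the toy image are assumptions, and the paper's edge-type weighting is not reproduced here.

    # Minimal sketch: each channel of the illuminant estimate is proportional to the
    # Minkowski norm of that channel's spatial derivatives (gray-edge hypothesis).
    import numpy as np

    def gray_edge_illuminant(img, p=6):
        """img: HxWx3 float array. Returns a unit-norm illuminant estimate."""
        e = np.zeros(3)
        for c in range(3):
            gy, gx = np.gradient(img[..., c])
            mag = np.sqrt(gx ** 2 + gy ** 2)
            e[c] = (mag ** p).mean() ** (1.0 / p)
        return e / (np.linalg.norm(e) + 1e-12)

    img = np.random.rand(64, 64, 3)        # toy image
    print(gray_edge_illuminant(img))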
|
|
|
Sergio Escalera, Eloi Puertas, Petia Radeva, & Oriol Pujol. (2009). Multimodal laughter recognition in video conversations. In 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis (pp. 110–115).
Abstract: Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are combined in a sequential classifier. Results over videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology is shown to significantly outperform the results obtained by an AdaBoost classifier.
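Illustration (not part of the original abstract): a toy sketch of feature-level audio–visual fusion with an AdaBoost classifier (the baseline the paper compares against). The feature dimensions, synthetic data, and classifier choice are assumptions; the paper's sequential classifier is not reproduced here.

    # Toy sketch: concatenate audio (spectrogram-based) and visual (mouth movement,
    # smile score) features and train a frame-level laughter classifier.
    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier

    rng = np.random.default_rng(0)
    n = 200
    audio_feats = rng.normal(size=(n, 12))   # e.g., spectrogram band energies (placeholder)
    visual_feats = rng.normal(size=(n, 2))   # e.g., mouth movement degree, smile score (placeholder)
    X = np.hstack([audio_feats, visual_feats])
    y = rng.integers(0, 2, size=n)           # 1 = laughter, 0 = no laughter (synthetic labels)

    clf = AdaBoostClassifier().fit(X[:150], y[:150])
    print("toy accuracy:", clf.score(X[150:], y[150:]))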
|
|
|
Sergio Escalera, R. M. Martinez, Jordi Vitria, Petia Radeva, & Maria Teresa Anguera. (2009). Dominance Detection in Face-to-face Conversations. In 2nd IEEE Workshop on CVPR for Human Communicative Behavior Analysis (pp. 97–102).
Abstract: Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers by categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, the considered indicators are automatically extracted from video sequences and learnt by using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences.
|
|
|
Jaume Garcia, Albert Andaluz, Debora Gil, & Francesc Carreras. (2010). Decoupled External Forces in a Predictor-Corrector Segmentation Scheme for LV Contours in Tagged MR Images. In 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (pp. 4805–4808).
Abstract: Computation of functional regional scores requires proper identification of LV contours. On one hand, manual segmentation is robust, but it is time consuming and requires high expertise. On the other hand, the tag pattern in tagged MR (TMR) sequences is a problem for automatic segmentation of LV boundaries. We propose a segmentation method based on a predictor-corrector (Active Contours – Shape Models) scheme. Special stress is put on the definition of the AC external forces. First, we introduce a semantic description of the LV that discriminates myocardial tissue by using texture and motion descriptors. Second, in order to ensure convergence regardless of the initial contour, the external energy is decoupled according to the orientation of the edges in the image potential. We have validated the model in terms of error in the segmented contours and accuracy of regional clinical scores.
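Illustration (not part of the original abstract): a rough sketch of the decoupling idea only, splitting a standard edge-based external force field into two components according to the local edge orientation. This is not the authors' predictor-corrector formulation; the smoothing scale and the 45-degree orientation split are assumptions.

    # Rough sketch: edge-based external forces for an active contour, decoupled by
    # local edge orientation into two component fields.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def decoupled_external_forces(image, sigma=2.0):
        smoothed = gaussian_filter(image.astype(float), sigma)
        gy, gx = np.gradient(smoothed)
        potential = -np.sqrt(gx ** 2 + gy ** 2)        # low values at edges
        fy, fx = np.gradient(-potential)               # force pulls the contour towards edges
        theta = np.arctan2(gy, gx)                     # edge-normal (gradient) orientation
        horiz = np.abs(np.cos(theta)) >= np.abs(np.sin(theta))
        f_from_vertical_edges = (fx * horiz, fy * horiz)       # gradient mostly horizontal
        f_from_horizontal_edges = (fx * ~horiz, fy * ~horiz)   # gradient mostly vertical
        return f_from_vertical_edges, f_from_horizontal_edges

    img = np.random.rand(32, 32)              # toy image
    (fvx, fvy), (fhx, fhy) = decoupled_external_forces(img)
    print(fvx.shape, fhx.shape)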
|
|
|
Jose Seabra, F. Javier Sanchez, Francesco Ciompi, & Petia Radeva. (2010). Ultrasonographic Plaque Characterization using a Rayleigh Mixture Model. In 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (pp. 1–4).
Abstract: A correct modelling of tissue morphology is a determining factor in the identification of vulnerable plaques. This paper aims at describing the plaque composition by means of a Rayleigh Mixture Model applied to ultrasonic data. The effectiveness of using a mixture of distributions is established through synthetic and real ultrasonic data samples. Furthermore, the proposed mixture model is used in a plaque classification problem in Intravascular Ultrasound (IVUS) images of coronary plaques. A classifier tested on a set of 67 in-vitro plaques yields an overall accuracy of 86% and sensitivities of 92%, 94% and 82% for fibrotic, calcified and lipidic tissues, respectively. These results strongly suggest that different plaque types can be distinguished by means of the coefficients and Rayleigh parameters of the mixture distribution.
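Illustration (not part of the original abstract): a compact EM sketch for fitting a Rayleigh mixture to 1-D envelope data. The synthetic data, the number of components, and the initialization are assumptions, not the paper's experimental setup.

    # Illustrative EM fit of a K-component Rayleigh mixture.
    # Rayleigh pdf: f(x | s) = (x / s^2) * exp(-x^2 / (2 s^2)), x >= 0.
    import numpy as np

    def rayleigh_pdf(x, s):
        return (x / s ** 2) * np.exp(-x ** 2 / (2 * s ** 2))

    def fit_rayleigh_mixture(x, K=2, iters=100):
        rng = np.random.default_rng(0)
        pi = np.full(K, 1.0 / K)                                   # mixing weights
        s = rng.uniform(0.5 * x.mean(), 1.5 * x.mean(), size=K)    # scale parameters
        for _ in range(iters):
            # E-step: responsibilities of each component for each sample
            r = pi * rayleigh_pdf(x[:, None], s[None, :])
            r /= r.sum(axis=1, keepdims=True) + 1e-12
            # M-step: weighted maximum-likelihood updates
            nk = r.sum(axis=0)
            pi = nk / len(x)
            s = np.sqrt((r * x[:, None] ** 2).sum(axis=0) / (2 * nk))
        return pi, s

    # Toy data drawn from two Rayleigh components
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.rayleigh(1.0, 500), rng.rayleigh(3.0, 500)])
    print(fit_rayleigh_mixture(data))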
|
|
|
D. Jayagopi, Bogdan Raducanu, & D. Gatica-Perez. (2009). Characterizing conversational group dynamics using nonverbal behaviour. In 10th IEEE International Conference on Multimedia and Expo (pp. 370–373).
Abstract: This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that the dynamics differ depending on the objectives of a group. For example, a competitive meeting has a different objective from that of a collaborative meeting. We propose a method to characterize group dynamics based on the joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative-type and the other competitive-type). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%.
|
|
|
Xavier Baro, Sergio Escalera, Petia Radeva, & Jordi Vitria. (2009). Visual Content Layer for Scalable Recognition in Urban Image Databases. In Internet Multimedia Search and Mining, 10th IEEE International Conference on Multimedia and Expo (pp. 1616–1619).
Abstract: Rich online map interaction represents a useful tool to get multimedia information related to physical places. With this type of system, users can automatically compute the optimal route for a trip or look for entertainment places or hotels near their current position. Standard maps are defined as a fusion of layers, where each one contains specific data such as height, streets, or a particular business location. In this paper we propose the construction of a visual content layer which describes the visual appearance of geographic locations in a city. We captured, by means of a Mobile Mapping system, a huge set of georeferenced images (> 500K) covering the whole city of Barcelona. For each image, hundreds of region descriptions are computed off-line and encoded as hash codes. This allows an efficient and scalable way of accessing maps by visual content.
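Illustration (not part of the original abstract): a hedged sketch of turning region descriptors into compact binary hash codes for scalable lookup, using generic random-projection LSH. This is not necessarily the exact coding used by the authors; the descriptor dimension and code length are assumptions.

    # Illustrative sketch: binary hash codes for region descriptors via random-projection
    # LSH, plus a simple bucket lookup.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    DIM, BITS = 128, 32                        # descriptor dimension, code length (assumed)
    planes = rng.normal(size=(BITS, DIM))      # random projection hyperplanes

    def hash_code(desc):
        """Map a descriptor (or a stack of descriptors) to integer hash codes."""
        bits = (desc @ planes.T) > 0
        return bits.astype(np.uint64) @ (2 ** np.arange(BITS, dtype=np.uint64))

    # Index a toy database of region descriptors (stand-ins for georeferenced images)
    db_descs = rng.normal(size=(10000, DIM))
    index = defaultdict(list)
    for i, code in enumerate(hash_code(db_descs)):
        index[int(code)].append(i)             # region/image ids per bucket

    # Query: candidate matches are whatever falls in the same bucket
    query = db_descs[42] + 0.001 * rng.normal(size=DIM)
    print(index.get(int(hash_code(query)), []))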
|
|