|
German Ros, Laura Sellart, Gabriel Villalonga, Elias Maidanik, Francisco Molero, Marc Garcia, et al. (2017). Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (Vol. 12, pp. 227–241). Springer.
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning many parameters from raw images; thus, a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome human labour, which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this chapter, we propose to use a combination of a virtual world, to automatically generate realistic synthetic images with pixel-level annotations, and domain adaptation, to transfer the models learnt so that they operate correctly in real scenarios. We address the question of how useful synthetic data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations and object identifiers. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show that combining SYNTHIA with simple domain adaptation techniques in the training stage significantly improves performance on semantic segmentation.
Keywords: SYNTHIA; Virtual worlds; Autonomous Driving
|
|
|
Franck Davoine, & Fadi Dornaika. (2005). Head and facial animation tracking using appearance-adaptive models and particle filters. In V. Pavlovic, & T.S. Huang (Eds.), Real-Time Vision for Human-Computer Interaction.
|
|
|
Francisco Javier Orozco, Jordi Gonzalez, Ignasi Rius, & Xavier Roca. (2007). Hierarchical Eyelid and Face Tracking. In 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.) LNCS 4477:499–506.
|
|
|
Francesc Tous, Maria Vanrell, & Ramon Baldrich. (2005). Relaxed Grey-World: Computational Colour Constancy by Surface Matching. In Pattern Recognition and Image Analysis (IbPRIA 2005), LNCS 3522:192–199.
|
|
|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell, & Josep Llados. (2002). Textual Descriptions for Browsing People by Visual Appearance. In Lecture Notes in Artificial Intelligence (Vol. 2504, pp. 419–429). Springer Verlag.
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, which allows searching for people wearing specific clothes of a given colour name or texture. Descriptors are automatically computed by following three essential steps: a colour naming labelling from pixel properties; a region segmentation step based on colour properties of pixels combined with edge information; and a high-level step that models the region arrangements in order to build the clothes structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
|
|
|
Fernando Vilariño, & Petia Radeva. (2003). Cardiac Segmentation with Discriminant Active Contours. (pp. 211–217). IOS Press.
Abstract: Dynamic tracking of heart motion is a relevant target in medical imaging and can be helpful for analyzing heart dynamics in the study of several cardiac diseases. To this aim, a prior segmentation problem for such structures is stated, based on certain relevant features (such as edges, intensity levels, textures, etc.). Classical active models have been used, but they fail when overlapping structures or ill-defined contours are present. Automatic feature learning systems may be a powerful tool. Discriminant active contours present optimal results in this kind of problem. They are a kind of deformable model that converges to an optimal object segmentation that dynamically adapts to the object contour. The feature space is designed from a filter bank in order to guarantee the search and learning of the set of relevant features for optimal classification on each part of the object. Tracking of the target evolution is obtained through the whole set of images, using information from the current and previous stages. Feedback systems are implemented to guarantee the minimum well-separable classification set in each segmentation step. Our implementation has been tested on several Magnetic Resonance image series, with improved segmentation results in comparison to previous methods.
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, Carolina Malagelada, & Petia Radeva. (2006). Linear Radial Patterns Characterization for Automatic Detection of Tonic Intestinal Contractions. In J.F. Martínez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (LNCS Vol. 4225, pp. 178–187). Berlin-Heidelberg: Springer Verlag.
Abstract: This work tackles the categorization of general linear radial patterns by means of valley and ridge detection and the use of descriptors of directional information, provided by steerable filters in different regions of the image. We successfully apply our proposal to the specific case of automatic detection of tonic contractions in video capsule endoscopy, which represent a paradigmatic example of linear radial patterns.
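The directional descriptors mentioned above rest on oriented derivative responses. As background, a first-order steerable Gaussian derivative in the Freeman and Adelson basis can be sketched as follows (the helper name, the sigma default, and the separable convolution are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def steerable_first_derivative(img, theta, sigma=2.0):
    """Response of a first-order steerable Gaussian derivative filter:
    G_theta = cos(theta) * G_x + sin(theta) * G_y.
    Illustrative sketch only; not the paper's code."""
    # 1D Gaussian and derivative-of-Gaussian kernels, truncated at 3*sigma.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g

    def conv_sep(im, kx, ky):
        # Separable 2D convolution: kx along rows (x), then ky along columns (y).
        tmp = np.apply_along_axis(
            lambda row: np.convolve(row, kx, mode="same"), 1, im)
        return np.apply_along_axis(
            lambda col: np.convolve(col, ky, mode="same"), 0, tmp)

    gx = conv_sep(img, dg, g)   # x-derivative basis response
    gy = conv_sep(img, g, dg)   # y-derivative basis response
    # Steering: interpolate the two basis responses at angle theta.
    return np.cos(theta) * gx + np.sin(theta) * gy
```

Steering `theta` to the local normal of a detected valley or ridge yields the kind of region-wise directional response the abstract refers to; here `theta` is simply supplied by the caller.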
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, Carolina Malagelada, & Petia Radeva. (2006). A Machine Learning Framework Using SOMs: Applications in the Intestinal Motility Assessment. In J.F. Martínez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (LNCS Vol. 4225, pp. 188–197). Berlin-Heidelberg: Springer Verlag.
Abstract: Small bowel motility assessment by means of wireless capsule video endoscopy constitutes a novel clinical methodology in which a capsule with a micro-camera attached to it is swallowed by the patient, emitting an RF signal which is recorded as a video of its trip throughout the gut. In order to overcome the main drawbacks associated with this technique (mainly the large amount of visualization time required), our efforts have been focused on the development of a machine learning system, built up in sequential stages, which provides the specialists with the useful part of the video, rejecting those parts not valid for analysis. We successfully used Self-Organizing Maps in a general semi-supervised framework with the aim of tackling the different learning stages of our system. The analysis of the diverse types of images and the automatic detection of intestinal contractions are performed from the perspective of intestinal motility assessment in a clinical environment.
|
|
|
Fernando Vilariño, Dimosthenis Karatzas, Marcos Catalan, & Alberto Valcarcel. (2015). An horizon for the Public Library as a place for innovation and creativity. The Library Living Lab in Volpelleres. In The White Book on Public Library Network from Diputació de Barcelona.
|
|
|
Fernando Vilariño, Debora Gil, & Petia Radeva. (2004). A Novel FLDA Formulation for Numerical Stability Analysis. In J. Vitrià, P. Radeva, & I. Aguiló (Eds.), Recent Advances in Artificial Intelligence Research and Development (Vol. 113, pp. 77–84). IOS Press.
Abstract: Fisher Linear Discriminant Analysis (FLDA) is one of the most popular techniques used in classification with dimensionality reduction. The numerical scheme involves the inversion of the within-class scatter matrix, which makes FLDA potentially ill-conditioned when this matrix becomes singular. In this paper we present a novel explicit formulation of FLDA in terms of the eccentricity ratio and eigenvector orientations of the within-class scatter matrix. An analysis of this function characterizes those situations where the FLDA response is not reliable because of numerical instability. This can resolve common situations of poor classification performance in computer vision.
Keywords: Supervised Learning; Linear Discriminant Analysis; Numerical Stability; Computer Vision
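As background to the abstract's stability argument, the standard FLDA direction and a condition-number diagnostic on the within-class scatter matrix can be sketched in a few lines of NumPy (the function name, the optional ridge term, and the use of `cond` as an instability flag are illustrative assumptions; the paper's eccentricity-ratio formulation itself is not reproduced here):

```python
import numpy as np

def flda_direction(X0, X1, ridge=0.0):
    """Two-class Fisher direction w ∝ Sw^{-1} (mu1 - mu0).
    Returns the unit direction and cond(Sw); a large condition number
    signals the near-singular regime where FLDA is unreliable."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of unnormalised class scatter matrices.
    Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
    cond = np.linalg.cond(Sw)
    d = Sw.shape[0]
    # Solve Sw w = (mu1 - mu0); a small ridge regularises singular Sw.
    w = np.linalg.solve(Sw + ridge * np.eye(d), mu1 - mu0)
    return w / np.linalg.norm(w), cond
```

When `cond` is large the solve is numerically unreliable, which is the regime the eccentricity-ratio analysis characterizes in closed form rather than via a numerical diagnostic.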
|
|
|
Felipe Lumbreras, Ramon Baldrich, Maria Vanrell, Joan Serrat, & Juan J. Villanueva. (1999). Multiresolution texture classification of ceramic tiles. In Recent Research Developments in Optical Engineering (Vol. 2, pp. 213–228). Research Signpost.
|
|
|
Fadi Dornaika, Francisco Javier Orozco, & Jordi Gonzalez. (2006). Combined Head, Lips, Eyebrows, and Eyelids Tracking Using Adaptive Appearance Models. In IV Conference on Articulated Motion and Deformable Objects (AMDO'06), LNCS 4069: 110–119.
|
|
|
Fadi Dornaika, Bogdan Raducanu, & Alireza Bosaghzadeh. (2015). Facial expression recognition based on multi observations with application to social robotics. In Bruce Flores (Ed.), Emotional and Facial Expressions: Recognition, Developmental Differences and Social Importance (pp. 153–166). Nova Science publishers.
Abstract: Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this chapter, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker that is view- and texture-independent. Our method has been extensively tested on the CMU dataset and conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression.
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2008). Facial Expression Recognition for HCI Applications. In Rabuñal (Ed.), Encyclopedia of Artificial Intelligence (Vol. II, pp. 625–631). IGI-Global Publisher.
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2011). Subtle Facial Expression Recognition in Still Images and Videos. In Yu-Jin Zhang (Ed.), Advances in Face Image Analysis: Techniques and Technologies (pp. 259–277). New York, USA: IGI-Global.
Abstract: This chapter addresses the recognition of basic facial expressions. It has three main contributions. First, the authors introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. They represent the learned facial actions associated with different facial expressions as time series. Two dynamic recognition schemes are proposed: (1) the first is based on conditional predictive models and an analysis-synthesis scheme, and (2) the second is based on examples, allowing straightforward use of machine learning approaches. Second, the authors propose an efficient recognition scheme based on the detection of keyframes in videos. Third, the authors compare the dynamic scheme with a static one based on analyzing individual snapshots and show that in general the former performs better than the latter. The authors then provide evaluations of performance using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM).
|
|