Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2004). 3D Gait Estimation from Monoscopic Video.
|
Guillem Martinez, Maya Aghaei, Martin Dijkstra, Bhalaji Nagarajan, Femke Jaarsma, Jaap van de Loosdrecht, et al. (2022). Hyper-Spectral Imaging for Overlapping Plastic Flakes Segmentation. In 47th International Conference on Acoustics, Speech, and Signal Processing.
Keywords: Hyper-spectral imaging; plastic sorting; multi-label segmentation; bitfield encoding
|
Bogdan Raducanu, Alireza Bosaghzadeh, & Fadi Dornaika. (2014). Facial Expression Recognition based on Multi-view Observations with Application to Social Robotics. In 1st Workshop on Computer Vision for Affective Computing (pp. 1–8).
Abstract: Human-robot interaction is currently a central topic in the social robotics community. One crucial aspect is affective communication, which is conveyed through facial expressions. In this paper, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (in semi-supervised mode) within a multi-observation framework. The facial features are extracted using an appearance-based, view- and texture-independent 3D face tracker. Our method has been extensively tested on the CMU dataset and compared with other methods for graph construction. Based on the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression.
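The abstract does not specify the propagation scheme itself. As an illustration only, the standard closed-form graph-based label propagation of Zhou et al. (on which many adaptive variants build) can be sketched in NumPy; the Gaussian affinity, the choice of alpha, and the function name are assumptions, not the authors' method:

```python
import numpy as np

def label_propagation(X, y, alpha=0.9, sigma=1.0):
    """Semi-supervised label propagation on a similarity graph.

    X: (n, d) feature matrix; y: (n,) labels, with -1 marking
    unlabelled samples. Returns predicted labels for all n samples.
    """
    n = X.shape[0]
    # Gaussian affinity matrix with zero diagonal.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Symmetrically normalised affinity S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # One-hot label matrix; unlabelled rows stay all-zero.
    classes = np.unique(y[y >= 0])
    Y = np.zeros((n, classes.size))
    for j, c in enumerate(classes):
        Y[y == c, j] = 1.0
    # Closed-form solution F = (I - alpha * S)^{-1} Y.
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return classes[F.argmax(axis=1)]
```

With one labelled sample per cluster, labels diffuse along the graph to the unlabelled points in the same cluster.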
|
D. Rincon, E. Frumento, R. Fogliardi, & M. Angel Viñas. (2000). Carmen/Carolin: Description and Results of an International Experience of Telemedicine.
|
Laura Lopez-Fuentes, Alessandro Farasin, Harald Skinnemoen, & Paolo Garza. (2018). Deep Learning models for passability detection of flooded roads. In MediaEval 2018 Multimedia Benchmark Workshop (Vol. 2283).
Abstract: In this paper we study and compare several approaches to detect floods and evidence for passability of roads by conventional means in Twitter. We focus on tweets containing both visual information (a picture shared by the user) and metadata, a combination of text and related extra information intrinsic to the Twitter API. This work has been done in the context of the MediaEval 2018 Multimedia Satellite Task.
|
Muhammad Muzzamil Luqman, Thierry Brouard, Jean-Yves Ramel, & Josep Llados. (2010). Vers une approche floue d'encapsulation de graphes : application à la reconnaissance de symboles [Towards a fuzzy approach to graph embedding: application to symbol recognition]. In Colloque International Francophone sur l'Écrit et le Document (pp. 169–184).
Abstract: We present a new methodology for symbol recognition, by employing a structural approach for representing visual associations in symbols and a statistical classifier for recognition. A graphic symbol is vectorized, its topological and geometrical details are encoded by an attributed relational graph and a signature is computed for it. Data adapted fuzzy intervals have been introduced for addressing the sensitivity of structural representations to noise. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from structural signatures of underlying symbol set, and is deployed in a supervised learning scenario for recognizing query symbols. Experimental results on pre-segmented 2D linear architectural and electronic symbols from GREC databases are presented.
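The abstract does not define how the "data-adapted fuzzy intervals" are realized. A standard trapezoidal membership function is one common way to implement a fuzzy interval and may serve as a sketch; the parameter names (a, b, c, d) and the function itself are illustrative assumptions:

```python
def trapezoidal_membership(x, a, b, c, d):
    """Degree of membership of x in the fuzzy interval (a, b, c, d):
    0 outside [a, d], 1 on the core [b, c], linear on the slopes.
    Requires a < b <= c < d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising slope
    return (d - x) / (d - c)       # falling slope
```

In such a scheme, a feature value near an interval boundary receives a graded membership instead of a hard 0/1 decision, which is what makes structural signatures less sensitive to noise.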
Keywords: Fuzzy interval; Graph embedding; Bayesian network; Symbol recognition
|
Herve Locteau, Sebastien Mace, Ernest Valveny, & Salvatore Tabbone. (2010). Extraction des pièces d'un plan d'habitation [Room extraction from an architectural floor plan]. In Colloque International Francophone sur l'Écrit et le Document (pp. 1–12).
Abstract: In this article, a method to extract the rooms of an architectural floor plan image is described. We first present a line detection algorithm to extract long lines in the image. Those lines are analyzed to identify the existing walls. From this point, room extraction can be seen as a classical segmentation task in which each region corresponds to a room. The chosen resolution strategy consists of recursively decomposing the image until nearly convex regions are obtained. The notion of convexity is difficult to quantify, and the selection of separation lines can also be rough. Thus, we take advantage of knowledge associated with architectural floor plans in order to obtain mainly rectangular rooms. Preliminary tests on a set of real documents show promising results.
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). Face Verification using External Features.
|
Murad Al Haj, Andrew Bagdanov, Jordi Gonzalez, & Xavier Roca. (2009). Robust and Efficient Multipose Face Detection Using Skin Color Segmentation. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we describe an efficient technique for detecting faces in arbitrary images and video sequences. The approach is based on segmentation of images or video frames into skin-colored blobs using a pixel-based heuristic. Scale and translation invariant features are then computed from these segmented blobs which are used to perform statistical discrimination between face and non-face classes. We train and evaluate our method on a standard, publicly available database of face images and analyze its performance over a range of statistical pattern classifiers. The generalization of our approach is illustrated by testing on an independent sequence of frames containing many faces and non-faces. These experiments indicate that our proposed approach obtains false positive rates comparable to more complex, state-of-the-art techniques, and that it generalizes better to new data. Furthermore, the use of skin blobs and invariant features requires fewer training samples since significantly fewer non-face candidate regions must be considered when compared to AdaBoost-based approaches.
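The abstract does not spell out the pixel-based heuristic. As an illustration, the classic RGB skin rule of Kovac et al. is one widely used pixel-level heuristic and could serve as a sketch; the thresholds and the function name below are assumptions, not the authors' exact rule:

```python
def is_skin_rgb(r, g, b):
    """Classic RGB skin-color rule (Kovac et al.) for uniform daylight:
    a pixel is skin-colored if it is bright enough, sufficiently
    saturated, and red-dominant."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15  # not greyish
            and abs(r - g) > 15                   # enough red/green contrast
            and r > g and r > b)                  # red-dominant
```

Applying such a rule per pixel and grouping the positive pixels into connected components yields the skin-colored blobs on which the scale- and translation-invariant features would then be computed.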
|
David Vazquez, Antonio Lopez, Daniel Ponsa, & David Geronimo. (2013). Interactive Training of Human Detectors. In Multimodal Interaction in Image and Video Applications (Vol. 48, pp. 169–182). Springer Berlin Heidelberg.
Abstract: Image-based human detection remains a challenging problem. The most promising detectors rely on classifiers trained with labelled samples. However, labelling is a labor-intensive manual step. To overcome this problem we propose to collect images of pedestrians from a virtual city, i.e., with automatic labels, and train a pedestrian detector with them, which works well when such virtual-world data are similar to the testing data, i.e., real-world pedestrians in urban areas. When the testing data are acquired under different conditions than the training data, e.g., human detection in personal photo albums, dataset shift appears. In previous work, we cast this problem as one of domain adaptation and solved it with an active learning procedure. In this work, we focus on the same problem but evaluate a different set of faster-to-compute features, i.e., Haar, EOH and their combination. In particular, we train a classifier with virtual-world data, using such features and Real AdaBoost as the learning machine. This classifier is applied to real-world training images. Then, a human oracle interactively corrects the wrong detections, i.e., a few missed detections are manually annotated and some false ones are pointed out as well. A low amount of manual annotation is fixed as a restriction. Real- and virtual-world difficult samples are combined within what we call the cool world, and we retrain the classifier with these data. Our experiments show that this adapted classifier is equivalent to one trained with only real-world data but requires 90% fewer manual annotations.
Keywords: Pedestrian Detection; Virtual World; AdaBoost; Domain Adaptation
|
Josep Llados. (2007). Advances in Graphics Recognition. In B. B. Chaudhuri (Ed.), Digital Document Processing: Major Directions and Recent Advances (Advances in Pattern Recognition, pp. 281–304).
|
Misael Rosales, Petia Radeva, J. Mauri, & Oriol Pujol. (2004). Simulation Model of Intravascular Ultrasound Images.
|
Fadi Dornaika, & Angel Sappa. (2007). SFM for Planar Scenes: A Direct and Robust Approach. In J. Filipe, J. Ferrier, J. Cetto, & M. Carvalho (Eds.), Informatics in Control, Automation and Robotics II (pp. 129–136). (Best papers of ICINCO 2005.)
|
Jaume Amores, & Petia Radeva. (2003). Non-rigid Registration of Vessel Structures in IVUS Images.
|
Agnes Borras, Francesc Tous, Josep Llados, & Maria Vanrell. (2003). High-Level Clothes Description Based on Color-Texture and Structural Features. In Lecture Notes in Computer Science (Vol. 2652, pp. 108–116).
Abstract: This work is part of a surveillance system in which content-based image retrieval is done in terms of people's appearance. Given an image of a person, our work provides an automatic description of their clothing according to the colour, texture and structural composition of its garments. We present a two-stage process composed of image segmentation and a region-based interpretation. We segment an image by modelling it with an attributed graph and applying a hybrid method that follows a split-and-merge strategy. We propose the interpretation of five cloth combinations that are modelled in a graph structure in terms of region features. The interpretation is viewed as a graph matching, with an associated cost, between the segmentation and the cloth models. Finally, we have tested the process on a ground truth of one hundred images.
|