|
Sergio Escalera, Jordi Gonzalez, Xavier Baro, Miguel Reyes, Oscar Lopes, Isabelle Guyon, et al. (2013). Multi-modal Gesture Recognition Challenge 2013: Dataset and Results. In 15th ACM International Conference on Multimodal Interaction (pp. 445–452).
Abstract: The recognition of continuous natural gestures is a complex and challenging problem due to the multi-modal nature of the visual cues involved (e.g. finger and lip movements, subtle facial expressions, body pose, etc.), as well as technical limitations such as spatial and temporal resolution and unreliable depth cues. In order to promote research advances in this field, we organized a challenge on multi-modal gesture recognition. We made available a large video database of 13,858 gestures from a lexicon of 20 Italian gesture categories recorded with a Kinect™ camera, providing the audio, skeletal model, user mask, RGB and depth images. The focus of the challenge was on user-independent multiple gesture learning. There are no resting positions, and the gestures are performed in continuous sequences lasting 1-2 minutes, each containing between 8 and 20 gesture instances. As a result, the dataset contains around 1,720,800 frames. In addition to the 20 main gesture categories, 'distracter' gestures are included, i.e., additional audio and gestures outside the vocabulary. The final evaluation of the challenge was defined in terms of the Levenshtein edit distance, where the goal was to indicate the real order of gestures within the sequence. 54 international teams participated in the challenge, and the first-ranked participants obtained outstanding results.
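The challenge's scoring metric, the Levenshtein edit distance between a predicted and a ground-truth sequence of gesture labels, can be sketched as follows (the function name and gesture labels are illustrative, not the organizers' actual scoring code):

```python
def levenshtein(pred, truth):
    """Edit distance between two sequences of gesture labels:
    the minimum number of insertions, deletions and substitutions
    needed to turn `pred` into `truth`."""
    m, n = len(pred), len(truth)
    # prev[j] holds the distance between pred[:i-1] and truth[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if pred[i - 1] == truth[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

# Hypothetical gesture labels: one spurious insertion costs 1 edit.
print(levenshtein(["ok", "stop", "come"], ["ok", "come"]))  # -> 1
```

Summing this distance over all test sequences and normalizing by the number of true gestures gives a score where lower is better, which is how edit-distance metrics of this kind are typically reported.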
|
|
|
Onur Ferhat, & Fernando Vilariño. (2013). A Cheap Portable Eye-Tracker Solution for Common Setups. In 17th European Conference on Eye Movements.
Abstract: We analyze the feasibility of a cheap eye-tracker whose hardware consists of a single webcam and a Raspberry Pi device. Our aim is to discover the limits of such a system and to see whether it provides acceptable performance. We base our work on the open source Opengazer (Zielinski, 2013) and we propose several improvements to create a robust, real-time system. After assessing the accuracy of our eye-tracker in extensive experiments involving 18 subjects under 4 different system setups, we developed a simple game to see how it performs in practice, and we also installed it on a Raspberry Pi to create a portable stand-alone eye-tracker which achieves 1.62° horizontal accuracy at a 3 fps refresh rate for a build cost of 70 Euros.
Keywords: Low cost; eye-tracker; software; webcam; Raspberry Pi
|
|
|
David Vazquez, & Enrique Cabello. (2007). Empleo de sistemas biométricos faciales aplicados al reconocimiento de personas en aeropuertos [Facial biometric systems applied to person recognition in airports]. Bachelor's thesis.
Abstract: This project was carried out during 2005 and 2006, testing a prototype of a facial verification system on images extracted from the video-surveillance cameras of Barajas airport. Several experiments were designed, grouped into two classes. In the first, the system is trained with images obtained under laboratory conditions and then tested with images extracted from the airport's video-surveillance cameras. In the second, both the training and the test images are extracted from Barajas.
A complete system was developed, including image acquisition and digitization, localization and cropping of the faces in the scene, subject verification, and reporting of results. The results show that, in general, an image-based facial verification system can be a valuable aid to an operator who must monitor large areas.
Keywords: Surveillance; Face detection; Face recognition
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2011). An inference model for analyzing termination conditions of Evolutionary Algorithms. In 14th Congrès Català en Intel·ligencia Artificial (pp. 216–225).
Abstract: In real-world problems, it is mandatory to design a termination condition for Evolutionary Algorithms (EAs) ensuring stabilization close to the unknown optimum. Distribution-based quantities are good candidates, provided suitable parameters are used. A main limitation for application to real-world problems is that such parameters strongly depend on the topology of the objective function, as well as on the EA paradigm used.
We claim that the termination problem would be fully solved if we had a model measuring to what extent a distribution-based quantity asymptotically behaves like the solution accuracy. We present a regression-prediction model that relates any two given quantities and reports whether they can be statistically swapped as termination conditions. Our framework is applied to two issues: first, exploring whether the parameters involved in the computation of distribution-based quantities influence their asymptotic behavior; second, assessing to what extent existing distribution-based quantities can be asymptotically exchanged for the accuracy of the EA solution.
Keywords: Evolutionary computation; Convergence; Termination conditions; Statistical inference
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2011). Using statistical inference for designing termination conditions ensuring convergence of Evolutionary Algorithms. In 11th European Conference on Artificial Life.
Abstract: A main challenge in Evolutionary Algorithms (EAs) is determining a termination condition ensuring stabilization close to the optimum in real-world applications. Although for known test functions distribution-based quantities are good candidates (provided suitable parameters are used), in real-world problems an open question remains unsolved: how can we estimate an upper bound for the termination-condition value that ensures a given accuracy for the (unknown) EA solution?
We claim that the termination problem would be fully solved if we defined a quantity (depending only on the EA output) that behaves like the solution accuracy. The open question would then be satisfactorily answered if we had a model relating both quantities, since accuracy could be predicted from the alternative quantity. We present a statistical inference framework addressing two topics: checking the correlation between the two quantities and defining a regression model for predicting (at a given confidence level) accuracy values from the EA output.
|
|
|
Ferran Poveda, Debora Gil, Albert Andaluz, & Enric Marti. (2011). Multiscale Tractography for Representing Heart Muscular Architecture. In MICCAI 2011 Workshop on Computational Diffusion MRI.
Abstract: A deep understanding of the myocardial structure of the heart would unravel crucial knowledge for clinical and medical procedures. Although the muscular architecture of the heart has been debated by countless researchers, the controversy is still alive. Diffusion Tensor MRI (DT-MRI) is a unique imaging technique for computational validation of the muscular structure of the heart. Owing to the complex arrangement of myocytes, existing techniques cannot provide comprehensive descriptions of the global muscular architecture. In this paper we introduce a multiresolution reconstruction technique based on DT-MRI streamlining for generating simplified global myocardial models. Our reconstructions can restore the most complex myocardial structures and indicate a global helical organization.
|
|
|
Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2011). A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth. In IEEE International Conference on Computer Vision – Workshops (pp. 2042–2049). Barcelona (Spain): IEEE.
Abstract: Optical flow is a valuable tool for motion analysis in autonomous navigation systems. A reliable application requires determining the accuracy of the computed optical flow. This is a main challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good statistical correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
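In the standard Lucas-Kanade formulation, the data term's stability is governed by the 2×2 structure tensor accumulated over a local window; a minimal sketch of a condition-number confidence measure in that spirit (function name, window inputs, and threshold are illustrative assumptions, not the paper's exact definition) is:

```python
import numpy as np

def lk_condition_number(Ix, Iy):
    """Condition number of the Lucas-Kanade structure tensor G = A^T A
    built from image gradients over a local window.  Large values flag
    ill-conditioned windows (aperture problem), i.e. unreliable flow."""
    # Structure tensor entries, summed over the window
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    eig = np.linalg.eigvalsh(G)  # eigenvalues in ascending order
    lam_min, lam_max = eig[0], eig[-1]
    return np.inf if lam_min <= 1e-12 else lam_max / lam_min

# Edge-like window: gradient only along x -> aperture problem -> infinite
print(lk_condition_number(np.ones((5, 5)), np.zeros((5, 5))))

# Corner-like window: balanced gradients -> condition number close to 1
Ix = np.array([[1.0, 0.0], [0.0, 1.0]])
Iy = np.array([[0.0, 1.0], [1.0, 0.0]])
print(lk_condition_number(Ix, Iy))
```

The appeal of such a measure is that it needs only the image gradients already computed by Lucas-Kanade, so confidence comes essentially for free at each window.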
|
|
|
David Vazquez, Antonio Lopez, Daniel Ponsa, & Javier Marin. (2011). Virtual Worlds and Active Learning for Human Detection. In 13th International Conference on Multimodal Interaction (pp. 393–400). New York, NY, USA: ACM.
Abstract: Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a labour-intensive step, especially in cases like human detection, where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good, but not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs equally well, in real-world images, as one trained on manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt from the combination of virtual and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
Keywords: Pedestrian Detection; Human detection; Virtual; Domain Adaptation; Active Learning
|
|
|
Aura Hernandez-Sabate, & Debora Gil. (2012). The Benefits of IVUS Dynamics for Retrieving Stable Models of Arteries. In Yasuhiro Honda (Ed.), Intravascular Ultrasound (pp. 185–206). Intech.
|
|
|
Andrew Nolan, Daniel Serrano, Aura Hernandez-Sabate, Daniel Ponsa, & Antonio Lopez. (2013). Obstacle mapping module for quadrotors on outdoor Search and Rescue operations. In International Micro Air Vehicle Conference and Flight Competition.
Abstract: Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAV), due to their limited payload capacity for advanced sensors. Unlike larger vehicles, MAV can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse the performance of PADE and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for MAV performing safe autonomous navigation in unknown and unstructured environments.
Keywords: UAV
|
|
|
Anastasios Doulamis, Nikolaos Doulamis, Marco Bertini, Jordi Gonzalez, & Thomas B. Moeslund. (2013). Analysis and Retrieval of Tracked Events and Motion in Imagery Streams.
|
|
|
Naveen Onkarappa, & Angel Sappa. (2011). Space Variant Representations for Mobile Platform Vision Applications. In P. Real, D. Diaz-Pernil, H. Molina-Abril, A. Berciano, & W. Kropatsch (Eds.), 14th International Conference on Computer Analysis of Images and Patterns (Vol. 6855, pp. 146–154). Springer Berlin Heidelberg.
Abstract: The log-polar space-variant representation, motivated by biological vision, has been widely studied in the literature. Its data reduction and invariance properties make it useful in many vision applications. However, by its nature it fails to preserve features in the periphery. In the current work, as an attempt to overcome this problem, we propose a novel space-variant representation. It is evaluated and shown to be better than the log-polar representation at preserving peripheral information, which is crucial for on-board mobile vision applications. The evaluation is performed by comparing the log-polar and the proposed representation when both are used for estimating dense optical flow.
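For illustration, the classical log-polar mapping the paper builds on resamples an image on log-spaced radii about a fixed center; a minimal NumPy sketch (nearest-neighbour sampling, arbitrary grid sizes, not the authors' implementation) is:

```python
import numpy as np

def log_polar(img, n_rho=64, n_theta=128):
    """Resample a 2-D image onto a log-polar grid about its center.

    Rows index log-spaced radii, columns index angles, so central
    pixels are sampled densely and the periphery coarsely -- the
    data-reduction property the abstract refers to."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    # Radii spaced uniformly in log space, from 1 to the corner distance
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    # Broadcast (n_rho, 1) * (n_theta,) -> sample coordinates (n_rho, n_theta)
    ys = np.clip(np.round(cy + rho[:, None] * np.sin(theta)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rho[:, None] * np.cos(theta)).astype(int), 0, w - 1)
    return img[ys, xs]

out = log_polar(np.arange(100.0).reshape(10, 10), n_rho=32, n_theta=64)
print(out.shape)  # (32, 64)
```

Because every output pixel is a sample of the input, the mapping preserves the input's value range; the periphery-loss issue motivating the paper comes from the increasingly sparse sampling at large radii.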
|
|
|
Lluis Pere de las Heras, Ahmed Sheraz, Marcus Liwicki, Ernest Valveny, & Gemma Sanchez. (2014). Statistical Segmentation and Structural Recognition for Floor Plan Interpretation. International Journal on Document Analysis and Recognition (IJDAR), 17(3), 221–237.
Abstract: A generic method for floor plan analysis and interpretation is presented in this article. The method, which is mainly inspired by the way engineers draw and interpret floor plans, applies two recognition steps in a bottom-up manner. First, basic building blocks, i.e., walls, doors, and windows, are detected using a statistical patch-based segmentation approach. Second, a graph is generated, and structural pattern recognition techniques are applied to further locate the main entities, i.e., the rooms of the building. The proposed approach is able to analyze any type of floor plan regardless of the notation used. We have evaluated our method on different publicly available datasets of real architectural floor plans with different notations. The overall detection and recognition accuracy is about 95%, which is significantly better than any other state-of-the-art method. Our approach is generic enough that it could easily be adapted to the recognition and interpretation of other printed, machine-generated structured documents.
|
|
|
H. Emrah Tasli, Cevahir Çigla, Theo Gevers, & A. Aydin Alatan. (2013). Super pixel extraction via convexity induced boundary adaptation. In 14th IEEE International Conference on Multimedia and Expo (pp. 1–6).
Abstract: This study presents an efficient super-pixel extraction algorithm with major contributions to the state-of-the-art in terms of accuracy and computational complexity. Segmentation accuracy is improved through a convexity-constrained geodesic distance, while computational efficiency is achieved by replacing complete region processing with a boundary adaptation scheme. Starting from uniformly distributed, equal-sized rectangular super-pixels, region boundaries are iteratively adapted to intensity edges by assigning boundary pixels to the most similar neighboring super-pixels. At each iteration, super-pixel regions are updated and hence progressively converge to compact pixel groups. Experimental results, with state-of-the-art comparisons, validate the performance of the proposed technique in terms of both accuracy and speed.
|
|
|
H. Emrah Tasli, Jan van Gemert, & Theo Gevers. (2013). Spot the differences: from a photograph burst to the single best picture. In 21ST ACM International Conference on Multimedia (pp. 729–732).
Abstract: With the rise of the digital camera, people nowadays typically take several near-identical photos of the same scene to maximize the chances of a good shot. This paper proposes a user-friendly tool for exploring a personal photo gallery to select, or even create, the best shot of a scene from its multiple alternatives. This functionality is realized through a graphical user interface where the best viewpoint can be selected from a generated panorama of the scene. Once the viewpoint is selected, the user can explore possible alternatives coming from the other images. Using this tool, one can explore a photo gallery efficiently. Moreover, additional compositions from other images are also possible; with such compositions, one can go from a burst of photographs to the single best one. Even playful compositions, in which a person is duplicated within the same image, are possible with the proposed tool.
|
|