Josep Llados, Jaime Lopez-Krahe, Gemma Sanchez, & Enric Marti. (2000). Interprétation de cartes et plans par mise en correspondance de graphes d'attributs. In 12e Congrès Francophone AFRIF–AFIA (Vol. 3, pp. 225–234).
|
Clement Guerin, Christophe Rigaud, Karell Bertet, Jean-Christophe Burie, Arnaud Revel, & Jean-Marc Ogier. (2014). Réduction de l’espace de recherche pour les personnages de bandes dessinées. In 19th National Congress Reconnaissance de Formes et l'Intelligence Artificielle.
Abstract: Comics constitute an important cultural heritage in many countries, and their large-scale digitization opens up the possibility of searching within the image content. To date, research has focused mainly on page structure and textual content; little work addresses the graphical content. We propose to build on elements that have already been studied, such as the positions of panels and speech balloons, in order to reduce the search space and locate characters based on balloon tails. Evaluation of our contributions on the eBDtheque dataset shows a balloon-tail detection rate of 81.2%, character localization of up to 85%, and a search-space reduction of more than 50%.
Keywords: contextual search; document analysis; comics characters
|
Fernando Barrera, Felipe Lumbreras, Cristhian Aguilera, & Angel Sappa. (2012). Planar-Based Multispectral Stereo. In 11th Quantitative InfraRed Thermography.
|
Cristhian Aguilera, Fernando Barrera, Angel Sappa, & Ricardo Toledo. (2012). A Novel SIFT-Like-Based Approach for FIR-VS Images Registration. In 11th Quantitative InfraRed Thermography.
|
Felipe Lumbreras, Xavier Roca, Daniel Ponsa, Robert Benavente, Judit Martinez, Silvia Sanchez, et al. (2001). Visual Inspection of Safety Belts. In International Conference on Quality Control by Artificial Vision (Vol. 2, pp. 526–531).
|
Victor Campmany, Sergio Silva, Juan Carlos Moure, Antoni Espinosa, David Vazquez, & Antonio Lopez. (2015). GPU-based pedestrian detection for autonomous driving. In Programming and Tuning Massive Parallel Systems (PUMPS).
Abstract: Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides being one of the hardest tasks within computer vision, it involves huge computational costs. The real-time constraints in the field are tight, and regular processors are not able to handle the workload at an acceptable rate of frames per second (fps). Moreover, multiple cameras are required to obtain accurate results, so the need to speed up the process is even higher. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system. Further, we introduce significant algorithmic adjustments and optimizations to adapt the problem to the GPU architecture. The aim is to provide a system capable of running in real time while obtaining reliable results.
Keywords: Autonomous Driving; ADAS; CUDA; Pedestrian Detection
|
Sergio Silva, Victor Campmany, Laura Sellart, Juan Carlos Moure, Antoni Espinosa, David Vazquez, et al. (2015). Autonomous GPU-based Driving. In Programming and Tuning Massive Parallel Systems.
Abstract: Human factors cause most driving accidents; this is why autonomous driving is nowadays commonly discussed as an alternative. Autonomous driving will not only increase safety, but will also enable a system of cooperative self-driving cars that reduces pollution and congestion. Furthermore, it will provide more freedom to handicapped people, the elderly, and children.
Autonomous driving requires perceiving and understanding the vehicle environment (e.g., road, traffic signs, pedestrians, vehicles) using sensors (e.g., cameras, lidars, sonars, and radars), self-localization (requiring GPS, inertial sensors, and visual localization in precise maps), controlling the vehicle, and planning routes. These algorithms require high computation capability, and thanks to NVIDIA GPU acceleration this is starting to become feasible.
NVIDIA® is developing a new platform for boosting autonomous driving capabilities that is capable of managing the vehicle via CAN bus: the Drive™ PX. It has 8 ARM cores with dual accelerated Tegra® X1 chips, 12 synchronized camera inputs for 360º vehicle perception, 4G and Wi-Fi capabilities for vehicle communications, and GPS and inertial sensor inputs for self-localization.
Our research group has been selected to test the Drive™ PX. Accordingly, we are developing a Drive™ PX-based autonomous car. Currently, we are porting our previous CPU-based algorithms (e.g., Lane Departure Warning, Collision Warning, Automatic Cruise Control, Pedestrian Protection, and Semantic Segmentation) to run on the GPU.
Keywords: Autonomous Driving; ADAS; CUDA
|
Daniel Hernandez, Alejandro Chacon, Antonio Espinosa, David Vazquez, Juan Carlos Moure, & Antonio Lopez. (2016). Stereo Matching using SGM on the GPU.
Abstract: Dense, robust and real-time computation of depth information from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver assistance systems (ADAS) and autonomous vehicles. Semi-Global Matching (SGM) is a widely used algorithm that propagates consistency constraints along several paths across the image. This work presents a real-time system producing reliable disparity estimation results on the new embedded energy efficient GPU devices. Our design runs on a Tegra X1 at 42 frames per second (fps) for an image size of 640x480, 128 disparity levels, and using 4 path directions for the SGM method.
Keywords: CUDA; Stereo; Autonomous Vehicle
|
Alejandro Tabas, Emili Balaguer-Ballester, & Laura Igual. (2014). Spatial Discriminant ICA for RS-fMRI characterisation. In 4th International Workshop on Pattern Recognition in Neuroimaging (pp. 1–4).
Abstract: Resting-State fMRI (RS-fMRI) is a brain imaging technique useful for exploring functional connectivity. A major point of interest in RS-fMRI analysis is to isolate connectivity patterns characterising disorders such as ADHD. Such characterisation is usually performed in two steps: first, all connectivity patterns in the data are extracted by means of Independent Component Analysis (ICA); second, standard statistical tests are performed over the extracted patterns to find differences between control and clinical groups. In this work we introduce a novel, single-step approach for this problem termed Spatial Discriminant ICA. The algorithm can efficiently isolate networks of functional connectivity characterising a clinical group by combining ICA and a new variant of Fisher’s Linear Discriminant, also introduced in this work. As the characterisation is carried out in a single step, it potentially provides a richer characterisation of inter-class differences. The algorithm is tested using synthetic and real fMRI data, showing promising results in both experiments.
|
Josep Llados, Ernest Valveny, Gemma Sanchez, & Enric Marti. (2003). A Case Study of Pattern Recognition: Symbol Recognition in Graphic Documents. In Proceedings of Pattern Recognition in Information Systems (pp. 1–13). ICEIS Press.
|
N. Serrano, L. Tarazon, D. Perez, Oriol Ramos Terrades, & S. Juan. (2010). The GIDOC Prototype. In 10th International Workshop on Pattern Recognition in Information Systems (pp. 82–89).
Abstract: Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it.
A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription.
GIDOC is designed to work with (large) collections of homogeneous documents, that is, of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents. And this is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions.
|
Miguel Oliveira, V. Santos, & Angel Sappa. (2012). Short term path planning using a multiple hypothesis evaluation approach for an autonomous driving competition. In IEEE 4th Workshop on Planning, Perception and Navigation for Intelligent Vehicles.
|
Fernando Vilariño. (2016). Dissemination, creation and education from archives: Case study of the collection of Digitized Visual Poems from Joan Brossa Foundation. In International Workshop on Poetry: Archives, Poetries and Receptions.
|
German Barquero, Johnny Nuñez, Sergio Escalera, Zhen Xu, Wei-Wei Tu, & Isabelle Guyon. (2022). Didn’t see that coming: a survey on non-verbal social human behavior forecasting. In Understanding Social Behavior in Dyadic and Small Group Interactions (Vol. 173, pp. 139–178).
Abstract: Non-verbal social human behavior forecasting has increasingly attracted the interest of the research community in recent years. Its direct applications to human-robot interaction and socially-aware human motion generation make it a very attractive field. In this survey, we define the behavior forecasting problem for multiple interactive agents in a generic way that aims at unifying the fields of social signals prediction and human motion forecasting, traditionally separated. We hold that both problem formulations refer to the same conceptual problem, and identify many shared fundamental challenges: future stochasticity, context awareness, history exploitation, etc. We also propose a taxonomy that comprises methods published in the last 5 years in a very informative way and describes the current main concerns of the community with regard to this problem. In order to promote further research in this field, we also provide a summarized and friendly overview of audiovisual datasets featuring non-acted social interactions. Finally, we describe the most common metrics used in this task and their particular issues.
|
Adam Fodor, Rachid R. Saboundji, Julio C. S. Jacques Junior, Sergio Escalera, David Gallardo Pujol, & Andras Lorincz. (2022). Multimodal Sentiment and Personality Perception Under Speech: A Comparison of Transformer-based Architectures. In Understanding Social Behavior in Dyadic and Small Group Interactions (Vol. 173, pp. 218–241).
Abstract: Human-machine and human-robot interaction and collaboration appear in diverse fields, from homecare to Cyber-Physical Systems. Technological development is fast, whereas real-time methods for social communication analysis that can measure small changes in sentiment and personality states, including visual, acoustic and language modalities, are lagging, particularly when the goal is to build robust, appearance-invariant, and fair methods. We study and compare methods capable of fusing modalities while satisfying real-time and appearance-invariance conditions. We compare state-of-the-art transformer architectures in sentiment estimation and introduce them in the much less explored field of personality perception. We show that the architectures perform differently on automatic sentiment and personality perception, suggesting that each task may be better captured/modeled by a particular method. Our work calls attention to the attractive properties of the linear versions of the transformer architectures. In particular, we show that the best results are achieved by fusing the preprocessing methods of the different architectures. However, quadratic transformers pose extreme demands on computation power and energy consumption for real-time computation due to their memory requirements. In turn, linear transformers pave the way for quantifying small changes in sentiment estimation and personality perception in real-time social communications for machines and robots.
|