|
Arnau Ramisa, Adriana Tapus, David Aldavert, Ricardo Toledo, & Ramon Lopez de Mantaras. (2009). Robust Vision-Based Localization using Combinations of Local Feature Regions Detectors. AR - Autonomous Robots, 27(4), 373–385.
Abstract: This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature consisting of a constellation of descriptors computed over different types of local affine covariant regions, extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and also to find out whether combining complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that, if false matches are effectively rejected, the combination of different affine covariant region detectors notably increases the performance of the approach by combining the strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure and using a standard perspective view with a field of view of 45°.
Another contribution of this work, aimed at systematically testing topological localization methods, is a novel method to measure the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. For this to be effective, the evaluation must be done in several varied environments that cover all the situations in which the robot may have to perform localization.
|
|
|
Arnau Ramisa, Adriana Tapus, Ramon Lopez de Mantaras, & Ricardo Toledo. (2008). Mobile Robot Localization using Panoramic Vision and Combination of Feature Region Detectors. In IEEE International Conference on Robotics and Automation, (538–543).
|
|
|
Arnau Ramisa, Alex Goldhoorn, David Aldavert, Ricardo Toledo, & Ramon Lopez de Mantaras. (2011). Combining Invariant Features and the ALV Homing Method for Autonomous Robot Navigation Based on Panoramas. JIRC - Journal of Intelligent and Robotic Systems, 64(3-4), 625–649.
Abstract: Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require modifying the environment by placing artificial landmarks in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach was first evaluated in simulation and, given the promising results, also on two datasets of panoramas from real-world environments.
|
|
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo, & Ramon Lopez de Mantaras. (2012). Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot. JIRC - Journal of Intelligent and Robotic Systems, 68(2), 185–208.
Abstract: This paper addresses visual object perception applied to mobile robotics. Being able to perceive household objects in unstructured environments is a key capability for making robots suitable to perform complex tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability of image formation from a moving camera under tight time constraints. The paper brings to attention some of the issues with applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. Thus, this work aims at evaluating the state of the art in object perception in an attempt to develop a lightweight solution for mobile robotics use and research in typical indoor settings.
|
|
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo, & Ramon Lopez de Mantaras. (2011). The IIIA30 Mobile Robot Object Recognition Dataset. In 11th Portuguese Robotics Open.
Abstract: Object perception is a key feature in order to make mobile robots able to perform high-level tasks. However, research aimed at addressing the constraints and limitations encountered in a mobile robotics scenario, like low image resolution, motion blur or tight computational constraints, is still very scarce. In order to facilitate future research in this direction, in this work we present an object detection and recognition dataset acquired using a mobile robotic platform. As a baseline for the dataset, we evaluated the cascade of weak classifiers object detection method from Viola and Jones.
|
|
|
Arnau Ramisa, Ramon Lopez de Mantaras, & Ricardo Toledo. (2007). Comparing Combinations of Feature Regions for Panoramic VSLAM. In 4th International Conference on Informatics in Control, Automation and Robotics (292–297).
|
|
|
Arnau Ramisa, Shrihari Vasudevan, David Aldavert, Ricardo Toledo, & Ramon Lopez de Mantaras. (2009). Evaluation of the SIFT Object Recognition Method in Mobile Robots: Frontiers in Artificial Intelligence and Applications. In 12th International Conference of the Catalan Association for Artificial Intelligence (Vol. 202, pp. 9–18).
Abstract: General object recognition in mobile robots is of primary importance in order to enhance the representation of the environment that robots will use for their reasoning processes. We contribute to reducing this gap by evaluating the SIFT Object Recognition method on a challenging dataset, focusing on issues relevant to mobile robotics. The method was found to be resistant to the working conditions of mobile robots, but mainly for well-textured objects.
|
|
|
Artur Xarles, Sergio Escalera, Thomas B. Moeslund, & Albert Clapes. (2023). ASTRA: An Action Spotting TRAnsformer for Soccer Videos. In Proceedings of the 6th International Workshop on Multimedia Content Analysis in Sports (93–102).
Abstract: In this paper, we introduce ASTRA, a Transformer-based model designed for the task of Action Spotting in soccer matches. ASTRA addresses several challenges inherent in the task and dataset, including the requirement for precise action localization, the presence of a long-tail data distribution, non-visibility in certain actions, and inherent label noise. To do so, ASTRA incorporates (a) a Transformer encoder-decoder architecture to achieve the desired output temporal resolution and to produce precise predictions, (b) a balanced mixup strategy to handle the long-tail distribution of the data, (c) an uncertainty-aware displacement head to capture the label variability, and (d) input audio signal to enhance detection of non-visible actions. Results demonstrate the effectiveness of ASTRA, achieving a tight Average-mAP of 66.82 on the test set. Moreover, in the SoccerNet 2023 Action Spotting challenge, we secure the 3rd position with an Average-mAP of 70.21 on the challenge set.
|
|
|
Arturo Fuentes, F. Javier Sanchez, Thomas Voncina, & Jorge Bernal. (2021). LAMV: Learning to Predict Where Spectators Look in Live Music Performances. In 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 500–507).
Abstract: The advent of artificial intelligence has brought about an evolution in how different daily work tasks are performed. The analysis of cultural content has seen a huge boost from the development of computer-assisted methods that allow easy and transparent data access. In our case, we deal with automating the production of live shows, such as music concerts, aiming to develop a system that can indicate to the producer which camera to broadcast based on what each camera is showing. In this context, we consider it essential to understand where spectators look and what they are interested in, so that the computational method can learn from this information. The work presented here shows the results of a first preliminary study in which we compare areas of interest defined by human beings with those indicated by an automatic system. Our system is based on extracting motion textures from dynamic Spatio-Temporal Volumes (STV) and then analyzing the patterns by means of texture analysis techniques. We validate our approach on several video sequences that have been labeled by 16 different experts. Our method is able to match the relevant areas identified by the experts, achieving recall scores higher than 80% when a distance of 80 pixels between the method and the ground truth is considered. Current performance shows promise in detecting abnormal peaks and movement trends.
|
|
|
Arya Farkhondeh, Cristina Palmero, Simone Scardapane, & Sergio Escalera. (2022). Towards Self-Supervised Gaze Estimation.
Abstract: Recent joint embedding-based self-supervised methods have surpassed standard supervised approaches on various image recognition tasks such as image classification. These self-supervised methods aim at maximizing agreement between features extracted from two differently transformed views of the same image, which results in learning an invariant representation with respect to appearance and geometric image transformations. However, the effectiveness of these approaches remains unclear in the context of gaze estimation, a structured regression task that requires equivariance under geometric transformations (e.g., rotations, horizontal flip). In this work, we propose SwAT, an equivariant version of the online clustering-based self-supervised approach SwAV, to learn more informative representations for gaze estimation. We demonstrate that SwAT, with ResNet-50 and supported with uncurated unlabeled face images, outperforms state-of-the-art gaze estimation methods and supervised baselines in various experiments. In particular, we achieve up to 57% and 25% improvements in cross-dataset and within-dataset evaluation tasks on existing benchmarks (ETH-XGaze, Gaze360, and MPIIFaceGaze).
|
|
|
Asma Bensalah, Alicia Fornes, Cristina Carmona_Duarte, & Josep Llados. (2022). Easing Automatic Neurorehabilitation via Classification and Smoothness Analysis. In Intertwining Graphonomics with Human Movements. 20th International Conference of the International Graphonomics Society, IGS 2022 (Vol. 13424, pp. 336–348). LNCS.
Abstract: Assessing the quality of movements of post-stroke patients during the rehabilitation phase is vital, given that there is no standard stroke rehabilitation plan for all patients. In fact, the plan depends fundamentally on each patient's functional independence and their progress along the rehabilitation sessions. To tackle this challenge and make neurorehabilitation more agile, we propose an automatic assessment pipeline that starts by recognising patients' movements by means of a shallow deep learning architecture, and then measures movement quality using the jerk measure and related measures. A particularity of this work is that the dataset used is clinically relevant, since it represents movements inspired by Fugl-Meyer, a widely used upper-limb clinical stroke assessment scale. We show that it is possible to detect the contrast between healthy subjects' and patients' movements in terms of smoothness, as well as to reach conclusions about the patients' progress during the rehabilitation sessions that correspond to the clinicians' findings for each case.
Keywords: Neurorehabilitation; Upper-limb; Movement classification; Movement smoothness; Deep learning; Jerk
|
|
|
Asma Bensalah, Antonio Parziale, Giuseppe De Gregorio, Angelo Marcelli, Alicia Fornes, & Josep Llados. (2023). I Can’t Believe It’s Not Better: In-air Movement for Alzheimer Handwriting Synthetic Generation. In 21st International Graphonomics Conference (136–148).
Abstract: During recent years, there has been a boom in the use of deep learning for handwriting analysis and recognition. One main application of handwriting analysis is early detection and diagnosis in the health field. Unfortunately, most real-world problems still suffer from a scarcity of data, which makes the use of deep learning-based models difficult. To alleviate this problem, some works resort to synthetic data generation. Lately, more works are directed towards guided synthetic data generation, which uses domain and data knowledge to generate realistic data that can be useful for training deep learning models. In this work, we combine domain knowledge about Alzheimer's disease and handwriting, and use it for a more guided data generation. Concretely, we have explored the use of in-air movements for synthetic data generation.
|
|
|
Asma Bensalah, Jialuo Chen, Alicia Fornes, Cristina Carmona_Duarte, Josep Llados, & Miguel A. Ferrer. (2020). Towards Stroke Patients' Upper-limb Automatic Motor Assessment Using Smartwatches. In International Workshop on Artificial Intelligence for Healthcare Applications (Vol. 12661, pp. 476–489).
Abstract: Assessing the physical condition in rehabilitation scenarios is a challenging problem, since it involves Human Activity Recognition (HAR) and kinematic analysis methods. In addition, the difficulties increase in unconstrained rehabilitation scenarios, which are much closer to the real use cases. In particular, our aim is to design an upper-limb assessment pipeline for stroke patients using smartwatches. We focus on the HAR task, as it is the first part of the assessing pipeline. Our main target is to automatically detect and recognize four key movements inspired by the Fugl-Meyer assessment scale, which are performed in both constrained and unconstrained scenarios. In addition to the application protocol and dataset, we propose two detection and classification baseline methods. We believe that the proposed framework, dataset and baseline results will serve to foster this research field.
|
|
|
Asma Bensalah, Pau Riba, Alicia Fornes, & Josep Llados. (2019). Shoot less and Sketch more: An Efficient Sketch Classification via Joining Graph Neural Networks and Few-shot Learning. In 13th IAPR International Workshop on Graphics Recognition (pp. 80–85).
Abstract: With the emergence of touchpad devices and drawing tablets, a new era of sketching has started afresh. However, the recognition of sketches is still a tough task due to the variability of drawing styles. Moreover, in some application scenarios there is little labelled data available for training, which imposes a limitation on deep learning architectures. In addition, in many cases there is a need for models able to adapt to new classes. In order to cope with these limitations, we propose a method based on few-shot learning and graph neural networks for classifying sketches, aiming for an efficient neural model. We test our approach on several databases of sketches, showing promising results.
Keywords: Sketch classification; Convolutional Neural Network; Graph Neural Network; Few-shot learning
|
|
|
Aura Hernandez-Sabate. (2009). Exploring Arterial Dynamics and Structures in IntraVascular Ultrasound Sequences (Debora Gil, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Cardiovascular diseases are a leading cause of death in developed countries. Most of them are caused by arterial (especially coronary) diseases, mainly due to plaque accumulation. Such pathology narrows blood flow (stenosis) and affects the artery's biomechanical elastic properties (atherosclerosis). In the last decades, IntraVascular UltraSound (IVUS) has become a usual imaging technique for the diagnosis and follow-up of arterial diseases. IVUS is a catheter-based imaging technique which shows a sequence of cross sections of the artery under study. Inspection of a single image gives information about the percentage of stenosis, while inspection of longitudinal views provides information about the artery's biomechanical properties, which can prevent a fatal outcome of the cardiovascular disease. On one hand, the dynamics of arteries (due to heart pumping, among other factors) is a major artifact when exploring tissue biomechanical properties. On the other hand, manual stenosis measurement requires manual tracing of vessel borders, which is a time-consuming task and may suffer from inter-observer variations. This PhD thesis proposes several image processing tools for exploring vessel dynamics and structures. We present a physics-based model to extract, analyze and correct vessel in-plane rigid dynamics and to retrieve the cardiac phase. Furthermore, we introduce a deterministic-statistical method for automatic vessel border detection. In particular, we address adventitia layer segmentation. An accurate validation protocol to ensure reliable clinical applicability of the methods is a crucial step in any proposal of an algorithm. In this thesis we take special care in designing a validation protocol for each approach proposed, and we contribute to the in vivo dynamics validation with a quantitative and objective score to measure the amount of motion suppressed.
|
|