Patricia Marquez, Debora Gil, Aura Hernandez-Sabate, & Daniel Kondermann. (2013). When Is A Confidence Measure Good Enough? In 9th International Conference on Computer Vision Systems (LNCS Vol. 7963, pp. 344–353). Springer.
Abstract: Confidence estimation has recently become a hot topic in image processing and computer vision. Yet several definitions of the term “confidence” exist and are sometimes used interchangeably. This is a position paper in which we aim to give an overview of existing definitions,
thereby clarifying the meaning of the terms used to facilitate further research in this field. Based on these clarifications, we develop a theory to compare confidence measures with respect to their quality.
Keywords: Optical flow, confidence measure, performance evaluation
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2008). Recognition of Multi-oriented Touching Characters in Graphical Documents. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing (Vol. 16, pp. 297–304).
|
M. Ivasic-Kos, M. Pobar, & Jordi Gonzalez. (2019). Active Player Detection in Handball Videos Using Optical Flow and STIPs Based Measures. In 13th International Conference on Signal Processing and Communication Systems.
Abstract: In handball videos recorded during training, multiple players are present in the scene at the same time. Although they all might move and interact, not all players contribute to the currently relevant exercise or practice the given handball techniques. The goal of this experiment is to automatically determine which players in training footage perform the given handball techniques and are therefore considered active. This is a very challenging task that requires a precise object detector able to handle cluttered scenes with poor illumination and many players that appear at different sizes and distances from the camera, partially occluded and moving fast. To determine which of the detected players are active, additional information is needed about the level of player activity. Since many handball actions are characterized by considerable changes in speed and position and by variations in the player's appearance, we propose using spatio-temporal interest points (STIPs) and optical flow (OF). We therefore propose an active player detection method combining the YOLO object detector with two activity measures based on STIPs and OF. The performance of the proposed method and activity measures is evaluated on a custom handball video dataset acquired during handball training lessons.
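The OF-based activity measure admits a compact sketch: given a dense flow field and the detected bounding boxes, score each player by the mean flow magnitude inside the box. This is a minimal illustration under assumed conventions; the paper's exact measure, thresholds, and the STIP-based counterpart are not reproduced here, and `of_activity_measure` is an invented name.

```python
import numpy as np

def of_activity_measure(flow, boxes):
    """Mean optical-flow magnitude inside each detected bounding box.

    flow  : (H, W, 2) array of per-pixel (dx, dy) displacements
    boxes : list of (x, y, w, h) player detections (e.g., from YOLO)
    Returns one activity score per box; fast-moving players score high.
    """
    mag = np.linalg.norm(flow, axis=2)          # per-pixel speed
    scores = []
    for x, y, w, h in boxes:
        patch = mag[y:y + h, x:x + w]
        scores.append(float(patch.mean()) if patch.size else 0.0)
    return scores

# Toy frame: only one region of the flow field is moving.
flow = np.zeros((100, 100, 2))
flow[20:40, 20:40] = (3.0, 4.0)                 # magnitude 5 in this patch
active, idle = of_activity_measure(flow, [(20, 20, 20, 20), (60, 60, 20, 20)])
# thresholding the scores separates active from idle players
```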
|
Pau Rodriguez, Jordi Gonzalez, Josep M. Gonfaus, & Xavier Roca. (2019). Towards Visual Personality Questionnaires based on Deep Learning and Social Media. In 21st International Conference on Social Influence and Social Psychology.
|
Naveen Onkarappa, Sujay M. Veerabhadrappa, & Angel Sappa. (2012). Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture. In 4th International Conference on Signal and Image Processing (Vol. 221, pp. 257–267).
Abstract: Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform reliably in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times, and in different seasons. In this work, we study the relationship between optical flow and different roads by analyzing optical flow accuracy on different road textures. Several texture measures are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework to generate synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow.
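As a hedged illustration of the kind of scene-texture measure such a study relies on (the specific measures evaluated in the paper are not named in this abstract), mean gradient magnitude yields one scalar per image patch:

```python
import numpy as np

def gradient_texture_energy(patch):
    """Mean gradient magnitude of an image patch: a simple scalar
    texture measure (illustrative only; not a measure the paper
    necessarily uses).
    """
    gy, gx = np.gradient(patch.astype(float))
    return float(np.hypot(gx, gy).mean())

flat = np.full((32, 32), 128.0)                       # untextured road patch
textured = flat + np.random.default_rng(1).normal(0, 20, (32, 32))
# the flat patch scores 0.0 while the textured patch scores much
# higher; more texture generally permits more accurate optical flow
# with a smaller regularization weight
```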
|
Monica Piñol, Angel Sappa, & Ricardo Toledo. (2012). MultiTable Reinforcement for Visual Object Recognition. In 4th International Conference on Signal and Image Processing (Vol. 221, pp. 469–480). LNCS. Springer India.
Abstract: This paper presents a bag-of-features based method for visual object recognition. Our contribution is focused on the selection of the best feature descriptor. It is implemented by using a novel multi-table reinforcement learning method that selects, among five classical descriptors (i.e., Spin, SIFT, SURF, C-SIFT and PHOW), the one that best describes each image. Experimental results and comparisons are provided showing the improvements achieved with the proposed approach.
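The descriptor-selection idea can be sketched as a bandit problem: treat the five descriptors as arms and learn, from recognition rewards, which one to pick. This epsilon-greedy toy is a stand-in for the paper's multi-table reinforcement learning, and the per-descriptor accuracies are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
DESCRIPTORS = ["Spin", "SIFT", "SURF", "C-SIFT", "PHOW"]

def select(q, eps=0.1):
    """Epsilon-greedy arm choice: explore with prob eps, else best Q."""
    return int(rng.integers(len(q))) if rng.random() < eps else int(np.argmax(q))

def update(q, counts, arm, reward):
    """Incremental running-average update of the chosen arm's Q-value."""
    counts[arm] += 1
    q[arm] += (reward - q[arm]) / counts[arm]

q, counts = np.zeros(5), np.zeros(5)
true_acc = np.array([0.3, 0.9, 0.5, 0.4, 0.6])   # invented accuracies
for _ in range(2000):
    arm = select(q)
    update(q, counts, arm, reward=float(rng.random() < true_acc[arm]))
best = DESCRIPTORS[int(np.argmax(q))]            # converges to the best arm
```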
|
Mariona Caros, Maite Garolera, Petia Radeva, & Xavier Giro. (2020). Automatic Reminiscence Therapy for Dementia. In 10th ACM International Conference on Multimedia Retrieval (pp. 383–387).
Abstract: With people living longer than ever, the number of cases of dementia such as Alzheimer's disease increases steadily. Dementia affects more than 46 million people worldwide, and it is estimated that by 2050 more than 100 million will be affected. While there are no effective treatments for these terminal diseases, therapies such as reminiscence, which stimulate memories from the past, are recommended. Currently, reminiscence therapy takes place in care homes and is guided by a therapist or a carer. In this work, we present an AI-based solution to automate reminiscence therapy: a dialogue system that uses photos as input to generate questions. We ran a usability case study with patients diagnosed with mild cognitive impairment which shows that they found the system very entertaining and challenging. Overall, this paper presents how reminiscence therapy can be automated using machine learning and deployed to smartphones and laptops, making the therapy more accessible to every person affected by dementia.
|
Elvina Motard, Bogdan Raducanu, Viviane Cadenat, & Jordi Vitria. (2007). Incremental On-Line Topological Map Learning for A Visual Homing Application. In IEEE International Conference on Robotics and Automation (pp. 2049–2054).
|
Hugo Berti, Angel Sappa, & Osvaldo Agamennoni. (2007). Autonomous robot navigation with a global and asymptotic convergence. In IEEE International Conference on Robotics and Automation (pp. 2712–2717).
|
Fadi Dornaika, & Bogdan Raducanu. (2008). Detecting and Tracking of 3D Face Pose for Human-Robot Interaction. In IEEE International Conference on Robotics and Automation (pp. 1716–1721).
|
Arnau Ramisa, Adriana Tapus, Ramon Lopez de Mantaras, & Ricardo Toledo. (2008). Mobile Robot Localization using Panoramic Vision and Combination of Feature Region Detectors. In IEEE International Conference on Robotics and Automation (pp. 538–543).
|
Bogdan Raducanu, & Fadi Dornaika. (2010). Dynamic Facial Expression Recognition Using Laplacian Eigenmaps-Based Manifold Learning. In IEEE International Conference on Robotics and Automation (pp. 156–161).
Abstract: In this paper, we propose an integrated framework for tracking, modelling and recognition of facial expressions. The main contributions are: (i) a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker; (ii) the complexity of the non-linear facial expression space is modelled through a manifold, whose structure is learned using Laplacian Eigenmaps; the projected facial expressions are afterwards recognized with a Nearest Neighbor classifier; (iii) with the proposed approach, we developed an application for an AIBO robot, in which it mirrors the perceived facial expression.
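The manifold-learning step can be sketched with a basic (unnormalized) Laplacian Eigenmaps embedding; this minimal version uses a k-NN graph and toy random data in place of the tracker's facial action parameters, and the graph construction details may differ from the paper's:

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=2, k=5):
    """Basic (unnormalized) Laplacian Eigenmaps embedding.

    Builds a symmetric k-nearest-neighbour adjacency graph over the
    rows of X, forms the graph Laplacian L = D - W, and returns the
    eigenvectors of the smallest nonzero eigenvalues. A Nearest
    Neighbor classifier can then label expressions in this space.
    """
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(dist[i])[1:k + 1]] = 1.0   # skip self at index 0
    W = np.maximum(W, W.T)                          # symmetric graph
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)                     # ascending eigenvalues
    return vecs[:, 1:n_components + 1]              # drop the constant vector

# X: one row of facial-action parameters per frame (toy random data here)
X = np.random.default_rng(0).normal(size=(20, 4))
E = laplacian_eigenmaps(X, n_components=2, k=3)     # 20 frames -> 2-D manifold
```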
|
German Ros, J. Guerrero, Angel Sappa, & Antonio Lopez. (2013). VSLAM pose initialization via Lie groups and Lie algebras optimization. In Proceedings of IEEE International Conference on Robotics and Automation (pp. 5740–5747).
Abstract: We present a novel technique for estimating initial 3D poses in the context of localization and Visual SLAM problems. The presented approach can deal with noise, outliers and a large amount of input data and still performs in real time on a standard CPU. Our method produces solutions with an accuracy comparable to those produced by RANSAC but can be much faster when the percentage of outliers is high or the amount of input data is large. In the current work we propose to formulate pose estimation as an optimization problem on Lie groups, considering their manifold structure as well as their associated Lie algebras. This allows us to perform a fast and simple optimization while preserving all the constraints imposed by the Lie group SE(3). Additionally, we present several key design concepts related to the cost function and its Jacobian; aspects that are critical for the good performance of the algorithm.
Keywords: SLAM
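The core machinery of optimizing over SE(3) through its Lie algebra rests on the se(3) hat operator and exponential map, which can be sketched numerically as follows (a minimal generic version; the paper's actual cost function and Jacobian are not reproduced):

```python
import numpy as np

def hat(xi):
    """se(3) hat operator: 6-vector (v, w) -> 4x4 twist matrix."""
    v, w = xi[:3], xi[3:]
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    T = np.zeros((4, 4))
    T[:3, :3] = W
    T[:3, 3] = v
    return T

def exp_se3(xi, n_terms=30):
    """Exponential map se(3) -> SE(3) via a truncated power series.

    Maps an update vector living in the flat Lie algebra back onto
    the group, so every iterate of an optimizer that steps in se(3)
    remains a valid rigid-body transform.
    """
    A = hat(np.asarray(xi, dtype=float))
    T, term = np.eye(4), np.eye(4)
    for k in range(1, n_terms):
        term = term @ A / k
        T = T + term
    return T

# A Gauss-Newton-style iteration would repeatedly solve for a small
# delta in se(3) and update: pose = exp_se3(delta) @ pose
```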
|
Jiaolong Xu, David Vazquez, Krystian Mikolajczyk, & Antonio Lopez. (2016). Hierarchical online domain adaptation of deformable part-based models. In IEEE International Conference on Robotics and Automation (pp. 5536–5541).
Abstract: We propose an online domain adaptation method for the deformable part-based model (DPM). The online domain adaptation is based on a two-level hierarchical adaptation tree, which consists of instance detectors in the leaf nodes and a category detector at the root node. Moreover, combined with a multiple object tracking procedure (MOT), our proposal neither requires target-domain annotated data nor revisiting the source-domain data for performing the source-to-target domain adaptation of the DPM. From a practical point of view this means that, given a source-domain DPM and a new video for training on a new domain without object annotations, our procedure outputs a new DPM adapted to the domain represented by the video. As proof-of-concept we apply our proposal to the challenging task of pedestrian detection. In this case, each instance detector is an exemplar classifier trained online with only one pedestrian per frame. The pedestrian instances are collected by MOT and the hierarchical model is constructed dynamically according to the pedestrian trajectories. Our experimental results show that the adapted detector achieves the accuracy of recent supervised domain adaptation methods (i.e., requiring manually annotated target-domain data), and improves on the source detector by more than 10 percentage points.
Keywords: Domain Adaptation; Pedestrian Detection
|
Felipe Codevilla, Matthias Muller, Antonio Lopez, Vladlen Koltun, & Alexey Dosovitskiy. (2018). End-to-end Driving via Conditional Imitation Learning. In IEEE International Conference on Robotics and Automation (pp. 4693–4700).
Abstract: Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. A supplementary video is available online.
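The "conditional" part of the architecture, a command input that selects among branched output heads, can be hedged into a few lines. Random weights stand in for the trained vision backbone, and all names and dimensions here are illustrative rather than the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

class BranchedPolicy:
    """Sketch of command-conditional imitation: a shared feature
    extractor feeds one output head per navigational command, and
    the command selects which head produces the control output.
    """
    COMMANDS = ("follow", "left", "right", "straight")

    def __init__(self, obs_dim=64, feat_dim=16, act_dim=2):
        self.backbone = rng.normal(size=(obs_dim, feat_dim)) / np.sqrt(obs_dim)
        self.heads = {c: rng.normal(size=(feat_dim, act_dim)) * 0.1
                      for c in self.COMMANDS}

    def act(self, obs, command):
        feat = np.tanh(obs @ self.backbone)   # shared sensorimotor features
        return feat @ self.heads[command]     # branch chosen by the command

policy = BranchedPolicy()
obs = rng.normal(size=64)                     # stand-in for a camera frame
left = policy.act(obs, "left")                # e.g., (steering, throttle)
right = policy.act(obs, "right")
# same observation, different commands -> different controls
```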
|