Author Eduardo Tusa; Arash Akbarinia; Raquel Gil Rodriguez; Corina Barbalata
Title Real-Time Face Detection and Tracking Utilising OpenMP and ROS Type Conference Article
Year 2015 Publication 3rd Asia-Pacific Conference on Computer Aided System Engineering Abbreviated Journal
Volume Issue Pages 179 - 184
Keywords RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP
Abstract The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. It is then estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality degrades in poorly illuminated social scenes. In such scenarios depth sensors offer a richer representation, so it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their real-time capability. Processing colour and depth images is computationally demanding. To overcome this, we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy at low computational cost, with the parallel implementation processing nine times more frames, providing real-time social robot interaction.
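As an illustration of the shared-memory side of this strategy (a sketch only, not the paper's OpenMP/ROS C++ implementation), the following Python snippet distributes per-frame face detection over a thread pool; OpenCV releases the GIL inside detectMultiScale, so the workers genuinely overlap, playing the role of an OpenMP parallel loop over the frame buffer. The video file name is a hypothetical placeholder.

```python
# Illustrative sketch only (not the paper's OpenMP/ROS implementation): frames are
# processed in parallel by a shared-memory thread pool, analogous to an OpenMP
# "parallel for" over the frame buffer. Requires opencv-python.
from concurrent.futures import ThreadPoolExecutor

import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return face bounding boxes (x, y, w, h) for a single BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def detect_in_parallel(frames, workers=4):
    """Run face detection over a batch of frames using a shared-memory pool.
    OpenCV releases the GIL inside detectMultiScale, so the workers overlap."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(detect_faces, frames))

if __name__ == "__main__":
    cap = cv2.VideoCapture("social_scene.avi")   # hypothetical input video
    frames = []
    while len(frames) < 64:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    detections = detect_in_parallel(frames)
    print(sum(len(d) for d in detections), "faces found in", len(frames), "frames")
```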
Address Quito; Ecuador; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference APCASE
Notes NEUROBIT Approved no
Call Number Admin @ si @ TAG2015 Serial (down) 2659
Permanent link to this record
 

 
Author Huamin Ren; Weifeng Liu; Soren Ingvor Olsen; Sergio Escalera; Thomas B. Moeslund
Title Unsupervised Behavior-Specific Dictionary Learning for Abnormal Event Detection Type Conference Article
Year 2015 Publication 26th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Swansea; uk; September 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ RLO2015 Serial (down) 2658
Permanent link to this record
 

 
Author Victor Ponce; Hugo Jair Escalante; Sergio Escalera; Xavier Baro
Title Gesture and Action Recognition by Evolved Dynamic Subgestures Type Conference Article
Year 2015 Publication 26th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages 129.1-129.13
Keywords
Abstract This paper introduces a framework for gesture and action recognition based on the evolution of temporal gesture primitives, or subgestures. Our work is inspired by the principle of producing genetic variations within a population of gesture subsequences, with the goal of obtaining a set of gesture units that enhance the generalization capability of standard gesture recognition approaches. In our context, gesture primitives are evolved over time using dynamic programming and generative models in order to recognize complex actions. In a few generations, the proposed subgesture-based representation of actions and gestures outperforms state-of-the-art results on the MSRDaily3D and MSRAction3D datasets.
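A minimal sketch of the evolutionary part of such a framework is given below, under strong assumptions: the candidate subgestures are assumed to be already extracted, and `evaluate_accuracy` is only a placeholder for the paper's dynamic-programming/generative-model recognizer. It shows the selection/crossover/mutation loop over subgesture subsets, not the actual method.

```python
# Minimal sketch of evolving a population of subgesture subsets. Assumptions:
# a candidate pool of subgestures already exists, and evaluate_accuracy stands in
# for the paper's dynamic-programming / generative-model recognizer.
import random

def evaluate_accuracy(subset):
    """Hypothetical fitness: recognition accuracy of a classifier that only uses
    the subgestures selected in `subset` (a set of pool indices)."""
    return random.random()  # stand-in; plug in a real recognizer here

def evolve_subgestures(pool_size, pop_size=20, generations=10, mut_rate=0.1):
    # Each individual is a boolean mask over the candidate subgesture pool.
    population = [[random.random() < 0.5 for _ in range(pool_size)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population,
                        key=lambda ind: evaluate_accuracy(
                            {i for i, on in enumerate(ind) if on}),
                        reverse=True)
        parents = scored[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, pool_size)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [not g if random.random() < mut_rate else g for g in child]
            children.append(child)
        population = parents + children
    best = max(population, key=lambda ind: evaluate_accuracy(
        {i for i, on in enumerate(ind) if on}))
    return {i for i, on in enumerate(best) if on}

print(evolve_subgestures(pool_size=30))
```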
Address Swansea; uk; September 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes HuPBA;MV Approved no
Call Number Admin @ si @ PEE2015 Serial (down) 2657
Permanent link to this record
 

 
Author Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Mehreen Saeed; Alexander Statnikov; Evelyne Viegas
Title AutoML Challenge 2015: Design and First Results Type Conference Article
Year 2015 Publication 32nd International Conference on Machine Learning, ICML workshop, JMLR proceedings ICML15 Abbreviated Journal
Volume Issue Pages 1-8
Keywords AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning
Abstract ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. (Participants can enter the competition in any round.) The rounds alternate phases in which learners are tested on datasets participants have not seen (AutoML), and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML.
Address Lille; France; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICML
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ GBC2015c Serial (down) 2656
Permanent link to this record
 

 
Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera; Albert Clapes; Kamal Nasrollahi; Michael Holte; Thomas B. Moeslund
Title Keep it Accurate and Diverse: Enhancing Action Recognition Performance by Ensemble Learning Type Conference Article
Year 2015 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal
Volume Issue Pages 22-29
Keywords
Abstract The performance of different action recognition techniques has recently been studied by several computer vision researchers. However, the potential improvement in classification through classifier fusion by ensemble-based methods has remained unattended. In this work, we evaluate the performance of an ensemble of action learning techniques, each performing the recognition task from a different perspective.
The underlying idea is that instead of aiming at a single, very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple and diverse classifiers, each trained with a different feature set. In addition, combining the outputs of several learners can reduce the risk of an unfortunate selection of a learner on an unseen action recognition scenario.
This leads to a more robust and generally applicable framework. In order to improve the recognition performance, a powerful combination strategy based on Dempster-Shafer theory is utilized, which can effectively exploit the diversity of base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those obtained from fusing the classifiers' outputs, showing the enhanced performance of the proposed methodology.
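For readers unfamiliar with the combination step, the snippet below sketches Dempster's rule of combination for two classifiers whose scores are treated as basic probability assignments over singleton action classes plus the full frame Theta (ignorance). It is a generic illustration of the rule, not the exact formulation used in the paper.

```python
# Generic sketch of Dempster's rule of combination for two classifiers whose
# outputs are basic probability assignments over singleton classes plus 'Theta'.
def dempster_combine(m1, m2):
    """m1, m2: dicts mapping class label (or 'Theta') to mass, each summing to 1.
    Returns the normalized combined mass assignment."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            if a == b:
                focal = a
            elif a == "Theta":
                focal = b               # intersection with Theta keeps the singleton
            elif b == "Theta":
                focal = a
            else:
                conflict += ma * mb     # disjoint singletons: conflicting evidence
                continue
            combined[focal] = combined.get(focal, 0.0) + ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

# Two base learners trained on different features, with some mass left on Theta
# (ignorance); the combination reinforces the class they agree on.
hog_clf = {"wave": 0.6, "clap": 0.2, "Theta": 0.2}
depth_clf = {"wave": 0.5, "clap": 0.3, "Theta": 0.2}
print(dempster_combine(hog_clf, depth_clf))
```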
Address Boston; USA; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ BGE2015 Serial (down) 2655
Permanent link to this record
 

 
Author Ramin Irani; Kamal Nasrollahi; Chris Bahnsen; D.H. Lundtoft; Thomas B. Moeslund; Marc O. Simon; Ciprian Corneanu; Sergio Escalera; Tanja L. Pedersen; Maria-Louise Klitgaard; Laura Petrini
Title Spatio-temporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition Type Conference Article
Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal
Volume Issue Pages 88-95
Keywords
Abstract Pain is a vital sign of human health and its automatic detection can be of crucial importance in many different contexts, including medical scenarios. While most available computer vision techniques are based on RGB, in this paper we investigate the effect of combining RGB, depth, and thermal facial images for pain detection and pain intensity level recognition. For this purpose, we extract the energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people show that the proposed multimodal approach successfully detects pain and distinguishes between three intensity levels in 82% of the analyzed frames, improving by more than 6% over RGB-only analysis in similar conditions.
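The paper's exact spatiotemporal filter is not reproduced here, but a generic sketch of the idea — per-frame energy from squared temporal differences of spatially smoothed face crops — could look as follows (illustrative assumptions only).

```python
# Generic illustration (not the paper's exact filter): per-pixel "energy" as the
# squared temporal difference of spatially smoothed face crops, summed per frame.
import numpy as np
import cv2

def facial_energy(face_crops, ksize=5):
    """face_crops: list of equally sized grayscale face images.
    Returns one energy value per frame transition."""
    smoothed = [cv2.GaussianBlur(np.float32(f), (ksize, ksize), 0)
                for f in face_crops]
    energies = []
    for prev, cur in zip(smoothed[:-1], smoothed[1:]):
        energies.append(float(np.sum((cur - prev) ** 2)))
    return energies

# Toy usage with random crops standing in for tracked face regions.
crops = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
print(facial_energy(crops))
```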
Address Boston; USA; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ INB2015 Serial (down) 2654
Permanent link to this record
 

 
Author Andres Traumann; Sergio Escalera; Gholamreza Anbarjafari
Title A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera Type Conference Article
Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal
Volume Issue Pages 75-79
Keywords
Abstract
Address Boston; USA; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ TEA2015 Serial (down) 2653
Permanent link to this record
 

 
Author Xavier Baro; Jordi Gonzalez; Junior Fabian; Miguel Angel Bautista; Marc Oliu; Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera
Title ChaLearn Looking at People 2015 challenges: action spotting and cultural event recognition Type Conference Article
Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal
Volume Issue Pages 1-9
Keywords
Abstract Following previous series of Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. We ran a second round on human activity recognition on RGB data sequences. For cultural event recognition, tens of categories have to be recognized, which involves both scene understanding and human analysis. This paper summarizes the two challenges and the results obtained. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.
Address Boston; USA; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA;MV Approved no
Call Number Serial (down) 2652
Permanent link to this record
 

 
Author Gerard Canal; Cecilio Angulo; Sergio Escalera
Title Gesture based Human Multi-Robot interaction Type Conference Article
Year 2015 Publication IEEE International Joint Conference on Neural Networks IJCNN2015 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The emergence of robot applications for non-technical users implies designing new ways of interaction between robotic platforms and users. The main goal of this work is the development of a gestural interface for interacting with robots in a similar way as humans do, allowing the user to provide information about the task through non-verbal communication. The gesture recognition application has been implemented using Microsoft's Kinect™ v2 sensor. A real-time algorithm based on skeletal features is described to deal with both static and dynamic gestures, the latter being recognized using a weighted Dynamic Time Warping method. The gesture recognition application has been deployed in a multi-robot setting.

A NAO humanoid robot is in charge of interacting with the users and responding to the visual signals they produce. Moreover, a wheeled Wifibot robot carries both the sensor and the NAO robot, easing navigation when necessary. A broad set of user tests has been carried out, demonstrating that the system is indeed a natural approach to human-robot interaction, with fast response and ease of use, and that it achieves high gesture recognition rates.
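A compact sketch of a weighted Dynamic Time Warping distance between two sequences of skeletal feature vectors is given below; the per-dimension weights (e.g., emphasizing hand-related features) are illustrative and do not reproduce the paper's actual weighting scheme.

```python
# Sketch of a weighted DTW distance between two sequences of skeletal features.
# The per-dimension weights are illustrative assumptions.
import numpy as np

def weighted_dtw(seq_a, seq_b, weights):
    """seq_a: (Ta, D) array, seq_b: (Tb, D) array, weights: (D,) array.
    Returns the accumulated weighted-DTW cost."""
    ta, tb = len(seq_a), len(seq_b)
    acc = np.full((ta + 1, tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            # Weighted Euclidean distance between the two frames.
            diff = seq_a[i - 1] - seq_b[j - 1]
            cost = np.sqrt(np.sum(weights * diff ** 2))
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[ta, tb]

# Toy example: 3-D features per frame, hand dimensions weighted more than the torso.
rng = np.random.default_rng(0)
gesture = rng.normal(size=(30, 3))
template = rng.normal(size=(25, 3))
print(weighted_dtw(gesture, template, weights=np.array([2.0, 2.0, 0.5])))
```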
Address Killarney; Ireland; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IJCNN
Notes HuPBA;MILAB Approved no
Call Number CAE2015a Serial (down) 2651
Permanent link to this record
 

 
Author Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera
Title The AutoML challenge on codalab Type Conference Article
Year 2015 Publication IEEE International Joint Conference on Neural Networks IJCNN2015 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Killarney; Ireland; July 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IJCNN
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ GBC2015b Serial (down) 2650
Permanent link to this record
 

 
Author Kamal Nasrollahi; Sergio Escalera; P. Rasti; Gholamreza Anbarjafari; Xavier Baro; Hugo Jair Escalante; Thomas B. Moeslund
Title Deep Learning based Super-Resolution for Improved Action Recognition Type Conference Article
Year 2015 Publication 5th International Conference on Image Processing Theory, Tools and Applications IPTA2015 Abbreviated Journal
Volume Issue Pages 67 - 72
Keywords
Abstract Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, due to the distance between people and the cameras, people appear very small and hence challenge action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot recover all the detailed information that can help recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with those of a state-of-the-art deep learning-based super-resolution algorithm through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system when handling low-resolution videos.
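The alpha-blending step described in the abstract can be sketched as follows; `deep_super_resolve` is a hypothetical placeholder for a learned super-resolution model, so only the blending of the bicubic and deep reconstructions is actually shown.

```python
# Sketch of the alpha-blending step: a bicubic upscale of a low-resolution frame is
# blended with the output of a (placeholder) deep super-resolution model.
import numpy as np
import cv2

def deep_super_resolve(lr_frame, scale):
    """Placeholder for a learned super-resolution model (assumption)."""
    return cv2.resize(lr_frame, None, fx=scale, fy=scale,
                      interpolation=cv2.INTER_LANCZOS4)

def blend_super_resolution(lr_frame, scale=4, alpha=0.5):
    bicubic = cv2.resize(lr_frame, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_CUBIC)
    deep = deep_super_resolve(lr_frame, scale)
    # Alpha-blend the two reconstructions pixel-wise.
    blended = cv2.addWeighted(np.float32(bicubic), alpha,
                              np.float32(deep), 1.0 - alpha, 0.0)
    return np.uint8(np.clip(blended, 0, 255))

lr = np.random.randint(0, 256, (60, 80, 3), dtype=np.uint8)  # toy low-res frame
print(blend_super_resolution(lr).shape)  # -> (240, 320, 3)
```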
Address Orleans; France; November 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IPTA
Notes HuPBA;MV Approved no
Call Number Admin @ si @ NER2015 Serial (down) 2648
Permanent link to this record
 

 
Author Andres Traumann; Gholamreza Anbarjafari; Sergio Escalera
Title Accurate 3D Measurement Using Optical Depth Information Type Journal Article
Year 2015 Publication Electronics Letters Abbreviated Journal EL
Volume 51 Issue 18 Pages 1420-1422
Keywords
Abstract A novel three-dimensional measurement technique is proposed. The methodology consists of mapping the screen coordinates reported by the optical camera to the real world and integrating distance gradients from the start point to the end point, while also minimising the error by fitting the pixel locations to a smooth curve. The results demonstrate an error of less than half a centimetre using the Microsoft Kinect II.
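A rough sketch of the measurement idea: pixels sampled along the measured line are back-projected to 3-D using the depth map and camera intrinsics, the 3-D curve is lightly smoothed, and segment lengths are accumulated from start to end. The intrinsic parameters and the moving-average smoothing below are illustrative assumptions, not the paper's calibration or curve-fitting procedure.

```python
# Back-project a pixel path with depth and intrinsics, smooth it, and sum segment
# lengths. Intrinsics and smoothing are illustrative assumptions.
import numpy as np

FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0   # assumed Kinect II-like intrinsics

def backproject(u, v, z):
    """Map a pixel (u, v) with depth z (metres) to camera coordinates."""
    return np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])

def measure_length(pixel_path, depth_map, smooth=3):
    """pixel_path: list of (u, v) along the object; depth_map: metres per pixel."""
    pts = np.array([backproject(u, v, depth_map[v, u]) for u, v in pixel_path])
    if smooth > 1:                      # light smoothing of the 3-D curve
        kernel = np.ones(smooth) / smooth
        pts = np.column_stack([np.convolve(pts[:, k], kernel, mode="valid")
                               for k in range(3)])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# Toy example: a flat surface 1 m away, measuring along 100 horizontal pixels.
depth = np.full((424, 512), 1.0)
path = [(200 + i, 212) for i in range(100)]
print(round(measure_length(path, depth), 4), "metres")
```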
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ TAE2015 Serial (down) 2647
Permanent link to this record
 

 
Author Onur Ferhat; Arcadi Llanza; Fernando Vilariño
Title A Feature-Based Gaze Estimation Algorithm for Natural Light Scenarios Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 569-576
Keywords Eye tracking; Gaze estimation; Natural light; Webcam
Abstract We present an eye tracking system that works with regular webcams. We base our work on the open-source CVC Eye Tracker [7] and propose a number of improvements and a novel gaze estimation method. The new method uses features extracted from iris segmentation and does not fall into the traditional categorization of appearance-based/model-based methods. Our experiments show that our approach reduces the gaze estimation error by 34% in the horizontal direction and by 12% in the vertical direction compared to the baseline system.
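One way such a feature-based mapping can be realized (an assumption for illustration, not the paper's exact method) is a ridge-regularized linear regression from iris-segmentation features to screen coordinates, fitted on calibration fixations:

```python
# Illustrative feature-based gaze mapping: iris-segmentation features regressed to
# screen coordinates with ridge regression; features and regressor are assumptions.
import numpy as np

def fit_gaze_mapping(features, targets, ridge=1e-3):
    """features: (N, D) iris features from calibration; targets: (N, 2) screen px.
    Returns the weight matrix of a ridge-regularized linear mapping."""
    X = np.hstack([features, np.ones((len(features), 1))])   # add bias column
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ targets)

def predict_gaze(features, w):
    X = np.hstack([features, np.ones((len(features), 1))])
    return X @ w

# Toy calibration: 9 fixation points, 4-D iris features per sample (synthetic).
rng = np.random.default_rng(1)
calib_feat = rng.normal(size=(9, 4))
calib_px = rng.uniform(0, 1080, size=(9, 2))
w = fit_gaze_mapping(calib_feat, calib_px)
print(predict_gaze(calib_feat[:2], w))
```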
Address Santiago de Compostela; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MV;SIAI Approved no
Call Number Admin @ si @ FLV2015a Serial (down) 2646
Permanent link to this record
 

 
Author Sergio Silva; Victor Campmany; Laura Sellart; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez
Title Autonomous GPU-based Driving Type Abstract
Year 2015 Publication Programming and Tuning Massively Parallel Systems Abbreviated Journal PUMPS
Volume Issue Pages
Keywords Autonomous Driving; ADAS; CUDA
Abstract Human factors cause most driving accidents; this is why it is now common to hear about autonomous driving as an alternative. Autonomous driving will not only increase safety, but will also enable a system of cooperative self-driving cars that reduces pollution and congestion. Furthermore, it will provide more freedom to handicapped people, the elderly, and kids.

Autonomous driving requires perceiving and understanding the vehicle environment (e.g., road, traffic signs, pedestrians, vehicles) using sensors (e.g., cameras, lidars, sonars, and radars), self-localization (requiring GPS, inertial sensors, and visual localization on precise maps), controlling the vehicle, and planning routes. These algorithms require high computation capability, and thanks to NVIDIA GPU acceleration this is starting to become feasible.

NVIDIA® is developing a new platform for boosting autonomous driving capabilities that is able to manage the vehicle via the CAN bus: the Drive™ PX. It has 8 ARM cores with dual accelerated Tegra® X1 chips, 12 synchronized camera inputs for 360° vehicle perception, 4G and Wi-Fi capabilities allowing vehicle communications, and GPS and inertial sensor inputs for self-localization.

Our research group has been selected to test the Drive™ PX. Accordingly, we are developing a Drive™ PX based autonomous car. Currently, we are porting our previous CPU-based algorithms (e.g., Lane Departure Warning, Collision Warning, Automatic Cruise Control, Pedestrian Protection, and Semantic Segmentation) to run on the GPU.
Address Barcelona; Spain
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference PUMPS
Notes ADAS; 600.076; 600.082; 600.085 Approved no
Call Number ADAS @ adas @ SCS2015 Serial (down) 2645
Permanent link to this record
 

 
Author Victor Campmany; Sergio Silva; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez
Title GPU-based pedestrian detection for autonomous driving Type Abstract
Year 2015 Publication Programming and Tuning Massively Parallel Systems Abbreviated Journal PUMPS
Volume Issue Pages
Keywords Autonomous Driving; ADAS; CUDA; Pedestrian Detection
Abstract Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides being one of the hardest tasks within computer vision, it involves huge computational costs. The real-time constraints in the field are tight, and regular processors are not able to handle the workload while achieving an acceptable rate of frames per second (fps). Moreover, multiple cameras are required to obtain accurate results, so the need to speed up the process is even higher. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system. Further, we introduce significant algorithmic adjustments and optimizations to adapt the problem to the GPU architecture. The aim is to provide a system capable of running in real time while obtaining reliable results.
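For reference, the sketch below shows the kind of per-window CPU workload that such a CUDA port distributes across GPU threads, using OpenCV's stock HOG + linear SVM people detector. It is a generic baseline, not the detector of [1] nor the paper's CUDA implementation; the input video name is a placeholder.

```python
# Reference CPU sketch of sliding-window pedestrian detection with OpenCV's default
# HOG + linear SVM people detector (generic baseline, not the paper's pipeline).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_pedestrians(frame):
    """Return pedestrian bounding boxes (x, y, w, h) for one BGR frame."""
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return [tuple(int(v) for v in b) for b in boxes]

cap = cv2.VideoCapture("urban_sequence.avi")   # hypothetical camera stream
ok, frame = cap.read()
if ok:
    for (x, y, w, h) in detect_pedestrians(frame):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.png", frame)
```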
Address Barcelona; Spain
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title PUMPS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference PUMPS
Notes ADAS; 600.076; 600.082; 600.085 Approved no
Call Number ADAS @ adas @ CSM2015 Serial (down) 2644
Permanent link to this record