Author Andres Traumann; Gholamreza Anbarjafari; Sergio Escalera
  Title Accurate 3D Measurement Using Optical Depth Information Type Journal Article
  Year 2015 Publication Electronics Letters Abbreviated Journal EL
  Volume 51 Issue 18 Pages 1420-1422  
  Abstract A novel three-dimensional measurement technique is proposed. The methodology consists of mapping the screen coordinates reported by the optical camera to the real world and integrating distance gradients from the start point to the end point, while also minimising the error by fitting the pixel locations to a smooth curve. The results demonstrate an accuracy of less than half a centimetre using the Microsoft Kinect II.
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ TAE2015 Serial 2647  
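The record above measures real-world lengths from Kinect depth data by mapping pixels to 3D points and integrating distance increments along a fitted pixel path. A minimal Python sketch of that idea follows, assuming a pinhole depth camera; the intrinsic values and the pixel-path input are illustrative assumptions, not parameters from the paper.

import numpy as np

# Assumed pinhole intrinsics for a Kinect-v2-like depth camera (illustrative only).
FX, FY, CX, CY = 365.0, 365.0, 256.0, 212.0

def backproject(u, v, depth_m):
    """Map depth pixel (u, v) with depth in metres to a 3D point in camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def path_length(pixels, depth_map):
    """Sum Euclidean distance increments along an ordered pixel path.

    `pixels` is a list of (u, v) coordinates sampled along the curve fitted
    between the start and end points; `depth_map[v, u]` holds depth in metres.
    """
    points = [backproject(u, v, depth_map[v, u]) for u, v in pixels]
    return sum(np.linalg.norm(b - a) for a, b in zip(points, points[1:]))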
 

 
Author Kamal Nasrollahi; Sergio Escalera; P. Rasti; Gholamreza Anbarjafari; Xavier Baro; Hugo Jair Escalante; Thomas B. Moeslund
  Title Deep Learning based Super-Resolution for Improved Action Recognition Type Conference Article
  Year 2015 Publication 5th International Conference on Image Processing Theory, Tools and Applications (IPTA 2015)
  Pages 67-72
  Abstract Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, people appear very small because of their distance from the camera, which challenges action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot retrieve all the detailed information that can help the recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with the results of a state-of-the-art deep learning-based super-resolution algorithm through an alpha-blending approach. The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the importance of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system for handling low-resolution videos.
  Address Orléans; France; November 2015
  Conference IPTA
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ NER2015 Serial 2648  
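The record above combines a bicubic upsampling of each low-resolution frame with the output of a deep super-resolution model via alpha blending. A minimal sketch of such a blend is given below, assuming the super-resolved frame has already been produced by a separate model; the blending weight is a hypothetical value, not the one used in the paper.

import cv2
import numpy as np

def blend_superresolution(lr_frame, sr_frame, scale=4, alpha=0.5):
    """Alpha-blend a bicubic upsampling of a low-resolution frame with the
    output of a deep super-resolution model (`sr_frame`, already at target size).

    `alpha` is an assumed mixing weight; the paper selects its own value.
    """
    h, w = lr_frame.shape[:2]
    bicubic = cv2.resize(lr_frame, (w * scale, h * scale),
                         interpolation=cv2.INTER_CUBIC)
    return cv2.addWeighted(bicubic.astype(np.float32), alpha,
                           sr_frame.astype(np.float32), 1.0 - alpha, 0.0)

The blended frame would then be fed to the downstream action recognition system in place of the plain bicubic upsampling.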
 

 
Author Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera
  Title The AutoML challenge on codalab Type Conference Article
  Year 2015 Publication IEEE International Joint Conference on Neural Networks (IJCNN 2015)
  Address Killarney; Ireland; July 2015
  Conference IJCNN
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ GBC2015b Serial 2650  
 

 
Author Gerard Canal; Cecilio Angulo; Sergio Escalera
  Title Gesture based Human Multi-Robot interaction Type Conference Article
  Year 2015 Publication IEEE International Joint Conference on Neural Networks (IJCNN 2015)
  Abstract The emergence of robot applications for non-technical users calls for new ways of interaction between robotic platforms and users. The main goal of this work is the development of a gestural interface for interacting with robots in a similar way to how humans do, allowing the user to provide task information through non-verbal communication. The gesture recognition application has been implemented using the Microsoft Kinect v2 sensor. A real-time algorithm based on skeletal features is described that handles both static and dynamic gestures, the latter being recognised using a weighted Dynamic Time Warping method. The application has been deployed in a multi-robot setting: a NAO humanoid robot is in charge of interacting with the users and responding to the visual signals they produce, while a wheeled Wifibot robot carries both the sensor and the NAO robot, easing navigation when necessary. A broad set of user tests has been carried out, demonstrating that the system is a natural approach to human-robot interaction, with fast response, ease of use, and high gesture recognition rates.
  Address Killarney; Ireland; July 2015
  Conference IJCNN
  Notes HuPBA;MILAB Approved no  
  Call Number CAE2015a Serial 2651  
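The record above recognises dynamic gestures with a weighted Dynamic Time Warping method over skeletal features. Below is a generic weighted-DTW sketch over joint positions; the joint-weighting scheme is an assumption for illustration and does not reproduce the paper's exact formulation.

import numpy as np

def weighted_dtw(query, template, joint_weights):
    """Weighted Dynamic Time Warping cost between two skeletal gesture sequences.

    Each sequence has shape (frames, joints, 3); `joint_weights` (shape (joints,))
    emphasises the joints that matter for the gesture.  This is only the generic
    weighted-DTW recursion, not the paper's specific weighting scheme.
    """
    n, m = len(query), len(template)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Weighted distance between the two skeleton poses.
            d = np.sum(joint_weights *
                       np.linalg.norm(query[i - 1] - template[j - 1], axis=-1))
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

A dynamic gesture would then be assigned the class of the template with the lowest cost, optionally rejected if that cost exceeds a threshold.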
 

 
Author Xavier Baro; Jordi Gonzalez; Junior Fabian; Miguel Angel Bautista; Marc Oliu; Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera
  Title ChaLearn Looking at People 2015 challenges: action spotting and cultural event recognition Type Conference Article
  Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  Pages 1-9
  Abstract Following the previous series of Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. We ran a second round on human activity recognition on RGB data sequences. For cultural event recognition, tens of categories have to be recognized, which involves scene understanding and human analysis. This paper summarizes the two challenges and the obtained results. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/.
  Address Boston; USA; June 2015
  Conference CVPRW
  Notes HuPBA;MV Approved no  
  Call Number Serial 2652  
 

 
Author Andres Traumann; Sergio Escalera; Gholamreza Anbarjafari
  Title A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera Type Conference Article
  Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  Pages 75-79
  Address Boston; USA; June 2015
  Conference CVPRW
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ TEA2015 Serial 2653  
 

 
Author Ramin Irani; Kamal Nasrollahi; Chris Bahnsen; D.H. Lundtoft; Thomas B. Moeslund; Marc O. Simon; Ciprian Corneanu; Sergio Escalera; Tanja L. Pedersen; Maria-Louise Klitgaard; Laura Petrini
  Title Spatio-temporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition Type Conference Article
  Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  Pages 88-95
  Abstract Pain is a vital sign of human health, and its automatic detection can be of crucial importance in many contexts, including medical scenarios. While most available computer vision techniques are based on RGB only, in this paper we investigate the effect of combining RGB, depth, and thermal facial images for pain detection and pain intensity level recognition. For this purpose, we extract the energies released by facial pixels using a spatio-temporal filter. Experiments on a group of 12 elderly people show that the proposed multimodal approach successfully detects pain and distinguishes between three intensity levels in 82% of the analyzed frames, improving by more than 6% over RGB-only analysis in similar conditions.
  Address Boston; USA; June 2015
  Conference CVPRW
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ INB2015 Serial 2654  
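The record above extracts per-pixel energies with a spatio-temporal filter over RGB, depth and thermal face images. The sketch below uses a simple temporal-difference energy as a stand-in for that filter, and a hypothetical weighted combination of the three modalities; neither the filter nor the weights come from the paper.

import numpy as np

def pixel_energy(frames):
    """Per-pixel temporal energy over a short window of aligned face crops.

    `frames` has shape (T, H, W); the energy is the summed squared temporal
    difference at each pixel, a crude stand-in for a spatio-temporal filter.
    """
    diffs = np.diff(frames.astype(np.float32), axis=0)
    return np.sum(diffs ** 2, axis=0)

def multimodal_energy(gray, depth, thermal, weights=(1.0, 1.0, 1.0)):
    """Combine the mean facial energy of each modality with assumed weights."""
    energies = (pixel_energy(gray), pixel_energy(depth), pixel_energy(thermal))
    return sum(w * e.mean() for w, e in zip(weights, energies))

A pain / no-pain decision and an intensity level could then be obtained by thresholding or classifying this combined energy over time.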
 

 
Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera; Albert Clapes; Kamal Nasrollahi; Michael Holte; Thomas B. Moeslund
  Title Keep it Accurate and Diverse: Enhancing Action Recognition Performance by Ensemble Learning Type Conference Article
  Year 2015 Publication IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  Pages 22-29
  Abstract The performance of different action recognition techniques has recently been studied by several computer vision researchers. However, the potential improvement in classification through classifier fusion by ensemble-based methods has received little attention. In this work, we evaluate the performance of an ensemble of action learning techniques, each performing the recognition task from a different perspective. The underlying idea is that, instead of aiming at a very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple and diverse classifiers, each trained with a different feature set. In addition, combining the outputs of several learners can reduce the risk of an unfortunate selection of a learner on an unseen action recognition scenario, leading to a more robust and generally applicable framework. In order to improve the recognition performance, a powerful combination strategy is utilized based on the Dempster-Shafer theory, which can effectively make use of the diversity of the base learners trained on different sources of information. The recognition results of the individual classifiers are compared with those obtained by fusing the classifiers' outputs, showing enhanced performance of the proposed methodology.
  Address Boston; USA; June 2015
  Conference CVPRW
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ BGE2015 Serial 2655  
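The record above fuses the outputs of diverse action classifiers using the Dempster-Shafer theory of evidence. The sketch below applies Dempster's rule of combination in the simplified case where each classifier assigns mass only to singleton classes; the paper's actual mass assignments are not reproduced here.

import numpy as np

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions restricted to
    singleton hypotheses (one mass value per action class).

    With singletons only, any pair of different classes is conflicting, so the
    combined mass is the normalised element-wise product of the two vectors.
    """
    joint = np.asarray(m1, dtype=float) * np.asarray(m2, dtype=float)
    total = joint.sum()
    if np.isclose(total, 0.0):
        raise ValueError("the two classifiers are in total conflict")
    return joint / total

# Example: fuse two classifiers' (hypothetical) class scores used as masses.
fused = dempster_combine([0.6, 0.3, 0.1], [0.5, 0.2, 0.3])
predicted_class = int(fused.argmax())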
 

 
Author Isabelle Guyon; Kristin Bennett; Gavin Cawley; Hugo Jair Escalante; Sergio Escalera; Tin Kam Ho; Nuria Macia; Bisakha Ray; Mehreen Saeed; Alexander Statnikov; Evelyne Viegas
  Title AutoML Challenge 2015: Design and First Results Type Conference Article
  Year 2015 Publication 32nd International Conference on Machine Learning (ICML 2015) workshop, JMLR proceedings
  Pages 1-8
  Keywords AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning
  Abstract ChaLearn is organizing the Automatic Machine Learning (AutoML) contest 2015, which challenges participants to solve classification and regression problems without any human intervention. Participants' code is automatically run on the contest servers to train and test learning machines. However, there is no obligation to submit code; half of the prizes can be won by submitting prediction results only. Datasets of progressively increasing difficulty are introduced throughout the six rounds of the challenge. (Participants can enter the competition in any round.) The rounds alternate between phases in which learners are tested on datasets the participants have not seen (AutoML) and phases in which participants have limited time to tweak their algorithms on those datasets to improve performance (Tweakathon). This challenge will push the state of the art in fully automatic machine learning on a wide range of real-world problems. The platform will remain available beyond the termination of the challenge: http://codalab.org/AutoML.
  Address Lille; France; July 2015
  Conference ICML
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ GBC2015c Serial 2656  
 

 
Author Victor Ponce; Hugo Jair Escalante; Sergio Escalera; Xavier Baro
  Title Gesture and Action Recognition by Evolved Dynamic Subgestures Type Conference Article
  Year 2015 Publication 26th British Machine Vision Conference
  Pages 129.1-129.13
  Abstract This paper introduces a framework for gesture and action recognition based on the evolution of temporal gesture primitives, or subgestures. Our work is inspired by the principle of producing genetic variations within a population of gesture subsequences, with the goal of obtaining a set of gesture units that enhances the generalization capability of standard gesture recognition approaches. In our context, gesture primitives are evolved over time using dynamic programming and generative models in order to recognize complex actions. In a few generations, the proposed subgesture-based representation of actions and gestures outperforms state-of-the-art results on the MSRDaily3D and MSRAction3D datasets.
  Address Swansea; UK; September 2015
  Conference BMVC
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ PEE2015 Serial 2657  
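The record above evolves temporal gesture primitives ("subgestures") by producing genetic variations over a population of gesture subsequences. The sketch below shows a toy genetic loop over candidate subgesture sets; the individual encoding, the placeholder fitness function and all hyperparameters are assumptions, and the paper's dynamic-programming / generative-model scoring is not reproduced.

import random

def evolve_subgestures(sequences, pop_size=20, generations=10,
                       subgesture_len=8, fitness=None):
    """Toy genetic search over sets of fixed-length gesture sub-sequences.

    Each individual is a list of (sequence_index, start_frame) pairs denoting
    subgestures cut from the training sequences.  `fitness` should score an
    individual (e.g. recognition accuracy of a classifier built on those
    primitives); the stub below merely counts distinct source sequences and is
    only a placeholder.
    """
    if fitness is None:
        fitness = lambda ind: len({seq for seq, _ in ind})

    def random_unit():
        seq = random.randrange(len(sequences))
        start = random.randrange(max(1, len(sequences[seq]) - subgesture_len))
        return (seq, start)

    def random_individual(n_units=5):
        return [random_unit() for _ in range(n_units)]

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < 0.3:              # random mutation
                child[random.randrange(len(child))] = random_unit()
            children.append(child)
        population = parents + children
    return max(population, key=fitness)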
 

 
Author Huamin Ren; Weifeng Liu; Soren Ingvor Olsen; Sergio Escalera; Thomas B. Moeslund
  Title Unsupervised Behavior-Specific Dictionary Learning for Abnormal Event Detection Type Conference Article
  Year 2015 Publication 26th British Machine Vision Conference
  Address Swansea; UK; September 2015
  Conference BMVC
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ RLO2015 Serial 2658  
 

 
Author Eduardo Tusa; Arash Akbarinia; Raquel Gil Rodriguez; Corina Barbalata
  Title Real-Time Face Detection and Tracking Utilising OpenMP and ROS Type Conference Article
  Year 2015 Publication 3rd Asia-Pacific Conference on Computer Aided System Engineering
  Pages 179-184
  Keywords RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP
  Abstract The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. It is then estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality degrades in poorly illuminated social scenes. In such scenarios depth sensors offer a richer representation, so it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is real-time capability. Processing colour and depth images is computationally demanding. To overcome this we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy at low computational time, processing nine times more frames in the parallel implementation, which enables real-time social robot interaction.
  Address Quito; Ecuador; July 2015
  Conference APCASE
  Notes NEUROBIT Approved no  
  Call Number Admin @ si @ TAG2015 Serial 2659  
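The record above parallelises face detection and tracking with OpenMP (shared memory) and ROS (message passing) in C++. The sketch below only illustrates the general idea of spreading per-frame detection across workers, using a Python process pool and an OpenCV Haar cascade as stand-ins; it is not the authors' pipeline, and the frame paths are hypothetical.

from concurrent.futures import ProcessPoolExecutor

import cv2

# OpenCV ships a frontal-face Haar cascade; cv2.data gives its install path.
CASCADE_PATH = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"

def detect_faces(frame_path):
    """Detect faces in a single frame; returns a list of (x, y, w, h) boxes."""
    detector = cv2.CascadeClassifier(CASCADE_PATH)
    gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(int(v) for v in box) for box in boxes]

if __name__ == "__main__":
    # Hypothetical frame paths; in the paper the frames come from a Kinect/ROS stream.
    frame_paths = [f"frames/{i:05d}.png" for i in range(100)]
    with ProcessPoolExecutor() as pool:
        all_boxes = list(pool.map(detect_faces, frame_paths))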
 

 
Author Arash Akbarinia; C. Alejandro Parraga
  Title Biologically Plausible Colour Naming Model Type Conference Article
  Year 2015 Publication European Conference on Visual Perception (ECVP 2015)
  Abstract Poster
  Address Liverpool; UK; August 2015
  Conference ECVP
  Notes NEUROBIT; 600.068 Approved no  
  Call Number Admin @ si @ AkP2015 Serial 2660  
 

 
Author Dennis G. Romero; Anselmo Frizera; Angel Sappa; Boris X. Vintimilla; Teodiano F. Bastos
  Title A predictive model for human activity recognition by observing actions and context Type Conference Article
  Year 2015 Publication Advanced Concepts for Intelligent Vision Systems, Proceedings of the 16th International Conference (ACIVS 2015)
  Volume 9386 Pages 323-333
  Abstract This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNNs) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources: external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions plus contextual information or other kinds of information that could be relevant to describe the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
  Address Catania; Italy; October 2015
  Publisher Springer International Publishing
  Abbreviated Series Title LNCS
  ISSN 0302-9743 ISBN 978-3-319-25902-4
  Conference ACIVS
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ RFS2015 Serial 2661  
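The record above infers activities from observed actions plus context information via Bayesian inference (the action observations themselves come from an RNN, which is not shown here). The sketch below illustrates only the fusion step, with a made-up action vocabulary, context prior and likelihood table; none of the probabilities come from the paper.

import numpy as np

# Hypothetical model: two activities over a small action vocabulary plus a
# context cue; all probabilities are invented for illustration only.
ACTIVITIES = ["cooking", "cleaning"]
P_ACTION_GIVEN_ACTIVITY = {            # P(observed action | activity)
    "cooking":  {"stir": 0.6, "walk": 0.2, "wipe": 0.2},
    "cleaning": {"stir": 0.1, "walk": 0.3, "wipe": 0.6},
}
P_ACTIVITY_GIVEN_CONTEXT = {           # context prior, e.g. the detected room
    "kitchen": {"cooking": 0.7, "cleaning": 0.3},
}

def infer_activity(actions, context):
    """Posterior over activities after a sequence of observed actions,
    assuming conditionally independent observations given the activity."""
    posterior = np.array([P_ACTIVITY_GIVEN_CONTEXT[context][a] for a in ACTIVITIES])
    for act in actions:
        posterior = posterior * np.array(
            [P_ACTION_GIVEN_ACTIVITY[a][act] for a in ACTIVITIES])
    return dict(zip(ACTIVITIES, posterior / posterior.sum()))

print(infer_activity(["walk", "stir", "stir"], "kitchen"))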
 

 
Author Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias
  Title Scene Representations for Autonomous Driving: an approach based on polygonal primitives Type Conference Article
  Year 2015 Publication 2nd Iberian Robotics Conference (ROBOT 2015)
  Volume 417 Pages 503-515
  Keywords Scene reconstruction; Point cloud; Autonomous vehicles
  Abstract In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro-scale polygonal primitives to model the scene: the representation is given as a list of large-scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient compared to other techniques.
  Address Lisboa; Portugal; November 2015
  Conference ROBOT
  Notes ADAS; 600.076; 600.086 Approved no  
  Call Number Admin @ si @ OSS2015a Serial 2662  
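The record above models a scene as a list of macro-scale polygonal primitives. The sketch below extracts one such primitive from a roughly planar 3D point cluster by fitting a plane with an SVD and taking the convex hull of the projected points; it is a generic stand-in, not the paper's algorithm.

import numpy as np
from scipy.spatial import ConvexHull

def polygonal_primitive(points):
    """Fit one polygonal primitive to a roughly planar 3D point cluster.

    Fits a least-squares plane via SVD, projects the points onto that plane,
    and returns the convex-hull vertices lifted back to 3D as the polygon
    outline.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # Principal directions of the cluster; the first two right-singular
    # vectors span the best-fit plane, the last one is its normal.
    _, _, vt = np.linalg.svd(points - centroid)
    basis = vt[:2]                               # (2, 3) in-plane axes
    coords2d = (points - centroid) @ basis.T     # project onto the plane
    hull = ConvexHull(coords2d)
    # Lift the hull vertices back to 3D to obtain the polygon corners.
    return centroid + coords2d[hull.vertices] @ basis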