Author Gerard Canal; Sergio Escalera; Cecilio Angulo
  Title A Real-time Human-Robot Interaction system based on gestures for assistive scenarios Type Journal Article
  Year 2016 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 149 Issue Pages 65-77  
  Keywords Gesture recognition; Human Robot Interaction; Dynamic Time Warping; Pointing location estimation  
  Abstract Natural and intuitive human interaction with robotic systems is key to developing robots that assist people in an easy and effective way. In this paper, a Human Robot Interaction (HRI) system able to recognize gestures usually employed in human non-verbal communication is introduced, and an in-depth study of its usability is performed. The system deals with dynamic gestures, such as waving or nodding, which are recognized using a Dynamic Time Warping approach based on gesture-specific features computed from depth maps. A static gesture, consisting of pointing at an object, is also recognized. The pointed location is then estimated in order to detect candidate objects the user may be referring to. When the pointed object is unclear to the robot, a disambiguation procedure by means of either a verbal or gestural dialogue is performed. This skill would allow the robot to pick up an object on behalf of a user who might have difficulty doing so alone. The overall system, composed of NAO and Wifibot robots, a Kinect v2 sensor and two laptops, is first evaluated in a structured lab setup. Then, a broad set of user tests was completed, allowing correct performance to be assessed in terms of recognition rates, ease of use and response times. (A minimal DTW sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Elsevier B.V. Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; MILAB Approved no  
  Call Number Admin @ si @ CEA2016 Serial 2768  
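
Note: the Dynamic Time Warping matching at the core of the gesture recognizer above can be illustrated with a minimal Python sketch. It assumes per-frame feature vectors have already been extracted from the depth maps; the gesture-specific features, templates and acceptance threshold used in the paper are not reproduced here, and the recognize helper is a hypothetical illustration.

    import numpy as np

    def dtw_distance(query, template):
        """Classic DTW cost between two gesture sequences, each an
        (n_frames, n_features) array of per-frame depth-map descriptors."""
        n, m = len(query), len(template)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(query[i - 1] - template[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # skip a query frame
                                     cost[i, j - 1],      # skip a template frame
                                     cost[i - 1, j - 1])  # match frames
        return cost[n, m]

    def recognize(query, templates, threshold):
        """Label of the closest gesture template, or None if no DTW cost
        falls below the (hypothetical) acceptance threshold."""
        best_label, best_cost = None, threshold
        for label, tpl in templates.items():
            c = dtw_distance(query, tpl)
            if c < best_cost:
                best_label, best_cost = label, c
        return best_label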
 
Author Maria Salamo; Sergio Escalera
  Title Increasing Retrieval Quality in Conversational Recommenders Type Journal Article
  Year 2011 Publication IEEE Transactions on Knowledge and Data Engineering Abbreviated Journal TKDE  
  Volume 99 Issue Pages 1-1  
  Keywords  
  Abstract IF JCR CCIA 2.286 (2009), 24/103; JCR Impact Factor 2010: 1.851.
A major task of research in conversational recommender systems is personalization. Critiquing is a common and powerful form of feedback, where a user can express her feature preferences by applying a series of directional critiques over the recommendations instead of providing specific preference values. Incremental Critiquing is a conversational recommender system that uses critiquing as feedback to efficiently personalize products. The expectation is that in each cycle the system retrieves the products that best satisfy the user's soft product preferences from a minimal information input. In this paper, we present a novel technique that increases retrieval quality based on a combination of compatibility and similarity scores. Under the hypothesis that a user learns during the recommendation process, we propose two novel exponential reinforcement learning approaches for compatibility that take into account both the instant at which the user makes a critique and the number of satisfied critiques. Moreover, we consider that the impact of features on the similarity differs according to the preferences manifested by the user. We propose a global weighting approach that uses a common weight for nearest cases in order to focus on groups of relevant products. We show that our methodology significantly improves recommendation efficiency, in terms of session length, on four data sets of different sizes in comparison with state-of-the-art approaches. Moreover, our recommender shows higher robustness against noisy user data when compared to classical approaches. (A sketch of a recency-weighted compatibility score follows this record.)
 
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1041-4347 ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; HuPBA Approved no  
  Call Number Admin @ si @ SaE2011 Serial 1713  
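
Note: the paper's exact exponential reinforcement formulas for compatibility are not given in the abstract, so the Python sketch below is only a hypothetical illustration of the general idea: critiques are weighted by recency, so feedback given later in the dialogue (after the user has learned) counts more. The critique representation and decay factor alpha are assumptions.

    def compatibility(product, critiques, alpha=0.5):
        """Recency-weighted fraction of the user's critiques that a
        candidate product satisfies.

        critiques: chronological list of (feature, predicate) pairs.
        More recent critiques get exponentially larger weights."""
        if not critiques:
            return 0.0
        n = len(critiques)
        weights = [alpha ** (n - 1 - t) for t in range(n)]  # newest weight = 1
        satisfied = sum(w for w, (feature, pred) in zip(weights, critiques)
                        if pred(product[feature]))
        return satisfied / sum(weights)

    # Example: a product as a dict, two critiques made in cycles 1 and 2.
    camera = {"price": 420, "zoom": 10}
    history = [("zoom", lambda v: v >= 8), ("price", lambda v: v < 500)]
    print(compatibility(camera, history))  # 1.0: both critiques satisfied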
 
Author Ajian Liu; Chenxu Zhao; Zitong Yu; Jun Wan; Anyang Su; Xing Liu; Zichang Tan; Sergio Escalera; Junliang Xing; Yanyan Liang; Guodong Guo; Zhen Lei; Stan Z. Li; Shenshen Du
  Title Contrastive Context-Aware Learning for 3D High-Fidelity Mask Face Presentation Attack Detection Type Journal Article
  Year 2022 Publication IEEE Transactions on Information Forensics and Security Abbreviated Journal TIForensicSEC  
  Volume 17 Issue Pages 2497 - 2507  
  Keywords  
  Abstract Face presentation attack detection (PAD) is essential to secure face recognition systems, primarily against high-fidelity mask attacks. Most existing 3D mask PAD benchmarks suffer from several drawbacks: 1) a limited number of mask identities, types of sensors, and total number of videos; 2) low-fidelity quality of facial masks. Basic deep models and remote photoplethysmography (rPPG) methods achieve acceptable performance on these benchmarks but remain far from the needs of practical scenarios. To bridge the gap to real-world applications, we introduce a large-scale High-Fidelity Mask dataset, namely HiFiMask. Specifically, a total of 54,600 videos are recorded from 75 subjects with 225 realistic masks by 7 new kinds of sensors. Along with the dataset, we propose a novel Contrastive Context-aware Learning (CCL) framework. CCL is a new training methodology for supervised PAD tasks, which is able to learn by accurately leveraging rich contexts (e.g., subjects, mask material and lighting) among pairs of live faces and high-fidelity mask attacks. Extensive experimental evaluations on HiFiMask and three additional 3D mask datasets demonstrate the effectiveness of our method. The codes and dataset will be released soon. (A generic contrastive-loss sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA Approved no  
  Call Number Admin @ si @ LZY2022 Serial 3778  
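
Note: the abstract does not specify the CCL objective beyond its use of context-rich live/mask pairs, so the PyTorch block below shows only a generic pairwise contrastive (Hadsell-style margin) loss of the kind such frameworks build on. The pairing scheme, the same_context labels and the margin are illustrative assumptions, not the paper's objective.

    import torch
    import torch.nn.functional as F

    def pairwise_contrastive_loss(emb_a, emb_b, same_context, margin=1.0):
        """Generic contrastive loss over paired face embeddings.

        emb_a, emb_b:  (batch, dim) embeddings of two face crops.
        same_context:  (batch,) float tensor; 1.0 where the pair shares a
                       label/context (e.g. live-live), 0.0 otherwise.
        Pairs sharing a context are pulled together; dissimilar pairs
        are pushed at least `margin` apart."""
        d = F.pairwise_distance(emb_a, emb_b)
        pull = same_context * d.pow(2)
        push = (1.0 - same_context) * F.relu(margin - d).pow(2)
        return (pull + push).mean()

    # Toy usage with random embeddings.
    a, b = torch.randn(8, 128), torch.randn(8, 128)
    labels = torch.randint(0, 2, (8,)).float()
    print(pairwise_contrastive_loss(a, b, labels))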
 
Author Sergio Escalera; Xavier Baro; Jordi Vitria; Petia Radeva; Bogdan Raducanu
  Title Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction Type Journal Article
  Year 2012 Publication Sensors Abbreviated Journal SENS  
  Volume 12 Issue 2 Pages 1702-1719  
  Keywords  
  Abstract IF = 1.77 (2010).
Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over another. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and of the centrality measures used to characterize the extracted social network. (A small centrality example follows this record.)
 
  Address  
  Corporate Author Thesis  
  Publisher Molecular Diversity Preservation International Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; OR; HuPBA; MV Approved no  
  Call Number Admin @ si @ EBV2012 Serial 1885  
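
Note: to make the oriented graph with influence-weighted links concrete, here is a small hypothetical Python example using NetworkX. The speakers and edge weights are invented; in the paper the weights come from the Influence Model fitted to audio/visual features, and weighted degree plus PageRank merely stand in for the centrality measures mentioned.

    import networkx as nx

    # Directed graph: an edge u -> v with weight w means "u influences v
    # with strength w". Weights here are made up for illustration.
    G = nx.DiGraph()
    G.add_weighted_edges_from([
        ("host", "guest", 0.7),
        ("guest", "host", 0.3),
        ("host", "caller", 0.6),
        ("caller", "guest", 0.4),
    ])

    # Centrality measures characterise the extracted network.
    out_strength = dict(G.out_degree(weight="weight"))  # influence exerted
    in_strength = dict(G.in_degree(weight="weight"))    # influence received
    ranks = nx.pagerank(G, weight="weight")
    print(out_strength, in_strength, ranks)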
 
Author Penny Tarling; Mauricio Cantor; Albert Clapes; Sergio Escalera
  Title Deep learning with self-supervision and uncertainty regularization to count fish in underwater images Type Journal Article
  Year 2022 Publication PloS One Abbreviated Journal Plos  
  Volume 17 Issue 5 Pages e0267759  
  Keywords  
  Abstract Effective conservation actions require effective population monitoring. However, accurately counting animals in the wild to inform conservation decision-making is difficult. Monitoring populations through image sampling has made data collection cheaper, wide-reaching and less intrusive but created a need to process and analyse this data efficiently. Counting animals from such data is challenging, particularly when densely packed in noisy images. Attempting this manually is slow and expensive, while traditional computer vision methods are limited in their generalisability. Deep learning is the state-of-the-art method for many computer vision tasks, but it has yet to be properly explored for counting animals. To this end, we employ deep learning, with a density-based regression approach, to count fish in low-resolution sonar images. We introduce a large dataset of sonar videos, deployed to record wild Lebranche mullet schools (Mugil liza), with a subset of 500 labelled images. We utilise abundant unlabelled data in a self-supervised task to improve the supervised counting task. For the first time in this context, by introducing uncertainty quantification, we improve model training and provide an accompanying measure of prediction uncertainty for more informed biological decision-making. Finally, we demonstrate the generalisability of our proposed counting framework by testing it on a recent benchmark dataset of high-resolution annotated underwater images from varying habitats (DeepFish). From experiments on both contrasting datasets, we demonstrate that our network outperforms the few other deep learning models implemented for solving this task. By providing an open-source framework along with training data, our study puts forth an efficient deep learning template for crowd counting aquatic animals, thereby contributing effective methods to assess natural populations from the ever-increasing visual data. (A density-map target sketch follows this record.)
  Address  
  Corporate Author Thesis  
  Publisher Public Library of Science Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA Approved no  
  Call Number Admin @ si @ TCC2022 Serial 3743  
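
Note: density-based regression counting trains a network to predict a per-pixel density map whose integral is the count. The Python sketch below shows only the standard construction of the regression target from dot annotations (Gaussian-smoothed impulses), common in crowd counting; the paper's network, self-supervised task and uncertainty regularization are not reproduced, and sigma is an illustrative choice.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def density_target(points, shape, sigma=2.0):
        """Density-map regression target from dot annotations.

        points: list of (row, col) fish centres in one sonar frame.
        A unit impulse is placed at each centre and smoothed with a
        Gaussian, so the map integrates to the ground-truth count."""
        target = np.zeros(shape, dtype=np.float32)
        for r, c in points:
            target[int(r), int(c)] += 1.0
        return gaussian_filter(target, sigma=sigma)

    # At inference, the predicted count is the sum of the predicted map.
    frame_points = [(10, 12), (30, 44), (31, 45)]
    dmap = density_target(frame_points, shape=(64, 64))
    print(dmap.sum())  # ~3.0, the number of annotated fish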