Records | |||||
---|---|---|---|---|---|
Author | Baiyu Chen; Sergio Escalera; Isabelle Guyon; Victor Ponce; N. Shah; Marc Oliu | ||||
Title | Overcoming Calibration Problems in Pattern Labeling with Pairwise Ratings: Application to Personality Traits | Type | Conference Article | ||
Year | 2016 | Publication | 14th European Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Calibration of labels; Label bias; Ordinal labeling; Variance Models; Bradley-Terry-Luce model; Continuous labels; Regression; Personality traits; Crowd-sourced labels | ||||
Abstract | We address the problem of calibrating workers whose task is to label patterns with continuous variables, which arises, for instance, when labeling images or videos of humans with continuous traits. Worker bias is particularly difficult to evaluate and correct when many workers contribute just a few labels each, a situation that typically arises when labeling is crowd-sourced. In the scenario of labeling short videos of people facing a camera with personality traits, we evaluate the feasibility of the pairwise ranking method for alleviating bias problems. Workers are shown pairs of videos at a time and must order them by preference. The variable levels are reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. At first sight, this method may seem prohibitively expensive because, for N videos, p = N(N-1)/2 pairs must potentially be processed by workers rather than N videos. However, by performing extensive simulations, we determine an empirical law for how the number of pairs needed scales with the number of videos to achieve a given accuracy of score reconstruction, and show that the pairwise method is affordable. We apply the method to the labeling of a large-scale dataset of 10,000 videos used in the ChaLearn Apparent Personality Trait challenge. | ||||
Address | Amsterdam; The Netherlands; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ CEG2016 | Serial | 2829 | ||
Permanent link to this record | |||||
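The record above describes reconstructing continuous trait levels by fitting a Bradley-Terry-Luce model with maximum likelihood on pairwise preferences. As an illustration only, not the authors' code, here is a minimal numpy sketch of BTL fitting by gradient ascent; the function name `fit_btl` and the toy comparison data are hypothetical:

```python
import numpy as np

def fit_btl(n_items, pairs, lr=0.1, n_iter=2000):
    """Fit Bradley-Terry-Luce scores by maximum likelihood.

    pairs: list of (winner, loser) index tuples from pairwise judgments.
    Returns one score per item (higher = preferred more often).
    """
    s = np.zeros(n_items)
    for _ in range(n_iter):
        grad = np.zeros(n_items)
        for w, l in pairs:
            # P(w beats l) under BTL: exp(s_w) / (exp(s_w) + exp(s_l))
            p = 1.0 / (1.0 + np.exp(s[l] - s[w]))
            grad[w] += 1.0 - p   # push the winner's score up
            grad[l] -= 1.0 - p   # push the loser's score down
        s += lr * grad
        s -= s.mean()            # scores are identifiable only up to a shift
    return s

# Toy data: item 2 usually beats 1, item 1 usually beats 0
pairs = ([(2, 1)] * 8 + [(1, 2)] * 2 +
         [(1, 0)] * 8 + [(0, 1)] * 2 +
         [(2, 0)] * 9 + [(0, 2)] * 1)
scores = fit_btl(3, pairs)
print(scores)  # recovered ordering: item 2 > item 1 > item 0
```

For a real crowd-sourcing pipeline one would typically use a vetted solver (e.g. the minorization-maximization algorithm) rather than plain gradient ascent, but the likelihood being maximized is the same.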
Author | Marc Bolaños; Petia Radeva | ||||
Title | Simultaneous Food Localization and Recognition | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | CoRR abs/1604.07953
The development of automatic nutrition diaries, which would allow us to objectively keep track of everything we eat, could enable a whole new world of possibilities for people concerned about their nutrition patterns. With this purpose, in this paper we propose the first method for simultaneous food localization and recognition. Our method is based on two main steps: first, producing a food activation map on the input image (i.e., a heat map of probabilities) to generate bounding-box proposals and, second, recognizing each of the food types or food-related objects present in each bounding box. We demonstrate that our proposal, compared to the most similar problem nowadays, object localization, is able to obtain high precision and reasonable recall levels with only a few bounding boxes. Furthermore, we show that it is applicable to both conventional and egocentric images. |
||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ BoR2016 | Serial | 2834 | ||
Permanent link to this record | |||||
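The abstract above turns a food activation map (a heat map of probabilities) into bounding-box proposals. The paper does not spell out this step, so the sketch below is an assumption: threshold the map and emit one box per connected component, using a plain flood fill; `boxes_from_activation_map` and the toy heat map are hypothetical names and data:

```python
import numpy as np

def boxes_from_activation_map(heat, thresh=0.5):
    """Threshold a probability heat map and return one bounding box
    (x0, y0, x1, y1) per 4-connected component."""
    mask = heat >= thresh
    seen = np.zeros_like(mask, dtype=bool)
    boxes = []
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # flood-fill one connected component
                stack, ys, xs = [(i, j)], [], []
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

heat = np.zeros((8, 8))
heat[1:3, 1:4] = 0.9   # one high-probability blob
heat[5:7, 5:7] = 0.8   # another blob
print(boxes_from_activation_map(heat))  # [(1, 1, 3, 2), (5, 5, 6, 6)]
```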
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene, with respect to the user, from a bird's-eye perspective. As a result, the interaction pattern over a sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features. A Long Short-Term Memory recurrent neural network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results for the proposed social interaction detection method in egocentric photo-streams. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ ADR2016d | Serial | 2835 | ||
Permanent link to this record | |||||
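The record above represents each photo-stream as a two-dimensional time series of distance and orientation features before feeding it to an LSTM. How exactly the features are encoded is not specified in the abstract, so the sketch below is an assumed representation; `interaction_series` and the toy frames are hypothetical, and the LSTM classifier itself is omitted:

```python
import numpy as np

def interaction_series(positions, orientations):
    """Build a (T, 2) time series: per frame, the distance of a detected
    person from the camera wearer (assumed at the origin of a bird's-eye
    view) and the person's facing angle relative to the wearer."""
    positions = np.asarray(positions, dtype=float)    # (T, 2) ground-plane x/z
    dist = np.linalg.norm(positions, axis=1)           # Euclidean distance
    orient = np.asarray(orientations, dtype=float)     # radians
    return np.stack([dist, orient], axis=1)

# Three frames of a person approaching while facing the wearer
series = interaction_series([(0.0, 4.0), (0.0, 3.0), (0.0, 2.0)],
                            [np.pi, np.pi, np.pi])
print(series.shape)  # (3, 2)
```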
Author | Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | Fusion of Classifier Predictions for Audio-Visual Emotion Recognition | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel and facial-landmark geometric relations are computed from the visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a convolutional neural network. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space to be learnt for final emotion prediction, in a late-fusion/stacking fashion. Experiments conducted on the eNTERFACE'05 database show significant performance improvements of our proposed system in comparison to state-of-the-art approaches. |
||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPRW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ NMN2016 | Serial | 2839 | ||
Permanent link to this record | |||||
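The stacking step described above, where per-modality classifier confidences define a new feature space for a second-level learner, can be sketched as a simple concatenation. This is an illustration of the general technique, not the authors' implementation; the variable names and toy confidence values are hypothetical:

```python
import numpy as np

def stack_features(confidences):
    """Late fusion by stacking: concatenate each modality's per-class
    confidence outputs into one feature vector per sample, to be fed
    to a second-level (meta) classifier.

    confidences: list of (n_samples, n_classes) arrays, one per classifier.
    """
    return np.concatenate(confidences, axis=1)

# One sample, three classes, three modality-specific classifiers
audio_conf = np.array([[0.7, 0.2, 0.1]])   # e.g. MFCC-based classifier
face_conf  = np.array([[0.6, 0.3, 0.1]])   # e.g. landmark-geometry classifier
cnn_conf   = np.array([[0.8, 0.1, 0.1]])   # e.g. key-frame CNN
meta_X = stack_features([audio_conf, face_conf, cnn_conf])
print(meta_X.shape)  # (1, 9) -- the meta-classifier's input space
```

Any standard classifier can then be trained on `meta_X` against the ground-truth emotion labels to produce the final prediction.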
Author | Iiris Lusi; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | SASE: RGB-Depth Database for Human Head Pose Estimation | Type | Conference Article | ||
Year | 2016 | Publication | 14th European Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Slides | ||||
Address | Amsterdam; The Netherlands; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ LEA2016a | Serial | 2840 | ||
Permanent link to this record | |||||
Author | Pejman Rasti; Tonis Uiboupin; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | Convolutional Neural Network Super Resolution for Face Recognition in Surveillance Monitoring | Type | Conference Article | ||
Year | 2016 | Publication | 9th Conference on Articulated Motion and Deformable Objects | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Palma de Mallorca; Spain; July 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AMDO | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ RUE2016 | Serial | 2846 | ||
Permanent link to this record | |||||
Author | Dennis H. Lundtoft; Kamal Nasrollahi; Thomas B. Moeslund; Sergio Escalera | ||||
Title | Spatiotemporal Facial Super-Pixels for Pain Detection | Type | Conference Article | ||
Year | 2016 | Publication | 9th Conference on Articulated Motion and Deformable Objects | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Facial images; Super-pixels; Spatiotemporal filters; Pain detection | ||||
Abstract | Best student paper award.
Pain detection using facial images is of critical importance in many health applications. Since pain is a spatiotemporal process, recent works on this topic employ facial spatiotemporal features to detect pain, extracting such features from the entire area of the face. In this paper, we show that by employing super-pixels we can divide the face into three regions, such that only one of these regions (about one third of the face) contributes to pain estimation and the other two can be discarded. Experimental results on the UNBC-McMaster database show that the proposed system, using this single region, outperforms state-of-the-art systems in detecting no-pain scenarios, while reaching comparable results in detecting weak and severe pain scenarios. |
||||
Address | Palma de Mallorca; Spain; July 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AMDO | ||
Notes | HUPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ LNM2016 | Serial | 2847 | ||
Permanent link to this record | |||||
Author | Mark Philip Philipsen; Anders Jorgensen; Thomas B. Moeslund; Sergio Escalera | ||||
Title | RGB-D Segmentation of Poultry Entrails | Type | Conference Article | ||
Year | 2016 | Publication | 9th Conference on Articulated Motion and Deformable Objects | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Best commercial paper award. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AMDO | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ PJM2016 | Serial | 2848 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Mercedes Torres-Torres; Brais Martinez; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon; Georgios Tzimiropoulos; Ciprian Corneanu; Marc Oliu Simón; Mohammad Ali Bagheri; Michel Valstar | ||||
Title | ChaLearn Looking at People and Faces of the World: Face Analysis Workshop and Challenge 2016 | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | We present the 2016 ChaLearn Looking at People and Faces of the World Challenge and Workshop, which ran three competitions on the common theme of face analysis from still images. The first one, Looking at People, addressed age estimation, while the second and third competitions, Faces of the World, addressed accessory classification and smile and gender classification, respectively. We present two crowd-sourcing methodologies used to collect manual annotations. A custom-built application was used to collect and label data about the apparent age of people (as opposed to the real age). For the Faces of the World data, the citizen-science Zooniverse platform was used. This paper summarizes the three challenges and the data used, as well as the results achieved by the participants of the competitions. Details of the ChaLearn LAP FotW competitions can be found at http://gesture.chalearn.org. | ||||
Address | Las Vegas; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MV; | Approved | no | ||
Call Number | ETM2016 | Serial | 2849 | ||
Permanent link to this record | |||||
Author | Antonio Esteban Lansaque; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil | ||||
Title | Stable Airway Center Tracking for Bronchoscopic Navigation | Type | Conference Article | ||
Year | 2016 | Publication | 28th Conference of the international Society for Medical Innovation and Technology | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Bronchoscopists use X-ray fluoroscopy to guide bronchoscopes to the lesion to be biopsied without any kind of incision. Reducing exposure to X-rays is important for both patients and doctors, but alternatives like electromagnetic navigation require specific equipment and increase the cost of the clinical procedure. We propose a guiding system based on the extraction of airway centers from intra-operative videos. Such anatomical landmarks can be matched to the airway centerline extracted from a pre-planned CT scan to indicate the best path to the lesion. We present an extraction of lumen centers from intra-operative videos based on tracking of maximal stable regions of energy maps. |
||||
Address | Delft; Rotterdam; Leiden; The Netherlands; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SMIT | ||
Notes | IAM; | Approved | no | ||
Call Number | Admin @ si @ LSB2016a | Serial | 2856 | ||
Permanent link to this record | |||||
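The record above tracks lumen centers across intra-operative video frames. The tracking details are not given in the abstract; one common, minimal approach is greedy frame-to-frame matching of region centers by nearest neighbour within a gating distance, sketched below under that assumption (the function name and toy coordinates are hypothetical):

```python
import numpy as np

def track_centers(prev_centers, new_centers, max_dist=10.0):
    """Greedy frame-to-frame matching: link each previously tracked
    center to the nearest unclaimed new center within max_dist pixels.
    Returns {prev_index: new_index}; unmatched tracks are dropped."""
    matches = {}
    used = set()
    for i, p in enumerate(prev_centers):
        best, best_d = None, max_dist
        for j, q in enumerate(new_centers):
            if j in used:
                continue
            d = float(np.hypot(p[0] - q[0], p[1] - q[1]))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches[i] = best
            used.add(best)
    return matches

prev = [(50.0, 50.0), (120.0, 80.0)]                    # centers at frame t
new = [(52.0, 51.0), (200.0, 200.0), (118.0, 83.0)]     # detections at t+1
print(track_centers(prev, new))  # {0: 0, 1: 2}
```

A production tracker would add appearance cues and handle births/deaths of regions, but the gating-plus-nearest-neighbour core is the same.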
Author | Sergio Escalera; Jordi Gonzalez; Xavier Baro; Fernando Alonso; Martha Mackay | ||||
Title | Care Respite: a remote monitoring eHealth system for improving ambient assisted living | Type | Conference Article | ||
Year | 2016 | Publication | Human Motion Analysis for Healthcare Applications | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Advances in technology that captures human motion have been quite remarkable during the last five years. New sensors have been developed, such as the Microsoft Kinect, Asus Xtion Pro Live, PrimeSense Carmine and Leap Motion. Their main advantages are their non-intrusive nature, low cost and the widely available developer support offered by large corporations or open communities. Although they were originally developed for computer games, they have inspired numerous healthcare-related ideas and projects in areas such as medical disorder diagnosis, assisted living, rehabilitation and surgery.
In assisted living, human motion analysis allows continuous monitoring of elderly and vulnerable people and their activities to potentially detect life-threatening events such as falls. Human motion analysis in rehabilitation provides the opportunity for motivating patients through gamification, evaluating prescribed programmes of exercises and assessing patients' progress. In operating theatres, surgeons may use a gesture-based interface to access medical information or control a tele-surgery system. Human motion analysis may also be used to diagnose a range of mental and physical diseases and conditions. This event will discuss recent advances in human motion sensing and provide an application to healthcare for networking and exploring potential synergies and collaborations. |
||||
Address | Savoy Place; London; uk; May 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | HMAHA | ||
Notes | HuPBA; ISE; | Approved | no | ||
Call Number | Admin @ si @ EGB2016 | Serial | 2852 | ||
Permanent link to this record | |||||
Author | Jose Ramirez Moreno; Juan R Revilla; Miguel Reyes; Sergio Escalera | ||||
Title | Validación del Software ADIBAS asociado al sensor Kinect de Microsoft para la evaluación de la posición corporal | Type | Conference Article | ||
Year | 2016 | Publication | 4th Congreso WCPT-SAR | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Buenos Aires; Argentina; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WCPT-SAR | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ RRR2016 | Serial | 2853 | ||
Permanent link to this record | |||||
Author | Fernando Alonso; Xavier Baro; Sergio Escalera; Jordi Gonzalez; Martha Mackay; Anna Serrahima | ||||
Title | CARE RESPITE: TAKING CARE OF THE CAREGIVERS, Theme 5 The Strategic use of Mobile and Digital Health and Care Solutions | Type | Conference Article | ||
Year | 2016 | Publication | 16th International Conference for Integrated Care | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Poster | ||||
Address | Barcelona; Spain; May 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICIC | ||
Notes | HuPBA; ISE;MV | Approved | no | ||
Call Number | Admin @ si @ ABE2016 | Serial | 2855 | ||
Permanent link to this record | |||||
Author | Antonio Esteban Lansaque; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil | ||||
Title | Stable Anatomical Structure Tracking for video-bronchoscopy Navigation | Type | Conference Article | ||
Year | 2016 | Publication | 19th International Conference on Medical Image Computing and Computer Assisted Intervention Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Lung cancer diagnosis; video-bronchoscopy; airway lumen detection; region tracking | ||||
Abstract | Bronchoscopy allows clinicians to examine the patient's airways for the detection of lesions and the sampling of tissues without surgery. A main drawback in lung cancer diagnosis is the difficulty of checking whether the exploration is following the correct path to the nodule that has to be biopsied. The most widespread guidance uses fluoroscopy, which implies repeated radiation of clinical staff and patients. Alternatives such as virtual bronchoscopy or electromagnetic navigation are very expensive and not robust enough to blood, mucus or deformations to be extensively used. We propose a method that extracts and tracks stable lumen regions at different levels of the bronchial tree. The tracked regions are stored in a tree that encodes the anatomical structure of the scene, which can be used to retrieve the path to the lesion that the clinician should follow to perform the biopsy. We present a multi-expert validation of our anatomical landmark extraction on 3 intra-operative ultrathin explorations. | ||||
Address | Athens; Greece; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAIW | ||
Notes | IAM; 600.075 | Approved | no | ||
Call Number | Admin @ si @ LSB2016b | Serial | 2857 | ||
Permanent link to this record | |||||
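The record above stores tracked lumen regions in a tree that encodes the anatomical structure of the bronchial scene, from which the path to the lesion can be retrieved. A minimal sketch of such a structure, with entirely hypothetical class and branch names, could look like this:

```python
class AirwayNode:
    """One tracked lumen region in the bronchial tree (illustrative only)."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def path_from_root(self):
        # Walk parent links to recover the branch sequence a
        # bronchoscopist would follow to reach this region.
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.parent
        return list(reversed(path))

trachea = AirwayNode("trachea")
left = AirwayNode("left main bronchus", trachea)
AirwayNode("right main bronchus", trachea)
lesion_branch = AirwayNode("LB6", left)
print(lesion_branch.path_from_root())
# ['trachea', 'left main bronchus', 'LB6']
```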
Author | German Ros | ||||
Title | Visual Scene Understanding for Autonomous Vehicles: Understanding Where and What | Type | Book Whole | ||
Year | 2016 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Making Ground Autonomous Vehicles (GAVs) a reality as a service for society is one of the major scientific and technological challenges of this century. The potential benefits of autonomous vehicles include reducing accidents, improving traffic congestion and making better use of road infrastructure, among others. These vehicles must operate in our cities, towns and highways, dealing with many different types of situations while respecting traffic rules and protecting human lives. GAVs are expected to deal with all types of scenarios and situations, coping with an uncertain and chaotic world.
Therefore, in order to fulfill these demanding requirements, GAVs need to be endowed with the capability of understanding their surroundings at many different levels, by means of affordable sensors and artificial intelligence. This capacity to understand the surroundings and the current situation that the vehicle is involved in is called scene understanding. In this work we investigate novel techniques to bring scene understanding to autonomous vehicles by combining the use of cameras as the main source of information, due to their versatility and affordability, with algorithms based on computer vision and machine learning. We investigate different degrees of understanding of the scene, starting from basic geometric knowledge about where the vehicle is within the scene. A robust and efficient estimation of the vehicle's location and pose with respect to a map is one of the most fundamental steps towards autonomous driving. We study this problem from the point of view of robustness and computational efficiency, proposing key insights to improve current solutions. Then we advance to higher levels of abstraction to discover what is in the scene, by recognizing and parsing all the elements present in a driving scene, such as roads, sidewalks and pedestrians. We investigate this problem, known as semantic segmentation, proposing new approaches to improve recognition accuracy and computational efficiency. We cover these points by focusing on key aspects such as: (i) how to save computation by moving semantics to an offline process, (ii) how to train compact architectures based on deconvolutional networks to achieve their maximum potential, (iii) how to use virtual worlds in combination with domain adaptation to produce accurate models in a cost-effective fashion, and (iv) how to use transfer-learning techniques to prepare models for new situations.
We finally extend the previous level of knowledge, enabling systems to reason about what has changed in a scene with respect to a previous visit, which in turn allows for efficient and cost-effective map updating. |
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Angel Sappa;Julio Guerrero;Antonio Lopez | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-945373-1-8 | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ Ros2016 | Serial | 2860 | ||
Permanent link to this record |