Author |
Maria Elena Meza-de-Luna; Juan Ramon Terven Salinas; Bogdan Raducanu; Joaquin Salas |
|
|
Title |
Assessing the Influence of Mirroring on the Perception of Professional Competence using Wearable Technology |
Type |
Journal Article |
|
Year |
2016 |
Publication |
IEEE Transactions on Affective Computing |
Abbreviated Journal |
TAC |
|
|
Volume |
9 |
Issue |
2 |
Pages |
161-175 |
|
|
Keywords |
Mirroring; Nodding; Competence; Perception; Wearable Technology |
|
|
Abstract |
Nonverbal communication is an intrinsic part of daily face-to-face meetings. A frequently observed behavior during social interactions is mirroring, in which one person tends to mimic the attitude of the counterpart. This paper shows that a computer vision system could be used to predict the perception of competence in dyadic interactions through the automatic detection of mirroring events. To prove our hypothesis, we developed: (1) a social assistant for mirroring detection, using a wearable device that includes a video camera, and (2) an automatic classifier for the perception of competence, using the number of nodding gestures and mirroring events as predictors. For our study, we used a mixed-method approach in an experimental design where 48 participants acting as customers interacted with a confederate psychologist. We found that the number of nods or mirroring events has a significant influence on the perception of competence. Our results suggest that: (1) customer mirroring is a better predictor than psychologist mirroring; (2) the number of psychologist's nods is a better predictor than the number of customer's nods; (3) except for psychologist mirroring, the computer vision algorithm we used worked about equally well whether it was acquiring images from wearable smartglasses or fixed cameras. |
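As an illustration of the second component, the sketch below fits a simple classifier of perceived competence from nod and mirroring counts. The data, feature layout, and model family are hypothetical stand-ins assumed for the example, not the paper's actual pipeline:

```python
# Sketch: predicting perceived competence from nonverbal-event counts.
# Everything here is synthetic; the paper's predictors are counts of
# nodding gestures and mirroring events over 48 dyadic interactions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 48  # participants acting as customers
# Columns: psychologist nods, customer nods, psychologist mirroring, customer mirroring.
X = rng.poisson(lam=[12.0, 8.0, 5.0, 5.0], size=(n, 4)).astype(float)
# Synthetic binary label "perceived as competent", loosely driven by
# psychologist nods and customer mirroring (mimicking the reported findings).
score = 0.3 * X[:, 0] + 0.4 * X[:, 3] + rng.normal(scale=1.0, size=n)
y = (score > np.median(score)).astype(int)

clf = LogisticRegression()
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
print("coefficients:", clf.fit(X, y).coef_)  # relative predictor weight
```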
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.072; |
Approved |
no |
|
|
Call Number |
Admin @ si @ MTR2016 |
Serial |
2826 |
|
Permanent link to this record |
|
|
|
|
Author |
Hugo Jair Escalante; Victor Ponce; Jun Wan; Michael A. Riegler; Baiyu Chen; Albert Clapes; Sergio Escalera; Isabelle Guyon; Xavier Baro; Pal Halvorsen; Henning Muller; Martha Larson |
|
|
Title |
ChaLearn Joint Contest on Multimedia Challenges Beyond Visual Analysis: An Overview |
Type |
Conference Article |
|
Year |
2016 |
Publication |
23rd International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper provides an overview of the Joint Contest on Multimedia Challenges Beyond Visual Analysis. We organized an academic competition focused on four problems that require effective processing of multimodal information in order to be solved. Two tracks were devoted to gesture spotting and recognition from RGB-D video, two fundamental problems for human-computer interaction. Another track was devoted to a second round of the first impressions challenge, whose goal was to develop methods to recognize personality traits from short video clips. For this second round we adopted a novel collaborative-competitive (i.e., coopetition) setting. The fourth track was dedicated to the problem of video recommendation for improving user experience. The challenge was open for about 45 days and received outstanding participation: almost 200 participants registered for the contest, and 20 teams sent predictions in the final stage. The main goals of the challenge were fulfilled: the state of the art was advanced considerably in the four tracks, with novel solutions to the proposed problems (mostly relying on deep learning). However, further research is still required. The data of the four tracks will be made available to allow researchers to keep making progress on these problems. |
|
|
Address |
Cancun; Mexico; December 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
HuPBA; 602.143;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ EPW2016 |
Serial |
2827 |
|
Permanent link to this record |
|
|
|
|
Author |
Victor Ponce; Baiyu Chen; Marc Oliu; Ciprian Corneanu; Albert Clapes; Isabelle Guyon; Xavier Baro; Hugo Jair Escalante; Sergio Escalera |
|
|
Title |
ChaLearn LAP 2016: First Round Challenge on First Impressions – Dataset and Results |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Behavior Analysis; Personality Traits; First Impressions |
|
|
Abstract |
This paper summarizes the ChaLearn Looking at People 2016 First Impressions challenge data and the results obtained by the teams in the first round of the competition. The goal of the competition was to automatically evaluate five "apparent" personality traits (the so-called "Big Five") from videos of subjects speaking in front of a camera, by using human judgment. In this edition of the ChaLearn challenge, a novel data set consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and variable levels were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The CodaLab open source platform was used for submission of predictions and scoring. The competition attracted, over a period of 2 months, 84 participants grouped in several teams. Nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge. |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MV; 600.063 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PCP2016 |
Serial |
2828 |
|
Permanent link to this record |
|
|
|
|
Author |
Baiyu Chen; Sergio Escalera; Isabelle Guyon; Victor Ponce; N. Shah; Marc Oliu |
|
|
Title |
Overcoming Calibration Problems in Pattern Labeling with Pairwise Ratings: Application to Personality Traits |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Calibration of labels; Label bias; Ordinal labeling; Variance Models; Bradley-Terry-Luce model; Continuous labels; Regression; Personality traits; Crowd-sourced labels |
|
|
Abstract |
We address the problem of calibration of workers whose task is to label patterns with continuous variables, which arises for instance in labeling images or videos of humans with continuous traits. Worker bias is particularly difficult to evaluate and correct when many workers contribute just a few labels, a situation arising typically when labeling is crowd-sourced. In the scenario of labeling short videos of people facing a camera with personality traits, we evaluate the feasibility of the pairwise ranking method to alleviate bias problems. Workers are exposed to pairs of videos at a time and must order them by preference. The variable levels are reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. This method may, at first sight, seem prohibitively expensive because for N videos, p = N(N-1)/2 pairs must potentially be processed by workers rather than N videos. However, by performing extensive simulations, we determine an empirical law for the scaling of the number of pairs needed as a function of the number of videos in order to achieve a given accuracy of score reconstruction, and show that the pairwise method is affordable. We apply the method to the labeling of a large-scale dataset of 10,000 videos used in the ChaLearn Apparent Personality Trait challenge. |
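For readers unfamiliar with the reconstruction step, the following is a minimal sketch of maximum-likelihood Bradley-Terry-Luce fitting from pairwise preferences, on synthetic data; the paper's exact optimisation details are not reproduced here:

```python
# Sketch: maximum-likelihood reconstruction of Bradley-Terry-Luce scores
# from pairwise preferences. Model: P(i beats j) = sigmoid(s_i - s_j).
import numpy as np

rng = np.random.default_rng(1)
N = 50                       # number of videos
true_s = rng.normal(size=N)  # latent trait levels to recover

# Judge a random subset of the p = N(N-1)/2 possible pairs once each.
pairs = [(i, j) for i in range(N) for j in range(i + 1, N)]
wins = []  # (winner, loser)
for k in rng.choice(len(pairs), size=400, replace=False):
    i, j = pairs[k]
    p_ij = 1.0 / (1.0 + np.exp(-(true_s[i] - true_s[j])))
    wins.append((i, j) if rng.random() < p_ij else (j, i))

# Gradient ascent on the BTL log-likelihood.
s = np.zeros(N)
for _ in range(500):
    grad = np.zeros(N)
    for w, l in wins:
        p = 1.0 / (1.0 + np.exp(-(s[w] - s[l])))
        grad[w] += 1.0 - p  # d/ds_w of log sigmoid(s_w - s_l)
        grad[l] -= 1.0 - p
    s += 0.05 * grad
s -= s.mean()  # scores are identifiable only up to an additive constant

print("correlation with ground truth:", np.corrcoef(s, true_s)[0, 1])
```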
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ CEG2016 |
Serial |
2829 |
|
Permanent link to this record |
|
|
|
|
Author |
Sumit K. Banchhor; Tadashi Araki; Narendra D. Londhe; Nobutaka Ikeda; Petia Radeva; Ayman El-Baz; Luca Saba; Andrew Nicolaides; Shoaib Shafique; John R. Laird; Jasjit S. Suri |
|
|
Title |
Five multiresolution-based calcium volume measurement techniques from coronary IVUS videos: A comparative approach |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Computer Methods and Programs in Biomedicine |
Abbreviated Journal |
CMPB |
|
|
Volume |
134 |
Issue |
|
Pages |
237-258 |
|
|
Keywords |
|
|
|
Abstract |
BACKGROUND AND OBJECTIVE:
Fast intravascular ultrasound (IVUS) video processing is required for calcium volume computation during the planning phase of percutaneous coronary interventional (PCI) procedures. Nonlinear multiresolution techniques are generally applied to improve the processing time by down-sampling the video frames.
METHODS:
This paper presents four different segmentation methods for calcium volume measurement, namely Threshold-based, Fuzzy c-Means (FCM), K-means, and Hidden Markov Random Field (HMRF), each embedded with five different multiresolution techniques (bilinear, bicubic, wavelet, Lanczos, and Gaussian pyramid). This leads to 20 different combinations. IVUS image data sets consisting of 38,760 IVUS frames taken from 19 patients were collected using a 40 MHz IVUS catheter (Atlantis® SR Pro, Boston Scientific®, pullback speed of 0.5 mm/sec). The performance of these 20 systems is compared with and without multiresolution using the following metrics: (a) computational time; (b) calcium volume; (c) image quality degradation ratio; and (d) quality assessment ratio.
RESULTS:
Among the four segmentation methods embedded with five kinds of multiresolution techniques, FCM segmentation combined with wavelet-based multiresolution gave the best performance. FCM and wavelet showed the highest percentage mean improvement in computational time, of 77.15% and 74.07%, respectively. Wavelet interpolation achieved the highest mean precision-of-merit (PoM), of 94.06 ± 3.64% and 81.34 ± 16.29%, compared to the other multiresolution techniques at the volume and frame levels, respectively. The wavelet multiresolution technique also achieved the highest Jaccard Index and Dice Similarity, of 0.7 and 0.8, respectively. Multiresolution is a nonlinear operation which introduces bias and thus degrades the image. The proposed system also provides a bias correction approach to enrich the system, giving a better mean calcium volume similarity for all the multiresolution-based segmentation methods. After including the bias correction, bicubic interpolation gives the largest increase in mean calcium volume similarity, of 4.13%, compared to the rest of the multiresolution techniques. The system is automated and can be adapted to clinical settings.
CONCLUSIONS:
We demonstrated the time improvement in calcium volume computation without compromising the quality of the IVUS images. Among the 20 different combinations of multiresolution and calcium volume segmentation methods, FCM embedded with wavelet-based multiresolution gave the best performance. |
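As a toy illustration of what one of the 20 combinations looks like in practice, the sketch below pairs bilinear multiresolution down-sampling with K-means segmentation on a synthetic frame; the frame, cluster count, and the brightest-cluster heuristic are assumptions for the example only:

```python
# Sketch of one multiresolution + segmentation combination: bilinear
# down-sampling followed by K-means segmentation of an IVUS-like frame.
import numpy as np
from scipy.ndimage import zoom
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
frame = rng.random((512, 512)).astype(np.float32)
frame[200:260, 200:260] += 2.0  # synthetic bright "calcium" patch

small = zoom(frame, 0.5, order=1)  # bilinear down-sampling halves each axis

# Cluster intensities into 3 groups; take the brightest cluster as the
# calcium candidate region (an illustrative heuristic, not the paper's).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(small.reshape(-1, 1))
labels = km.labels_.reshape(small.shape)
calcium_label = np.argmax(km.cluster_centers_.ravel())
print("calcium pixels at half resolution:", int((labels == calcium_label).sum()))
```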
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ BAL2016 |
Serial |
2830 |
|
Permanent link to this record |
|
|
|
|
Author |
Petia Radeva |
|
|
Title |
Can Deep Learning and Egocentric Vision for Visual Lifelogging Help Us Eat Better? |
Type |
Conference Article |
|
Year |
2016 |
Publication |
19th International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
4 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Barcelona; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CCIA |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ Rad2016 |
Serial |
2832 |
|
Permanent link to this record |
|
|
|
|
Author |
Alvaro Peris; Marc Bolaños; Petia Radeva; Francisco Casacuberta |
|
|
Title |
Video Description Using Bidirectional Recurrent Neural Networks |
Type |
Conference Article |
|
Year |
2016 |
Publication |
25th International Conference on Artificial Neural Networks |
Abbreviated Journal |
|
|
|
Volume |
2 |
Issue |
|
Pages |
3-11 |
|
|
Keywords |
Video description; Neural Machine Translation; Bidirectional Recurrent Neural Networks; LSTM; Convolutional Neural Networks |
|
|
Abstract |
Although traditionally used in the machine translation field, the encoder-decoder framework has recently been applied to the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose to push this model further by introducing two contributions into the encoding stage: first, producing richer image representations by combining object and location information from Convolutional Neural Networks, and second, introducing Bidirectional Recurrent Neural Networks to capture both forward and backward temporal relationships in the input frames. |
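A minimal sketch of such an encoding stage, assuming pre-extracted per-frame object and location CNN features and illustrative dimensions (the paper's exact architecture may differ):

```python
# Sketch: concatenate object and location CNN features per frame, then
# encode them with a bidirectional LSTM to capture forward and backward
# temporal relationships. All dimensions are illustrative.
import torch
import torch.nn as nn

T, obj_dim, loc_dim, hidden = 16, 1024, 512, 256

object_feats = torch.randn(1, T, obj_dim)    # e.g., from a classification CNN
location_feats = torch.randn(1, T, loc_dim)  # e.g., from a localisation CNN
frames = torch.cat([object_feats, location_feats], dim=-1)

encoder = nn.LSTM(input_size=obj_dim + loc_dim, hidden_size=hidden,
                  batch_first=True, bidirectional=True)
outputs, _ = encoder(frames)  # (1, T, 2*hidden): forward + backward states
print(outputs.shape)          # per-frame context for the decoder to consume
```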
|
|
Address |
Barcelona; September 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICANN |
|
|
Notes |
MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ PBR2016 |
Serial |
2833 |
|
Permanent link to this record |
|
|
|
|
Author |
Marc Bolaños; Petia Radeva |
|
|
Title |
Simultaneous Food Localization and Recognition |
Type |
Conference Article |
|
Year |
2016 |
Publication |
23rd International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
CoRR abs/1604.07953
The development of automatic nutrition diaries, which would allow us to objectively keep track of everything we eat, could enable a whole new world of possibilities for people concerned about their nutrition patterns. With this purpose, in this paper we propose the first method for simultaneous food localization and recognition. Our method is based on two main steps: first, producing a food activation map on the input image (i.e., a heat map of probabilities) to generate bounding box proposals and, second, recognizing each of the food types or food-related objects present in each bounding box. We demonstrate that our proposal, compared to the most similar problem nowadays, object localization, is able to obtain high precision and reasonable recall levels with only a few bounding boxes. Furthermore, we show that it is applicable to both conventional and egocentric images. |
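A minimal sketch of the two-step pipeline, with a synthetic activation map and a stub standing in for the recognition CNN:

```python
# Sketch: threshold a food activation map (heat map of probabilities) into
# bounding-box proposals, then classify the crop inside each box.
import numpy as np
from scipy.ndimage import label, find_objects

rng = np.random.default_rng(3)
activation = rng.random((64, 64)) * 0.3
activation[10:22, 30:45] = 0.9  # a region the localization CNN fires on

mask = activation > 0.5          # keep confident food pixels
regions, _ = label(mask)         # connected components
boxes = find_objects(regions)    # one (row-slice, col-slice) pair per component

def recognize(crop):
    return "hypothetical_food_class"  # stand-in for the recognition CNN

for sl in boxes:
    y, x = sl
    print((y.start, x.start, y.stop, x.stop), "->", recognize(activation[sl]))
```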
|
|
Address |
Cancun; Mexico; December 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ BoR2016 |
Serial |
2834 |
|
Permanent link to this record |
|
|
|
|
Author |
Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva |
|
|
Title |
With Whom Do I Interact? Detecting Social Interactions in Egocentric Photo-streams |
Type |
Conference Article |
|
Year |
2016 |
Publication |
23rd International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene, with respect to the user, from a bird-view perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results for the proposed method for social interaction detection in egocentric photo-streams. |
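A minimal sketch of the feature extraction described above, computing the distance and orientation of one appearing individual relative to the wearer in a bird-view plane; positions and facing angles are synthetic assumptions:

```python
# Sketch: F-formation-inspired features per photo — distance and facing
# orientation of a detected person with respect to the camera wearer.
import numpy as np

T = 20  # photos in the stream segment
rng = np.random.default_rng(4)
positions = rng.normal(loc=[0.0, 2.0], scale=0.2, size=(T, 2))  # (x, depth) metres
yaws = rng.normal(loc=np.pi, scale=0.2, size=T)  # person's facing angle

# Wearer sits at the origin of the bird-view plane.
distances = np.linalg.norm(positions, axis=1)
# Orientation: angular offset between the person's facing direction and the
# direction from the person towards the wearer (0 = directly facing).
to_wearer = np.arctan2(-positions[:, 1], -positions[:, 0])
orientations = np.abs(np.angle(np.exp(1j * (yaws - to_wearer))))

series = np.stack([distances, orientations], axis=1)  # (T, 2) time series
print(series.shape)  # input to the LSTM classifier
```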
|
|
Address |
Cancun; Mexico; December 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ADR2016d |
Serial |
2835 |
|
Permanent link to this record |
|
|
|
|
Author |
Santiago Segui; Michal Drozdzal; Guillem Pascual; Petia Radeva; Carolina Malagelada; Fernando Azpiroz; Jordi Vitria |
|
|
Title |
Generic Feature Learning for Wireless Capsule Endoscopy Analysis |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Computers in Biology and Medicine |
Abbreviated Journal |
CBM |
|
|
Volume |
79 |
Issue |
|
Pages |
163-172 |
|
|
Keywords |
Wireless capsule endoscopy; Deep learning; Feature learning; Motility analysis |
|
|
Abstract |
The interpretation and analysis of wireless capsule endoscopy (WCE) recordings is a complex task which requires sophisticated computer-aided decision (CAD) systems to help physicians with video screening and, finally, with the diagnosis. Most CAD systems used in capsule endoscopy share a common system design, but use very different image and video representations. As a result, each time a new clinical application of WCE appears, a new CAD system has to be designed from scratch. This makes the design of new CAD systems very time consuming. Therefore, in this paper we introduce a system for small intestine motility characterization, based on Deep Convolutional Neural Networks, which circumvents the laborious step of designing specific features for individual motility events. Experimental results show the superiority of the learned features over alternative classifiers constructed using state-of-the-art handcrafted features. In particular, the system reaches a mean classification accuracy of 96% for six intestinal motility events, outperforming the other classifiers by a large margin (a 14% relative performance increase). |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; MILAB;MV; |
Approved |
no |
|
|
Call Number |
Admin @ si @ SDP2016 |
Serial |
2836 |
|
Permanent link to this record |
|
|
|
|
Author |
Pedro Herruzo; Marc Bolaños; Petia Radeva |
|
|
Title |
Can a CNN Recognize Catalan Diet? |
Type |
Book Chapter |
|
Year |
2016 |
Publication |
AIP Conference Proceedings |
Abbreviated Journal |
|
|
|
Volume |
1773 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
CoRR abs/1607.08811
Nowadays, we can find several diseases related to the unhealthy diet habits of the population, such as diabetes, obesity, anemia, bulimia and anorexia. In many cases, these diseases are related to people's food consumption. The Mediterranean diet is scientifically known as a healthy diet that helps to prevent many metabolic diseases. In particular, our work focuses on the recognition of Mediterranean food and dishes. The development of this methodology would allow the analysis of the daily habits of users with wearable cameras, within the topic of lifelogging. By using automatic mechanisms we could build an objective tool for the analysis of the patient's behavior, allowing specialists to discover unhealthy food patterns and understand the user's lifestyle.
With the aim to automatically recognize a complete diet, we introduce a challenging multi-labeled dataset related to the Mediterranean diet called FoodCAT. The first type of label provided consists of 115 food classes with an average of 400 images per dish, and the second one consists of 12 food categories with an average of 3,800 pictures per class. This dataset will serve as a basis for the development of automatic diet recognition. In this context, deep learning and, more specifically, Convolutional Neural Networks (CNNs) are currently the state-of-the-art methods for automatic food recognition. In our work, we compare several architectures for image classification, with the purpose of diet recognition. Applying the best model for recognizing food categories, we achieve a top-1 accuracy of 72.29%, and top-5 of 97.07%. In a complete diet recognition of dishes from the Mediterranean diet, enlarged with the Food-101 dataset for international dish recognition, we achieve a top-1 accuracy of 68.07%, and top-5 of 89.53%, for a total of 115+101 food classes. |
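For reference, the reported top-1 and top-5 figures correspond to the standard top-k accuracy metric, sketched below on synthetic scores standing in for the CNN's softmax output:

```python
# Sketch: top-1 and top-5 accuracy over per-class scores.
import numpy as np

def top_k_accuracy(scores, labels, k):
    # scores: (n_samples, n_classes); labels: (n_samples,)
    topk = np.argsort(scores, axis=1)[:, -k:]  # indices of the k best classes
    return np.mean([labels[i] in topk[i] for i in range(len(labels))])

rng = np.random.default_rng(5)
n, n_classes = 1000, 115  # 115 FoodCAT dish classes
scores = rng.random((n, n_classes))   # stand-in for CNN softmax scores
labels = rng.integers(0, n_classes, size=n)

print("top-1:", top_k_accuracy(scores, labels, 1))
print("top-5:", top_k_accuracy(scores, labels, 5))
```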
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ HBR2016 |
Serial |
2837 |
|
Permanent link to this record |
|
|
|
|
Author |
Marc Oliu; Ciprian Corneanu; Laszlo A. Jeni; Jeffrey F. Cohn; Takeo Kanade; Sergio Escalera |
|
|
Title |
Continuous Supervised Descent Method for Facial Landmark Localisation |
Type |
Conference Article |
|
Year |
2016 |
Publication |
13th Asian Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
10112 |
Issue |
|
Pages |
121-135 |
|
|
Keywords |
|
|
|
Abstract |
Recent methods for facial landmark localisation perform well on close-to-frontal faces but have problems generalising to large head rotations. In order to address this issue, we propose a second-order linear regression method that is both compact and robust against strong rotations. We provide a closed-form solution, making the method fast to train. We test the method's performance on two challenging datasets. The first has been intensely used by the community. The second has been specially generated from a well-known 3D face dataset. It is considerably more challenging, including a high diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results on both datasets show that the proposed method offers state-of-the-art performance on near-frontal view data, improves on state-of-the-art methods for more challenging head rotation problems, and keeps a compact model size. |
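To give a flavour of the closed-form training, the sketch below solves one cascaded-regression step as ridge-regularised least squares on synthetic features; the paper's second-order formulation is not reproduced here:

```python
# Sketch: one closed-form regression step in the spirit of supervised
# descent — map image features at the current landmark estimate to a
# landmark update. Data and dimensions are synthetic.
import numpy as np

rng = np.random.default_rng(6)
n, feat_dim, n_landmarks = 500, 128, 68

Phi = rng.normal(size=(n, feat_dim))                 # features at current estimate
true_R = rng.normal(size=(feat_dim, 2 * n_landmarks))
Delta = Phi @ true_R + 0.1 * rng.normal(size=(n, 2 * n_landmarks))

# Closed-form ridge solution: R = (Phi^T Phi + lam I)^-1 Phi^T Delta
lam = 1e-2
R = np.linalg.solve(Phi.T @ Phi + lam * np.eye(feat_dim), Phi.T @ Delta)

update = Phi[:1] @ R  # predicted (x, y) displacements for one sample
print(update.shape)   # (1, 136): one update per landmark coordinate
```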
|
|
Address |
Taipei; Taiwan; November 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACCV |
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ OCJ2016 |
Serial |
2838 |
|
Permanent link to this record |
|
|
|
|
Author |
Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari |
|
|
Title |
Fusion of Classifier Predictions for Audio-Visual Emotion Recognition |
Type |
Conference Article |
|
Year |
2016 |
Publication |
23rd International Conference on Pattern Recognition Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper presents a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel and facial landmark geometric relations are computed from visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a Convolutional Neural Network. Finally, confidence outputs of all classifiers from all modalities are used to define a new feature space to be learnt for final emotion prediction, in a late fusion/stacking fashion. The conducted experiments on the eNTERFACE'05 database show significant performance improvements of our proposed system in comparison to state-of-the-art approaches. |
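A minimal sketch of the late fusion/stacking step, with synthetic per-modality confidence outputs standing in for the audio, landmark-geometry, and key-frame CNN classifiers:

```python
# Sketch: stack per-modality classifier confidences into a new feature
# space and train a meta-classifier on it (late fusion / stacking).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n, n_emotions = 300, 6  # eNTERFACE'05 covers six emotion classes
y = rng.integers(0, n_emotions, size=n)

# Stand-ins for the confidence outputs of the three modality classifiers.
audio_conf = rng.dirichlet(np.ones(n_emotions), size=n)
geom_conf = rng.dirichlet(np.ones(n_emotions), size=n)
cnn_conf = rng.dirichlet(np.ones(n_emotions), size=n)

stacked = np.hstack([audio_conf, geom_conf, cnn_conf])  # new feature space
meta = LogisticRegression(max_iter=1000).fit(stacked, y)
print("train accuracy of the fusion model:", meta.score(stacked, y))
```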
|
|
Address |
Cancun; Mexico; December 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPRW |
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ NMN2016 |
Serial |
2839 |
|
Permanent link to this record |
|
|
|
|
Author |
Iiris Lusi; Sergio Escalera; Gholamreza Anbarjafari |
|
|
Title |
SASE: RGB-Depth Database for Human Head Pose Estimation |
Type |
Conference Article |
|
Year |
2016 |
Publication |
14th European Conference on Computer Vision Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
Address |
Amsterdam; The Netherlands; October 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
HuPBA;MILAB; |
Approved |
no |
|
|
Call Number |
Admin @ si @ LEA2016a |
Serial |
2840 |
|
Permanent link to this record |
|
|
|
|
Author |
Xavier Perez Sala; Fernando De la Torre; Laura Igual; Sergio Escalera; Cecilio Angulo |
|
|
Title |
Subspace Procrustes Analysis |
Type |
Journal Article |
|
Year |
2017 |
Publication |
International Journal of Computer Vision |
Abbreviated Journal |
IJCV |
|
|
Volume |
121 |
Issue |
3 |
Pages |
327–343 |
|
|
Keywords |
|
|
|
Abstract |
Procrustes Analysis (PA) has been a popular technique to align and build 2-D statistical models of shapes. Given a set of 2-D shapes, PA is applied to remove rigid transformations. Then, a non-rigid 2-D model is computed by modeling the residual (e.g., with PCA). Although PA has been widely used, it has several limitations for modeling 2-D shapes: occluded landmarks and missing data can result in local minima solutions, and there is no guarantee that the 2-D shapes provide a uniform sampling of the 3-D space of rotations for the object. To address these issues, this paper proposes Subspace PA (SPA). Given several instances of a 3-D object, SPA computes the mean and a 2-D subspace that can simultaneously model all rigid and non-rigid deformations of the 3-D object. We propose a discrete (DSPA) and a continuous (CSPA) formulation for SPA, assuming that 3-D samples of an object are provided. DSPA extends the traditional PA and produces unbiased 2-D models by uniformly sampling different views of the 3-D object. CSPA provides a continuous approach to uniformly sample the space of 3-D rotations, being more efficient in space and time. Experiments using SPA to learn 2-D models of bodies from motion capture data illustrate the benefits of our approach. |
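For context, the classical PA pipeline the paper builds on can be sketched as rigid alignment via orthogonal Procrustes followed by PCA on the residual; the shapes below are synthetic, and SPA itself (the paper's contribution) is not implemented here:

```python
# Sketch: classical Procrustes Analysis — remove translation and rotation
# from each 2-D shape, then model the non-rigid residual with PCA.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
n_shapes, n_points = 100, 10
reference = rng.normal(size=(n_points, 2))

aligned = []
for _ in range(n_shapes):
    theta = rng.uniform(0, 2 * np.pi)
    Rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
    shape = reference @ Rot.T + 0.05 * rng.normal(size=(n_points, 2))
    shape -= shape.mean(axis=0)              # remove translation
    R, _ = orthogonal_procrustes(shape, reference)
    aligned.append((shape @ R).ravel())      # remove rotation

# Non-rigid model: PCA on the rigidly aligned shapes.
pca = PCA(n_components=4).fit(np.array(aligned))
print("variance explained:", pca.explained_variance_ratio_.sum())
```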
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; HuPBA; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ PTI2017 |
Serial |
2841 |
|
Permanent link to this record |