Author |
Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera |
|
|
Title |
Detección automática de la dominancia en conversaciones diádicas [Automatic dominance detection in dyadic conversations] |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Escritos de Psicología |
Abbreviated Journal |
EP |
|
|
Volume |
3 |
Issue |
2 |
Pages |
41–45 |
|
|
Keywords |
Dominance detection; Non-verbal communication; Visual features |
|
|
Abstract |
Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers who categorize the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, these indicators are automatically extracted from video sequences and learnt using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences. |
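The indicator pipeline described in the abstract can be sketched in miniature: a hypothetical gestural indicator per speaker, a minimal threshold-based binary classifier, and an agreement score against observer labels. All values and the thresholding rule below are illustrative stand-ins, not the paper's actual features or classifiers.

```python
import numpy as np

# Hypothetical per-person gestural indicator values (e.g. fraction of
# time spent gesturing) and observer labels: 1 = judged dominant, 0 = not.
indicator = np.array([0.62, 0.15, 0.71, 0.30, 0.55, 0.20])
observers = np.array([1, 0, 1, 0, 1, 0])

# A minimal "binary classifier": threshold the indicator at the midpoint
# between the two class means, then measure agreement with the observers.
threshold = (indicator[observers == 1].mean()
             + indicator[observers == 0].mean()) / 2
predicted = (indicator > threshold).astype(int)
agreement = (predicted == observers).mean()
print(agreement)  # -> 1.0 on this toy data
```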
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1989-3809 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; OR; MILAB; MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EMV2010 |
Serial |
1315 |
|
Permanent link to this record |
|
|
|
|
Author |
Alejandro Cartas; Juan Marin; Petia Radeva; Mariella Dimiccoli |
|
|
Title |
Batch-based activity recognition from egocentric photo-streams revisited |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Pattern Analysis and Applications |
Abbreviated Journal |
PAA |
|
|
Volume |
21 |
Issue |
4 |
Pages |
953–965 |
|
|
Keywords |
Egocentric vision; Lifelogging; Activity recognition; Deep learning; Recurrent neural networks |
|
|
Abstract |
Wearable cameras can gather large amounts of image data that provide rich visual information about the daily activities of the wearer. Motivated by the large number of health applications that could be enabled by the automatic recognition of daily activities, such as lifestyle characterization for habit improvement, context-aware personal assistance and tele-rehabilitation services, we propose a system to classify 21 daily activities from photo-streams acquired by a wearable photo-camera. Our approach combines the advantages of a late fusion ensemble strategy relying on convolutional neural networks at image level with the ability of recurrent neural networks to account for the temporal evolution of high-level features in photo-streams without relying on event boundaries. The proposed batch-based approach achieved an overall accuracy of 89.85%, outperforming state-of-the-art end-to-end methodologies. These results were achieved on a dataset consisting of 44,902 egocentric pictures from three persons, captured over an average of 26 days each. |
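As a toy illustration of the two ingredients named in the abstract (late fusion of image-level classifier outputs, plus temporal context over a photo-stream batch), the sketch below averages two stub classifiers' class probabilities and stands in for the recurrent network with a simple sliding-window average; the probabilities, window size, and class count are invented for illustration only.

```python
import numpy as np

def late_fusion(prob_a, prob_b):
    """Average the class-probability outputs of two image-level models."""
    return (prob_a + prob_b) / 2.0

def temporal_smooth(probs, window=3):
    """Smooth per-frame class probabilities over a sliding window,
    a crude stand-in for the paper's recurrent network over batches."""
    kernel = np.ones(window) / window
    return np.stack([np.convolve(probs[:, c], kernel, mode="same")
                     for c in range(probs.shape[1])], axis=1)

# Two stub classifiers over a 5-frame stream, 3 activity classes.
a = np.array([[0.7, 0.2, 0.1]] * 5)
b = np.array([[0.5, 0.4, 0.1],
              [0.6, 0.3, 0.1],
              [0.1, 0.8, 0.1],   # one noisy frame
              [0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1]])
fused = late_fusion(a, b)
smoothed = temporal_smooth(fused)
print(smoothed.argmax(axis=1))  # temporal context corrects the noisy frame
```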
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ CMR2018 |
Serial |
3186 |
|
Permanent link to this record |
|
|
|
|
Author |
Marc Bolaños; Alvaro Peris; Francisco Casacuberta; Sergi Solera; Petia Radeva |
|
|
Title |
Egocentric video description based on temporally-linked sequences |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Journal of Visual Communication and Image Representation |
Abbreviated Journal |
JVCIR |
|
|
Volume |
50 |
Issue |
|
Pages |
205–216 |
|
|
Keywords |
Egocentric vision; Video description; Deep learning; Multi-modal learning |
|
|
Abstract |
Egocentric vision consists of acquiring images throughout the day from a first-person point of view using wearable cameras. The automatic analysis of this information makes it possible to discover daily patterns for improving the quality of life of the user. A natural topic that arises in egocentric vision is storytelling, that is, how to understand and tell the story lying behind the pictures.
In this paper, we tackle storytelling as an egocentric sequence description problem. We propose a novel methodology that exploits information from temporally neighboring events, matching precisely the nature of egocentric sequences. Furthermore, we present a new method for multimodal data fusion consisting of a multi-input attention recurrent network. We also release the EDUB-SegDesc dataset. This is the first dataset for egocentric image sequence description, consisting of 1,339 events with 3,991 descriptions, acquired by 11 people over 55 days. Finally, we show that our proposal outperforms classical attentional encoder-decoder methods for video description. |
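A minimal sketch of the attention idea underlying the multi-input recurrent fusion: a query feature is scored against a memory of features from a temporally neighboring event, and the softmax-weighted sum gives a context vector. The vectors and dot-product scoring below are illustrative stand-ins, not the paper's architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, memory):
    """Weight a memory of feature vectors by dot-product similarity
    to a query, then return the weighted sum (a context vector)."""
    scores = memory @ query
    weights = softmax(scores)
    return weights @ memory, weights

# The current event's feature attends over the previous event's frames.
prev_event = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [0.9, 0.1]])
query = np.array([1.0, 0.0])
context, w = attend(query, prev_event)
print(w.argmax())  # the most similar past frame gets the largest weight
```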
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ BPC2018 |
Serial |
3109 |
|
Permanent link to this record |
|
|
|
|
Author |
Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera |
|
|
Title |
A Genetic-based Subspace Analysis Method for Improving Error-Correcting Output Coding |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
46 |
Issue |
10 |
Pages |
2830–2839 |
|
|
Keywords |
Error Correcting Output Codes; Evolutionary computation; Multiclass classification; Feature subspace; Ensemble classification |
|
|
Abstract |
Two key factors affecting the performance of Error Correcting Output Codes (ECOC) in multiclass classification problems are the independence of binary classifiers and the problem-dependent coding design. In this paper, we propose an evolutionary algorithm-based approach to the design of an application-dependent code matrix in the ECOC framework. The central idea of this work is to design a three-dimensional code matrix, where the third dimension is the feature space of the problem domain. In order to do that, we consider the feature space in the design process of the code matrix with the aim of improving the independence and accuracy of binary classifiers. The proposed method takes advantage of some basic concepts of ensemble classification, such as diversity of classifiers, and also benefits from the evolutionary approach for optimizing the three-dimensional code matrix, taking into account the problem domain. We provide a set of experimental results using a set of benchmark datasets from the UCI Machine Learning Repository, as well as two real multiclass Computer Vision problems. Both sets of experiments are conducted using two different base learners: Neural Networks and Decision Trees. The results show that the proposed method increases the classification accuracy in comparison with state-of-the-art ECOC coding techniques. |
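For readers unfamiliar with the ECOC framework the paper extends, the sketch below shows plain Hamming-distance decoding against a one-vs-all code matrix; the paper's genetic search over a three-dimensional (class × classifier × feature) code matrix is not reproduced here, only the standard decoding step it builds on.

```python
import numpy as np

def ecoc_decode(codematrix, outputs):
    """Return the class whose codeword is closest, in Hamming distance,
    to the vector of binary classifier outputs."""
    distances = np.sum(codematrix != outputs, axis=1)
    return int(np.argmin(distances))

# One-vs-all code matrix for 3 classes over 3 binary problems:
# each row is the +1/-1 codeword of one class.
M = np.array([[ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])

pred = ecoc_decode(M, np.array([-1, 1, -1]))
print(pred)  # -> 1: the outputs match class 1's codeword exactly
```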
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0031-3203 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGE2013a |
Serial |
2247 |
|
Permanent link to this record |
|
|
|
|
Author |
Ciprian Corneanu; Marc Oliu; Jeffrey F. Cohn; Sergio Escalera |
|
|
Title |
Survey on RGB, 3D, Thermal, and Multimodal Approaches for Facial Expression Recognition: History, Trends, and Affect-Related Applications |
Type |
Journal Article |
|
Year |
2016 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
|
|
Volume |
38 |
Issue |
8 |
Pages |
1548–1568 |
|
|
Keywords |
Facial expression; affect; emotion recognition; RGB; 3D; thermal; multimodal |
|
|
Abstract |
Facial expressions are an important way through which humans interact socially. Building a system capable of automatically recognizing facial expressions from images and video has been an intense field of study in recent years. Interpreting such expressions remains challenging and much research is needed about the way they relate to human affect. This paper presents a general overview of automatic RGB, 3D, thermal and multimodal facial expression analysis. We define a new taxonomy for the field, encompassing all steps from face detection to facial expression recognition, and describe and classify the state-of-the-art methods accordingly. We also present the important datasets and the benchmarking of the most influential methods. We conclude with a general discussion about trends, important questions and future lines of research. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ COC2016 |
Serial |
2718 |
|
Permanent link to this record |