Author |
Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria; Maria Teresa Anguera |

|
|
Title  |
Automatic Detection of Dominance and Expected Interest |
Type |
Journal Article |
|
Year |
2010 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Volume |
|
Issue |
|
Pages |
12 |
|
|
Keywords |
|
|
|
Abstract |
Article ID 491819
Social Signal Processing is an emerging area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict the interest of observers watching face-to-face interactions as well as to identify the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers, and, based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the perceived observer interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual rankings made by the observers in both the dominance and the interest detection problems. |
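As a rough illustration of the two learning problems described in the abstract (binary dominance detection with AdaBoost and multiclass interest quantification with Error-Correcting Output Codes), the Python sketch below uses scikit-learn on synthetic placeholder data; the feature set, code matrix, and boosting configuration are assumptions, not the authors' implementation.

# Hedged sketch only: movement indicators, labels, and model settings are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical movement-based interaction indicators per conversation segment
# (body, face, and mouth activity statistics), plus observer annotations.
X = rng.random((200, 12))                    # 12 placeholder motion indicators
y_dominance = rng.integers(0, 2, size=200)   # binary: dominant / not dominant
y_interest = rng.integers(0, 4, size=200)    # multiclass: interest level 0..3

# Binary dominance detection with AdaBoost (its default base learner is a decision stump).
dominance_clf = AdaBoostClassifier(n_estimators=100)
print(cross_val_score(dominance_clf, X, y_dominance, cv=5).mean())

# Multiclass interest quantification with an Error-Correcting Output Codes scheme
# that decomposes the problem into several binary AdaBoost classifiers.
interest_clf = OutputCodeClassifier(AdaBoostClassifier(n_estimators=50),
                                    code_size=2, random_state=0)
print(cross_val_score(interest_clf, X, y_interest, cv=5).mean())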
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1110-8657 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MILAB;HUPBA;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EPR2010d |
Serial |
1283 |
|
Permanent link to this record |
|
|
|
|
Author |
Laura Igual; Joan Carles Soliva; Sergio Escalera; Roger Gimeno; Oscar Vilarroya; Petia Radeva |


|
|
Title  |
Automatic Brain Caudate Nuclei Segmentation and Classification in Diagnostic of Attention-Deficit/Hyperactivity Disorder |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Computerized Medical Imaging and Graphics |
Abbreviated Journal |
CMIG |
|
|
Volume |
36 |
Issue |
8 |
Pages |
591-600 |
|
|
Keywords |
Automatic caudate segmentation; Attention-Deficit/Hyperactivity Disorder; Diagnostic test; Machine learning; Decision stumps; Dissociated dipoles |
|
|
Abstract |
We present a fully automatic diagnostic imaging test to assist the diagnosis of Attention-Deficit/Hyperactivity Disorder, based on previously reported evidence of caudate nucleus volumetric abnormalities. The proposed method consists of two steps: a new automatic method for external and internal segmentation of the caudate based on Machine Learning methodologies, and the definition of a set of new volume-relation features, 3D Dissociated Dipoles, used for caudate representation and classification. We validate the two contributions separately using real data from a pediatric population, showing precise internal caudate segmentation and strong discrimination power of the diagnostic test, with significant performance improvements over other state-of-the-art methods. |
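Since the record's keywords mention decision stumps and dissociated dipoles, the sketch below illustrates, under loudly stated assumptions, how boosted decision stumps could classify simple volume-relation features; the feature definition and the synthetic cohort are placeholders, not the paper's actual pipeline.

# Hedged sketch: toy volume-relation features and synthetic labels, not the authors' method.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def volume_relation_features(left_vol, right_vol, icv):
    # Toy features in the spirit of volume relations: signed differences and
    # ratios between regional volumes, normalised by intracranial volume (icv).
    return np.array([
        (left_vol - right_vol) / icv,
        left_vol / icv,
        right_vol / icv,
        left_vol / max(right_vol, 1e-6),
    ])

rng = np.random.default_rng(1)
# Placeholder caudate volumes and intracranial volumes (mm^3) for a synthetic cohort.
volumes = rng.normal(loc=[3800.0, 3900.0, 1.4e6],
                     scale=[300.0, 300.0, 1.0e5], size=(120, 3))
X = np.stack([volume_relation_features(l, r, icv) for l, r, icv in volumes])
y = rng.integers(0, 2, size=120)  # synthetic labels: 0 = control, 1 = ADHD

# Boosted decision stumps (AdaBoost's default base learner is a depth-1 tree).
clf = AdaBoostClassifier(n_estimators=200)
print(cross_val_score(clf, X, y, cv=5).mean())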
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR; HuPBA; MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ ISE2012 |
Serial |
2143 |
|
Permanent link to this record |
|
|
|
|
Author |
Marina Alberti; Simone Balocco; Carlo Gatta; Francesco Ciompi; Oriol Pujol; Joana Silva; Xavier Carrillo; Petia Radeva |


|
|
Title  |
Automatic Bifurcation Detection in Coronary IVUS Sequences |
Type |
Journal Article |
|
Year |
2012 |
Publication |
IEEE Transactions on Biomedical Engineering |
Abbreviated Journal |
TBME |
|
|
Volume |
59 |
Issue |
4 |
Pages |
1022-1031 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we present a fully automatic method which identifies every bifurcation in an intravascular ultrasound (IVUS) sequence, the corresponding frames, the angular orientation with respect to the IVUS acquisition, and the extension. This goal is reached using a two-level classification scheme: first, a classifier is applied to a set of textural features extracted from each image of a sequence; a comparison among three state-of-the-art discriminative classifiers (AdaBoost, random forest, and support vector machine) is performed to identify the most suitable method for the branching detection task. Second, the results are improved by exploiting contextual information through a multiscale stacked sequential learning scheme, and are then further refined using a priori information about branching dimensions and geometry. The proposed approach provides a robust tool for the quick review of pullback sequences, facilitating the evaluation of lesions at bifurcation sites. The proposed method reaches an F-measure score of 86.35%, while the F-measure scores for inter- and intra-observer variability are 71.63% and 76.18%, respectively. The obtained results are positive, especially considering that the branching detection task is very challenging due to the high variability in bifurcation dimensions and appearance. |
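A minimal sketch of the two-level idea may help here: a frame-wise classifier scores textural features, and a second, stacked classifier re-uses those scores, smoothed over neighbouring frames at several scales, as contextual features. The window sizes, classifiers, and data below are assumptions, and the multiscale stacked sequential learning step is heavily simplified.

# Hedged sketch of a two-level, multiscale stacked scheme on synthetic frame features.
import numpy as np
from scipy.ndimage import uniform_filter1d
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_frames = 600
X_text = rng.random((n_frames, 20))            # placeholder textural features per IVUS frame
y = (rng.random(n_frames) < 0.1).astype(int)   # 1 = frame contains a bifurcation (synthetic)

# Level 1: frame-wise classifier on textural features (out-of-fold scores avoid leakage).
base = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_predict(base, X_text, y, cv=5, method="predict_proba")[:, 1]

# Level 2: smooth the level-1 scores over neighbouring frames at several window
# sizes and append them to the original features as contextual information.
scales = [3, 9, 27]
context = np.column_stack([uniform_filter1d(scores, size=s, mode="nearest") for s in scales])
X_stacked = np.hstack([X_text, scores[:, None], context])

meta = RandomForestClassifier(n_estimators=200, random_state=0)
meta_scores = cross_val_predict(meta, X_stacked, y, cv=5, method="predict_proba")[:, 1]
print("frames flagged as bifurcations:", int((meta_scores > 0.5).sum()))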
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0018-9294 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ ABG2012 |
Serial |
1996 |
|
Permanent link to this record |
|
|
|
|
Author |
Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari |

|
|
Title  |
Audio-Visual Emotion Recognition in Video Clips |
Type |
Journal Article |
|
Year |
2019 |
Publication |
IEEE Transactions on Affective Computing |
Abbreviated Journal |
TAC |
|
|
Volume |
10 |
Issue |
1 |
Pages |
60-75 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies, and prosodic features are extracted. For the visual part, two strategies are considered. First, geometric relations between facial landmarks, i.e., distances and angles, are computed. Second, we summarize each emotional video into a reduced set of key-frames, to which a convolutional neural network is applied in order to learn to visually discriminate between the emotions. Finally, the confidence outputs of all the classifiers from all the modalities are used to define a new feature space to be learned for final emotion label prediction, in a late fusion/stacking fashion. The experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements of our proposed system over current alternatives, setting the current state of the art on all three databases. |
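The late fusion/stacking step can be sketched as follows, assuming scikit-learn and synthetic per-modality features; the actual audio descriptors (MFCC, Filter Bank Energies, prosody), the CNN key-frame model, and the databases are not reproduced here.

# Hedged sketch: per-modality confidences fused by a meta-classifier on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_clips, n_emotions = 300, 6
y = rng.integers(0, n_emotions, size=n_clips)
X_audio = rng.random((n_clips, 40))    # placeholder audio descriptors per clip
X_visual = rng.random((n_clips, 60))   # placeholder geometric / key-frame descriptors

# Per-modality classifiers; their out-of-fold confidence outputs define the new feature space.
P_audio = cross_val_predict(SVC(probability=True, random_state=0),
                            X_audio, y, cv=5, method="predict_proba")
P_visual = cross_val_predict(SVC(probability=True, random_state=0),
                             X_visual, y, cv=5, method="predict_proba")

# Late fusion / stacking: a meta-classifier predicts the final emotion label
# from the concatenated modality confidences.
X_meta = np.hstack([P_audio, P_visual])
meta = LogisticRegression(max_iter=1000)
y_pred = cross_val_predict(meta, X_meta, y, cv=5)
print("stacked accuracy on synthetic data:", (y_pred == y).mean())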
|
|
Address |
1 Jan.-March 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; 602.143; 602.133;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ NMN2017 |
Serial |
3011 |
|
Permanent link to this record |
|
|
|
|
Author |
Jun Wan; Sergio Escalera; Francisco Perales; Josef Kittler |

|
|
Title  |
Articulated Motion and Deformable Objects |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
79 |
Issue |
|
Pages |
55-64 |
|
|
Keywords |
|
|
|
Abstract |
This guest editorial introduces the twenty-two papers accepted for this Special Issue on Articulated Motion and Deformable Objects (AMDO). They are grouped into four main categories within the field of AMDO: human motion analysis (action/gesture), human pose estimation, deformable shape segmentation, and face analysis. For each of the four topics, a survey of recent developments in the field is presented, and the accepted papers are briefly introduced in its context. They contribute novel methods, algorithms with improved performance as measured on benchmark datasets, and two new datasets for hand action detection and human posture analysis. The special issue should be of high relevance to readers interested in AMDO recognition and should promote future research directions in the field. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; no proj;MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ WEP2018 |
Serial |
3126 |
|
Permanent link to this record |