|
Records |
Links |
|
Author |
Mario Rojas; David Masip; A. Todorov; Jordi Vitria |
|
|
Title |
Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models |
Type |
Journal Article |
|
Year |
2011 |
Publication |
PLoS ONE |
Abbreviated Journal |
PLoS |
|
|
Volume |
6 |
Issue |
8 |
Pages |
e23323 |
|
|
Keywords |
|
|
|
Abstract |
JCR Impact Factor 2010: 4.411
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system, and multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems aim to exploit these findings to make interaction more natural and to improve their performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g., dominance) can be made using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) the prediction of perceived facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions. |
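The contrast between the two representations in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the toy image, the landmark coordinates, and the descriptor choices (raw pixels vs. pairwise landmark distances) are hypothetical stand-ins for the paper's holistic and structural models.

```python
import numpy as np

def holistic_descriptor(face_img):
    """Holistic representation: the full appearance, i.e. all pixel
    intensities flattened into a single feature vector."""
    return face_img.astype(float).ravel()

def structural_descriptor(landmarks):
    """Structural representation: pairwise distances among salient
    facial points (eyes, nose, mouth corners, ...)."""
    n = len(landmarks)
    d = [np.linalg.norm(landmarks[i] - landmarks[j])
         for i in range(n) for j in range(i + 1, n)]
    return np.array(d)

# Toy "face": a 4x4 image and 4 hypothetical salient points (x, y).
img = np.arange(16).reshape(4, 4)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [0.5, 2.0]])

print(holistic_descriptor(img).shape)    # (16,) appearance features
print(structural_descriptor(pts).shape)  # (6,) = 4*3/2 distances
```

Either vector could then be fed to an off-the-shelf classifier or regressor trained on trait-annotated faces; the structural vector is far lower-dimensional, which is the trade-off the paper examines.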
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Public Library of Science |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ RMT2011 |
Serial |
1883 |
|
Permanent link to this record |
|
|
|
|
Author |
Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera |
|
|
Title |
Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial |
Type |
Journal Article |
|
Year |
2011 |
Publication |
ReVisión |
Abbreviated Journal |
|
|
|
Volume |
4 |
Issue |
1 |
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Oral communication and expression is a competence of special relevance in the EHEA. Nevertheless, in many higher-education programs the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it with machine learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1989-1199 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ PGB2011d |
Serial |
2514 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera |
|
|
Title |
Detección automática de la dominancia en conversaciones diádicas |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Escritos de Psicología |
Abbreviated Journal |
EP |
|
|
Volume |
3 |
Issue |
2 |
Pages |
41–45 |
|
|
Keywords |
Dominance detection; Non-verbal communication; Visual features |
|
|
Abstract |
Dominance refers to the level of influence that a person has in a conversation. Dominance is an important research area in social psychology, but its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, these indicators are automatically extracted from video sequences and learned using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences. |
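The pipeline in this abstract (hand-defined visual indicators fed to a binary classifier) can be sketched as follows. This is a minimal illustration, not the authors' method: the indicator names, the feature values, and the nearest-centroid classifier are all hypothetical.

```python
import numpy as np

# Hypothetical per-speaker indicators extracted from video:
# [speaking-turn share, gesture activity, visual attention received].
X = np.array([
    [0.70, 0.9, 0.8],   # dominant speakers (label 1)
    [0.65, 0.7, 0.9],
    [0.30, 0.2, 0.1],   # non-dominant speakers (label 0)
    [0.25, 0.3, 0.2],
])
y = np.array([1, 1, 0, 0])

# Minimal binary classifier: assign a new speaker to the nearest
# class centroid in indicator space.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([0.6, 0.8, 0.7])))  # -> 1 (dominant)
```

Any standard binary classifier (SVM, logistic regression, boosting) could take the place of the centroid rule; the point is that the indicators, once extracted from video, become an ordinary supervised learning problem.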
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1989-3809 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HUPBA; OR; MILAB;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EMV2010 |
Serial |
1315 |
|
Permanent link to this record |
|
|
|
|
Author |
Patrick Brandao; O. Zisimopoulos; E. Mazomenos; G. Ciuti; Jorge Bernal; M. Visentini-Scarzanella; A. Menciassi; P. Dario; A. Koulaouzidis; A. Arezzo; D.J. Hawkes; D. Stoyanov |
|
|
Title |
Towards a computer-aided diagnosis system in colonoscopy: Automatic polyp segmentation using convolutional neural networks |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Journal of Medical Robotics Research |
Abbreviated Journal |
JMRR |
|
|
Volume |
3 |
Issue |
2 |
Pages |
|
|
|
Keywords |
convolutional neural networks; colonoscopy; computer aided diagnosis |
|
|
Abstract |
Early diagnosis is essential for the successful treatment of bowel cancers, including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep-learning-based detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolutional architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them, and study their capabilities for polyp segmentation and detection. We additionally use Shape-from-Shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information, and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets, and the most accurate segmentation model achieved a mean segmentation IU of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top-performing models we propose surpass the current state of the art, with detection recalls superior to 90% for all datasets tested. To our knowledge, this is the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance. |
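The key architectural idea in this abstract, feeding SfS-recovered depth as a fourth input channel alongside RGB, amounts to a simple tensor manipulation. The sketch below is a hedged illustration with hypothetical shapes and random data, not the paper's actual network.

```python
import numpy as np

# Hypothetical inputs: an RGB colonoscopy frame and a depth map
# recovered by Shape-from-Shading, both H x W.
h, w = 8, 8
rgb = np.random.rand(h, w, 3)
depth = np.random.rand(h, w, 1)

# Depth enters the network as a 4th input channel next to RGB.
rgbd = np.concatenate([rgb, depth], axis=-1)

# The first convolutional layer then simply needs 4 input channels
# instead of 3, e.g. a 3x3 kernel of shape (3, 3, 4, n_filters).
kernel = np.random.rand(3, 3, 4, 16)

print(rgbd.shape)    # (8, 8, 4)
print(kernel.shape)  # (3, 3, 4, 16)
```

Only the first layer of a pretrained VGG or ResNet would need its weights adapted to the extra channel; the rest of the network is unchanged, which is what makes fine-tuning from ImageNet-style weights practical.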
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MV; not mentioned |
Approved |
no |
|
|
Call Number |
BZM2018 |
Serial |
2976 |
|
Permanent link to this record |
|
|
|
|
Author |
Agata Lapedriza; David Masip; Jordi Vitria |
|
|
Title |
On the Use of External Face Features for Identity Verification |
Type |
Journal Article |
|
Year |
2006 |
Publication |
Journal of Multimedia |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
4 |
Pages |
11-20 |
|
|
Keywords |
Face Verification; Computer Vision; Machine Learning |
|
|
Abstract |
In general, automatic face classification applications capture images in natural environments. In these cases, performance is affected by variations in facial images related to illumination, pose, occlusion or expression. Most existing face classification systems use only the internal features (eyes, nose and mouth), since these are more difficult to imitate. Nevertheless, many applications not related to security are now being developed, and in these cases the information located in the head, chin or ear zones (the external features) can be useful to improve current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract these features with classic bottom-up methods. In this paper, we propose a complete scheme based on a top-down reconstruction algorithm to extract the external features of face images. To test our system we performed face verification experiments on public databases, given that identity verification is a general task with many real-life applications. We considered uniformly illuminated images, images with occlusions and images with strong local changes in illumination, and the results show that the information contributed by the external features can be useful for verification purposes, especially when faces are partially occluded. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ LMV2006b |
Serial |
708 |
|
Permanent link to this record |