Author |
Cristina Sanchez Montes; F. Javier Sanchez; Jorge Bernal; Henry Cordova; Maria Lopez Ceron; Miriam Cuatrecasas; Cristina Rodriguez de Miguel; Ana Garcia Rodriguez; Rodrigo Garces Duran; Maria Pellise; Josep Llach; Gloria Fernandez Esparrach |
|
|
Title |
Computer-aided Prediction of Polyp Histology on White-Light Colonoscopy using Surface Pattern Analysis |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Endoscopy |
Abbreviated Journal |
END |
|
|
Volume |
51 |
Issue |
3 |
Pages |
261-265 |
|
|
Abstract |
Background and study aims: To evaluate a new computational histology prediction system based on colorectal polyp textural surface patterns in high-definition white-light images.
Patients and methods: Textural elements (textons) were characterized by their contrast with the surface, their shape, and their number of bifurcations, under the assumption that dysplastic polyps are associated with highly contrasted, large tubular patterns with some degree of bifurcation. Computer-aided diagnosis (CAD) was compared with the pathological diagnosis and with the endoscopists' diagnoses using the Kudo and NICE classifications.
Results: Images of 225 polyps were evaluated (142 dysplastic and 83 non-dysplastic). The CAD system correctly classified 205 (91.1%) polyps: 131/142 (92.3%) dysplastic and 74/83 (89.2%) non-dysplastic. In the subgroup of 100 diminutive (<5 mm) polyps, CAD correctly classified 87 (87%): 43/50 (86%) dysplastic and 44/50 (88%) non-dysplastic. There were no statistically significant differences between the polyp histology predictions of the CAD system and those of the endoscopists.
Conclusion: A computer vision system based on the characterization of the polyp surface under white light accurately predicts colorectal polyp histology. |
|
|
Notes |
MV; 600.096; 600.119; 600.075 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SSB2019 |
Serial |
3164 |
|
|
|
|
|
Author |
Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu |
|
|
Title |
Facial expression recognition using tracked facial actions: Classifier performance analysis |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Engineering Applications of Artificial Intelligence |
Abbreviated Journal |
EAAI |
|
|
Volume |
26 |
Issue |
1 |
Pages |
467-477 |
|
|
Keywords |
Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction |
|
|
Abstract |
In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and the facial actions, which makes the recognition pose- and texture-independent. Two schemes are studied. The first adopts a dynamic time warping technique for recognizing expressions, where the training data are temporal signatures associated with the universal facial expressions. The second models the temporal signatures of facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments carried out on CMU video sequences and on home-made video sequences quantified the performance of both schemes. The results show that applying dimension reduction techniques to the extracted time series can improve classification performance, and that the best recognition rate can exceed 90%. |
|
|
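The dynamic time warping scheme in the abstract above (matching a query's temporal signature against per-expression templates) can be sketched as follows. This is a toy illustration with 1-D signatures and made-up expression labels, not the paper's implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D temporal signatures."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify(query, templates):
    """Return the expression label whose template is closest to the query under DTW."""
    return min(templates, key=lambda label: dtw_distance(query, templates[label]))

# Hypothetical per-expression template signatures (toy data)
templates = {"smile": [0, 1, 2, 3], "frown": [3, 2, 1, 0]}
print(classify([0.1, 1.0, 2.1, 2.9], templates))  # a rising signature matches "smile"
```

In the paper the signatures are multi-dimensional facial action trajectories; extending the sketch only requires replacing the absolute difference with a vector norm.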
Publisher |
Elsevier |
|
|
Notes |
OR; 600.046;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMR2013 |
Serial |
2185 |
|
|
|
|
|
Author |
Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera |
|
|
Title |
Deteccion automatica de la dominancia en conversaciones diadicas |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Escritos de Psicologia |
Abbreviated Journal |
EP |
|
|
Volume |
3 |
Issue |
2 |
Pages |
41–45 |
|
|
Keywords |
Dominance detection; Non-verbal communication; Visual features |
|
|
Abstract |
Dominance refers to the level of influence a person has in a conversation. It is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the agreement among observers who categorize the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, these indicators are automatically extracted from video sequences and learned using binary classifiers. The results of the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences. |
|
|
ISSN |
1989-3809 |
|
|
Notes |
HUPBA; OR; MILAB;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EMV2010 |
Serial |
1315 |
|
|
|
|
|
Author |
Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria; Maria Teresa Anguera |
|
|
Title |
Automatic Detection of Dominance and Expected Interest |
Type |
Journal Article |
|
Year |
2010 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Pages |
12 |
|
|
Abstract |
Article ID 491819
Social Signal Processing is an emergent research area that focuses on the analysis of social constructs. Dominance and interest are two such constructs. Dominance refers to the level of influence a person has in a conversation. Interest, in the context of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we can predict both the interest of observers watching face-to-face interactions and the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the observers' perceived interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows a good correlation between the automatic categorization results and the manual rankings made by the observers in both the dominance and interest detection problems. |
|
|
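The AdaBoost half of the scheme above (binary dominance detection from numeric interaction indicators) can be sketched with a minimal from-scratch booster over decision stumps. The features here are toy numbers, not the paper's movement-based indicators, and the implementation is illustrative rather than the authors' code:

```python
import numpy as np

def adaboost_train(X, y, rounds=10):
    """Train AdaBoost with threshold-stump weak learners; y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for feat in range(d):                   # exhaustive stump search
            for thr in np.unique(X[:, feat]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, feat] >= thr, 1, -1)
                    err = w[pred != y].sum()    # weighted training error
                    if best is None or err < best[0]:
                        best = (err, feat, thr, sign, pred)
        err, feat, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w = w * np.exp(-alpha * y * pred)       # upweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, feat, thr, sign))
    return ensemble

def adaboost_predict(ensemble, X):
    """Weighted vote of the learned stumps."""
    score = np.zeros(len(X))
    for alpha, feat, thr, sign in ensemble:
        score += alpha * sign * np.where(X[:, feat] >= thr, 1, -1)
    return np.where(score >= 0, 1, -1)
```

A multiclass problem like the interest quantification would then be handled by combining several such binary learners, which is what the Error-Correcting Output Codes framework provides.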
ISSN |
1110-8657 |
|
|
Notes |
OR;MILAB;HUPBA;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ EPR2010d |
Serial |
1283 |
|
|
|
|
|
Author |
Laura Igual; Agata Lapedriza; Ricard Borras |
|
|
Title |
Robust Gait-Based Gender Classification using Depth Cameras |
Type |
Journal Article |
|
Year |
2013 |
Publication |
EURASIP Journal on Advances in Signal Processing |
Abbreviated Journal |
EURASIPJ |
|
|
Volume |
37 |
Issue |
1 |
Pages |
72-80 |
|
|
Abstract |
This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, the points are aligned on their centroid and grouped; they are then projected onto their PCA plane, yielding a representation of the cycle that is particularly robust to view changes. Final discriminative features are computed by first building a histogram of the projected points and then applying linear discriminant analysis. To test the method we used the DGait database, currently the only publicly available gait-analysis database that includes depth information. We performed experiments on manually labeled cycles and on whole video sequences, and the results show that our method significantly improves accuracy compared with state-of-the-art systems that do not use depth information. Furthermore, because it discards the RGB information, our approach is insensitive to illumination changes, which makes it especially suitable for real applications, as illustrated in the last part of the experiments section. |
|
|
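The feature pipeline described above, centroid alignment of the cycle's 3D point cloud, projection onto its PCA plane, then a histogram of the projected points, can be sketched as below. This is an illustrative reconstruction assuming a plain `(N, 3)` point array; the per-frame grouping and the final linear discriminant analysis step are omitted:

```python
import numpy as np

def cycle_descriptor(points, bins=8):
    """Histogram of a gait cycle's 3D point cloud projected onto its PCA plane.

    points: (N, 3) array of depth-camera points accumulated over one cycle.
    Returns a flattened bins x bins histogram; a trained LDA projection
    would normally follow to obtain the final discriminative features.
    """
    centered = points - points.mean(axis=0)            # align on the centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    plane = centered @ vt[:2].T                        # top-2 principal axes
    hist, _, _ = np.histogram2d(plane[:, 0], plane[:, 1], bins=bins, density=True)
    return hist.ravel()
```

Because the descriptor is built only from depth geometry, it inherits the illumination insensitivity the abstract highlights.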
Notes |
MILAB; OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ ILB2013 |
Serial |
2144 |
|