|
Records |
Links |
|
Author |
Agata Lapedriza; David Masip; Jordi Vitria |
|
|
Title |
On the Use of External Face Features for Identity Verification |
Type |
Journal Article |
|
Year |
2006 |
Publication |
Journal of Multimedia, 1(4): 11–20 |
Abbreviated Journal |
|
|
|
Volume |
1 |
Issue |
4 |
Pages |
11-20 |
|
|
Keywords |
Face Verification, Computer Vision, Machine Learning |
|
|
Abstract |
In most automatic face classification applications, images are captured in natural environments, so performance is affected by variations in illumination, pose, occlusion and expression. Most existing face classification systems use only the internal features (eyes, nose and mouth), since these are more difficult to imitate. Nevertheless, many applications not related to security are now being developed, and in these cases the information located in the head, chin and ear zones (external features) can help improve current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract such features with classic bottom-up methods. In this paper, we propose a complete scheme based on a top-down reconstruction algorithm to extract the external features of face images. To test our system, we performed face verification experiments on public databases, given that identity verification is a general task with many real-life applications. We considered uniformly illuminated images, images with occlusions and images with strong local changes in illumination. The results show that the information contributed by the external features can be useful for verification purposes, and it is especially significant when faces are partially occluded. |
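As a rough illustration of the verification task itself (not the paper's top-down reconstruction method), a distance-threshold decision with late fusion of internal- and external-feature scores might look like the sketch below; the threshold, fusion weight and Euclidean distance are all assumptions for illustration:

```python
import numpy as np

def verify(probe_feats, gallery_feats, threshold):
    """Accept an identity claim when the feature distance is below a threshold."""
    d = np.linalg.norm(np.asarray(probe_feats) - np.asarray(gallery_feats))
    return bool(d < threshold)

def fused_verify(internal_dist, external_dist, w=0.5, threshold=1.0):
    """Late fusion: a weighted sum of internal- and external-feature distances."""
    return w * internal_dist + (1 - w) * external_dist < threshold
```

With such a scheme, external-feature distances can still carry the decision when occlusion makes the internal-feature distance unreliable.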
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ LMV2006b |
Serial |
708 |
|
Permanent link to this record |
|
|
|
|
Author |
Agata Lapedriza; Santiago Segui; David Masip; Jordi Vitria |
|
|
Title |
A Sparse Bayesian Approach for Joint Feature Selection and Classifier Learning |
Type |
Journal Article |
|
Year |
2008 |
Publication |
Pattern Analysis and Applications, Special Issue: Non-Parametric Distance-Based Classification Techniques and Their Applications |
Abbreviated Journal |
|
|
|
Volume |
11 |
Issue |
3-4 |
Pages |
299-308 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ LSM2008 |
Serial |
996 |
|
Permanent link to this record |
|
|
|
|
Author |
Ana Garcia Rodriguez; Jorge Bernal; F. Javier Sanchez; Henry Cordova; Rodrigo Garces Duran; Cristina Rodriguez de Miguel; Gloria Fernandez Esparrach |
|
|
Title |
Polyp fingerprint: automatic recognition of colorectal polyps’ unique features |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Surgical Endoscopy and other Interventional Techniques |
Abbreviated Journal |
SEND |
|
|
Volume |
34 |
Issue |
4 |
Pages |
1887-1889 |
|
|
Keywords |
|
|
|
Abstract |
BACKGROUND:
Content-based image retrieval (CBIR) is an application of machine learning used to retrieve images by similarity on the basis of features. Our objective was to develop a CBIR system that could identify images containing the same polyp ('polyp fingerprint').
METHODS:
A machine learning technique called Bag of Words was used to describe each endoscopic image containing a polyp in a unique way. The system was tested with 243 white-light images belonging to 99 different polyps (each polyp was represented by at least two images taken at different moments in time). Images were acquired in routine colonoscopies at Hospital Clínic using high-definition Olympus endoscopes. For each image, the method returned the closest match within the dataset.
RESULTS:
The system matched another image of the same polyp in 221/243 cases (91%). No differences were observed in the number of correct matches according to Paris classification (protruded: 90.7% vs. non-protruded: 91.3%) and size (< 10 mm: 91.6% vs. > 10 mm: 90%).
CONCLUSIONS:
A CBIR system can match accurately two images containing the same polyp, which could be a helpful aid for polyp image recognition.
KEYWORDS:
Artificial intelligence; Colorectal polyps; Content-based image retrieval |
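The retrieval scheme summarized in the abstract can be sketched roughly as follows (an illustrative NumPy sketch, not the authors' implementation: the vocabulary size, plain k-means clustering and Euclidean histogram distance are assumptions, and random vectors stand in for real endoscopic image descriptors):

```python
import numpy as np

def build_vocabulary(descriptors, k, iters=20, seed=0):
    """Cluster local descriptors into k visual words with plain k-means."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center, then recompute means
        dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Describe one image as a normalized histogram of visual-word counts."""
    dists = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

def closest_match(query_hist, db_hists):
    """Return the index of the database image with the most similar histogram."""
    return int(np.argmin([np.linalg.norm(query_hist - h) for h in db_hists]))
```

In this sketch, two images of the same polyp produce similar word histograms, so `closest_match` retrieves the other image of that polyp, which mirrors the 'polyp fingerprint' idea.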
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MV; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3403 |
|
Permanent link to this record |
|
|
|
|
Author |
Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Sergio Escalera; Xavier Baro; Oriol Pujol; Cecilio Angulo |
|
|
Title |
Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
50 |
Issue |
1 |
Pages |
112-121 |
|
|
Keywords |
RGB-D; Bag-of-Words; Dynamic Time Warping; Human Gesture Recognition |
|
|
Abstract |
We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late-fusion scheme. The method is integrated into a Human Gesture Recognition pipeline together with a novel probability-based Dynamic Time Warping (PDTW) algorithm, which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian Mixture Model driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline on a public data set show better performance than both the standard BoVW model and the standard DTW approach. |
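The alignment core of Dynamic Time Warping, on which the PDTW variant builds, can be sketched as follows (a minimal illustration; in the paper the per-frame cost comes from a Gaussian Mixture Model of the gesture class, which is replaced here by plain Euclidean distance as a stand-in):

```python
import numpy as np

def dtw_distance(seq_a, seq_b, frame_cost=None):
    """Dynamic-programming DTW alignment cost between two frame sequences.

    frame_cost(a, b) scores one pair of frames; PDTW would plug in a
    GMM negative log-likelihood here instead of Euclidean distance.
    """
    if frame_cost is None:
        frame_cost = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = frame_cost(seq_a[i - 1], seq_b[j - 1])
            # extend the cheapest of the three allowed warping moves
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Because the warping path may repeat frames, a gesture performed slowly and the same gesture performed quickly still align at low cost, which is what makes DTW suitable for gesture matching.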
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MV; 605.203 |
Approved |
no |
|
|
Call Number |
Admin @ si @ HBP2014 |
Serial |
2353 |
|
Permanent link to this record |
|
|
|
|
Author |
Bogdan Raducanu; D. Gatica-Perez |
|
|
Title |
Inferring competitive role patterns in reality TV show through nonverbal analysis |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
56 |
Issue |
1 |
Pages |
207-226 |
|
|
Keywords |
|
|
|
Abstract |
This paper introduces a new facet of social media, namely that depicting social interaction. More concretely, we address this problem from the perspective of nonverbal behavior-based analysis of competitive meetings. For our study, we made use of “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered around two tasks regarding a person's role in a meeting: predicting the person with the highest status, and predicting the fired candidates. We address this problem by adopting both supervised and unsupervised strategies. The current study was carried out using nonverbal audio cues. Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words. The analysis is based on two types of data: individual and relational measures. Results obtained from the analysis of a full season of the show are promising (up to 85.7% accuracy in the first task and up to 92.8% in the second), and our approach compares favourably with the Influence Model. |
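As a toy illustration of the unsupervised strategy (the paper's actual individual and relational audio measures are not reproduced here), ranking participants by a single classic dominance cue such as total speaking time could look like:

```python
def predict_highest_status(speaking_seconds):
    """Unsupervised status prediction from one nonverbal cue.

    speaking_seconds maps each participant to their total speaking time;
    this single cue stands in for the paper's combined individual and
    relational measures (an assumption for illustration).
    """
    return max(speaking_seconds, key=speaking_seconds.get)
```

A supervised variant would instead feed several such per-person cues into a classifier trained on episodes with known outcomes.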
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1380-7501 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaG2012 |
Serial |
1360 |
|
Permanent link to this record |