|
|
Author |
Victor Ponce; Sergio Escalera; Marc Perez; Oriol Janes; Xavier Baro |
|
|
Title |
Non-Verbal Communication Analysis in Victim-Offender Mediations |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
67 |
Issue |
1 |
Pages |
19-27 |
|
|
Keywords |
Victim–Offender Mediation; Multi-modal human behavior analysis; Face and gesture recognition; Social signal processing; Computer vision; Machine learning |
|
|
Abstract |
We present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. We propose the use of computer vision and social signal processing technologies in real scenarios of Victim–Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real Victim–Offender Mediation sessions in Catalonia. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86% when predicting satisfaction, and 79% when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1–5] for the computed social signals. |
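As an illustration of the final prediction stage described in this abstract, here is a minimal sketch of training a binary classifier on per-session behavioral indicators. The synthetic features and labels, and the choice of an SVM with cross-validation, are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of the final classification stage: predicting a binary
# social signal (e.g. satisfaction) from per-session behavioral indicators.
# Features and labels are synthetic placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))    # 120 sessions x 10 behavioral indicators
y = rng.integers(0, 2, size=120)  # binary label, e.g. satisfied / not satisfied

clf = SVC(kernel="rbf")           # one standard binary classifier among several
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

The regression variant reported in the abstract would follow the same shape, e.g. swapping the classifier for sklearn.svm.SVR and scoring with mean absolute deviation on the [1–5] scale.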
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ PEP2015 |
Serial |
2583 |
|
|
|
|
|
Author |
David Sanchez-Mendoza; David Masip; Agata Lapedriza |
|
|
Title |
Emotion recognition from mid-level features |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
67 |
Issue |
Part 1 |
Pages |
66-74 |
|
|
Keywords |
Facial expression; Emotion recognition; Action units; Computer vision |
|
|
Abstract |
In this paper we present a study on the use of Action Units as mid-level features for automatically recognizing basic and subtle emotions. We propose a representation model based on mid-level facial muscular movement features. We encode these movements dynamically using the Facial Action Coding System, and propose to use these intermediate features based on Action Units (AUs) to classify emotions. AU activations are detected by fusing a set of spatiotemporal geometric and appearance features. The algorithm is validated in two applications: (i) the recognition of 7 basic emotions using the publicly available Cohn-Kanade database, and (ii) the inference of subtle emotional cues in the Newscast database. In this second scenario, we consider emotions that are perceived cumulatively over longer periods of time. In particular, we automatically classify whether video shots from public news TV channels refer to Good or Bad news. To deal with the different video lengths, we propose a Histogram of Action Units, computed using a sliding-window strategy over the frame sequences. Our approach achieves accuracies close to human perception. |
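The Histogram of Action Units over sliding windows, as described for the variable-length news videos, can be sketched as follows. The binary AU activation matrix is a random stand-in for the detector output that the paper obtains by fusing geometric and appearance features; window and step sizes are illustrative assumptions.

```python
# Sketch of a Histogram of Action Units (HAU) over sliding windows.
# au_active[f, a] = 1 if AU `a` is detected active in frame `f`.
# Activations here are random placeholders, not real detector output.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_aus = 300, 17
au_active = rng.integers(0, 2, size=(n_frames, n_aus))

def histogram_of_aus(activations, window=30, step=15):
    """One normalized AU histogram per window: the fraction of frames
    in which each AU is active."""
    hists = [activations[s:s + window].mean(axis=0)
             for s in range(0, len(activations) - window + 1, step)]
    return np.array(hists)

hau = histogram_of_aus(au_active)
print(hau.shape)  # (num_windows, n_aus): fixed-size descriptors per window
```

Each window histogram is a fixed-size descriptor, so the downstream emotion classifier is insulated from the original video length.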
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier B.V. |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0167-8655 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ SML2015 |
Serial |
2746 |
|
|
|
|
|
Author |
R. Clariso; David Masip; A. Rius |
|
|
Title |
Student projects empowering mobile learning in higher education |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Revista de Universidad y Sociedad del Conocimiento |
Abbreviated Journal |
RUSC |
|
|
Volume |
11 |
Issue |
|
Pages |
192-207 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1698-580X |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ CMR2014 |
Serial |
2619 |
|
|
|
|
|
Author |
Ana Garcia Rodriguez; Jorge Bernal; F. Javier Sanchez; Henry Cordova; Rodrigo Garces Duran; Cristina Rodriguez de Miguel; Gloria Fernandez Esparrach |
|
|
Title |
Polyp fingerprint: automatic recognition of colorectal polyps’ unique features |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Surgical Endoscopy and Other Interventional Techniques |
Abbreviated Journal |
SEND |
|
|
Volume |
34 |
Issue |
4 |
Pages |
1887-1889 |
|
|
Keywords |
Artificial intelligence; Colorectal polyps; Content-based image retrieval |
|
|
Abstract |
BACKGROUND:
Content-based image retrieval (CBIR) is an application of machine learning used to retrieve images by similarity on the basis of features. Our objective was to develop a CBIR system that could identify images containing the same polyp ('polyp fingerprint').
METHODS:
A machine learning technique called Bag of Words was used to describe each endoscopic image containing a polyp in a unique way. The system was tested with 243 white light images belonging to 99 different polyps (each polyp was represented by at least two images taken at different moments in time). Images were acquired in routine colonoscopies at Hospital Clínic using high-definition Olympus endoscopes. For each image, the method provided the closest match within the dataset.
RESULTS:
The system matched another image of the same polyp in 221/243 cases (91%). No differences were observed in the number of correct matches according to the Paris classification (protruded: 90.7% vs. non-protruded: 91.3%) or size (< 10 mm: 91.6% vs. > 10 mm: 90%).
CONCLUSIONS:
A CBIR system can accurately match two images containing the same polyp, which could be a helpful aid for polyp image recognition. |
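The Bag of Words retrieval scheme described in the METHODS can be sketched as follows. The synthetic local descriptors, the k-means codebook size, and the Euclidean matching are illustrative assumptions; the abstract does not specify the authors' actual detector or codebook.

```python
# Minimal Bag-of-Words image retrieval sketch in the spirit of the method
# above. Local descriptors are synthetic stand-ins for real keypoint
# features; one descriptor array per image.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
desc_per_image = [rng.normal(size=(200, 32)) for _ in range(10)]

# Build the visual vocabulary by clustering all descriptors.
kmeans = KMeans(n_clusters=50, n_init=10).fit(np.vstack(desc_per_image))

def bow_histogram(desc):
    """Quantize descriptors to visual words; return a normalized histogram."""
    words = kmeans.predict(desc)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

db = np.array([bow_histogram(d) for d in desc_per_image])

def closest_match(query_idx):
    """Most similar database image to the query, excluding the query itself."""
    dist = np.linalg.norm(db - db[query_idx], axis=1)
    dist[query_idx] = np.inf
    return int(dist.argmin())

print(closest_match(0))
```

The key property, as in the paper, is that each image collapses to a single fixed-length histogram, so "the same polyp seen at two different moments" reduces to a nearest-neighbor query over histograms.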
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MV; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3403 |
|
|
|
|
|
Author |
Sergio Escalera; Xavier Baro; Jordi Vitria; Petia Radeva; Bogdan Raducanu |
|
|
Title |
Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
12 |
Issue |
2 |
Pages |
1702-1719 |
|
|
Keywords |
|
|
|
Abstract |
IF=1.77 (2010)
Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog.
The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos using state-of-the-art algorithms. Our results are reported in terms of accuracy of audio/visual data fusion for speaker segmentation and centrality measures used to characterize the extracted social network. |
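The network-analysis stage described above (a directed graph with influence-weighted links, characterized by centrality measures) can be sketched with networkx. The participants and edge weights here are random placeholders, not outputs of the Influence Model.

```python
# Sketch of the social-network stage: a directed graph whose edge weights
# represent pairwise "influence", analyzed with standard centrality measures.
# Weights are random placeholders, not Influence Model outputs.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
people = ["A", "B", "C", "D"]

G = nx.DiGraph()
for u in people:
    for v in people:
        if u != v:
            # weight = hypothetical influence of u over v
            G.add_edge(u, v, weight=float(rng.uniform(0, 1)))

# Characterize the extracted network with centrality measures.
in_deg = nx.in_degree_centrality(G)
eig = nx.eigenvector_centrality(G, weight="weight", max_iter=1000)
print("in-degree:", in_deg)
print("eigenvector:", sorted(eig.items(), key=lambda kv: -kv[1]))
```

In the paper's setting, the edge weights would come from the learned Influence Model states rather than random draws, but the downstream centrality analysis has this shape.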
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Molecular Diversity Preservation International |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR;HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ EBV2012 |
Serial |
1885 |
|