Records | |||||
---|---|---|---|---|---|
Author | Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas | ||||
Title | Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices | Type | Journal Article | ||
Year | 2016 | Publication | Neurocomputing | Abbreviated Journal | NEUCOM |
Volume | 175 | Issue | B | Pages | 866–876 |
Keywords | Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices | ||||
Abstract | During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect of social interactions is mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head nodding displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.072; 600.068; | Approved | no | ||
Call Number | Admin @ si @ TRM2016 | Serial | 2721 | ||
Author | Manuel Graña; Bogdan Raducanu | ||||
Title | Special Issue on Bioinspired and knowledge based techniques and applications | Type | Journal Article | ||
Year | 2015 | Publication | Neurocomputing | Abbreviated Journal | NEUCOM |
Volume | Issue | Pages | 1-3 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; | Approved | no | ||
Call Number | Admin @ si @ GrR2015 | Serial | 2598 | ||
Author | Cesar Isaza; Joaquin Salas; Bogdan Raducanu | ||||
Title | Rendering ground truth data sets to detect shadows cast by static objects in outdoors | Type | Journal Article | ||
Year | 2014 | Publication | Multimedia Tools and Applications | Abbreviated Journal | MTAP |
Volume | 70 | Issue | 1 | Pages | 557-571 |
Keywords | Synthetic ground truth data set; Sun position; Shadow detection; Static objects shadow detection | ||||
Abstract | In our work, we are particularly interested in studying the shadows cast by static objects in outdoor environments during daytime. To assess the accuracy of a shadow detection algorithm, we need ground truth information. Collecting such information is a very tedious task because it requires manual annotation. To overcome this severe limitation, we propose in this paper a methodology to automatically render ground truth using a virtual environment. To increase the degree of realism and usefulness of the simulated environment, we incorporate in the scenario the precise longitude, latitude and elevation of the actual location of the object, as well as the sun's position for a given time and day. To evaluate our method, we consider a qualitative and a quantitative comparison. In the quantitative comparison, we analyze the shadow cast by a real object in a particular geographical location and by its corresponding rendered model. To evaluate the methodology qualitatively, we use ground truth images obtained both manually and automatically. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer US | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1380-7501 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; | Approved | no | ||
Call Number | Admin @ si @ ISR2014 | Serial | 2229 | ||
Author | Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu | ||||
Title | Robust Head Gestures Recognition for Assistive Technology | Type | Book Chapter | ||
Year | 2014 | Publication | Pattern Recognition | Abbreviated Journal | |
Volume | 8495 | Issue | Pages | 152-161 | |
Keywords | |||||
Abstract | This paper presents a system capable of recognizing six head gestures: nodding, shaking, turning right, turning left, looking up, and looking down. The main difference of our system compared to other methods is that the Hidden Markov Models presented in this paper are fully connected and consider all possible states in any given order, providing the following advantages to the system: (1) it allows unconstrained movement of the head, and (2) it can be easily integrated into a wearable device (e.g. glasses, neck-hung devices), in which case it can robustly recognize gestures in the presence of ego-motion. Experimental results show that this approach outperforms common methods that use restricted HMMs for each gesture. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-319-07490-0 | Medium | |
Area | Expedition | Conference | |||
Notes | LAMP; | Approved | no | ||
Call Number | Admin @ si @ TSR2014b | Serial | 2505 | ||
Author | Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo | ||||
Title | Single view facial hair 3D reconstruction | Type | Conference Article | ||
Year | 2019 | Publication | 9th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 11867 | Issue | Pages | 423-436 | |
Keywords | 3D Vision; Shape Reconstruction; Facial Hair Modeling | ||||
Abstract | In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms in order to adapt the hair parameters that better fit the image detections. The final hairs respond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers solely from an RGB image without assuming any training data or user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches. | ||||
Address | Madrid; July 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IbPRIA | ||
Notes | MSIAU; 600.086; 600.130; 600.122 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3707 | ||
Author | Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu | ||||
Title | Which tone-mapping is the best? A comparative study of tone-mapping perceived quality | Type | Abstract | ||
Year | 2014 | Publication | Perception | Abbreviated Journal | |
Volume | 43 | Issue | Pages | 106 | |
Keywords | |||||
Abstract | Perception 43, ECVP Abstract Supplement. High-dynamic-range (HDR) imaging refers to the methods designed to increase the brightness dynamic range present in standard digital imaging techniques. This increase is achieved by taking the same picture under different exposure values and mapping the intensity levels into a single image by way of a tone-mapping operator (TMO). Currently, there is no agreement on how to evaluate the quality of different TMOs. In this work we psychophysically evaluate 15 different TMOs, obtaining rankings based on the perceived properties of the resulting tone-mapped images. We performed two different experiments on a calibrated CRT display using 10 subjects: (1) a study of the internal relationships between grey-levels and (2) a pairwise comparison of the resulting 15 tone-mapped images. In (1) observers internally matched the grey-levels to a reference inside the tone-mapped images and in the real scene. In (2) observers performed a pairwise comparison of the tone-mapped images alongside the real scene. We obtained two rankings of the TMOs according to their performance. In (1) the best algorithm was iCAM by J. Kuang et al (2007) and in (2) the best algorithm was a TMO by Krawczyk et al (2005). Our results also show no correlation between these two rankings. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | NEUROBIT; 600.074 | Approved | no | ||
Call Number | Admin @ si @ CPO2014 | Serial | 2527 | ||
Author | Bogdan Raducanu; Alireza Bosaghzadeh; Fadi Dornaika | ||||
Title | Facial Expression Recognition based on Multi-view Observations with Application to Social Robotics | Type | Conference Article | ||
Year | 2014 | Publication | 1st Workshop on Computer Vision for Affective Computing | Abbreviated Journal | |
Volume | Issue | Pages | 1-8 | ||
Keywords | |||||
Abstract | Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this paper, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker, view- and texture-independent. Our method has been extensively tested on the CMU dataset, and has been conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression. | ||||
Address | Singapore; November 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACCV | ||
Notes | LAMP; | Approved | no | ||
Call Number | Admin @ si @ RBD2014 | Serial | 2599 | ||
Author | Xavier Otazu; Olivier Penacchio; Xim Cerda-Company | ||||
Title | An excitatory-inhibitory firing rate model accounts for brightness induction, colour induction and visual discomfort | Type | Conference Article | ||
Year | 2015 | Publication | Barcelona Computational, Cognitive and Systems Neuroscience | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Barcelona; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BARCCSYN | ||
Notes | NEUROBIT; | Approved | no | ||
Call Number | Admin @ si @ OPC2015b | Serial | 2634 | ||
Author | Fadi Dornaika; Bogdan Raducanu; Alireza Bosaghzadeh | ||||
Title | Facial expression recognition based on multi observations with application to social robotics | Type | Book Chapter | ||
Year | 2015 | Publication | Emotional and Facial Expressions: Recognition, Developmental Differences and Social Importance | Abbreviated Journal | |
Volume | Issue | Pages | 153-166 | ||
Keywords | |||||
Abstract | Human-robot interaction is a hot topic nowadays in the social robotics community. One crucial aspect is affective communication, which is encoded through facial expressions. In this chapter, we propose a novel approach for facial expression recognition, which exploits an efficient and adaptive graph-based label propagation (semi-supervised mode) in a multi-observation framework. The facial features are extracted using an appearance-based 3D face tracker, view- and texture-independent. Our method has been extensively tested on the CMU dataset, and has been conveniently compared with other methods for graph construction. With the proposed approach, we developed an application for an AIBO robot, in which it mirrors the recognized facial expression. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Nova Science publishers | Place of Publication | Editor | Bruce Flores | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; | Approved | no | ||
Call Number | Admin @ si @ DRB2015 | Serial | 2720 | ||