Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title Evaluating Real-Time Mirroring of Head Gestures using Smart Glasses Type Conference Article
Year 2015 Publication 16th IEEE International Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages 452-460
Keywords
Abstract Mirroring occurs when one person tends to mimic the non-verbal communication of their counterpart. Even though mirroring is a complex phenomenon, in this study we focus on the detection of head nodding as a simple non-verbal communication cue, due to its significance as a gesture displayed during social interactions. This paper introduces a computer vision-based method to detect mirroring through the analysis of head gestures using wearable cameras (smart glasses). In addition, we study how such a method can be used to explore perceived competence. The proposed method has been evaluated, and the experiments demonstrate that static and wearable cameras are about equally effective at gathering the information required for the analysis.
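The abstract describes head-nod detection as the core cue but does not publish an implementation. A minimal sketch of one way nods could be detected from a head-pitch time series; the function name, threshold, and dip-and-return heuristic are illustrative assumptions, not the paper's method:

```python
def detect_nods(pitch, threshold=5.0):
    """Detect nod events in a head-pitch trace (degrees, one sample per frame).

    A nod is counted when the pitch dips below -threshold and then
    returns above it. Returns (start_frame, end_frame) pairs.
    Illustrative heuristic only.
    """
    nods = []
    in_dip = False
    start = 0
    for i, p in enumerate(pitch):
        if not in_dip and p < -threshold:
            in_dip = True
            start = i
        elif in_dip and p > -threshold:
            nods.append((start, i))
            in_dip = False
    return nods

# Synthetic pitch trace: neutral, one downward nod, back to neutral
trace = [0, -2, -8, -10, -6, -1, 0, 0]
events = detect_nods(trace)
```

On this trace the dip starts at frame 2 and recovers at frame 5, so one nod event is reported.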
Address Santiago de Chile; December 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes LAMP; 600.068; 600.072; Approved no
Call Number Admin @ si @ TRM2015 Serial 2722
 

 
Author Maria Elena Meza-de-Luna; Juan Ramon Terven Salinas; Bogdan Raducanu; Joaquin Salas
Title Assessing the Influence of Mirroring on the Perception of Professional Competence using Wearable Technology Type Journal Article
Year 2016 Publication IEEE Transactions on Affective Computing Abbreviated Journal TAC
Volume 9 Issue 2 Pages 161-175
Keywords Mirroring; Nodding; Competence; Perception; Wearable Technology
Abstract Nonverbal communication is an intrinsic part of daily face-to-face meetings. A frequently observed behavior during social interactions is mirroring, in which one person tends to mimic the attitude of the counterpart. This paper shows that a computer vision system could be used to predict the perception of competence in dyadic interactions through the automatic detection of mirroring events. To prove our hypothesis, we developed: (1) a social assistant for mirroring detection, using a wearable device that includes a video camera, and (2) an automatic classifier for the perception of competence, using the number of nodding gestures and mirroring events as predictors. For our study, we used a mixed-method approach in an experimental design where 48 participants acting as customers interacted with a confederate psychologist. We found that the number of nods or mirroring events has a significant influence on the perception of competence. Our results suggest that: (1) customer mirroring is a better predictor than psychologist mirroring; (2) the number of psychologist's nods is a better predictor than the number of customer's nods; (3) except for psychologist mirroring, the computer vision algorithm we used worked about equally well whether it acquired images from wearable smart glasses or fixed cameras.
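The abstract describes a classifier for perceived competence driven by nod and mirroring counts. A toy sketch of such a predictor as a logistic score over two of the reported predictors; the weights, bias, and input values below are invented for illustration (the paper fits its classifier on data from 48 sessions):

```python
import math

def competence_score(psych_nods, customer_mirroring, w=(0.8, 1.2), b=-2.0):
    """Logistic score of perceived competence from two predictors:
    number of psychologist nods and number of customer mirroring events.
    Weights and bias are made-up illustrative values, not fitted ones.
    """
    z = b + w[0] * psych_nods + w[1] * customer_mirroring
    return 1.0 / (1.0 + math.exp(-z))

low = competence_score(0, 0)    # no nonverbal cues observed
high = competence_score(4, 3)   # many nods and mirroring events
```

With any positive weights the score rises monotonically with both counts, which is the qualitative relationship the study reports.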
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.072; Approved no
Call Number Admin @ si @ MTR2016 Serial 2826
 

 
Author Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas
Title Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices Type Journal Article
Year 2016 Publication Neurocomputing Abbreviated Journal NEUCOM
Volume 175 Issue B Pages 866–876
Keywords Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices
Abstract During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant behavior that arises during social interactions is mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head nods displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset.
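The pipeline's final step infers mirroring from simultaneous head nods. A minimal sketch of how nod intervals from the two interlocutors might be paired; the interval format, the `max_lag` tolerance, and the sample values are illustrative assumptions, since the abstract does not state the exact simultaneity criterion:

```python
def mirroring_events(nods_a, nods_b, max_lag=0.5):
    """Pair nod intervals (start, end), in seconds, from two interlocutors
    whose intervals overlap or fall within max_lag seconds of each other.
    One illustrative reading of 'simultaneous' nodding.
    """
    events = []
    for sa, ea in nods_a:
        for sb, eb in nods_b:
            # Positive overlap means the intervals intersect;
            # values down to -max_lag allow a small gap between nods.
            overlap = min(ea, eb) - max(sa, sb)
            if overlap > -max_lag:
                events.append(((sa, ea), (sb, eb)))
    return events

# Hypothetical nod intervals detected for two interlocutors
a = [(1.0, 1.6), (5.0, 5.4)]
b = [(1.2, 1.8), (9.0, 9.5)]
events = mirroring_events(a, b)
```

Here only the first nod of each person overlaps, so a single mirroring event is reported.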
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.072; 600.068; Approved no
Call Number Admin @ si @ TRM2016 Serial 2721
 

 
Author Carola Figueroa Flores; Abel Gonzalez-Garcia; Joost Van de Weijer; Bogdan Raducanu
Title Saliency for fine-grained object recognition in domains with scarce training data Type Journal Article
Year 2019 Publication Pattern Recognition Abbreviated Journal PR
Volume 94 Issue Pages 62-73
Keywords
Abstract This paper investigates the role of saliency in improving the classification accuracy of a Convolutional Neural Network (CNN) when scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture, which is used to modulate the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve the performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, not on their usefulness in a complete vision pipeline. Our proposed pipeline allows us to evaluate saliency methods on the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially for the case of scarce training data. Furthermore, our experiments show that saliency methods that obtain improved saliency maps (as measured by traditional saliency benchmarks) also yield larger performance gains when applied in an object recognition pipeline.
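The abstract describes a saliency branch that modulates bottom-up CNN features. The exact modulation operator is not spelled out there; elementwise gating of feature maps by a broadcast saliency map is one common instantiation, sketched here with random stand-in tensors (all shapes and values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for bottom-up CNN feature maps of one image: (channels, H, W)
features = rng.standard_normal((8, 16, 16))

# Stand-in saliency map in [0, 1), shared across channels via broadcasting
saliency = rng.random((1, 16, 16))

# Attention-style gating: each spatial location is scaled by its saliency,
# suppressing features in low-saliency regions
modulated = features * saliency
```

Because the saliency values lie in [0, 1), the gating can only attenuate feature magnitudes, never amplify them.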
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.109; 600.141; 600.120; Approved no
Call Number Admin @ si @ FGW2019 Serial 3264
 

 
Author Oscar Argudo; Marc Comino; Antonio Chica; Carlos Andujar; Felipe Lumbreras
Title Segmentation of aerial images for plausible detail synthesis Type Journal Article
Year 2018 Publication Computers & Graphics Abbreviated Journal CG
Volume 71 Issue Pages 23-34
Keywords Terrain editing; Detail synthesis; Vegetation synthesis; Terrain rendering; Image segmentation
Abstract The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing e.g. terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets, and fast on both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors, and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first one was conducted to analyze human accuracy when manually labeling aerial images. The second user study compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully-featured, detailed terrains with minimal authoring efforts.
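The abstract frames segmentation-for-synthesis as per-pixel classification of descriptors under tight speed constraints. As a hedged sketch, a deliberately simple nearest-centroid learner in that spirit (the paper itself compares state-of-the-art supervised algorithms; the descriptors and labels below are invented toy data):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Store the per-class mean of per-pixel descriptors.
    Trains in one pass, a property the paper's fast-training
    constraint calls for; not the paper's actual learner.
    """
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(X, classes, centroids):
    """Assign each pixel descriptor to the class of its nearest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[d.argmin(axis=1)]

# Toy 2-D per-pixel descriptors for two separable categories
# (think water vs. sand; features and labels are made up)
X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 0.8], [1.0, 0.9]])
y = np.array([0, 0, 1, 1])

classes, centroids = nearest_centroid_fit(X, y)
pred = nearest_centroid_predict(X, classes, centroids)
```

A per-class mean is also naturally robust to adding a new category on the fly, which the paper lists as a requirement when artists introduce labels.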
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0097-8493 ISBN Medium
Area Expedition Conference
Notes MSIAU; 600.086; 600.118; Approved no
Call Number Admin @ si @ ACC2018 Serial 3147
 

 
Author Gemma Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo
Title 2D-to-3D Facial Expression Transfer Type Conference Article
Year 2018 Publication 24th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 2008 - 2013
Keywords
Abstract Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape (obtained from standard factorization approaches over the input video) using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes MSIAU; 600.086; 600.130; 600.118; Approved no
Call Number Admin @ si @ RLM2018 Serial 3232
 

 
Author Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo
Title Single view facial hair 3D reconstruction Type Conference Article
Year 2019 Publication 9th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 11867 Issue Pages 423-436
Keywords 3D Vision; Shape Reconstruction; Facial Hair Modeling
Abstract In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels in the image via texture analysis and then determine individual hair fibers, which are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms in order to adapt the hair parameters that best fit the image detections. The final hairs correspond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image, without assuming any training data or user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches.
Address Madrid; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IbPRIA
Notes MSIAU; 600.086; 600.130; 600.122; Approved no
Call Number Admin @ si @ Serial 3707
 

 
Author Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo
Title Detailed 3D face reconstruction from a single RGB image Type Journal
Year 2019 Publication Journal of WSCG Abbreviated Journal JWSCG
Volume 27 Issue 2 Pages 103-112
Keywords 3D Wrinkle Reconstruction; Face Analysis; Optimization
Abstract This paper introduces a method to obtain a detailed 3D reconstruction of facial skin from a single RGB image. To this end, we propose the exclusive use of an input image, without requiring any information about the observed material or training data to model the wrinkle properties. Wrinkles are detected and characterized directly from the image via a simple and effective parametric model that determines several features such as location, orientation, width, and height. With these ingredients, we propose to minimize a photometric error to retrieve the final detailed 3D map, which is initialized by current techniques based on deep learning. In contrast with other approaches, we only require estimating a depth parameter, making our approach fast and intuitive. An extensive experimental evaluation is presented on a wide variety of synthetic and real images, including different skin properties and facial expressions. In all cases, our method outperforms current approaches in 3D reconstruction accuracy, providing striking results for both large and fine wrinkles.
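The abstract's key simplification is minimizing a photometric error over a single depth parameter. A toy sketch of that idea with a linear 1-D shading model; the rendering function, the wrinkle profile, and the grid search are all illustrative assumptions standing in for the paper's actual renderer and optimizer:

```python
import numpy as np

def render(depth, profile):
    """Toy 1-D wrinkle shading model: intensity scales linearly with
    depth. This linear model is an assumption for illustration only."""
    return depth * profile

def photometric_error(depth, observed, profile):
    """Sum of squared differences between observed intensities and the
    rendering at a candidate depth, the single free parameter."""
    return float(((observed - render(depth, profile)) ** 2).sum())

# Cross-section of a wrinkle's shading pattern (made-up values)
profile = np.array([0.0, 0.4, 1.0, 0.4, 0.0])
observed = render(0.7, profile)  # synthetic observation, true depth 0.7

# With one parameter, even a grid search recovers the depth
depths = np.linspace(0.0, 1.0, 101)
best = depths[np.argmin([photometric_error(d, observed, profile)
                         for d in depths])]
```

Having a single unknown is what makes the estimation fast: the error is a 1-D curve in depth, so any standard scalar minimizer (or a coarse grid as above) suffices.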
Address 2019/11
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU; 600.086; 600.130; 600.122; Approved no
Call Number Admin @ si @ Serial 3708
 

 
Author Daniela Rato; Miguel Oliveira; Vitor Santos; Manuel Gomes; Angel Sappa
Title A sensor-to-pattern calibration framework for multi-modal industrial collaborative cells Type Journal Article
Year 2022 Publication Journal of Manufacturing Systems Abbreviated Journal JMANUFSYST
Volume 64 Issue Pages 497-507
Keywords Calibration; Collaborative cell; Multi-modal; Multi-sensor
Abstract Collaborative robotic industrial cells are workspaces where robots collaborate with human operators. In this context, safety is paramount, and for that a complete perception of the space in which the collaborative robot operates is necessary. To ensure this, collaborative cells are equipped with a large set of sensors of multiple modalities, covering the entire work volume. However, fusing the information from all these sensors requires an accurate extrinsic calibration. The calibration of such complex systems is challenging due to the number of sensors and modalities, and also due to the small overlapping fields of view between the sensors, which are positioned to capture different viewpoints of the cell. This paper proposes a sensor-to-pattern methodology that can calibrate a complex system such as a collaborative cell in a single optimization procedure. Our methodology can handle RGB and depth cameras, as well as LiDARs. Results show that our methodology accurately calibrates a collaborative cell containing three RGB cameras, a depth camera, and three 3D LiDARs.
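The single-procedure idea in the abstract amounts to stacking every sensor's pattern residuals into one vector that a single optimizer refines. A 2-D toy sketch of that residual structure (the rigid-transform parameterization, pattern points, and poses are invented; the paper works with full 3-D sensor models):

```python
import numpy as np

def rigid_transform(points, theta, t):
    """Apply a 2-D rigid transform (rotation theta, translation t)
    to an (N, 2) array of pattern points."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return points @ R.T + t

def calibration_residual(pattern, detections_per_sensor, poses):
    """Stack the sensor-to-pattern residuals of every sensor into one
    vector, so a single optimizer run refines all poses jointly,
    in the spirit of the paper's single-procedure calibration."""
    res = []
    for (theta, t), det in zip(poses, detections_per_sensor):
        res.append(det - rigid_transform(pattern, theta, np.asarray(t)))
    return np.concatenate(res).ravel()

# Hypothetical planar pattern corners and two sensor poses
pattern = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_poses = [(0.0, (0.5, 0.5)), (np.pi / 2, (1.0, 0.0))]
detections = [rigid_transform(pattern, th, np.asarray(t))
              for th, t in true_poses]

# At the true poses every stacked residual vanishes
r = calibration_residual(pattern, detections, true_poses)
```

A least-squares solver fed this stacked residual would optimize all sensor poses at once, which is what lets small pairwise overlaps between sensors still be reconciled through the shared pattern.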
Address
Corporate Author Thesis
Publisher Science Direct Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MSIAU; MACO; Approved no
Call Number Admin @ si @ ROS2022 Serial 3750
 

 
Author Xavier Otazu; Olivier Penacchio; Xim Cerda-Company
Title Brightness and colour induction through contextual influences in V1 Type Conference Article
Year 2015 Publication Scottish Vision Group 2015 SGV2015 Abbreviated Journal
Volume 12 Issue 9 Pages 1208-2012
Keywords
Abstract
Address Carnoustie; Scotland; March 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SGV
Notes NEUROBIT; Approved no
Call Number Admin @ si @ OPC2015a Serial 2632
 

 
Author Olivier Penacchio; Xavier Otazu; A. Wilkins; J. Harris
Title Uncomfortable images prevent lateral interactions in the cortex from providing a sparse code Type Conference Article
Year 2015 Publication European Conference on Visual Perception ECVP2015 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Liverpool; UK; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes NEUROBIT; Approved no
Call Number Admin @ si @ POW2015 Serial 2633
 

 
Author Xavier Otazu; Olivier Penacchio; Xim Cerda-Company
Title An excitatory-inhibitory firing rate model accounts for brightness induction, colour induction and visual discomfort Type Conference Article
Year 2015 Publication Barcelona Computational, Cognitive and Systems Neuroscience Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BARCCSYN
Notes NEUROBIT; Approved no
Call Number Admin @ si @ OPC2015b Serial 2634
 

 
Author Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu
Title Which tone-mapping is the best? A comparative study of tone-mapping perceived quality Type Abstract
Year 2014 Publication Perception Abbreviated Journal
Volume 43 Issue Pages 106
Keywords
Abstract Perception 43, ECVP Abstract Supplement. High-dynamic-range (HDR) imaging refers to methods designed to increase the brightness dynamic range available in standard digital imaging techniques. This increase is achieved by taking the same picture under different exposure values and mapping the intensity levels into a single image by way of a tone-mapping operator (TMO). Currently, there is no agreement on how to evaluate the quality of different TMOs. In this work we psychophysically evaluate 15 different TMOs, obtaining rankings based on the perceived properties of the resulting tone-mapped images. We performed two different experiments on a calibrated CRT display using 10 subjects: (1) a study of the internal relationships between grey levels and (2) a pairwise comparison of the resulting 15 tone-mapped images. In (1), observers internally matched the grey levels to a reference inside the tone-mapped images and in the real scene. In (2), observers performed a pairwise comparison of the tone-mapped images alongside the real scene. We obtained two rankings of the TMOs according to their performance. In (1) the best algorithm was iCAM by J. Kuang et al (2007), and in (2) the best algorithm was the TMO by Krawczyk et al (2005). Our results also show no correlation between the two rankings.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes NEUROBIT; 600.074; Approved no
Call Number Admin @ si @ CPO2014 Serial 2527