|
Records |
Links |
|
Author |
Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo |
|
|
Title |
Detailed 3D face reconstruction from a single RGB image |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of WSCG |
Abbreviated Journal |
JWSCG |
|
|
Volume |
27 |
Issue |
2 |
Pages |
103-112 |
|
|
Keywords |
3D Wrinkle Reconstruction; Face Analysis; Optimization |
|
|
Abstract |
This paper introduces a method to obtain a detailed 3D reconstruction of facial skin from a single RGB image. To this end, we propose the exclusive use of an input image, without requiring any information about the observed material or training data to model wrinkle properties. Wrinkles are detected and characterized directly from the image via a simple and effective parametric model that determines several features, such as location, orientation, width, and height. With these ingredients, we propose to minimize a photometric error to retrieve the final detailed 3D map, which is initialized by current deep-learning-based techniques. In contrast to other approaches, we only require estimating a depth parameter, making our approach fast and intuitive. An extensive experimental evaluation is presented on a wide variety of synthetic and real images, including different skin properties and facial expressions. In all cases, our method outperforms current approaches in terms of 3D reconstruction accuracy, providing striking results for both large and fine wrinkles. |
|
|
Address |
2019/11 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; 600.086; 600.130; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3708 |
|
Permanent link to this record |
|
|
|
|
Author |
Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu |
|
|
Title |
New Opportunities for Computer Vision-Based Assistive Technology Systems for the Visually Impaired |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Computer |
Abbreviated Journal |
COMP |
|
|
Volume |
47 |
Issue |
4 |
Pages |
52-58 |
|
|
Keywords |
|
|
|
Abstract |
Computing advances and increased smartphone use give technology system designers greater flexibility in exploiting computer vision to support visually impaired users. Understanding these users' needs will certainly provide insight for the development of computing devices with improved usability. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0018-9162 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; |
Approved |
no |
|
|
Call Number |
Admin @ si @ TSR2014a |
Serial |
2317 |
|
Permanent link to this record |
|
|
|
|
Author |
Bogdan Raducanu; Fadi Dornaika |
|
|
Title |
Embedding new observations via sparse-coding for non-linear manifold learning |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
47 |
Issue |
1 |
Pages |
480-492 |
|
|
Keywords |
|
|
|
Abstract |
Non-linear dimensionality reduction techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data (the out-of-sample problem). For the first aspect, the proposed solutions have, in general, been heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples onto an existing manifold. Past works addressing these two aspects were heavily parametric, in the sense that optimal performance is only achieved for a suitable parameter choice that must be known in advance. In this paper, we demonstrate that sparse representation theory not only serves for automatic graph construction, as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Taking Laplacian Eigenmaps as a case study, we apply our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted with K-nearest neighbor (KNN) and Kernel Support Vector Machine (KSVM) classifiers on six public face datasets. The experimental results show that the proposed model achieves high categorization effectiveness as well as high consistency with the non-linear embeddings/manifolds obtained in batch mode. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; |
Approved |
no |
|
|
Call Number |
Admin @ si @ RaD2013b |
Serial |
2316 |
|
Permanent link to this record |
|
|
|
|
Author |
Cesar Isaza; Joaquin Salas; Bogdan Raducanu |
|
|
Title |
Rendering ground truth data sets to detect shadows cast by static objects in outdoors |
Type |
Journal Article |
|
Year |
2014 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
70 |
Issue |
1 |
Pages |
557-571 |
|
|
Keywords |
Synthetic ground truth data set; Sun position; Shadow detection; Static objects shadow detection |
|
|
Abstract |
In our work, we are particularly interested in studying the shadows cast by static objects in outdoor environments during daytime. To assess the accuracy of a shadow detection algorithm, we need ground truth information. Collecting such information is a very tedious task because it requires manual annotation. To overcome this severe limitation, we propose in this paper a methodology to automatically render ground truth using a virtual environment. To increase the degree of realism and usefulness of the simulated environment, we incorporate into the scenario the precise longitude, latitude, and elevation of the actual location of the object, as well as the sun's position for a given time and day. To evaluate our method, we consider a qualitative and a quantitative comparison. In the quantitative one, we analyze the shadow cast by a real object in a particular geographical location and by its corresponding rendered model. To evaluate the methodology qualitatively, we use ground truth images obtained both manually and automatically. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer US |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1380-7501 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; |
Approved |
no |
|
|
Call Number |
Admin @ si @ ISR2014 |
Serial |
2229 |
|
Permanent link to this record |
|
|
|
|
Author |
Manuel Graña; Bogdan Raducanu |
|
|
Title |
Special Issue on Bioinspired and Knowledge-Based Techniques and Applications |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
|
Issue |
|
Pages |
1-3 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; |
Approved |
no |
|
|
Call Number |
Admin @ si @ GrR2015 |
Serial |
2598 |
|
Permanent link to this record |
|
|
|
|
Author |
Juan Ramon Terven Salinas; Bogdan Raducanu; Maria Elena Meza-de-Luna; Joaquin Salas |
|
|
Title |
Head-gestures mirroring detection in dyadic social interactions with computer vision-based wearable devices |
Type |
Journal Article |
|
Year |
2016 |
Publication |
Neurocomputing |
Abbreviated Journal |
NEUCOM |
|
|
Volume |
175 |
Issue |
B |
Pages |
866-876 |
|
|
Keywords |
Head gestures recognition; Mirroring detection; Dyadic social interaction analysis; Wearable devices |
|
|
Abstract |
During face-to-face human interaction, nonverbal communication plays a fundamental role. A relevant aspect that takes part during social interactions is represented by mirroring, in which a person tends to mimic the non-verbal behavior (head and body gestures, vocal prosody, etc.) of the counterpart. In this paper, we introduce a computer vision-based system to detect mirroring in dyadic social interactions with the use of a wearable platform. In our context, mirroring is inferred as simultaneous head noddings displayed by the interlocutors. Our approach consists of the following steps: (1) facial features extraction; (2) facial features stabilization; (3) head nodding recognition; and (4) mirroring detection. Our system achieves a mirroring detection accuracy of 72% on a custom mirroring dataset. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.072; 600.068; |
Approved |
no |
|
|
Call Number |
Admin @ si @ TRM2016 |
Serial |
2721 |
|
Permanent link to this record |
|
|
|
|
Author |
Maria Elena Meza-de-Luna; Juan Ramon Terven Salinas; Bogdan Raducanu; Joaquin Salas |
|
|
Title |
Assessing the Influence of Mirroring on the Perception of Professional Competence using Wearable Technology |
Type |
Journal Article |
|
Year |
2016 |
Publication |
IEEE Transactions on Affective Computing |
Abbreviated Journal |
TAC |
|
|
Volume |
9 |
Issue |
2 |
Pages |
161-175 |
|
|
Keywords |
Mirroring; Nodding; Competence; Perception; Wearable Technology |
|
|
Abstract |
Nonverbal communication is an intrinsic part of daily face-to-face meetings. A frequently observed behavior during social interactions is mirroring, in which one person tends to mimic the attitude of the counterpart. This paper shows that a computer vision system can be used to predict the perception of competence in dyadic interactions through the automatic detection of mirroring events. To test our hypothesis, we developed: (1) a social assistant for mirroring detection, using a wearable device that includes a video camera, and (2) an automatic classifier for the perception of competence, using the number of nodding gestures and mirroring events as predictors. For our study, we used a mixed-method approach in an experimental design where 48 participants acting as customers interacted with a confederate psychologist. We found that the number of nods or mirroring events has a significant influence on the perception of competence. Our results suggest that: (1) customer mirroring is a better predictor than psychologist mirroring; (2) the number of the psychologist's nods is a better predictor than the number of the customer's nods; and (3) except for psychologist mirroring, the computer vision algorithm we used worked about equally well whether it acquired images from wearable smartglasses or from fixed cameras. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.072; |
Approved |
no |
|
|
Call Number |
Admin @ si @ MTR2016 |
Serial |
2826 |
|
Permanent link to this record |
|
|
|
|
Author |
Oscar Argudo; Marc Comino; Antonio Chica; Carlos Andujar; Felipe Lumbreras |
|
|
Title |
Segmentation of aerial images for plausible detail synthesis |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Computers & Graphics |
Abbreviated Journal |
CG |
|
|
Volume |
71 |
Issue |
|
Pages |
23-34 |
|
|
Keywords |
Terrain editing; Detail synthesis; Vegetation synthesis; Terrain rendering; Image segmentation |
|
|
Abstract |
The visual enrichment of digital terrain models with plausible synthetic detail requires the segmentation of aerial images into a suitable collection of categories. In this paper we present a complete pipeline for segmenting high-resolution aerial images into a user-defined set of categories distinguishing, e.g., terrain, sand, snow, water, and different types of vegetation. This segmentation-for-synthesis problem implies that per-pixel categories must be established according to the algorithms chosen for rendering the synthetic detail. This precludes the definition of a universal set of labels and hinders the construction of large training sets. Since artists might choose to add new categories on the fly, the whole pipeline must be robust against unbalanced datasets and fast in both training and inference. Under these constraints, we analyze the contribution of common per-pixel descriptors and compare the performance of state-of-the-art supervised learning algorithms. We report the findings of two user studies. The first was conducted to analyze human accuracy when manually labeling aerial images. The second compares detailed terrains built using different segmentation strategies, including official land cover maps. These studies demonstrate that our approach can be used to turn digital elevation models into fully featured, detailed terrains with minimal authoring effort. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0097-8493 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MSIAU; 600.086; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ ACC2018 |
Serial |
3147 |
|
Permanent link to this record |
|
|
|
|
Author |
Carola Figueroa Flores; Abel Gonzalez-Garcia; Joost Van de Weijer; Bogdan Raducanu |
|
|
Title |
Saliency for fine-grained object recognition in domains with scarce training data |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Pattern Recognition |
Abbreviated Journal |
PR |
|
|
Volume |
94 |
Issue |
|
Pages |
62-73 |
|
|
Keywords |
|
|
|
Abstract |
This paper investigates the role of saliency in improving the classification accuracy of a Convolutional Neural Network (CNN) when only scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture, which is used to modulate the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, not on their functionality in a complete vision pipeline. Our proposed pipeline allows saliency methods to be evaluated on the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially when training data is scarce. Furthermore, our experiments show that saliency methods that produce better saliency maps (as measured by traditional saliency benchmarks) also yield larger performance gains when applied in an object recognition pipeline. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.109; 600.141; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ FGW2019 |
Serial |
3264 |
|
Permanent link to this record |