Author G. Blasco; Simone Balocco; J. Puig; J. Sanchez-Gonzalez; W. Ricart; J. Daunis-i-Estadella; X. Molina; S. Pedraza; J. M. Fernandez-Real
Title Carotid pulse wave velocity by magnetic resonance imaging is increased in middle-aged subjects with the metabolic syndrome Type Journal Article
Year 2015 Publication International Journal of Cardiovascular Imaging Abbreviated Journal ICJI
Volume 31 Issue 3 Pages 603-612
Keywords Metabolic syndrome; Arterial stiffness; Pulse wave velocity; Carotid artery; Magnetic resonance
Abstract Arterial pulse wave velocity (PWV), an independent predictor of cardiovascular disease, physiologically increases with age; however, growing evidence suggests metabolic syndrome (MetS) accelerates this increase. Magnetic resonance imaging (MRI) enables reliable noninvasive assessment of arterial stiffness by measuring arterial PWV in specific vascular segments. We investigated the association between the presence of MetS and its components with carotid PWV (cPWV) in asymptomatic subjects without diabetes. We assessed cPWV by MRI in 61 individuals (mean age, 55.3 ± 14.1 years; median age, 55 years): 30 with MetS and 31 controls with similar age, sex, body mass index, and LDL-cholesterol levels. The study population was dichotomized by the median age. To remove the physiological association between PWV and age, unpaired t tests and multiple regression analyses were performed using the residuals of the regression between PWV and age. cPWV was higher in middle-aged subjects with MetS than in those without (p = 0.001), but no differences were found in older subjects (p = 0.313). cPWV was associated with diastolic blood pressure (r = 0.276, p = 0.033) and waist circumference (r = 0.268, p = 0.038). The presence of MetS was associated with increased cPWV regardless of age, sex, blood pressure, and waist circumference (p = 0.007). The MetS components contributing independently to an increased cPWV were hypertension (p = 0.018) and hypertriglyceridemia (p = 0.002). The presence of MetS is associated with an increased cPWV in middle-aged subjects. In particular, hypertension and hypertriglyceridemia may contribute to early progression of carotid stiffness. [A sketch of the age-residualization step follows this record.]
Address
Corporate Author Thesis
Publisher Springer Netherlands Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1569-5794 ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ BBP2015 Serial 2670
Permanent link to this record
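
A minimal sketch of the age-residualization analysis described in the abstract above, reconstructed from the abstract alone (not the authors' code); the data below are synthetic stand-ins:

    # Regress cPWV on age, then compare the residuals between groups with an
    # unpaired t test, removing the physiological age effect from the contrast.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    age = rng.uniform(30, 80, 61)                     # hypothetical ages
    cpwv = 4.0 + 0.05 * age + rng.normal(0, 0.5, 61)  # hypothetical cPWV (m/s)
    mets = rng.integers(0, 2, 61).astype(bool)        # hypothetical MetS labels

    # Ordinary least-squares fit of cPWV on age; residuals are the
    # age-free component of cPWV.
    slope, intercept, *_ = stats.linregress(age, cpwv)
    residuals = cpwv - (intercept + slope * age)

    t, p = stats.ttest_ind(residuals[mets], residuals[~mets], equal_var=False)
    print(f"t = {t:.2f}, p = {p:.3f}")

The same residuals can feed the multiple regression on the MetS components mentioned in the abstract.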
 

 
Author G. D. Evangelidis; Ferran Diego; Joan Serrat; Antonio Lopez
Title Slice Matching for Accurate Spatio-Temporal Alignment Type Conference Article
Year 2011 Publication ICCV Workshop on Visual Surveillance Abbreviated Journal
Volume Issue Pages
Keywords video alignment
Abstract Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly-moving, or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be automatically highlighted. This primarily aims at visual surveillance, but the method can be adopted as-is by other related video applications, such as object transfer (augmented reality) or high dynamic range video. We build on a slice matching scheme to first synchronize the sequences, and we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous work. [A simplified slice-matching synchronization sketch follows this record.]
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VS
Notes ADAS Approved no
Call Number Admin @ si @ EDS2011; ADAS @ adas @ eds2011a Serial 1861
Permanent link to this record
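
An illustrative sketch of slice-based synchronization, a strong simplification of the paper's scheme and not the authors' implementation: each frame is described by one vertical slice of pixels, and the temporal offset that best correlates the two slice sequences is taken as the synchronization.

    import numpy as np

    def frame_slices(video):            # video: (T, H, W) grayscale array
        return video[:, :, video.shape[2] // 2].astype(np.float64)  # (T, H)

    def best_offset(slices_a, slices_b, max_shift=50):
        a = (slices_a - slices_a.mean()) / slices_a.std()
        b = (slices_b - slices_b.mean()) / slices_b.std()
        scores = {}
        for d in range(-max_shift, max_shift + 1):
            # overlapping frame ranges of the two sequences under shift d
            ta = slice(max(d, 0), min(len(a), len(b) + d))
            tb = slice(max(-d, 0), min(len(b), len(a) - d))
            if ta.stop - ta.start > 10:
                scores[d] = (a[ta] * b[tb]).mean()  # correlation of slice stacks
        return max(scores, key=scores.get)

    # two hypothetical sequences; frame t of a corresponds to frame t+7 of b
    rng = np.random.default_rng(1)
    a = rng.normal(size=(200, 120, 160))
    b = np.roll(a, 7, axis=0) + 0.1 * rng.normal(size=a.shape)
    print(best_offset(frame_slices(a), frame_slices(b)))   # -> -7 (b lags a)

The full method then registers corresponding frames spatially; only the temporal step is sketched here.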
 

 
Author G. Estape; Enric Marti
Title L’ús d’aplicacions de visualització 3D com a eina d’aprenentatge en activitats formatives dirigides i autònomes: el cas del programa Bluestar [The use of 3D visualization applications as a learning tool in guided and autonomous training activities: the case of the Bluestar program] Type Miscellaneous
Year 2008 Publication V Jornades d’Innovació Docent UAB Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM Approved no
Call Number IAM @ iam @ ESM2008 Serial 1495
Permanent link to this record
 

 
Author G. Thorvaldsen; Joana Maria Pujadas-Mora; T. Andersen; L. Eikvil; Josep Llados; Alicia Fornes; Anna Cabre
Title A Tale of two Transcriptions Type Journal
Year 2015 Publication Historical Life Course Studies Abbreviated Journal
Volume 2 Issue Pages 1-19
Keywords Nominative Sources; Census; Vital Records; Computer Vision; Optical Character Recognition; Word Spotting
Abstract This article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world’s longest series of preserved vital records. Thus, in the project “Five Centuries of Marriages” (5CofM) at the Autonomous University of Barcelona’s Center for Demographic Studies, the Barcelona Historical Marriage Database has been built. More than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses. Fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned pictures on the Internet, opening up the possibility for further international cooperation on automating the transcription of historic source materials. As in projects digitizing printed materials, the optimal solution for handwritten sources is likely to be a combination of manual transcription and machine-assisted recognition.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2352-6343 ISBN Medium
Area Expedition Conference
Notes DAG; 600.077; 602.006 Approved no
Call Number Admin @ si @ TPA2015 Serial 2582
Permanent link to this record
 

 
Author Gabriel Villalonga
Title Leveraging Synthetic Data to Create Autonomous Driving Perception Systems Type Book Whole
Year 2021 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Manually annotating images to develop vision models has been a major bottleneck since computer vision and machine learning started to walk together, and it has become even more evident since computer vision came to rest on the shoulders of data-hungry deep learning techniques. When addressing on-board perception for autonomous driving, the curse of data annotation is exacerbated by the use of additional sensors such as LiDAR. Therefore, any approach that reduces such time-consuming and costly work is of high interest for autonomous driving and, in fact, for any application requiring some sort of artificial perception. In the last decade, it has been shown that leveraging synthetic data is a paradigm worth pursuing in order to minimize manual data annotation. The reason is that the automatic process of generating synthetic data can also produce different types of associated annotations (e.g. object bounding boxes for synthetic images and LiDAR pointclouds, pixel/point-wise semantic information, etc.). Directly using synthetic data to train deep perception models may not be the definitive solution in all circumstances, since a synth-to-real domain shift can appear. In this context, this work focuses on leveraging synthetic data to alleviate manual annotation for three perception tasks related to driving assistance and autonomous driving. In all cases, we assume the use of deep convolutional neural networks (CNNs) to develop our perception models.
The first task addresses traffic sign recognition (TSR), a kind of multi-class classification problem. We assume that the number of sign classes to be recognized must be suddenly increased without having annotated samples to perform the corresponding TSR CNN re-training. We show that by leveraging synthetic samples of such new classes and transforming them with a generative adversarial network (GAN) trained on the known classes (i.e. without using samples from the new classes), it is possible to re-train the TSR CNN to properly classify all the signs for a ∼1/4 ratio of new/known sign classes. The second task addresses on-board 2D object detection, focusing on vehicles and pedestrians. In this case, we assume that we receive a set of images without the annotations required to train an object detector, i.e. without object bounding boxes. Therefore, our goal is to self-annotate these images so that they can later be used to train the desired object detector. To reach this goal, we leverage synthetic data and propose a semi-supervised learning approach based on the co-training idea. In fact, we use a GAN to reduce the synth-to-real domain shift before applying co-training. Our quantitative results show that co-training and GAN-based image-to-image translation complement each other, allowing the training of object detectors without manual annotation while almost reaching the upper-bound performance of detectors trained from human annotations. While in the previous tasks we focus on vision-based perception, the third task focuses on LiDAR pointclouds. Our initial goal was to develop a 3D object detector trained on synthetic LiDAR-style pointclouds. While for images we may expect a synth/real-to-real domain shift due to differences in appearance (e.g. when source and target images come from different camera sensors), we did not expect one for LiDAR pointclouds, since these active sensors factor out appearance and provide sampled shapes. However, in practice, we have seen that there can be a domain shift even among real-world LiDAR pointclouds. Factors such as the sampling parameters of the LiDARs, the sensor-suite configuration on-board the ego-vehicle, and the human annotation of 3D bounding boxes do induce a domain shift. We show this through comprehensive experiments with different publicly available datasets and 3D detectors. This redirected our goal towards the design of a GAN for pointcloud-to-pointcloud translation, a relatively unexplored topic.
Finally, it is worth mentioning that all the synthetic datasets used for these three tasks have been designed and generated in the context of this PhD work and will be publicly released. Overall, we think this PhD presents several steps forward to encourage leveraging synthetic data for developing deep perception models in the field of driving assistance and autonomous driving.
Address February 2021
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Antonio Lopez; German Ros
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-122714-2-3 Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ Vil2021 Serial 3599
Permanent link to this record
 

 
Author Gabriel Villalonga; Antonio Lopez
Title Co-Training for On-Board Deep Object Detection Type Journal Article
Year 2020 Publication IEEE Access Abbreviated Journal ACCESS
Volume Issue Pages 194441-194456
Keywords
Abstract Providing ground-truth supervision to train visual models has been a bottleneck over the years, exacerbated by domain shifts that degrade the performance of such models. This was the case when visual tasks relied on handcrafted features and shallow machine learning, and, despite its unprecedented performance gains, the problem remains open within the deep learning paradigm due to its data-hungry nature. The best-performing deep vision-based object detectors are trained in a supervised manner by relying on human-labeled bounding boxes which localize class instances (i.e. objects) within the training images. Thus, object detection is one such task for which human labeling is a major bottleneck. In this article, we assess co-training as a semi-supervised learning method for self-labeling objects in unlabeled images, so reducing the human-labeling effort for developing deep object detectors. Our study pays special attention to a scenario involving domain shift; in particular, when we have automatically generated virtual-world images with object bounding boxes and we have real-world images which are unlabeled. Moreover, we are particularly interested in using co-training for deep object detection in the context of driver assistance systems and/or self-driving vehicles. Thus, using well-established datasets and protocols for object detection in these application contexts, we show how co-training is a paradigm worth pursuing for alleviating object labeling, working both alone and together with task-agnostic domain adaptation. [A minimal co-training sketch follows this record.]
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ ViL2020 Serial 3488
Permanent link to this record
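
A minimal co-training sketch, written as a classification analogue of the paper's object-detection setting (the views, numbers, and models below are illustrative assumptions, not the authors' protocol): two learners see different feature views of the data, and each round the most confident predictions of one learner become pseudo-labels for training the other.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=20,
                               n_informative=10, random_state=0)
    views = [slice(0, 10), slice(10, 20)]       # two complementary feature views
    labeled = np.arange(50)                     # small human-labeled pool
    unlabeled = set(range(50, 1000))
    idx = [list(labeled), list(labeled)]        # per-learner training indices
    lab = [list(y[labeled]), list(y[labeled])]  # per-learner training labels
    models = [LogisticRegression(max_iter=1000) for _ in range(2)]

    for _ in range(5):                          # a few co-training rounds
        for m in (0, 1):
            models[m].fit(X[idx[m]][:, views[m]], lab[m])
        picked = set()
        for m in (0, 1):                        # m pseudo-labels for the other
            cand = np.array(sorted(unlabeled))
            proba = models[m].predict_proba(X[cand][:, views[m]])
            top = np.argsort(proba.max(axis=1))[-20:]   # most confident samples
            for i in top:
                idx[1 - m].append(int(cand[i]))
                lab[1 - m].append(int(proba[i].argmax()))
                picked.add(int(cand[i]))
        unlabeled -= picked                     # transferred out of the pool

    print(len(idx[0]), "training samples for learner 0 after co-training")

In the paper the learners are object detectors and the confident self-labelings are bounding boxes, but the exchange loop has this same shape.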
 

 
Author Gabriel Villalonga; Joost Van de Weijer; Antonio Lopez
Title Recognizing new classes with synthetic data in the loop: application to traffic sign recognition Type Journal Article
Year 2020 Publication Sensors Abbreviated Journal SENS
Volume 20 Issue 3 Pages 583
Keywords
Abstract On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a new/known class ratio of ∼1/4; even for more challenging ratios such as ∼4/1, the results are also very positive. [A pipeline sketch of this idea follows this record.]
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; ADAS; 600.118; 600.120 Approved no
Call Number Admin @ si @ VWL2020 Serial 3405
Permanent link to this record
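
A pipeline sketch of the paper's idea as read from the abstract (PyTorch; all names are hypothetical placeholders, not the authors' code): a generator trained for synthetic-to-real translation on the known classes is applied to synthetic images of the new classes, and the classifier is then re-trained on the mix.

    import torch
    import torch.nn as nn
    from torchvision.models import resnet101

    def translate(generator, synth_batch):
        """Map synthetic images into the real-image domain."""
        generator.eval()
        with torch.no_grad():
            return generator(synth_batch)

    # generator: assumed pre-trained CycleGAN-style synth->real generator
    # synth_new: synthetic samples of the newly added sign classes
    def retrain_classifier(generator, real_known, y_known, synth_new, y_new,
                           n_classes, epochs=10, lr=1e-4):
        model = resnet101(num_classes=n_classes)
        x = torch.cat([real_known, translate(generator, synth_new)])
        y = torch.cat([y_known, y_new])
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            perm = torch.randperm(len(x))
            for i in range(0, len(x), 32):          # mini-batches of 32
                b = perm[i:i + 32]
                opt.zero_grad()
                loss = loss_fn(model(x[b]), y[b])
                loss.backward()
                opt.step()
        return model

The key design point from the abstract is that the generator never sees samples of the new classes during its own training; it is only applied to them.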
 

 
Author Gabriel Villalonga; Sebastian Ramos; German Ros; David Vazquez; Antonio Lopez
Title 3D Pedestrian Detection via Random Forest Type Miscellaneous
Year 2014 Publication European Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 231-238
Keywords Pedestrian Detection
Abstract Our demo focuses on showing the extraordinary performance of our novel 3D pedestrian detector along with its simplicity and real-time capabilities. This detector has been designed for autonomous driving applications, but it can also be applied in other scenarios, covering both outdoor and indoor applications.
Our pedestrian detector is based on the combination of a random forest classifier with HOG-LBP features and the inclusion of a preprocessing stage based on 3D scene information in order to precisely determine the image regions where the detector should search for pedestrians. This approach results in a highly accurate system that runs in real time, as required by many computer vision and robotics applications. [A rough sketch of the classification stage follows this record.]
Address Zurich; Switzerland; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCV-Demo
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ VRR2014 Serial 2570
Permanent link to this record
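
A rough sketch of the detector's classification stage, reconstructed from the abstract using HOG only (the demo combines HOG with LBP and a 3D preprocessing stage that is omitted here; all data are stand-ins): a random forest scores candidate windows described by gradient-histogram features.

    import numpy as np
    from skimage.feature import hog
    from sklearn.ensemble import RandomForestClassifier

    def window_features(windows):
        """One HOG descriptor per 128x64 grayscale window."""
        return np.array([hog(w, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(2, 2)) for w in windows])

    rng = np.random.default_rng(0)
    pos = rng.random((40, 128, 64))     # stand-ins for pedestrian crops
    neg = rng.random((40, 128, 64))     # stand-ins for background crops
    X = window_features(np.concatenate([pos, neg]))
    y = np.r_[np.ones(40), np.zeros(40)]

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    # At test time, only windows surviving the 3D ground-plane filtering
    # would be scored: clf.predict_proba(window_features(candidates))[:, 1]

The 3D preprocessing mentioned in the abstract acts before this stage, pruning the candidate windows so the classifier runs in real time.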
 

 
Author Gabriela Ramirez; Esau Villatoro; Bogdan Ionescu; Hugo Jair Escalante; Sergio Escalera; Martha Larson; Henning Muller; Isabelle Guyon
Title Overview of the Multimedia Information Processing for Personality & Social Networks Analysis Contest Type Conference Article
Year 2018 Publication Multimedia Information Processing for Personality and Social Networks Analysis (MIPPSNA 2018) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Beijing; China; August 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPRW
Notes HUPBA Approved no
Call Number Admin @ si @ RVI2018 Serial 3211
Permanent link to this record
 

 
Author Galadrielle Humblot-Renaux; Sergio Escalera; Thomas B. Moeslund
Title Beyond AUROC & co. for evaluating out-of-distribution detection performance Type Conference Article
Year 2023 Publication Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages 3880-3889
Keywords
Abstract While there has been a growing research interest in developing out-of-distribution (OOD) detection methods, there has been comparably little discussion around how these methods should be evaluated. Given their relevance for safe(r) AI, it is important to examine whether the basis for comparing OOD detection methods is consistent with practical needs. In this work, we take a closer look at the go-to metrics for evaluating OOD detection, and question the approach of exclusively reducing OOD detection to a binary classification task with little consideration for the detection threshold. We illustrate the limitations of current metrics (AUROC & its friends) and propose a new metric, the Area Under the Threshold Curve (AUTC), which explicitly penalizes poor separation between ID and OOD samples. Scripts and data are available at https://github.com/glhr/beyond-auroc [A hedged numerical sketch of the AUTC idea follows this record.]
Address Vancouver; Canada; June 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HUPBA Approved no
Call Number Admin @ si @ HEM2023 Serial 3918
Permanent link to this record
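
A hedged numerical sketch of the AUTC idea, a paraphrase rather than the reference definition (consult the repository linked in the abstract for the authors' implementation): sweep the detection threshold across the score range and average the areas under the false-positive-rate and false-negative-rate curves, so ID/OOD score distributions that overlap or sit close together are penalized even when their ranking (AUROC) looks fine.

    import numpy as np

    def autc(id_scores, ood_scores, n_thresholds=1000):
        """Assumes higher score = more OOD. Lower values are better."""
        lo = min(id_scores.min(), ood_scores.min())
        hi = max(id_scores.max(), ood_scores.max())
        ts = np.linspace(lo, hi, n_thresholds)
        # FPR: ID samples wrongly flagged as OOD; FNR: OOD samples missed.
        fpr = np.array([(id_scores >= t).mean() for t in ts])
        fnr = np.array([(ood_scores < t).mean() for t in ts])
        x = (ts - lo) / (hi - lo)               # normalized threshold axis
        return 0.5 * (np.trapz(fpr, x) + np.trapz(fnr, x))

    rng = np.random.default_rng(0)
    print(autc(rng.normal(0, 1, 500), rng.normal(4, 1, 500)))  # separated: low
    print(autc(rng.normal(0, 1, 500), rng.normal(1, 1, 500)))  # overlapping: higher

Unlike AUROC, which only measures ranking, this threshold-averaged view rewards a wide margin between the two score distributions.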
 

 
Author Gemma Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo
Title 2D-to-3D Facial Expression Transfer Type Conference Article
Year 2018 Publication 24th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 2008 - 2013
Keywords
Abstract Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape, obtained from standard factorization approaches over the input video, using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops. [A toy sketch of the cross-mesh mapping idea follows this record.]
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes ADAS; 600.086; 600.130; 600.118 Approved no
Call Number Admin @ si @ RLM2018 Serial 3232
Permanent link to this record
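
A toy sketch of the cross-mesh mapping idea, a heavy simplification of the paper's macro-segment formulation (correspondences here are plain nearest neighbors, and all meshes are synthetic stand-ins): expression displacements defined on a source blend shape are carried over to a target rest shape through point correspondences.

    import numpy as np
    from scipy.spatial import cKDTree

    def transfer_expression(src_rest, src_expr, tgt_rest):
        """src_rest/src_expr: (N, 3) source mesh at rest and in expression;
        tgt_rest: (M, 3) target mesh at rest. Returns target in expression."""
        deltas = src_expr - src_rest        # per-vertex expression offsets
        tree = cKDTree(src_rest)
        _, nn = tree.query(tgt_rest)        # nearest source vertex per target
        return tgt_rest + deltas[nn]        # copy the offset across meshes

    # hypothetical data: a source mesh, a 'smile' offset on its upper half,
    # and a differently sampled, differently scaled target mesh
    rng = np.random.default_rng(0)
    src = rng.random((500, 3))
    smile = src + np.array([0.0, 0.05, 0.0]) * (src[:, [1]] > 0.5)
    tgt = 1.2 * rng.random((400, 3))
    print(transfer_expression(src, smile, tgt).shape)   # (400, 3)

The paper instead enforces that whole macro-segments map onto semantically equivalent regions, which is what keeps the transfer geometrically consistent across topology changes.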
 

 
Author Gemma Roig; Xavier Boix; Fernando De la Torre
Title Optimal Feature Selection for Subspace Image Matching Type Conference Article
Year 2009 Publication 2nd IEEE International Workshop on Subspace Methods, in conjunction with ICCV 2009 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Image matching has been a central research topic in computer vision over the last decades. Typical approaches to correspondence involve matching feature points between images. In this paper, we present a novel problem for establishing correspondences between a sparse set of image features and a previously learned subspace model. We formulate the matching task as an energy minimization, and jointly optimize over all possible feature assignments and parameters of the subspace model. This problem is in general NP-hard. We propose a convex relaxation approximation, and develop two optimization strategies: naïve gradient-descent and quadratic programming. Alternatively, we reformulate the optimization criterion as a sparse eigenvalue problem, and solve it using a recently proposed backward greedy algorithm. Experimental results on facial feature detection show that the quadratic programming solution provides a better selection mechanism for relevant features. [A generic backward-greedy sketch follows this record.]
Address Kyoto, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes Approved no
Call Number Admin @ si @ RBT2009 Serial 1233
Permanent link to this record
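
A generic backward-greedy sketch for a sparse eigenvalue problem, in the spirit of the reformulation mentioned in the abstract (this is a textbook version of the greedy idea, not the authors' algorithm): starting from all features, repeatedly drop the feature whose removal keeps the leading eigenvalue of the remaining submatrix largest, until k features remain.

    import numpy as np

    def backward_greedy(A, k):
        """A: (n, n) symmetric matrix; returns indices of k kept features."""
        keep = list(range(A.shape[0]))
        while len(keep) > k:
            best_drop, best_val = None, -np.inf
            for j in keep:
                rest = [i for i in keep if i != j]
                val = np.linalg.eigvalsh(A[np.ix_(rest, rest)])[-1]  # top eig
                if val > best_val:
                    best_drop, best_val = j, val
            keep.remove(best_drop)
        return keep

    rng = np.random.default_rng(0)
    M = rng.normal(size=(12, 12))
    A = M @ M.T                       # symmetric PSD test matrix
    print(backward_greedy(A, 4))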
 

 
Author Gemma Roig; Xavier Boix; R. de Nijs; Sebastian Ramos; K. Kühnlenz; Luc Van Gool
Title Active MAP Inference in CRFs for Efficient Semantic Segmentation Type Conference Article
Year 2013 Publication 15th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 2312 - 2319
Keywords Semantic Segmentation
Abstract Most MAP inference algorithms for CRFs optimize an energy function knowing all the potentials. In this paper, we focus on CRFs where the computational cost of instantiating the potentials is orders of magnitude higher than MAP inference. This is often the case in semantic image segmentation, where most potentials are instantiated by slow classifiers fed with costly features. We introduce Active MAP inference 1) to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest of the parameters of the potentials unknown, and 2) to estimate the MAP labeling from such an incomplete energy function. Results for semantic segmentation benchmarks, namely PASCAL VOC 2010 [5] and MSRC-21 [19], show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains. [A high-level sketch of the active inference loop follows this record.]
Address Sydney; Australia; December 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1550-5499 ISBN Medium
Area Expedition Conference ICCV
Notes ADAS; 600.057 Approved no
Call Number ADAS @ adas @ RBN2013 Serial 2377
Permanent link to this record
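
A high-level sketch of the active inference loop as described in the abstract (the loop structure and uncertainty rule below are assumptions for illustration; the paper's selection and estimation rules are more sophisticated and include pairwise terms): expensive unary potentials are instantiated only for the nodes where the current labeling is most uncertain.

    import numpy as np

    def expensive_unary(node, n_labels=3):
        """Stand-in for a slow classifier yielding a node's true potential."""
        rng = np.random.default_rng(node)
        u = rng.random(n_labels)
        return -np.log(u / u.sum())

    n_nodes, n_labels, budget = 100, 3, 20
    rng = np.random.default_rng(0)
    cheap = rng.random((n_nodes, n_labels))               # fast, rough scores
    unary = -np.log(cheap / cheap.sum(1, keepdims=True))  # current energy terms
    known = np.zeros(n_nodes, bool)

    for _ in range(budget):
        prob = np.exp(-unary)
        prob /= prob.sum(1, keepdims=True)
        entropy = -(prob * np.log(prob)).sum(1)   # per-node label uncertainty
        entropy[known] = -np.inf                  # instantiate each node once
        pick = int(entropy.argmax())              # most uncertain node next
        unary[pick] = expensive_unary(pick)       # pay the classifier cost
        known[pick] = True

    labeling = unary.argmin(1)  # MAP from unaries alone; the CRF adds pairwise
    print(known.sum(), "of", n_nodes, "expensive potentials instantiated")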
 

 
Author Gemma Rotger
Title Lifelike Humans: Detailed Reconstruction of Expressive Human Faces Type Book Whole
Year 2021 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Developing human-like digital characters is a challenging task, since humans are used to recognizing their fellows and find computer-generated characters inadequately humanized. To meet the standards of videogame and digital film productions, it is necessary to model and animate these characters as closely as possible to human beings. However, this is an arduous and expensive task, since many artists and specialists are required to work on a single character. Therefore, we found it an interesting option to study the automatic creation of detailed characters through inexpensive setups. In this work, we develop novel techniques to build detailed characters by combining different aspects that stand out when developing realistic characters: skin detail, facial hair, expressions, and microexpressions. We examine each of the mentioned areas with the aim of automatically recovering each of the parts without user interaction or training data. We study the problems not only for their robustness but also for the simplicity of the setup, preferring single-image methods with uncontrolled illumination that can be easily computed with the commodity of a standard laptop. A detailed face with wrinkles and skin details is vital to developing a realistic character. In this work, we introduce our method to automatically describe facial wrinkles on the image and transfer them to the recovered base face. We then advance to facial hair recovery by resolving a fitting problem with a novel parametrization model. Lastly, we develop a mapping function that allows transferring expressions and microexpressions between different meshes, which provides realistic animations for our detailed mesh. We cover all the mentioned points with a focus on key aspects such as (i) how to describe skin wrinkles in a simple and straightforward manner, (ii) how to recover 3D from 2D detections, (iii) how to recover and model facial hair from 2D to 3D, (iv) how to transfer expressions between models holding both skin detail and facial hair, and (v) how to perform all the described actions without training data or user interaction. In this work, we present our proposals to solve these aspects with an efficient and simple setup. We validate our work with several datasets, both synthetic and real, proving remarkable results even in challenging cases such as occlusions from glasses and thick beards, and indeed working with different face topologies like single-eyed cyclopes.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Felipe Lumbreras;Antonio Agudo
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-122714-3-0 Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ Rot2021 Serial 3513
Permanent link to this record
 

 
Author Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo
Title Single view facial hair 3D reconstruction Type Conference Article
Year 2019 Publication 9th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 11867 Issue Pages 423-436
Keywords 3D Vision; Shape Reconstruction; Facial Hair Modeling
Abstract In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms, in order to adapt the hair parameters that better fit the image detections. The final hairs respond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image without assuming any training data nor user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches. [A small numeric sketch of a helix fiber model follows this record.]
Address Madrid; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IbPRIA
Notes ADAS; 600.086; 600.130; 600.122 Approved no
Call Number Admin @ si @ Serial 3707
Permanent link to this record
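
A small numeric sketch of a helix-based fiber model in the spirit of the abstract (the parametrization below is a generic helix chosen for illustration, not the paper's exact model): each fiber is a curve winding around a local axis, controlled by radius, pitch, phase, and length.

    import numpy as np

    def helix_fiber(root, axis, radius=0.02, pitch=0.1, phase=0.0,
                    turns=3.0, n_points=100):
        """Sample 3D points along one hair fiber."""
        w = axis / np.linalg.norm(axis)
        # build an orthonormal frame (u, v, w) around the fiber axis
        u = np.cross(w, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:        # axis parallel to z: pick another
            u = np.cross(w, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(w, u)
        t = np.linspace(0.0, turns * 2 * np.pi, n_points)
        return (root
                + radius * (np.cos(t + phase)[:, None] * u
                            + np.sin(t + phase)[:, None] * v)
                + pitch * (t / (2 * np.pi))[:, None] * w)

    pts = helix_fiber(root=np.zeros(3), axis=np.array([0.0, 1.0, 0.2]))
    print(pts.shape)        # (100, 3): one synthetic fiber

In the paper, parameters of this kind are adapted per fiber by minimizing the image-based energy, so each detected hair pixel pulls its fiber toward the observation.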