|
Gemma Rotger, Felipe Lumbreras, Francesc Moreno-Noguer, & Antonio Agudo. (2018). 2D-to-3D Facial Expression Transfer. In 24th International Conference on Pattern Recognition (pp. 2008–2013).
Abstract: Automatically changing the expression and physical features of a face from an input image is a topic that has been traditionally tackled in a 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape (obtained from standard factorization approaches over the input video) using a triangular mesh which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
|
|
|
Daniel Hernandez, Lukas Schneider, Antonio Espinosa, David Vazquez, Antonio Lopez, Uwe Franke, et al. (2017). Slanted Stixels: Representing San Francisco's Steepest Streets. In 28th British Machine Vision Conference.
Abstract: In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.
|
|
|
Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes Garcia-Martinez, Fethi Bougares, Loic Barrault, et al. (2017). LIUM-CVC Submissions for WMT17 Multimodal Translation Task. In 2nd Conference on Machine Translation.
Abstract: This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures where either global visual features or convolutional feature maps are integrated in order to benefit from visual context. Our final systems ranked first for both En-De and En-Fr language pairs according to the automatic evaluation metrics METEOR and BLEU.
|
|
|
Veronica Romero, Alicia Fornes, Enrique Vidal, & Joan Andreu Sanchez. (2017). Information Extraction in Handwritten Marriage Licenses Books Using the MGGI Methodology. In L.A. Alexandre, J. Salvador Sanchez, & Joao M. F. Rodrigues (Eds.), 8th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 10255, pp. 287–294). LNCS.
Abstract: Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demographic and genealogical research. For example, marriage license books have been used for centuries by ecclesiastical and secular institutions to register marriages. The records in these books follow a simple textual structure with an evolving vocabulary, mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. In previous works we studied the use of category-based language models and how a Grammatical Inference technique known as MGGI could improve the accuracy of these tasks. In this work we analyze the main causes of the semantic errors observed in previous results and apply a better implementation of the MGGI technique to solve these problems. Using the resulting language model, transcription and information extraction experiments have been carried out, and the results support our proposed approach.
Keywords: Handwritten Text Recognition; Information extraction; Language modeling; MGGI; Category-based language model
|
|
|
Antonio Lopez, Jiaolong Xu, Jose Luis Gomez, David Vazquez, & German Ros. (2017). From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (pp. 243–258). Springer.
Abstract: Supervised learning tends to produce more accurate classifiers than unsupervised learning in general. This implies that training data is preferred with annotations. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck. The reason is that, at least, we have to frame object examples with bounding boxes in thousands of images. A priori, the more complex the model is regarding its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, which leads to inaccuracies and errors in the annotations (aka ground truth) since the task is inherently very cumbersome and sometimes ambiguous. As an alternative we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual-to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate questions such as how the domain gap behaves for virtual-vs-real data with respect to the dominant object appearance per domain, as well as the role of photo-realism in the virtual world.
Keywords: Domain Adaptation
|
|
|
David Vazquez, Jorge Bernal, F. Javier Sanchez, Gloria Fernandez Esparrach, Antonio Lopez, Adriana Romero, et al. (2017). A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images. In 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery.
Abstract: Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss-rate and inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation and significantly outperforming, without any further post-processing, prior results in endoluminal scene segmentation.
Keywords: Deep Learning; Medical Imaging
|
|
|
Marçal Rusiñol, & Josep Llados. (2017). Flowchart Recognition in Patent Information Retrieval. In M. Lupu, K. Mayer, N. Kando, & A.J. Trippe (Eds.), Current Challenges in Patent Information Retrieval (Vol. 37, pp. 351–368). Springer Berlin Heidelberg.
|
|
|
Angel Valencia, Roger Idrovo, Angel Sappa, Douglas Plaza, & Daniel Ochoa. (2017). A 3D Vision Based Approach for Optimal Grasp of Vacuum Grippers. In IEEE International Workshop of Electronics, Control, Measurement, Signals and their application to Mechatronics.
Abstract: In general, robot grasping approaches are based on the usage of multi-finger grippers. However, when large-size objects need to be manipulated, vacuum grippers are preferred instead of finger-based grippers. This paper aims to estimate the best picking place for a two-suction-cup vacuum gripper, when planar objects with an unknown size and geometry are considered. The approach is based on the estimation of geometric properties of the object's shape from a partial cloud of points (a single 3D view), which are combined with a theoretical model to generate an optimal contact point that minimizes the vacuum force needed to guarantee a grasp. Experimental results in real scenarios are presented to show the validity of the proposed approach.
|
|
|
Cristhian Aguilera, Xavier Soria, Angel Sappa, & Ricardo Toledo. (2017). RGBN Multispectral Images: a Novel Color Restoration Approach. In 15th International Conference on Practical Applications of Agents and Multi-Agent Systems.
Abstract: This paper describes a color restoration technique used to remove NIR information from single sensor cameras where color and near-infrared images are simultaneously acquired, referred to in the literature as RGBN images. The proposed approach is based on a neural network architecture that learns the NIR information contained in the RGBN images. The proposed approach is evaluated on real images obtained by using a pair of RGBN cameras. Additionally, qualitative comparisons with a naive color correction technique based on mean square error minimization are provided.
Keywords: Multispectral Imaging; Free Sensor Model; Neural Network
|
|
|
Umut Guclu, Yagmur Gucluturk, Meysam Madadi, Sergio Escalera, Xavier Baro, Jordi Gonzalez, et al. (2017). End-to-end semantic face segmentation with conditional random fields as convolutional, recurrent and adversarial networks. arXiv:1703.03305.
Abstract: Recent years have seen a sharp increase in the number of related yet distinct advances in semantic segmentation. Here, we tackle this problem by leveraging the respective strengths of these advances. That is, we formulate a conditional random field over a four-connected graph as end-to-end trainable convolutional and recurrent networks, and estimate them via an adversarial process. Importantly, our model learns not only unary potentials but also pairwise potentials, while aggregating multi-scale contexts and controlling higher-order inconsistencies. We evaluate our model on two standard benchmark datasets for semantic face segmentation, achieving state-of-the-art results on both of them.
|
|
|
H. Martin Kjer, Jens Fagertun, Sergio Vera, & Debora Gil. (2017). Medial structure generation for registration of anatomical structures. In Skeletonization: Theory, Methods and Applications (Vol. 11).
|
|
|
David Vazquez, Jorge Bernal, F. Javier Sanchez, Gloria Fernandez Esparrach, Antonio Lopez, Adriana Romero, et al. (2017). A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images. JHCE - Journal of Healthcare Engineering.
Abstract: Colorectal cancer (CRC) is the third cause of cancer death world-wide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss-rate and inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCN). We perform a comparative study to show that FCN significantly outperform, without any further post-processing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
Keywords: Colonoscopy images; Deep Learning; Semantic Segmentation
|
|
|
Carles Sanchez, Antonio Esteban Lansaque, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2017). Towards a Videobronchoscopy Localization System from Airway Centre Tracking. In 12th International Conference on Computer Vision Theory and Applications (pp. 352–359).
Abstract: Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an imaging technique based on X-rays, the risk of developmental problems and cancer is increased in those subjects exposed to its application, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system based on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations.
Keywords: Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation
|
|
|
Xinhang Song, Luis Herranz, & Shuqiang Jiang. (2017). Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs. In 31st AAAI Conference on Artificial Intelligence.
Abstract: Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so current approaches often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers, and propose an alternative strategy to learn depth features combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them in a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth only and combined RGB-D data.
Keywords: RGB-D scene recognition; weakly supervised; fine tune; CNN
|
|
|
Simone Balocco, Francesco Ciompi, Juan Rigla, Xavier Carrillo, J. Mauri, & Petia Radeva. (2017). Intra-Coronary Stent localization In Intravascular Ultrasound Sequences, A Preliminary Study. In International workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting (CVII-STENT). LNCS.
Abstract: An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during Percutaneous Coronary Intervention (PCI). Intravascular Ultrasound (IVUS) is a catheter-based imaging technique generally used for assessing the correct placement of the stent. All the approaches proposed so far for stent analysis only focused on strut detection, while this paper proposes a novel approach to detect the boundaries and the position of the stent along the pullback. The pipeline of the method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using this data, a measure of likelihood for a frame to contain a stent is computed. Then, a robust binary representation of the presence of the stent in the pullback is obtained by applying an iterative and multi-scale approximation of the signal to symbols using the SAX algorithm. Results obtained by comparing the automatic results versus the manual annotations of two observers on 80 IVUS in-vivo sequences show that the method approaches the inter-observer variability scores.
|
|