|
Raul Gomez, Lluis Gomez, Jaume Gibert, & Dimosthenis Karatzas. (2019). Self-Supervised Learning from Web Data for Multimodal Retrieval. In Multi-Modal Scene Understanding Book (pp. 279–306).
Abstract: Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features with no need of human-annotated data. Web and Social Media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision and analyze the semantic structure of the learnt joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings in three different benchmarks. We show that the embeddings learnt with Web and Social Media data have competitive performances over supervised methods in the text-based image retrieval task, and we clearly outperform the state of the art in the MIRFlickr dataset when training in the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, that can be used for fair comparison of image-text embeddings.
Keywords: self-supervised learning; webly supervised learning; text embeddings; multimodal retrieval; multimodal embedding
|
|
|
Ernest Valveny, Salvatore Tabbone, & Oriol Ramos Terrades. (2008). Performance Characterization of Shape Descriptors for Symbol Representation. In W. Liu, J. Lladós, & J.-M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 278–287). LNCS.
|
|
|
Sergio Vera, Miguel Angel Gonzalez Ballester, & Debora Gil. (2012). Optimal Medial Surface Generation for Anatomical Volume Representations. In H. Yoshida, D. Hawkes, & M. W. Vannier (Eds.), Abdominal Imaging. Computational and Clinical Applications (Vol. 7601, pp. 265–273). Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Abstract: Medial representations are a widely used technique in abdominal organ shape representation and parametrization. These methods require good medial manifolds as a starting point. Any medial surface used to parametrize a volume should be simple enough to allow easy manipulation and complete enough to allow an accurate reconstruction of the volume. Obtaining good quality medial surfaces is still a problem with current iterative thinning methods. This forces the usage of generic, pre-calculated medial templates that are adapted to the final shape at the cost of a drop in volume reconstruction accuracy. This paper describes an operator for the generation of medial structures that produces clean and complete manifolds well suited for further use in medial representations of abdominal organ volumes. While simpler than thinning surfaces, experiments show its high performance in volume reconstruction and preservation of the main branching topology of the medial surface.
Keywords: Medial surface representation; volume reconstruction
|
|
|
Mathieu Nicolas Delalandre, Jean-Yves Ramel, Ernest Valveny, & Muhammad Muzzamil Luqman. (2010). A Performance Characterization Algorithm for Symbol Localization. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 260–271). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we present an algorithm for performance characterization of symbol localization systems. This algorithm aims to be a more “reliable” and “open” solution to characterize performance. To achieve that, it exploits only single points as the result of localization and offers the possibility to reconsider the localization results provided by a system. We use the information about context in the groundtruth, and the overall localization results, to detect ambiguous localization results. A probability score is computed for each matching between a localization point and a groundtruth region, depending on the spatial distribution of the other regions in the groundtruth. The final characterization is given with detection rate/probability score plots, describing the sets of possible interpretations of the localization results, according to a given confidence rate. We present experimentation details along with results for the symbol localization system of [1], exploiting a synthetic dataset of architectural floorplans and electrical diagrams (composed of 200 images and 3861 symbols).
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2011). Subtle Facial Expression Recognition in Still Images and Videos. In Yu-Jin Zhang (Ed.), Advances in Face Image Analysis: Techniques and Technologies (pp. 259–277). New York, USA: IGI-Global.
Abstract: This chapter addresses the recognition of basic facial expressions. It has three main contributions. First, the authors introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. They represent the learned facial actions associated with different facial expressions by time series. Two dynamic recognition schemes are proposed: (1) the first is based on conditional predictive models and on an analysis-synthesis scheme, and (2) the second is based on examples, allowing straightforward use of machine learning approaches. Second, the authors propose an efficient recognition scheme based on the detection of keyframes in videos. Third, the authors compare the dynamic scheme with a static one based on analyzing individual snapshots and show that in general the former performs better than the latter. The authors then provide evaluations of performance using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM).
|
|
|
Jorge Bernal, Fernando Vilariño, & F. Javier Sanchez. (2011). Towards Intelligent Systems for Colonoscopy. In Paul Miskovitz (Ed.), Colonoscopy (Vol. 1, pp. 257–282). Intech.
Abstract: In this chapter we present tools that can be used to build intelligent systems for colonoscopy. The idea is to add significant value to the colonoscopy procedure by using methods based on computer vision and artificial intelligence. Intelligent systems are already being used to assist in other medical interventions.
|
|
|
Antonio Lopez, W. Niessen, Joan Serrat, K. Nikolay, B. Ter Haar Romeny, Juan J. Villanueva, et al. (2000). New improvements in the multiscale analysis of trabecular bone patterns. In Pattern Recognition and Applications (pp. 251–260). IOS Press.
|
|
|
Santiago Segui, Laura Igual, Fernando Vilariño, Petia Radeva, Carolina Malagelada, Fernando Azpiroz, et al. (2008). Diagnostic System for Intestinal Motility Disfunctions Using Video Capsule Endoscopy. In A. Gasteratos, M. Vincze, & J. K. Tsotsos (Eds.), Computer Vision Systems. 6th International Conference (Vol. 5008, pp. 251–260). LNCS. Berlin Heidelberg: Springer-Verlag.
Abstract: Wireless Video Capsule Endoscopy is a clinical technique consisting of the analysis of images from the intestine which are provided by an ingestible device with a camera attached to it. In this paper we propose an automatic system to diagnose severe intestinal motility disfunctions using the video endoscopy data. The system is based on the application of computer vision techniques within a machine learning framework in order to obtain the characterization of diverse motility events from video sequences. We present experimental results that demonstrate the effectiveness of the proposed system and compare them with the ground-truth provided by the gastroenterologists.
|
|
|
Jordina Torrents-Barrena, Aida Valls, Petia Radeva, Meritxell Arenas, & Domenec Puig. (2015). Automatic Recognition of Molecular Subtypes of Breast Cancer in X-Ray images using Segmentation-based Fractal Texture Analysis. In Artificial Intelligence Research and Development (Vol. 277, pp. 247–256). Frontiers in Artificial Intelligence and Applications. IOS Press.
Abstract: Breast cancer disease has recently been classified into four subtypes regarding the molecular properties of the affected tumor region. For each patient, an accurate diagnosis of the specific type is vital to decide the most appropriate therapy in order to enhance life prospects. Nowadays, advanced therapeutic diagnosis research is focused on gene selection methods, which are not robust enough. Hence, we hypothesize that computer vision algorithms can offer benefits to address the problem of discriminating among these subtypes through X-Ray images. In this paper, we propose a novel approach driven by texture feature descriptors and machine learning techniques. First, we segment the tumor part through an active contour technique and then, we perform a complete fractal analysis to collect qualitative information of the region of interest in the feature extraction stage. Finally, several supervised and unsupervised classifiers are used to perform multiclass classification of the aforementioned data. The experimental results presented in this paper support that it is possible to establish a relation between each tumor subtype and the extracted features of the patterns revealed on mammograms.
|
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2008). A System to Segment Text and Symbols from Color Maps. In Graphics Recognition. Recent Advances and New Opportunities (Vol. 5046, pp. 245–256). LNCS.
|
|
|
Antonio Lopez, Jiaolong Xu, Jose Luis Gomez, David Vazquez, & German Ros. (2017). From Virtual to Real World Visual Perception using Domain Adaptation -- The DPM as Example. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (pp. 243–258). Springer.
Abstract: Supervised learning tends to produce more accurate classifiers than unsupervised learning in general. This implies that training data is preferred with annotations. When addressing visual perception challenges, such as localizing certain object classes within an image, the learning of the involved classifiers turns out to be a practical bottleneck. The reason is that, at least, we have to frame object examples with bounding boxes in thousands of images. A priori, the more complex the model is regarding its number of parameters, the more annotated examples are required. This annotation task is performed by human oracles, which results in inaccuracies and errors in the annotations (aka ground truth) since the task is inherently very cumbersome and sometimes ambiguous. As an alternative we have pioneered the use of virtual worlds for collecting such annotations automatically and with high precision. However, since the models learned with virtual data must operate in the real world, we still need to perform domain adaptation (DA). In this chapter we revisit the DA of a deformable part-based model (DPM) as an exemplifying case of virtual-to-real-world DA. As a use case, we address the challenge of vehicle detection for driver assistance, using different publicly available virtual-world data. While doing so, we investigate questions such as: how does the domain gap behave due to virtual-vs-real data with respect to dominant object appearance per domain, and what is the role of photo-realism in the virtual world?
Keywords: Domain Adaptation
|
|
|
Xavier Baro, & Jordi Vitria. (2008). Evolutionary Object Detection by Means of Naive Bayes Models Estimation. In M. Giacobini (Ed.), Applications of Evolutionary Computing. EvoWorkshops (Vol. 4974, pp. 235–244). LNCS.
|
|
|
Debora Gil, & Petia Radeva. (2004). Inhibition of False Landmarks. In J. V. et al (Ed.), Recent Advances in Artificial Intelligence Research and Development (pp. 233–244). Barcelona (Spain): IOS Press.
Abstract: We argue that a corner detector should be based on the degree of continuity of the tangent vector to the image level sets, work on the image domain, and need no assumptions on either the image local structure or the particular geometry of the corner/junction. An operator measuring the degree of differentiability of the projection matrix on the image gradient fulfills the above requirements. Its high sensitivity to changes in vector directions makes it suitable for landmark location in real images, which typically require smoothing to reduce the impact of noise. Because using smoothing kernels leads to corner misplacement, we suggest an alternative fake-response remover based on the receptive field inhibition of spurious details. The combination of orientation discontinuity detection and noise inhibition produces our Inhibition Orientation Energy (IOE) landmark locator.
|
|
|
German Ros, Laura Sellart, Gabriel Villalonga, Elias Maidanik, Francisco Molero, Marc Garcia, et al. (2017). Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (Vol. 12, pp. 227–241). Springer.
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images; thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this chapter, we propose to use a combination of a virtual world to automatically generate realistic synthetic images with pixel-level annotations, and domain adaptation to transfer the models learnt to correctly operate in real scenarios. We address the question of how useful synthetic data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations and object identifiers. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show that combining SYNTHIA with simple domain adaptation techniques in the training stage significantly improves performance on semantic segmentation.
Keywords: SYNTHIA; Virtual worlds; Autonomous Driving
|
|
|
Sergio Vera, Debora Gil, Agnes Borras, F. Javier Sanchez, Frederic Perez, Marius G. Linguraru, et al. (2012). Computation and Evaluation of Medial Surfaces for Shape Representation of Abdominal Organs. In H. Yoshida et al (Ed.), Workshop on Computational and Clinical Applications in Abdominal Imaging (Vol. 7029, pp. 223–230). LNCS. Berlin: Springer Link.
Abstract: Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Existing methods show excellent results when applied to 2D objects, but their quality drops across dimensions. This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerated medial axis segments; second, we introduce an energy-based method which performs independently of the dimension. We evaluate quantitatively the performance of our method with respect to existing approaches, by applying them to synthetic shapes of known medial geometry. Finally, we show results on shape representation of multiple abdominal organs, exploring the use of medial manifolds for the representation of multi-organ relations.
Keywords: medial manifolds; abdomen
|
|