Author Y. Patel; Lluis Gomez; Raul Gomez; Marçal Rusiñol; Dimosthenis Karatzas; C.V. Jawahar
Title TextTopicNet: Self-Supervised Learning of Visual Features Through Embedding Images on Semantic Text Spaces Type Miscellaneous
Year 2018 Publication arXiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The immense success of deep learning based methods in computer vision heavily relies on large-scale training datasets. These richly annotated datasets help the network learn discriminative visual features. Collecting and annotating such datasets requires a tremendous amount of human effort, and annotations are limited to a popular set of classes. As an alternative, learning visual features by designing auxiliary tasks that make use of freely available self-supervision has become increasingly popular in the computer vision community.
In this paper, we put forward the idea of taking advantage of multi-modal context to provide self-supervision for the training of computer vision algorithms. We show that adequate visual features can be learned efficiently by training a CNN to predict the semantic textual context in which a particular image is most likely to appear as an illustration. More specifically, we use popular text embedding techniques to provide the self-supervision for the training of a deep CNN.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.084; 601.338; 600.121 Approved no
Call Number Admin @ si @ PGG2018 Serial 3177
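
The record above describes learning visual features by training a CNN to predict the semantic textual context of an image. A minimal sketch of that idea follows, assuming topic-model targets (e.g. LDA topic distributions of the articles containing each image) have already been computed; the tiny network, topic count and soft cross-entropy loss are illustrative choices, not the authors' implementation.

# Illustrative sketch (not the authors' code): train a small CNN to predict the
# topic distribution of the article in which each image appears.
# Assumes topic targets have already been computed with a topic-modeling tool.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_TOPICS = 40  # hypothetical number of topics

class TopicPredictorCNN(nn.Module):
    def __init__(self, num_topics=NUM_TOPICS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_topics)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)  # topic logits

model = TopicPredictorCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Placeholder batch: images plus soft topic targets (rows sum to 1).
images = torch.randn(8, 3, 224, 224)
topic_targets = F.softmax(torch.randn(8, NUM_TOPICS), dim=1)

# Soft cross-entropy between predicted and target topic distributions.
log_probs = F.log_softmax(model(images), dim=1)
loss = -(topic_targets * log_probs).sum(dim=1).mean()
loss.backward()
optimizer.step()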
 

 
Author Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas; C.V. Jawahar
Title Self-Supervised Visual Representations for Cross-Modal Retrieval Type Conference Article
Year 2019 Publication ACM International Conference on Multimedia Retrieval Abbreviated Journal
Volume Issue Pages 182–186
Keywords
Abstract Cross-modal retrieval methods have improved significantly in recent years with the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort and, besides, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text on the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method is not only capable of learning discriminative visual representations for solving vision tasks like classification, but also that the learned representations are better for cross-modal retrieval than those obtained by supervised pre-training of the network on the ImageNet dataset.
Address Ottawa; Canada; June 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICMR
Notes DAG; 600.121; 600.129 Approved no
Call Number Admin @ si @ PGR2019 Serial 3288
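
As a companion to the record above: once images and text live in a shared semantic space, cross-modal retrieval reduces to nearest-neighbour search. The sketch below ranks placeholder embeddings by cosine similarity; the embedding networks themselves (trained as described in the abstract) are assumed to exist and are not shown.

# Illustrative sketch (not the paper's implementation): rank text embeddings by
# cosine similarity to an image embedding once both live in a shared semantic space.
import torch
import torch.nn.functional as F

def retrieve(image_emb, text_embs, top_k=5):
    """Return indices of the top_k text embeddings most similar to the image embedding."""
    sims = F.cosine_similarity(image_emb.unsqueeze(0), text_embs, dim=1)
    return sims.topk(min(top_k, text_embs.size(0))).indices

# Placeholder embeddings standing in for CNN image features and text-space vectors.
image_emb = torch.randn(300)
text_embs = torch.randn(1000, 300)
print(retrieve(image_emb, text_embs))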
 

 
Author Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas
Title Dynamic Lexicon Generation for Natural Scene Images Type Conference Article
Year 2016 Publication 14th European Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages 395-410
Keywords scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN
Abstract Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and take huge benefit from using small per-image lexicons. Such customized lexicons are normally assumed as given and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings but using only the image raw pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline.
Address Amsterdam; The Netherlands; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCVW
Notes DAG; 600.084 Approved no
Call Number Admin @ si @ PGR2016 Serial 2825
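
The re-ranking step described in the abstract above can be written as a simple mixture over latent topics. The sketch below assumes a topic model has already provided per-topic word distributions and that a CNN (or the topic model itself) supplies the image's topic mixture; all names and numbers are placeholders.

# Illustrative sketch (assumed formulation): re-rank a fixed dictionary for one image
# by combining per-topic word probabilities with the image's predicted topic mixture,
# i.e. score(w | image) = sum_t P(w | t) * P(t | image).
import numpy as np

rng = np.random.default_rng(0)
num_topics, vocab_size = 20, 5000
dictionary = [f"word_{i}" for i in range(vocab_size)]        # placeholder vocabulary

word_given_topic = rng.dirichlet(np.ones(vocab_size), size=num_topics)  # P(w|t), rows sum to 1
topic_given_image = rng.dirichlet(np.ones(num_topics))                  # P(t|image), e.g. CNN output

word_scores = topic_given_image @ word_given_topic           # shape: (vocab_size,)
ranked_lexicon = [dictionary[i] for i in np.argsort(-word_scores)[:100]]
print(ranked_lexicon[:10])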
 

 
Author Y. Mori; M. Misawa; Jorge Bernal; M. Bretthauer; S. Kudo; A. Rastogi; Gloria Fernandez Esparrach
Title Artificial Intelligence for Disease Diagnosis: The Gold Standard Challenge Type Journal Article
Year 2022 Publication Gastrointestinal Endoscopy Abbreviated Journal
Volume 96 Issue 2 Pages 370-372
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ MMB2022 Serial 3701
 

 
Author Xu Hu
Title Real-Time Part Based Models for Object Detection Type Report
Year 2012 Publication CVC Technical Report Abbreviated Journal
Volume 171 Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis Master's thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS;ISE Approved no
Call Number Admin @ si @ Hu2012 Serial 2415
 

 
Author Xose M. Pardo; Petia Radeva; Juan J. Villanueva
Title Self-Training Statistic Snake for Image Segmentation and Tracking Type Miscellaneous
Year 1999 Publication Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Venice
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ PRV1999 Serial 26
 

 
Author Xose M. Pardo; Petia Radeva; D. Cabello
Title Discriminant Snakes for 3D Reconstruction of Anatomical Organs Type Journal
Year 2003 Publication Medical Image Analysis (IF: 4.442) Abbreviated Journal
Volume 7 Issue 3 Pages 293-310
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ PPC2003 Serial 398
 

 
Author Xose M. Pardo; Petia Radeva
Title Discriminant Snakes for 3D Reconstruction in Medical Images Type Conference Article
Year 2000 Publication 15th International Conference on Pattern Recognition Abbreviated Journal
Volume 4 Issue Pages 336-339
Keywords
Abstract
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes MILAB Approved no
Call Number BCNPCL @ bcnpcl @ PaR2000 Serial 234
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz; Chengpeng Chen
Title Learning Effective RGB-D Representations for Scene Recognition Type Journal Article
Year 2019 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 28 Issue 2 Pages 980-993
Keywords
Abstract Deep convolutional networks can achieve impressive results on RGB scene recognition thanks to large data sets such as Places. In contrast, RGB-D scene recognition is still underdeveloped, due to two limitations of RGB-D data that we address in this paper. The first limitation is the lack of depth data for training deep learning models. Rather than fine-tuning or transferring RGB-specific features, we address this limitation by proposing an architecture and a two-step training approach that directly learns effective depth-specific features using weak supervision via patches. The resulting RGB-D model also benefits from more complementary multimodal features. Another limitation is the short range of depth sensors (typically 0.5 m to 5.5 m), resulting in depth images not capturing distant objects in the scenes that RGB images can. We show that this limitation can be addressed by using RGB-D videos, where more comprehensive depth information is accumulated as the camera travels across the scenes. Focusing on this scenario, we introduce the ISIA RGB-D video data set to evaluate RGB-D scene recognition with videos. Our video recognition architecture combines convolutional and recurrent neural networks that are trained in three steps with increasingly complex data to learn effective features (i.e., patches, frames, and sequences). Our approach obtains state-of-the-art performance on RGB-D image (NYUD2 and SUN RGB-D) and video (ISIA RGB-D) scene recognition.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.141; 600.120 Approved no
Call Number Admin @ si @ SJH2019 Serial 3247
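
For the RGB-D recognition setting discussed above, a common construction is a two-stream network whose RGB and depth features are fused before a joint classifier. The sketch below only shows that fusion pattern; the streams, feature sizes, depth encoding and class count are assumptions, not the paper's architecture.

# Illustrative sketch (assumed architecture): two CNN streams, one for RGB and one
# for a 3-channel depth encoding, fused by concatenating their pooled features
# before a joint scene classifier.
import torch
import torch.nn as nn

def make_stream():
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class RGBDSceneNet(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.rgb_stream = make_stream()
        self.depth_stream = make_stream()
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.classifier(fused)

model = RGBDSceneNet()
rgb = torch.randn(4, 3, 224, 224)
depth = torch.randn(4, 3, 224, 224)   # placeholder 3-channel depth encoding
print(model(rgb, depth).shape)        # torch.Size([4, 19])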
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
Title Multi-Scale Multi-Feature Context Modeling for Scene Recognition in the Semantic Manifold Type Journal Article
Year 2017 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 26 Issue 6 Pages 2721-2735
Keywords
Abstract Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One such approach is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned by resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns is critical to improving the recognition performance in this representation. Since the emergence of large data sets, such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks and further propose a hybrid architecture in which the semantic manifold is built on top of multiscale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, where a top-down hierarchical algorithm has the best performance. Experimental results show that jointly exploiting different types of contextual relations consistently improves the recognition accuracy.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ SJH2017a Serial 2963
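
As a rough illustration of the semantic-manifold representation mentioned above: patches are mapped to probability distributions over scene categories (points on a simplex) and aggregated into an image descriptor. The sketch below uses a placeholder linear patch classifier and simple averaging; the paper's CNN patch models, Markov-random-field context modeling and inference are not shown.

# Illustrative sketch (assumed simplification of the semantic-manifold idea): each patch is
# mapped to a probability distribution over scene categories, and the image is represented
# by aggregating its patch distributions on that simplex.
import torch
import torch.nn.functional as F

num_categories, patch_dim = 15, 128
patch_classifier = torch.nn.Linear(patch_dim, num_categories)   # placeholder patch model

patches = torch.randn(36, patch_dim)                        # features of 36 patches from one image
patch_probs = F.softmax(patch_classifier(patches), dim=1)   # each row lies on the probability simplex
image_repr = patch_probs.mean(dim=0)                        # simple aggregation into an image descriptor
print(image_repr.sum())                                     # ~1.0: still a point on the simplex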
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
Title Combining Models from Multiple Sources for RGB-D Scene Recognition Type Conference Article
Year 2017 Publication 26th International Joint Conference on Artificial Intelligence Abbreviated Journal
Volume Issue Pages 4523-4529
Keywords Robotics and Vision; Vision and Perception
Abstract Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they use only low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns, and 2) RGB and depth features are combined only at high levels and rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets and depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
Address Melbourne; Australia; August 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IJCAI
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ SJH2017b Serial 2966
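
The layer-combination idea in the record above can be pictured as concatenating pooled activations from chosen layers of the RGB and depth source models. The sketch below fakes the backbones with tiny stand-in networks and picks an arbitrary layer selection; the paper's actual selection procedure and pretrained models are not reproduced here.

# Illustrative sketch (assumed scheme, not the paper's selection algorithm): build a joint
# representation by concatenating globally pooled activations from selected layers of an
# RGB model and a depth model, then train a classifier on top.
import torch
import torch.nn as nn

def make_backbone():
    # Toy stand-in for a pretrained CNN; layer1/layer2 play the role of selectable layers.
    return nn.ModuleDict({
        "layer1": nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
        "layer2": nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
    })

def pooled_activations(backbone, x, selected=("layer1", "layer2")):
    feats = []
    h = x
    for name in ("layer1", "layer2"):
        h = backbone[name](h)
        if name in selected:
            feats.append(h.mean(dim=(2, 3)))   # global average pooling per layer
    return torch.cat(feats, dim=1)

rgb_model, depth_model = make_backbone(), make_backbone()
rgb, depth = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)

# Hypothetical selection: low-level RGB layer plus both depth layers.
joint = torch.cat([
    pooled_activations(rgb_model, rgb, selected=("layer1",)),
    pooled_activations(depth_model, depth, selected=("layer1", "layer2")),
], dim=1)
classifier = nn.Linear(joint.size(1), 10)
print(classifier(joint).shape)                  # torch.Size([2, 10])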
 

 
Author Xinhang Song; Luis Herranz; Shuqiang Jiang
Title Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs Type Conference Article
Year 2017 Publication 31st AAAI Conference on Artificial Intelligence Abbreviated Journal
Volume Issue Pages
Keywords RGB-D scene recognition; weakly supervised; fine tune; CNN
Abstract Scene recognition with RGB images has been extensively studied and has reached very remarkable recognition levels, thanks to convolutional neural networks (CNNs) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so approaches often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model and the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data.
Address San Francisco CA; February 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AAAI
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ SHJ2017 Serial 2967
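
The two-step depth-training strategy described above (weakly supervised patch training followed by global fine-tuning on full images) can be sketched roughly as follows; the tiny network, patch size and optimizer settings are placeholders rather than the authors' configuration.

# Illustrative sketch (assumed training recipe, not the authors' code): weakly supervised
# patch training followed by full-image fine-tuning for a depth-specific CNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallDepthCNN(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.trunk(x))

def random_patches(images, patch_size=64, patches_per_image=4):
    """Crop random patches; each patch inherits the (weak) scene label of its image."""
    crops = []
    for img in images:
        for _ in range(patches_per_image):
            y = torch.randint(0, img.shape[1] - patch_size + 1, (1,)).item()
            x = torch.randint(0, img.shape[2] - patch_size + 1, (1,)).item()
            crops.append(img[:, y:y + patch_size, x:x + patch_size])
    return torch.stack(crops)

model = SmallDepthCNN()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

depth_images = torch.randn(4, 1, 224, 224)          # placeholder depth maps
labels = torch.randint(0, 19, (4,))

# Step 1: weakly supervised patch training (patch label = image label).
patches = random_patches(depth_images)
patch_labels = labels.repeat_interleave(4)
loss = F.cross_entropy(model(patches), patch_labels)
loss.backward(); opt.step(); opt.zero_grad()

# Step 2: global fine-tuning on full images with the same label supervision.
loss = F.cross_entropy(model(depth_images), labels)
loss.backward(); opt.step()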
 

 
Author Xinhang Song; Haitao Zeng; Sixian Zhang; Luis Herranz; Shuqiang Jiang
Title Generalized Zero-shot Learning with Multi-source Semantic Embeddings for Scene Recognition Type Conference Article
Year 2020 Publication 28th ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Recognizing visual categories from semantic descriptions is a promising way to extend the capability of a visual classifier beyond the concepts represented in the training data (i.e. seen categories). This problem is addressed by (generalized) zero-shot learning methods (GZSL), which leverage semantic descriptions that connect unseen categories to seen ones (e.g. label embeddings, attributes). Conventional GZSL methods are designed mostly for object recognition. In this paper we focus on zero-shot scene recognition, a more challenging setting with hundreds of categories whose differences can be subtle and often localized in certain objects or regions. Conventional GZSL representations are not rich enough to capture these local discriminative differences. Addressing these limitations, we propose a feature generation framework with two novel components: 1) multiple sources of semantic information (i.e. attributes, word embeddings and descriptions), and 2) region descriptions that can enhance scene discrimination. To generate synthetic visual features we propose a two-step generative approach, where local descriptions are sampled and used as conditions to generate visual features. The generated features are then aggregated and used together with real features to train a joint classifier. In order to evaluate the proposed method, we introduce a new dataset for zero-shot scene recognition with multi-semantic annotations. Experimental results on the proposed dataset and the SUN Attribute dataset illustrate the effectiveness of the proposed method.
Address Virtual; October 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM
Notes LAMP; 600.141; 600.120 Approved no
Call Number Admin @ si @ SZZ2020 Serial 3465
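
The feature-generation idea summarized above can be illustrated with a conditional generator that synthesizes visual features for unseen classes from their semantic embeddings, after which a joint classifier is trained on real plus synthetic features. Everything below (dimensions, the generator architecture, the assumption that the generator is already trained) is for illustration only.

# Illustrative sketch (assumptions throughout): generate synthetic visual features for
# unseen scene classes from semantic embeddings, then train a classifier on real
# (seen-class) plus synthetic (unseen-class) features, as in feature-generation GZSL.
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, SEM_DIM, NOISE_DIM = 512, 300, 64

class FeatureGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SEM_DIM + NOISE_DIM, 1024), nn.ReLU(),
            nn.Linear(1024, FEAT_DIM),
        )

    def forward(self, semantic, noise):
        return self.net(torch.cat([semantic, noise], dim=1))

generator = FeatureGenerator()   # assumed to be trained beforehand, e.g. adversarially

# Placeholder data: real features for 10 seen classes, semantic vectors for 5 unseen ones.
real_feats = torch.randn(200, FEAT_DIM)
real_labels = torch.randint(0, 10, (200,))
unseen_semantics = torch.randn(5, SEM_DIM)

# Synthesize features for unseen classes (labels 10..14).
samples_per_class = 40
sem = unseen_semantics.repeat_interleave(samples_per_class, dim=0)
noise = torch.randn(sem.size(0), NOISE_DIM)
with torch.no_grad():
    fake_feats = generator(sem, noise)
fake_labels = torch.arange(10, 15).repeat_interleave(samples_per_class)

# Joint classifier over seen + unseen classes, trained on real + synthetic features.
classifier = nn.Linear(FEAT_DIM, 15)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
feats = torch.cat([real_feats, fake_feats])
labels = torch.cat([real_labels, fake_labels])
loss = F.cross_entropy(classifier(feats), labels)
loss.backward(); opt.step()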
 

 
Author Xim Cerda-Company; Xavier Otazu; Nilai Sallent; C. Alejandro Parraga
Title The effect of luminance differences on color assimilation Type Journal Article
Year 2018 Publication Journal of Vision Abbreviated Journal JV
Volume 18 Issue 11 Pages 10-10
Keywords
Abstract The color appearance of a surface depends on the color of its surroundings (inducers). When the perceived color shifts towards that of the surroundings, the effect is called “color assimilation” and when it shifts away from the surroundings it is called “color contrast.” There is also evidence that the phenomenon depends on the spatial configuration of the inducer, e.g., uniform surrounds tend to induce color contrast and striped surrounds tend to induce color assimilation. However, previous work found that striped surrounds under certain conditions do not induce color assimilation but induce color contrast (or do not induce anything at all), suggesting that luminance differences and high spatial frequencies could be key factors in color assimilation. Here we present a new psychophysical study of color assimilation where we assessed the contribution of luminance differences (between the target and its surround) present in striped stimuli. Our results show that luminance differences are key factors in color assimilation for stimuli varying along the s axis of MacLeod-Boynton color space, but not for stimuli varying along the l axis. This asymmetry suggests that koniocellular neural mechanisms responsible for color assimilation only contribute when there is a luminance difference, supporting the idea that mutual-inhibition has a major role in color induction.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.120; 600.128 Approved no
Call Number Admin @ si @ COS2018 Serial 3148
 

 
Author Xim Cerda-Company; Xavier Otazu
Title Color induction in equiluminant flashed stimuli Type Journal Article
Year 2019 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A
Volume 36 Issue 1 Pages 22-31
Keywords
Abstract Color induction is the influence of the surrounding color (inducer) on the perceived color of a central region. There are two different types of color induction: color contrast (the color of the central region shifts away from that of the inducer) and color assimilation (the color shifts towards the color of the inducer). Several studies on these effects have used uniform and striped surrounds, reporting color contrast and color assimilation, respectively. Other authors [J. Vis. 12(1), 22 (2012)] have studied color induction using flashed uniform surrounds, reporting that the contrast is higher for shorter flash duration. Extending their study, we present new psychophysical results using both flashed and static (i.e., non-flashed) equiluminant stimuli for both striped and uniform surrounds. Similarly to them, for uniform surround stimuli we observed color contrast, but we did not obtain the maximum contrast for the shortest (10 ms) flashed stimuli, but for 40 ms. We only observed this maximum contrast for red, green, and lime inducers, while for a purple inducer we obtained an asymptotic profile along the flash duration. For striped stimuli, we observed color assimilation only for the static (infinite flash duration) red–green surround inducers (red first inducer, green second inducer). For the other inducers' configurations, we observed color contrast or no induction. Since other studies showed that non-equiluminant striped static stimuli induce color assimilation, our results also suggest that luminance differences could be a key factor to induce it.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.120; 600.128 Approved no
Call Number Admin @ si @ CeO2019 Serial 3226