Author Aitor Alvarez-Gila; Joost Van de Weijer; Estibaliz Garrote
  Title Adversarial Networks for Spatial Context-Aware Spectral Image Reconstruction from RGB Type Conference Article
  Year 2017 Publication 1st International Workshop on Physics Based Vision meets Deep Learning Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Hyperspectral signal reconstruction aims at recovering the original spectral input that produced a certain trichromatic (RGB) response from a capturing device or observer. Given the heavily underconstrained, non-linear nature of the problem, traditional techniques leverage different statistical properties of the spectral signal in order to build informative priors from real-world object reflectances for constructing such an RGB-to-spectral mapping. However, most of them treat each sample independently, and thus do not benefit from the contextual information that the spatial dimensions can provide. We pose hyperspectral natural image reconstruction as an image-to-image mapping learning problem, and apply a conditional generative adversarial framework to help capture spatial semantics. This is the first time Convolutional Neural Networks (and, in particular, Generative Adversarial Networks) are used to solve this task. Quantitative evaluation shows a Root Mean Squared Error (RMSE) drop of 44.7% and a Relative RMSE drop of 47.0% on the ICVL natural hyperspectral image dataset.
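
Illustrative sketch (not the authors' implementation): the conditional adversarial setup described above pairs a convolutional generator, mapping RGB to a multi-band spectral image, with a discriminator conditioned on the RGB input. The PyTorch snippet assumes a 31-band output as in ICVL-style data; layer sizes and the L1 weight are placeholders.

    import torch
    import torch.nn as nn

    BANDS = 31  # assumed number of spectral bands (ICVL-style); placeholder

    # Generator: convolutional RGB -> spectral mapping (spatial context comes
    # from the receptive field of the convolutions).
    G = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, BANDS, 3, padding=1),
    )

    # Discriminator: conditioned on the RGB input by channel concatenation.
    D = nn.Sequential(
        nn.Conv2d(3 + BANDS, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 3, padding=1),  # patch-level real/fake scores
    )

    bce = nn.BCEWithLogitsLoss()
    l1 = nn.L1Loss()

    def generator_loss(rgb, spectral_gt, lambda_l1=100.0):
        fake = G(rgb)
        pred = D(torch.cat([rgb, fake], dim=1))
        adv = bce(pred, torch.ones_like(pred))          # try to fool D
        return adv + lambda_l1 * l1(fake, spectral_gt)  # plus reconstruction term
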
 
  Address Venice; Italy; October 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICCV-PBDL  
  Notes LAMP; 600.109; 600.106; 600.120 Approved no  
  Call Number Admin @ si @ AWG2017 Serial 2969
 

 
Author Simone Balocco; Francesco Ciompi; Juan Rigla; Xavier Carrillo; Josefina Mauri; Petia Radeva
  Title Intra-Coronary Stent Localization in Intravascular Ultrasound Sequences, A Preliminary Study Type Conference Article
  Year 2017 Publication International workshop on Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting (CVII-STENT) Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during Percutaneous Coronary Intervention (PCI). Intravascular Ultrasound (IVUS) is a catheter-based imaging technique generally used for assessing the correct placement of the stent. All the approaches proposed so far for stent analysis have focused only on strut detection, while this paper proposes a novel approach to detect the boundaries and the position of the stent along the pullback. The pipeline of the method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using these data, a measure of the likelihood that a frame contains a stent is computed. Then, a robust binary representation of the presence of the stent in the pullback is obtained by applying an iterative, multi-scale approximation of the signal to symbols using the SAX algorithm. Results obtained by comparing the automatic output against the manual annotations of two observers on 80 in-vivo IVUS sequences show that the method approaches the inter-observer variability scores.
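
A minimal sketch of the SAX step mentioned above, assuming per-frame stent likelihoods as the input signal; the paper's iterative multi-scale scheme is not reproduced here, only the core symbolization.

    import numpy as np
    from scipy.stats import norm

    def sax(signal, n_segments, alphabet_size):
        # z-normalize, then Piecewise Aggregate Approximation (PAA).
        x = (np.asarray(signal, float) - np.mean(signal)) / (np.std(signal) + 1e-8)
        x = x[: len(x) - len(x) % n_segments]            # trim to a multiple
        paa = x.reshape(n_segments, -1).mean(axis=1)
        # Breakpoints splitting N(0, 1) into equiprobable regions.
        breakpoints = norm.ppf(np.linspace(0, 1, alphabet_size + 1)[1:-1])
        return np.digitize(paa, breakpoints)             # one symbol per segment

    # e.g., symbolize the per-frame likelihoods of a pullback, then threshold
    # on the top symbol for a binary stent-presence profile (illustrative rule):
    # symbols = sax(frame_likelihoods, n_segments=100, alphabet_size=3)
    # stent_present = symbols == 2
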
 
  Address Quebec; Canada; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference MICCAIW  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ BCR2017 Serial 2968
 

 
Author Xinhang Song; Luis Herranz; Shuqiang Jiang
  Title Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs Type Conference Article
  Year 2017 Publication 31st AAAI Conference on Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages  
  Keywords RGB-D scene recognition; weakly supervised; fine tune; CNN  
  Abstract Scene recognition with RGB images has been extensively studied and has reached remarkable recognition levels, thanks to convolutional neural networks (CNNs) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so methods often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model to the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D with both depth-only and combined RGB-D data.
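
A rough sketch of the two-stage strategy described in the abstract, under an assumed layer layout and class count; the actual architecture is the authors' modified CNN, not this toy one.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    NUM_CLASSES = 19  # illustrative (e.g., SUN RGB-D scene categories)

    # Bottom layers to be learned from depth data directly.
    features = nn.Sequential(
        nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),  # depth: 1 channel
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    )
    patch_head = nn.Linear(64, NUM_CLASSES)
    image_head = nn.Linear(64, NUM_CLASSES)

    # Stage 1: weak supervision -- each depth patch inherits its source image's
    # scene label, multiplying the samples seen by the bottom layers.
    def stage1_loss(depth_patches, inherited_labels):
        return F.cross_entropy(patch_head(features(depth_patches)), inherited_labels)

    # Stage 2: global fine-tuning on whole depth images with a fresh classifier.
    def stage2_loss(depth_images, labels):
        return F.cross_entropy(image_head(features(depth_images)), labels)
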
  Address San Francisco CA; February 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference AAAI  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ SHJ2017 Serial 2967
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
  Title Combining Models from Multiple Sources for RGB-D Scene Recognition Type Conference Article
  Year 2017 Publication 26th International Joint Conference on Artificial Intelligence Abbreviated Journal  
  Volume Issue Pages 4523-4529  
  Keywords Robotics and Vision; Vision and Perception  
  Abstract Depth can complement RGB with useful cues about object volumes and scene layout. However, RGB-D image datasets are still too small for directly training deep convolutional neural networks (CNNs), in contrast to the massive monomodal RGB datasets. Previous works in RGB-D recognition typically combine two separate networks for RGB and depth data, pretrained with a large RGB dataset and then fine-tuned to the respective target RGB and depth datasets. These approaches have several limitations: 1) they use only low-level filters learned from RGB data, and thus cannot properly exploit depth-specific patterns; and 2) RGB and depth features are combined only at high levels, but rarely at lower levels. In this paper, we propose a framework that leverages both knowledge acquired from large RGB datasets and depth-specific cues learned from the limited depth data, obtaining more effective multi-source and multi-modal representations. We propose a multi-modal combination method that selects discriminative combinations of layers from the different source models and target modalities, capturing both high-level properties of the task and intrinsic low-level properties of both modalities.
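
A hedged sketch of what combining selected layers from multiple source models could look like; the selection criterion itself (the discriminative search over combinations) is omitted, and the helper names and layer indices are hypothetical.

    import torch

    def pooled(act):
        # Collapse a conv activation (N, C, H, W) into a (N, C) vector.
        return act.mean(dim=(2, 3))

    def combined_representation(rgb_acts, depth_acts, selected):
        # rgb_acts / depth_acts: lists of conv activations from the two source
        # networks; selected: chosen (modality, layer) pairs, e.g.
        # [("rgb", 4), ("depth", 2), ("depth", 4)] -- hypothetical indices.
        banks = {"rgb": rgb_acts, "depth": depth_acts}
        return torch.cat([pooled(banks[m][i]) for m, i in selected], dim=1)

    # A classifier trained on this concatenation stands in for the paper's
    # jointly optimized multi-modal combination.
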
  Address Melbourne; Australia; August 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference IJCAI  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ SJH2017b Serial 2966
 

 
Author Luis Herranz; Shuqiang Jiang; Ruihan Xu
  Title Modeling Restaurant Context for Food Recognition Type Journal Article
  Year 2017 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM  
  Volume 19 Issue 2 Pages 430 - 440  
  Keywords  
  Abstract Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such a scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about the menus and locations of restaurants and the locations of test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model to three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost performance in all tasks.
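
One plausible instantiation of a probabilistic model connecting dishes, restaurants, and locations (the paper's exact formulation may differ): treat the visual classifier scores as a likelihood and combine them with a menu-and-geolocation prior.

    import numpy as np

    def dish_posterior(p_visual, menus, p_restaurant):
        # p_visual: (D,) visual scores, treated as P(image | dish)
        # menus: (R, D) binary matrix; menus[r, d] = 1 if restaurant r serves dish d
        # p_restaurant: (R,) P(restaurant | location), e.g. decaying with distance
        p_dish_given_r = menus / menus.sum(axis=1, keepdims=True)  # uniform per menu
        prior = p_restaurant @ p_dish_given_r                      # P(dish | location)
        post = p_visual * prior
        return post / post.sum()
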
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ HJX2017 Serial 2965
 

 
Author Weiqing Min; Shuqiang Jiang; Jitao Sang; Huayang Wang; Xinda Liu; Luis Herranz
  Title Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration Type Journal Article
  Year 2017 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM  
  Volume 19 Issue 5 Pages 1100 - 1113  
  Keywords  
  Abstract This paper considers the problem of recipe-oriented image-ingredient correlation learning with multiple attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn a joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both the mid-level visual representation between images and visible ingredients and a nonvisual representation. Furthermore, in order to utilize the different attributes to improve the inter-modality correlation, M3TDBN incorporates multitask learning to make the different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations in three novel extended applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset, and the evaluation results validate its effectiveness.
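
M3TDBN is a deep belief network, which this sketch does not reproduce; as a functional analogue only, the multitask idea (attributes regularizing a shared image-ingredient representation) can be illustrated with a shared layer and per-attribute heads. All dimensions are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    IN_DIM, NUM_CUISINES, NUM_COURSES = 512, 10, 5  # illustrative sizes

    shared = nn.Linear(IN_DIM, 256)            # joint image-ingredient representation
    cuisine_head = nn.Linear(256, NUM_CUISINES)
    course_head = nn.Linear(256, NUM_COURSES)

    def multitask_loss(x, cuisine_y, course_y):
        h = torch.relu(shared(x))
        # Both attribute tasks pull on the same shared representation.
        return (F.cross_entropy(cuisine_head(h), cuisine_y)
                + F.cross_entropy(course_head(h), course_y))
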
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ MJS2017 Serial 2964
 

 
Author Xinhang Song; Shuqiang Jiang; Luis Herranz
  Title Multi-Scale Multi-Feature Context Modeling for Scene Recognition in the Semantic Manifold Type Journal Article
  Year 2017 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
  Volume 26 Issue 6 Pages 2721-2735  
  Keywords  
  Abstract Before the big data era, scene recognition was often approached with two-step inference using localized intermediate representations (objects, topics, and so on). One such approach is the semantic manifold (SM), in which patches and images are modeled as points in a semantic probability simplex. Patch models are learned by resorting to weak supervision via image labels, which leads to the problem of scene categories co-occurring in this semantic space. Fortunately, each category has its own co-occurrence patterns that are consistent across the images in that category. Thus, discovering and modeling these patterns is critical to improving the recognition performance in this representation. Since the emergence of large datasets such as ImageNet and Places, these approaches have been relegated in favor of the much more powerful convolutional neural networks (CNNs), which can automatically learn multi-layered representations from the data. In this paper, we address many limitations of the original SM approach and related works. We propose discriminative patch representations using neural networks, and further propose a hybrid architecture in which the semantic manifold is built on top of multi-scale CNNs. Both representations can be computed significantly faster than the Gaussian mixture models of the original SM. To combine multiple scales, spatial relations, and multiple features, we formulate rich context models using Markov random fields. To solve the optimization problem, we analyze global and local approaches, and find that a top-down hierarchical algorithm has the best performance. Experimental results show that jointly exploiting different types of contextual relations consistently improves the recognition accuracy.
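
A minimal local inference scheme over patch posteriors, in the spirit of the Markov random field context models mentioned above; iterated conditional modes stands in here for the paper's global and hierarchical solvers, and the pairwise term is a placeholder for the learned co-occurrence potentials.

    import numpy as np

    def icm(unary, pairwise, iters=10):
        # unary: (H, W, K) per-patch class log-scores (e.g. semantic-manifold posteriors)
        # pairwise: (K, K) log-compatibility between neighboring labels
        labels = unary.argmax(axis=2)
        H, W, _ = unary.shape
        for _ in range(iters):
            for i in range(H):
                for j in range(W):
                    score = unary[i, j].copy()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            score += pairwise[:, labels[ni, nj]]
                    labels[i, j] = score.argmax()
        return labels
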
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ SJH2017a Serial 2963
 

 
Author Pau Rodriguez; Guillem Cucurull; Josep M. Gonfaus; Xavier Roca; Jordi Gonzalez
  Title Age and gender recognition in the wild with deep attention Type Journal Article
  Year 2017 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 72 Issue Pages 563-571  
  Keywords Age recognition; Gender recognition; Deep neural networks; Attention mechanisms  
  Abstract Face analysis in images in the wild still poses a challenge for automatic age and gender recognition, mainly due to high variability in resolution, deformation, and occlusion. Although performance has greatly increased thanks to Convolutional Neural Networks (CNNs), it is still far from optimal when compared to other image recognition tasks, mainly because of the high sensitivity of CNNs to facial variations. In this paper, inspired by biology and the recent success of attention mechanisms in visual question answering and fine-grained recognition, we propose a novel feedforward attention mechanism that is able to discover the most informative and reliable parts of a given face for improving age and gender classification. In particular, given a downsampled facial image, the proposed model is trained with a novel end-to-end learning framework to extract the most discriminative patches from the original high-resolution image. Experimental validation on the standard Adience, Images of Groups, and MORPH II benchmarks shows that including attention mechanisms enhances the performance of CNNs in terms of robustness and accuracy.
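
A bare-bones sketch of feedforward attention over candidate patches, under the assumption that patch features are already extracted; the paper's end-to-end patch selection from the high-resolution image is not reproduced.

    import torch
    import torch.nn as nn

    class PatchAttention(nn.Module):
        # Scores candidate patches and pools their features by attention weight.
        def __init__(self, feat_dim):
            super().__init__()
            self.score = nn.Linear(feat_dim, 1)

        def forward(self, patch_feats):  # (n_patches, feat_dim)
            w = torch.softmax(self.score(patch_feats), dim=0)  # weights over patches
            return (w * patch_feats).sum(dim=0)  # attended descriptor for the heads
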
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.098; 602.133; 600.119 Approved no  
  Call Number Admin @ si @ RCG2017b Serial 2962
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Francesca Vidal; Zaida Sarrate
  Title Noves perspectives en l'estudi de la territorialitat cromosòmica de cèl·lules germinals masculines: estudis tridimensionals [New perspectives in the study of chromosome territoriality in male germ cells: three-dimensional studies] Type Journal
  Year 2017 Publication Biologia de la Reproducció Abbreviated Journal JBR  
  Volume 15 Issue Pages 73-78  
  Keywords  
  Abstract In somatic cells, chromosomes occupy specific nuclear regions called chromosome territories, which are involved in the maintenance and regulation of the genome. Preliminary data in male germ cells also suggest the importance of chromosome territoriality for cell functionality. Nevertheless, the specific characteristics of testicular tissue (the presence of different cell types with different morphological characteristics, at different stages of development and with different ploidy) make it difficult to achieve conclusive results. In this study we have developed a methodology for the three-dimensional study of all chromosome territories in male germ cells from C57BL/6J mice (Mus musculus). The method includes the following steps: i) optimized cell fixation to obtain optimal preservation of the three-dimensional cell morphology; ii) chromosome identification by FISH (Chromoprobe Multiprobe® OctoChrome™ Murine System; Cytocell) and confocal microscopy (TCS-SP5, Leica Microsystems); iii) cell type identification by immunofluorescence; iv) image analysis using Matlab scripts; v) extraction of numerical data related to chromosome features, chromosome radial position, and chromosome relative position. This methodology allows the unequivocal identification and analysis of the chromosome territories of all spermatogenic stages. Results will provide information about the features that determine chromosome position, preferred associations between chromosomes, and the relationship between chromosome positioning and genome regulation.
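
A minimal NumPy stand-in for the radial-position measurement in step v (the actual analysis uses Matlab scripts); it assumes binary masks for the nucleus and a chromosome territory and approximates the nucleus radius from its mask.

    import numpy as np

    def radial_position(territory_mask, nucleus_mask):
        # Normalized radial position of a territory centroid: 0 = nucleus center,
        # 1 = periphery. Works on 2-D or 3-D binary masks.
        nucleus_voxels = np.argwhere(nucleus_mask)
        center = nucleus_voxels.mean(axis=0)
        radius = np.linalg.norm(nucleus_voxels - center, axis=1).max()
        centroid = np.argwhere(territory_mask).mean(axis=0)
        return np.linalg.norm(centroid - center) / radius
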
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-697-3767-5 Medium  
  Area Expedition Conference  
  Notes IAM; 600.096; 600.145 Approved no  
  Call Number Admin @ si @ SBG2017c Serial 2961
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Oliver Valero; Francesca Vidal; Zaida Sarrate
  Title Unraveling the enigmas of chromosome territoriality during spermatogenesis Type Conference Article
  Year 2017 Publication IX Jornada del Departament de Biologia Cel·lular, Fisiologia i Immunologia Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address UAB; Barcelona; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; 600.145 Approved no  
  Call Number Admin @ si @ SBG2017b Serial 2959
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Oliver Valero; Francesca Vidal; Zaida Sarrate
  Title Is there a pattern of chromosome territoriality along mice spermatogenesis? Type Conference Article
  Year 2017 Publication 3rd Spanish MeioNet Meeting Abstract Book Abbreviated Journal  
  Volume Issue Pages 55-56  
  Keywords  
  Abstract  
  Address Miraflores de la Sierra; Madrid; June 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference MEIONET  
  Notes IAM; 600.096; 600.145 Approved no  
  Call Number Admin @ si @ Serial 2958
 

 
Author Debora Gil; Oriol Ramos Terrades; Elisa Minchole; Carles Sanchez; Noelia Cubero de Frutos; Marta Diez-Ferrer; Rosa Maria Ortiz; Antoni Rosell
  Title Classification of Confocal Endomicroscopy Patterns for Diagnosis of Lung Cancer Type Conference Article
  Year 2017 Publication 6th Workshop on Clinical Image-based Procedures: Translational Research in Medical Imaging Abbreviated Journal  
  Volume 10550 Issue Pages 151-159  
  Keywords  
  Abstract Confocal Laser Endomicroscopy (CLE) is an emerging imaging technique that allows the in-vivo acquisition of cell patterns of potentially malignant lesions. Such patterns could discriminate between inflammatory and neoplastic lesions and thus serve as a first in-vivo biopsy to discard cases that do not actually require a cell biopsy.

The goal of this work is to explore whether CLE images obtained during videobronchoscopy contain enough visual information to discriminate between benign and malignant peripheral lesions for lung cancer diagnosis. To do so, we have performed a pilot comparative study with 12 patients (6 adenocarcinoma and 6 benign-inflammatory) using 2 different methods for CLE pattern analysis: visual analysis by 3 experts, and a novel methodology that uses graph methods to find patterns in pre-trained feature spaces. Our preliminary results indicate that while visual analysis only achieves 60.2% accuracy, the accuracy of the proposed unsupervised image pattern classification rises to 84.6%.

We conclude that the visual information in CLE images allows in-vivo detection of neoplastic lesions, and that graph-based structural analysis applied to deep-learning feature spaces can achieve competitive results.
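
The abstract does not specify the graph construction, so the following is only a generic sketch: a k-nearest-neighbor graph over deep feature vectors, a common starting point for unsupervised structural analysis of a feature space.

    import numpy as np

    def knn_graph(feats, k=5):
        # feats: (n, d) deep feature vectors; returns a symmetric boolean adjacency.
        d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
        np.fill_diagonal(d, np.inf)                  # no self-edges
        nbrs = np.argsort(d, axis=1)[:, :k]
        adj = np.zeros(d.shape, dtype=bool)
        rows = np.arange(len(feats))[:, None]
        adj[rows, nbrs] = True
        return adj | adj.T                           # symmetrize
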
 
  Address Quebec; Canada; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CLIP  
  Notes IAM; 600.096; 600.075; 600.145 Approved no  
  Call Number Admin @ si @ GRM2017 Serial 2957
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
  Title Spotting Symbol over Graphical Documents Via Sparsity in Visual Vocabulary Type Book Chapter
  Year 2016 Publication Recent Trends in Image Processing and Pattern Recognition Abbreviated Journal  
  Volume 709 Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference RTIP2R  
  Notes DAG Approved no  
  Call Number Admin @ si @ HTR2016 Serial 2956
 

 
Author Pau Riba; Alicia Fornes; Josep Llados
  Title Towards the Alignment of Handwritten Music Scores Type Book Chapter
  Year 2017 Publication International Workshop on Graphics Recognition. GREC 2015. Graphic Recognition. Current Trends and Challenges Abbreviated Journal  
  Volume 9657 Issue Pages 103-116  
  Keywords Optical Music Recognition; Handwritten Music Scores; Dynamic Time Warping alignment  
  Abstract It is very common to find different versions of the same music work in the archives of Opera Theaters. These differences correspond to modifications and annotations from the musicians. From the musicologist's point of view, these variations are very interesting and deserve study. This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulties in the recognition of handwritten music scores, our goal is to align the music scores while avoiding the recognition of music elements as much as possible. After removing the staff lines, braces, and ties, the bar lines are detected. Then, the bar units are described as a whole using the Blurred Shape Model. The bar-unit alignment is performed using Dynamic Time Warping. The analysis of the alignment path is used to detect the variations in the music scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results.
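
A compact sketch of the Dynamic Time Warping step, assuming a precomputed bar-to-bar distance matrix (e.g., from Blurred Shape Model descriptors); backtracking of the warping path is omitted.

    import numpy as np

    def dtw(cost):
        # cost[i, j]: distance between bar descriptor i of score A and j of score B.
        # Returns the accumulated-cost matrix; high values along the optimal
        # warping path flag diverging passages between the two scores.
        n, m = cost.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(
                    acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
        return acc[1:, 1:]
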
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor Bart Lamiroy; R Dueire Lins  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-319-52158-9 Medium  
  Area Expedition Conference  
  Notes DAG; 600.097; 602.006; 600.121 Approved no  
  Call Number Admin @ si @ RFL2017 Serial 2955
 

 
Author Lasse Martensson; Anders Hast; Alicia Fornes
  Title Word Spotting as a Tool for Scribal Attribution Type Conference Article
  Year 2017 Publication 2nd Conference of the Association of Digital Humanities in the Nordic Countries Abbreviated Journal  
  Volume Issue Pages 87-89  
  Keywords  
  Abstract  
  Address Gothenburg; Suecia; March 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-91-88348-83-8 Medium  
  Area Expedition Conference DHN  
  Notes DAG; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ MHF2017 Serial 2954