Author Patricia Suarez; Angel Sappa; Boris X. Vintimilla; Riad I. Hammoud
  Title Image Vegetation Index through a Cycle Generative Adversarial Network Type Conference Article
  Year 2019 Publication IEEE International Conference on Computer Vision and Pattern Recognition-Workshops Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This paper proposes a novel approach to estimate the Normalized Difference Vegetation Index (NDVI) from an RGB image alone. The NDVI values are obtained by combining the visible spectral bands with a synthetic near-infrared (NIR) image produced by a cycle GAN. The cycle GAN obtains an NIR image from a given gray-scale image; it is trained on an unpaired set of gray-scale and NIR images using a U-Net architecture and multiple loss functions (the gray-scale images are derived from the provided RGB images). The NIR image estimated by the proposed cycle generative adversarial network is then used to compute the NDVI. Experimental results show the validity of the proposed approach, and comparisons with previous approaches are also provided.  
  Address Long Beach; California; USA; June 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes MSIAU; 600.130; 601.349; 600.122 Approved no  
  Call Number Admin @ si @ SSV2019 Serial 3272  
Permanent link to this record
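
A minimal illustration of the NDVI computation this record describes, assuming a trained cycle-GAN generator is available to synthesise the NIR channel; estimate_nir below is a hypothetical placeholder for that generator, and the grey-scale conversion is a simplification.

    import numpy as np

    def ndvi_from_rgb(rgb, estimate_nir, eps=1e-6):
        # rgb: float array in [0, 1], shape (H, W, 3)
        # estimate_nir: callable mapping a (H, W) grey-scale image to a (H, W) NIR image
        #               (stands in for the trained cycle-GAN generator)
        gray = rgb.mean(axis=2)                 # simplified grey-scale conversion
        nir = estimate_nir(gray)                # synthetic near-infrared channel
        red = rgb[..., 0]
        return (nir - red) / (nir + red + eps)  # standard NDVI definition

NDVI values fall in [-1, 1]; larger values indicate denser vegetation.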
 

 
Author Ali Furkan Biten; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas
  Title Good News, Everyone! Context driven entity-aware captioning for news images Type Conference Article
  Year 2019 Publication 32nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 12458-12467  
  Keywords  
  Abstract Current image captioning systems perform at a merely descriptive level, essentially enumerating the objects in the scene and their relations. Humans, on the contrary, interpret images by integrating several sources of prior knowledge of the world. In this work, we aim to take a step closer to producing captions that offer a plausible interpretation of the scene, by integrating such contextual information into the captioning pipeline. For this, we focus on the captioning of images used to illustrate news articles. We propose a novel captioning method that is able to leverage contextual information provided by the text of news articles associated with an image. Our model is able to selectively draw information from the article guided by visual cues, and to dynamically extend the output dictionary to out-of-vocabulary named entities that appear in the context source. Furthermore, we introduce “GoodNews”, the largest news image captioning dataset in the literature, and demonstrate state-of-the-art results.  
  Address Long Beach; California; USA; June 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes DAG; 600.129; 600.135; 601.338; 600.121 Approved no  
  Call Number Admin @ si @ BGR2019 Serial 3289  
Permanent link to this record
 

 
Author Lu Yu; Vacit Oguz Yazici; Xialei Liu; Joost Van de Weijer; Yongmei Cheng; Arnau Ramisa
  Title Learning Metrics from Teachers: Compact Networks for Image Embedding Type Conference Article
  Year 2019 Publication 32nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 2907-2916  
  Keywords  
  Abstract Metric learning networks are used to compute image embeddings, which are widely used in many applications such as image retrieval and face recognition. In this paper, we propose to use network distillation to efficiently compute image embeddings with small networks. Network distillation has been successfully applied to improve image classification, but has hardly been explored for metric learning. To do so, we propose two new loss functions that model the communication of a deep teacher network to a small student network. We evaluate our system on several datasets, including CUB-200-2011, Cars-196 and Stanford Online Products, and show that embeddings computed using small student networks perform significantly better than those computed using standard networks of similar size. Results on a very compact network (MobileNet-0.25), which can be used on mobile devices, show that the proposed method can greatly improve Recall@1 results, from 27.5% to 44.6%. Furthermore, we investigate various aspects of distillation for embeddings, including hint and attention layers, semi-supervised learning and cross-quality distillation.
 
  Address Long Beach; California; June 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes LAMP; 600.109; 600.120 Approved no  
  Call Number Admin @ si @ YYL2019 Serial 3281  
Permanent link to this record
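
A sketch of the kind of teacher-to-student embedding distillation this record discusses: the student is trained to reproduce the L2-normalised embedding of a frozen teacher. This is a generic "absolute" distillation loss written for illustration only, not necessarily either of the two loss functions proposed in the paper.

    import torch
    import torch.nn.functional as F

    def embedding_distillation_loss(student_emb, teacher_emb):
        # student_emb, teacher_emb: (batch, dim) image embeddings
        s = F.normalize(student_emb, dim=1)
        t = F.normalize(teacher_emb, dim=1).detach()  # teacher is frozen
        return F.mse_loss(s, t)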
 

 
Author Sounak Dey; Pau Riba; Anjan Dutta; Josep Llados; Yi-Zhe Song
  Title Doodle to Search: Practical Zero-Shot Sketch-Based Image Retrieval Type Conference Article
  Year 2019 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 2179-2188  
  Keywords  
  Abstract In this paper, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to conduct retrieval of photos from unseen categories. We advance prior art by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR: (i) the large domain gap between amateur sketch and photo, and (ii) the necessity of moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, that consists of 330,000 sketches and 204,000 photos spanning 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of ones included in existing datasets that can often be semi-photorealistic. We then formulate a ZS-SBIR framework to jointly model sketches and photos into a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, retrieval performance that significantly outperforms the state of the art on existing datasets can already be achieved using a reduced version of our model. We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset. The new dataset, plus all training and testing code of our model, will be publicly released to facilitate future research.  
  Address Long Beach; CA; USA; June 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes DAG; 600.140; 600.121; 600.097 Approved no  
  Call Number Admin @ si @ DRD2019 Serial 3462  
Permanent link to this record
 

 
Author Petia Radeva; A.F. Sole; Antonio Lopez; Joan Serrat
  Title Detecting Nets of Linear Structures in Satellite Images. Type Miscellaneous
  Year 1998 Publication Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address London  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS;MILAB Approved no  
  Call Number ADAS @ adas @ RSL1998 Serial 25  
Permanent link to this record
 

 
Author Daniel Hernandez; Lukas Schneider; Antonio Espinosa; David Vazquez; Antonio Lopez; Uwe Franke; Marc Pollefeys; Juan C. Moure
  Title Slanted Stixels: Representing San Francisco's Steepest Streets Type Conference Article  
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In this work we present a novel compact scene representation based on Stixels that infers geometric and semantic information. Our approach overcomes the previous rather restrictive geometric assumptions for Stixels by introducing a novel depth model to account for non-flat roads and slanted objects. Both semantic and depth cues are used jointly to infer the scene representation in a sound global energy minimization formulation. Furthermore, a novel approximation scheme is introduced that uses an extremely efficient over-segmentation. In doing so, the computational complexity of the Stixel inference algorithm is reduced significantly, achieving real-time computation capabilities with only a slight drop in accuracy. We evaluate the proposed approach in terms of semantic and geometric accuracy as well as run-time on four publicly available benchmark datasets. Our approach maintains accuracy on flat road scene datasets while improving substantially on a novel non-flat road dataset.  
  Address London; UK; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes ADAS; 600.118 Approved no  
  Call Number ADAS @ adas @ HSE2017a Serial 2945  
Permanent link to this record
 

 
Author Fernando Vilariño; Dan Norton
  Title Using multimedia tools to spread poetry collections Type Conference Article  
  Year 2017 Publication Internet Librarian International Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address London; UK; October 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ILI  
  Notes MV; 600.097;SIAI Approved no  
  Call Number Admin @ si @ ViN2017 Serial 3031  
Permanent link to this record
 

 
Author Dan Norton; Fernando Vilariño; Onur Ferhat
  Title Memory Field – Creative Engagement in Digital Collections Type Conference Article
  Year 2015 Publication Internet Librarian International Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract “Memory Fields” is a trans-disciplinary project aiming at the (re)valorisation of digital collections. Its main deliverable is an interface for a dual-screen installation, used to access and mix the public library's digital collections. The collections used in this case are a collection of digitised posters from the Spanish Civil War, belonging to the Arxiu General de Catalunya, and a collection of field recordings made by Dan Norton. The system generates visualisations, and the images and sounds are mixed together using the narrative primitives of video DJing. Users contribute to the digital collections by adding personal memories and observations. The comments and recollections appear as flowers growing in a “memory field”, and memories remain public in a Twitter feed (@Memoryfields).  
  Address London; UK; October 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ILI  
  Notes MV;SIAI Approved no  
  Call Number Admin @ si @ NVF2015 Serial 2796  
Permanent link to this record
 

 
Author Kai Wang; Fei Yang; Joost Van de Weijer
  Title Attention Distillation: self-supervised vision transformer students need more guidance Type Conference Article
  Year 2022 Publication 33rd British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Self-supervised learning has been widely applied to train high-quality vision transformers. Unleashing their excellent performance on memory- and compute-constrained devices is therefore an important research topic. However, how to distill knowledge from one self-supervised ViT to another has not yet been explored. Moreover, the existing self-supervised knowledge distillation (SSKD) methods, which focus on ConvNet-based architectures, are suboptimal for ViT knowledge distillation. In this paper, we study knowledge distillation of self-supervised vision transformers (ViT-SSKD). We show that directly distilling information from the crucial attention mechanism from teacher to student can significantly narrow the performance gap between the two. In experiments on ImageNet-Subset and ImageNet-1K, we show that our method AttnDistill outperforms existing self-supervised knowledge distillation (SSKD) methods and achieves state-of-the-art k-NN accuracy compared with self-supervised learning (SSL) methods learning from scratch (with the ViT-S model). We are also the first to apply the tiny ViT-T model to self-supervised learning. Moreover, AttnDistill is independent of the self-supervised learning algorithm, so it can be adapted to ViT-based SSL methods to improve performance in future research.  
  Address London; UK; November 2022  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes LAMP; 600.147 Approved no  
  Call Number Admin @ si @ WYW2022 Serial 3793  
Permanent link to this record
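
A hedged sketch of the central idea in this record: align the student ViT's self-attention maps with the teacher's. The layer choice, token selection and loss weighting used by AttnDistill may differ; only the shape of such a loss is illustrated, and head counts are reconciled by averaging over heads (an assumption).

    import torch
    import torch.nn.functional as F

    def attention_distillation_loss(student_attn, teacher_attn):
        # Both inputs: (batch, heads, tokens, tokens) attention probabilities.
        s = student_attn.mean(dim=1)            # average over heads
        t = teacher_attn.mean(dim=1).detach()   # teacher attention is a fixed target
        return F.mse_loss(s, t)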
 

 
Author Kai Wang; Chenshen Wu; Andrew Bagdanov; Xialei Liu; Shiqi Yang; Shangling Jui; Joost Van de Weijer
  Title Positive Pair Distillation Considered Harmful: Continual Meta Metric Learning for Lifelong Object Re-Identification Type Conference Article
  Year 2022 Publication 33rd British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Lifelong object re-identification incrementally learns from a stream of re-identification tasks. The objective is to learn a representation that can be applied to all tasks and that generalizes to previously unseen re-identification tasks. The main challenge is that at inference time the representation must generalize to previously unseen identities. To address this problem, we apply continual meta metric learning to lifelong object re-identification. To prevent forgetting of previous tasks, we use knowledge distillation and explore the roles of positive and negative pairs. Based on our observation that the distillation and metric losses are antagonistic, we propose to remove positive pairs from distillation to robustify model updates. Our method, called Distillation without Positive Pairs (DwoPP), is evaluated on extensive intra-domain experiments on person and vehicle re-identification datasets, as well as inter-domain experiments on the LReID benchmark. Our experiments demonstrate that DwoPP significantly outperforms the state-of-the-art.  
  Address London; UK; November 2022  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes LAMP; 600.147 Approved no  
  Call Number Admin @ si @ WWB2022 Serial 3794  
Permanent link to this record
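
A hedged sketch of "distillation without positive pairs" as described above: a relational distillation over pairwise embedding distances in which pairs that share an identity label (including the diagonal) are excluded. The distance measure and weighting here are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def dwopp_style_loss(student_emb, teacher_emb, labels):
        # student_emb, teacher_emb: (batch, dim); labels: (batch,) identity labels
        ds = torch.cdist(student_emb, student_emb)            # student pairwise distances
        dt = torch.cdist(teacher_emb, teacher_emb).detach()   # teacher pairwise distances
        positive = labels.unsqueeze(0) == labels.unsqueeze(1) # same-identity pairs (incl. diagonal)
        keep = ~positive                                      # distil only over negative pairs
        return F.mse_loss(ds[keep], dt[keep])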
 

 
Author Arash Akbarinia; Raquel Gil Rodriguez; C. Alejandro Parraga
  Title Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism Type Conference Article
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Pooling is a ubiquitous operation in image processing algorithms that allows higher-level processes to collect relevant low-level features from a region of interest. Currently, max-pooling is one of the most commonly used operators in the computational literature. However, it can lack robustness to outliers because it relies merely on the peak of a function. Pooling mechanisms are also present in the primate visual cortex, where neurons of higher cortical areas pool signals from lower ones. The receptive fields of these neurons have been shown to vary with contrast, aggregating signals over a larger region in the presence of low-contrast stimuli. We hypothesise that this contrast-variant pooling mechanism can address some of the shortcomings of max-pooling. We modelled this contrast variation through a histogram clipping in which the percentage of pooled signal is inversely proportional to the local contrast of an image. We tested our hypothesis by applying it to the phenomenon of colour constancy, where a number of popular algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and Double-Opponency). For each of these methods, we investigated the consequences of replacing the original max-pooling with the proposed contrast-variant pooling. Our experiments on three colour constancy benchmark datasets suggest that previous results can be significantly improved by adopting a contrast-variant pooling mechanism.  
  Address London; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes NEUROBIT; 600.068; 600.072 Approved no  
  Call Number Admin @ si @ AGP2017 Serial 2992  
Permanent link to this record
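
A sketch of contrast-variant pooling applied to a White-Patch style illuminant estimate, in the spirit of this record: instead of taking the per-channel maximum, the top p% of pixel values are averaged, with p growing as image contrast drops. The contrast measure and the mapping from contrast to p below are assumptions; the paper's histogram-clipping formulation may differ.

    import numpy as np

    def contrast_variant_white_patch(img, p_min=0.01, p_max=0.20):
        # img: float array (H, W, 3) in [0, 1]; returns a normalised RGB illuminant estimate.
        contrast = img.mean(axis=2).std()                             # crude global contrast proxy
        p = float(np.clip(p_max * (1.0 - contrast), p_min, p_max))    # low contrast -> pool more pixels
        k = max(1, int(p * img.shape[0] * img.shape[1]))
        illum = np.array([np.sort(img[..., c].ravel())[-k:].mean()    # mean of the top p% per channel
                          for c in range(3)])
        return illum / (np.linalg.norm(illum) + 1e-8)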
 

 
Author Rada Deeb; Damien Muselet; Mathieu Hebert; Alain Tremeau; Joost Van de Weijer
  Title 3D color charts for camera spectral sensitivity estimation Type Conference Article
  Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Estimating spectral data such as camera sensor responses or illuminant spectral power distributions from raw RGB camera outputs is crucial in many computer vision applications. Usually, 2D color charts with various patches of known spectral reflectance are used as reference for this purpose. Deducing n-D spectral data (n ≫ 3) from 3D RGB inputs is an ill-posed problem that requires a large number of inputs. Unfortunately, most natural color surfaces have spectral reflectances that are well described by low-dimensional linear models, i.e. each spectral reflectance can be approximated by a weighted sum of the others. It has been shown that adding patches to color charts does not help in practice, because the information they add is redundant with the information provided by the first set of patches. In this paper, we propose to use spectral data of higher dimensionality by using 3D color charts that create inter-reflections between the surfaces. These inter-reflections produce multiplications between natural spectral curves and so provide non-linear spectral curves. We show that such data provide enough information for accurate spectral data estimation.
 
  Address London; September 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference BMVC  
  Notes LAMP; 600.109; 600.120 Approved no  
  Call Number Admin @ si @ DMH2017b Serial 3037  
Permanent link to this record
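
A sketch of the estimation problem underlying this record: recovering camera spectral sensitivities from raw RGB responses of patches with known reflectance under a known illuminant, via ridge-regularised least squares. The paper's contribution, 3D charts whose inter-reflections enrich the observed spectra, only changes the reflectance data fed to such a solver; the regularisation choice below is an assumption.

    import numpy as np

    def estimate_sensitivities(rgb, reflectance, illuminant, lam=1e-3):
        # rgb:         (n_patches, 3) raw camera responses
        # reflectance: (n_patches, n_wavelengths) known patch reflectances
        # illuminant:  (n_wavelengths,) spectral power distribution
        # returns:     (n_wavelengths, 3) estimated sensitivity curves
        A = reflectance * illuminant[None, :]   # effective radiance reaching the sensor
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ rgb)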
 

 
Author Wenjuan Gong; Jürgen Brauer; Michael Arens; Jordi Gonzalez
  Title Modeling vs. Learning Approaches for Monocular 3D Human Pose Estimation Type Conference Article
  Year 2011 Publication 1st IEEE International Workshop on Performance Evaluation on Recognition of Human Actions and Pose Estimation Methods Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address London, United Kingdom  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference PERHAPS  
  Notes ISE Approved no  
  Call Number Admin @ si @ GBA2011 Serial 1812  
Permanent link to this record
 

 
Author Francesco Ciompi; Oriol Pujol; E Fernandez-Nofrerias; J. Mauri; Petia Radeva
  Title ECOC Random Fields for Lumen Segmentation in Radial Artery IVUS Sequences Type Conference Article
  Year 2009 Publication 12th International Conference on Medical Image and Computer Assisted Intervention Abbreviated Journal  
  Volume 5762 Issue II Pages  
  Keywords  
  Abstract The measure of lumen volume on radial arteries can be used to evaluate the vessel response to different vasodilators. In this paper, we present a framework for automatic lumen segmentation in longitudinal-cut images of the radial artery from intravascular ultrasound sequences. The segmentation is tackled as a classification problem where the contextual information is exploited by means of Conditional Random Fields (CRFs). A multi-class classification framework is proposed, and inference is achieved by combining binary CRFs according to the Error-Correcting Output Codes (ECOC) technique. The results are validated against manually segmented sequences. Finally, the method is compared with other state-of-the-art classifiers.  
  Address London, UK  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-04270-6 Medium  
  Area Expedition Conference MICCAI  
  Notes MILAB;HuPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ CPF2009 Serial 1228  
Permanent link to this record
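
A minimal sketch of the Error-Correcting Output Codes (ECOC) decoding step this record relies on to combine several binary classifiers (binary CRFs in the paper) into a multi-class decision; the binary classifiers themselves are abstracted into a score matrix here.

    import numpy as np

    def ecoc_decode(binary_scores, coding_matrix):
        # binary_scores: (n_samples, n_classifiers) outputs in [-1, 1]
        # coding_matrix: (n_classes, n_classifiers) with entries in {-1, +1}
        # returns:       (n_samples,) predicted class indices
        dist = np.abs(binary_scores[:, None, :] - coding_matrix[None, :, :]).sum(axis=2)
        return dist.argmin(axis=1)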
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title Video alignment for automotive applications Type Miscellaneous
  Year 2009 Publication BMVA one-day technical meeting on vision for automotive applications Abbreviated Journal  
  Volume Issue Pages  
  Keywords video alignment  
  Abstract  
  Address London, UK  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ DPS2009 Serial 1271  
Permanent link to this record