Author Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin
Title Towards Automatic Concept Transfer Type Conference Article
  Year 2011 Publication Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering Abbreviated Journal  
Volume Issue Pages 167-176
  Keywords chromatic modeling, color concepts, color transfer, concept transfer  
  Abstract This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.  
  Address  
  Corporate Author Thesis  
  Publisher ACM Press Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4503-0907-3 Medium  
  Area Expedition Conference NPAR  
  Notes CIC Approved no  
  Call Number Admin @ si @ MSM2011 Serial 1866  
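The mapping step described in the abstract above lends itself to a compact illustration. Below is a minimal sketch, assuming cluster centroids and weights have already been extracted from the input image and the target concept (stand-ins for the paper's convex-clustering output), of an Earth Mover's Distance mapping between two chromatic models using the POT library (pip install POT); the function and toy numbers are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: mapping an input image's chromatic model to a concept
# model with the Earth Mover's Distance, in the spirit of the abstract above.
import numpy as np
import ot  # Python Optimal Transport (POT)

def emd_color_mapping(src_centers, src_weights, tgt_centers, tgt_weights):
    """Return an EMD transport plan and per-source-cluster target colors."""
    cost = ot.dist(src_centers, tgt_centers)       # pairwise squared distances
    plan = ot.emd(src_weights, tgt_weights, cost)  # exact optimal transport plan
    # Barycentric projection: each source cluster maps to the weighted mean
    # of the target clusters it ships mass to.
    mapped = (plan @ tgt_centers) / plan.sum(axis=1, keepdims=True)
    return plan, mapped

# Toy example in a 2-D chromaticity space (e.g. the a*b* plane of CIELAB).
src_c = np.array([[10., 20.], [40., -5.], [-15., 30.]])
src_w = np.array([0.5, 0.3, 0.2])
tgt_c = np.array([[25., 25.], [-10., 0.]])
tgt_w = np.array([0.6, 0.4])
plan, mapped = emd_color_mapping(src_c, src_w, tgt_c, tgt_w)
print(mapped)  # target chromaticity for each input cluster

# A transfer-strength parameter t in [0, 1], like the paper's single user-set
# intensity level, could then blend: new = (1 - t) * src_c + t * mapped
```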
 

 
Author Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin
Title Towards automatic and flexible concept transfer Type Journal Article
  Year 2012 Publication Computers & Graphics Abbreviated Journal CG  
  Volume 36 Issue 6 Pages 622–634  
  Keywords  
  Abstract This paper introduces a novel approach to automatic, yet flexible, image concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The presented method modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This method is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. Our framework is flexible for two reasons. First, the user may select one of two modalities to map input image chromaticities to target concept chromaticities depending on the level of photo-realism required. Second, the user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed method uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0097-8493 ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number Admin @ si @ MSM2012 Serial 2002  
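Since this journal version again relies on convex clustering with automatically set model complexity, here is a minimal, smoothed sketch of convex ("sum-of-norms") clustering under stated assumptions. The paper's novel pruning mechanism is not reproduced; the gradient-style solver below only illustrates how a single fusion weight `lam` replaces a preset cluster count.

```python
# Illustrative convex clustering: minimize
#   0.5 * sum_i ||u_i - x_i||^2 + lam * sum_{i,j} ||u_i - u_j||
# with a small smoothing term eps so the objective is differentiable.
import numpy as np

def convex_cluster(X, lam, eps=1e-2, iters=4000):
    n = len(X)
    U = X.copy()                                   # one centroid per sample
    lr = 1.0 / (1.0 + lam * n / np.sqrt(eps))      # step size kept stable
    for _ in range(iters):
        diff = U[:, None, :] - U[None, :, :]       # pairwise centroid gaps
        dist = np.sqrt((diff ** 2).sum(-1) + eps)  # smoothed L2 norms
        grad = (U - X) + lam * (diff / dist[..., None]).sum(axis=1)
        U -= lr * grad
    return U

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(4, 0.3, (20, 2))])
for lam in (0.0, 0.05, 1.0):
    U = convex_cluster(X, lam)
    distinct = len(np.unique(np.round(U, 1), axis=0))
    print(lam, distinct)   # distinct (rounded) centroids shrink as lam grows
```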
 

 
Author Zhengying Liu; Zhen Xu; Shangeth Rajaa; Meysam Madadi; Julio C. S. Jacques Junior; Sergio Escalera; Adrien Pavao; Sebastien Treguer; Wei-Wei Tu; Isabelle Guyon
Title Towards Automated Deep Learning: Analysis of the AutoDL challenge series 2019 Type Conference Article
  Year 2020 Publication Proceedings of Machine Learning Research Abbreviated Journal  
  Volume 123 Issue Pages 242-252  
  Keywords  
  Abstract We present the design and results of recent competitions in Automated Deep Learning (AutoDL). In the AutoDL challenge series 2019, we organized 5 machine learning challenges: AutoCV, AutoCV2, AutoNLP, AutoSpeech and AutoDL. Each of the first 4 challenges concerns a specific application domain, such as computer vision, natural language processing and speech recognition. As of March 2020, the last challenge, AutoDL, was still ongoing, so we present only its design. Some highlights of this work include: (1) a benchmark suite of baseline AutoML solutions, with emphasis on domains for which Deep Learning methods have had prior success (image, video, text, speech, etc); (2) a novel any-time learning framework, which opens doors for further theoretical consideration; (3) a repository of around 100 datasets (from all the above domains), over half of which are released as public datasets to enable research on meta-learning; (4) analyses revealing that winning solutions generalize to new unseen datasets, validating progress towards a universal AutoML solution; (5) open-sourcing of the challenge platform, the starting kit, the dataset formatting toolkit, and all winning solutions (all information is available at autodl.chalearn.org).  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference NEURIPS  
  Notes HUPBA Approved no  
  Call Number Admin @ si @ LXR2020 Serial 3500  
 

 
Author Zhengying Liu; Zhen Xu; Sergio Escalera; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Adrien Pavao; Sebastien Treguer; Wei-Wei Tu
Title Towards automated computer vision: analysis of the AutoCV challenges 2019 Type Journal Article
  Year 2020 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 135 Issue Pages 196-203  
  Keywords Computer vision; AutoML; Deep learning  
  Abstract We present the results of recent challenges in Automated Computer Vision (AutoCV, renamed here for clarity AutoCV1 and AutoCV2, 2019), which are part of a series of challenges on Automated Deep Learning (AutoDL). These two competitions aim at searching for fully automated solutions for classification tasks in computer vision, with an emphasis on any-time performance. The first competition was limited to image classification while the second one included both images and videos. Our design required participants to submit their code to a challenge platform for blind testing on five datasets, used both for training and testing, without any human intervention whatsoever. Winning solutions adopted deep learning techniques based on already published architectures, such as AutoAugment, MobileNet and ResNet, to reach state-of-the-art performance in the time budget of the challenge (only 20 minutes of GPU time). The novel contributions include strategies to deliver good preliminary results at any time during the learning process, such that a method can be stopped early and still deliver good performance. This feature is key for the adoption of such techniques by data analysts desiring to obtain rapid preliminary results on large datasets and to speed up the development process. The soundness of our design was verified in several respects: (1) little overfitting to the on-line leaderboard providing feedback on 5 development datasets was observed, compared to the final blind testing on the 5 (separate) final test datasets, suggesting that winning solutions might generalize to other computer vision classification tasks; (2) error bars on the winners’ performance allow us to say with confidence that they performed significantly better than the baseline solutions we provided; (3) the ranking of participants according to the any-time metric we designed, namely the Area under the Learning Curve, was different from that of the fixed-time metric, i.e. AUC at the end of the fixed time budget. We released all winning solutions under open-source licenses. At the end of the AutoDL challenge series, all challenge data will be made publicly available, thus providing a collection of uniformly formatted datasets which can serve to conduct further research, particularly on meta-learning.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ LXE2020 Serial 3427  
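The "Area under the Learning Curve" metric contrasted with fixed-time AUC in the abstract above can be made concrete with a small sketch. The challenge's exact definition used a transformed time axis, which is not reproduced here; this is plain step-wise integration of the held score over a normalized time budget, with illustrative numbers.

```python
# Sketch of an any-time "Area under the Learning Curve" (ALC) metric:
# a solution's latest score is held constant until its next checkpoint,
# so early good scores are rewarded over identical-but-late ones.
def area_under_learning_curve(events, budget):
    """events: ascending list of (t_seconds, score); scores in [0, 1]."""
    area, prev_t, prev_s = 0.0, 0.0, 0.0   # score counts as 0 before 1st checkpoint
    for t, s in events:
        t = min(t, budget)
        area += prev_s * (t - prev_t)      # hold score since last checkpoint
        prev_t, prev_s = t, s
    area += prev_s * (budget - prev_t)     # hold final score to end of budget
    return area / budget                   # normalize by the time budget

# An early good score beats the same final score reached late:
fast = [(60, 0.70), (300, 0.75)]
slow = [(900, 0.75)]
print(area_under_learning_curve(fast, 1200))  # ~0.70
print(area_under_learning_curve(slow, 1200))  # ~0.19
```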
 

 
Author Carles Sanchez; Antonio Esteban Lansaque; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil
Title Towards a Videobronchoscopy Localization System from Airway Centre Tracking Type Conference Article
  Year 2017 Publication 12th International Conference on Computer Vision Theory and Applications Abbreviated Journal  
  Volume Issue Pages 352-359  
  Keywords Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation  
  Abstract Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an X-ray-based imaging technique, subjects exposed to it face an increased risk of developmental problems and cancer, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system based on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations.  
  Address Porto; Portugal; February 2017  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference VISAPP  
  Notes IAM; 600.096; 600.075; 600.145 Approved no  
  Call Number Admin @ si @ SEB2017 Serial 2943  
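The data-association step in the abstract above, matching tracked airway lumen centres to a pre-planned CT centreline, can be illustrated with a nearest-neighbour query. This k-d tree sketch and its synthetic centreline are assumptions, not the paper's actual matching procedure.

```python
# Hedged sketch: associate tracked airway lumen centres with the nearest
# points of a CT airway centreline, to locate the scope along the path.
import numpy as np
from scipy.spatial import cKDTree

def match_to_centreline(lumen_centres, centreline_pts):
    """Return (distances, indices) of the nearest centreline point per detection."""
    tree = cKDTree(centreline_pts)
    return tree.query(lumen_centres)

# Toy 3-D centreline (a gentle curve) and a few tracked centres near it.
t = np.linspace(0, 1, 200)
centreline = np.stack([t * 100, 5 * np.sin(4 * t), t * 20], axis=1)
detections = centreline[[20, 90, 150]] + np.random.default_rng(1).normal(0, .5, (3, 3))
d, idx = match_to_centreline(detections, centreline)
print(idx, d)   # indices locate the scope along the pre-planned path
```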
 

 
Author Xavier Otazu; C. Alejandro Parraga; Maria Vanrell
Title Towards a unified chromatic induction model Type Journal Article
  Year 2010 Publication Journal of Vision Abbreviated Journal VSS  
  Volume 10 Issue 12:5 Pages 1-24  
  Keywords Visual system; Color induction; Wavelet transform  
  Abstract In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ OPV2010 Serial 1450  
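CIWaM's multiresolution machinery can be sketched structurally: decompose a chromatic channel with a wavelet transform, reweight each scale, and reconstruct. The per-scale weights in the paper depend on spatial frequency and the surround contrast energy of the stimulus; the fixed weights below are placeholders, and PyWavelets stands in for the authors' transform.

```python
# Structural sketch of a multiresolution induction model: per-scale
# reweighting of wavelet detail subbands of one chromatic channel.
import numpy as np
import pywt

def reweight_channel(chan, weights, wavelet="haar", levels=4):
    coeffs = pywt.wavedec2(chan, wavelet, level=levels)
    out = [coeffs[0]]                              # keep the approximation band
    for lvl, (cH, cV, cD) in enumerate(coeffs[1:]):
        w = weights[lvl]                           # per-scale induction weight
        out.append((cH * w, cV * w, cD * w))
    return pywt.waverec2(out, wavelet)

chan = np.random.default_rng(0).random((128, 128))  # stand-in for an lsY channel
induced = reweight_channel(chan, weights=[0.6, 0.8, 1.1, 1.3])
print(induced.shape)
```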
 

 
Author Anna Salvatella; Maria Vanrell
Title Towards a texture representation database Type Report
  Year 2002 Publication CVC Technical Report #60 Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address CVC (UAB)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ SaV2002 Serial 526  
 

 
Author Javier Vazquez; Maria Vanrell; Ramon Baldrich
Title Towards a Psychophysical Evaluation of Colour Constancy Algorithms Type Conference Article
  Year 2008 Publication 4th European Conference on Colour in Graphics, Imaging and Vision Proceedings Abbreviated Journal  
  Volume Issue Pages 372–377  
  Keywords  
  Abstract  
  Address Terrassa (Spain)  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CGIV08  
  Notes CAT;CIC Approved no  
  Call Number CAT @ cat @ VVB2008a Serial 968  
 

 
Author Justine Giroux; Mohammad Reza Karimi Dastjerdi; Yannick Hold-Geoffroy; Javier Vazquez; Jean François Lalonde
Title Towards a Perceptual Evaluation Framework for Lighting Estimation Type Conference Article
  Year 2024 Publication arXiv Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Progress in lighting estimation is tracked by computing existing image quality assessment (IQA) metrics on images from standard datasets. While this may appear to be a reasonable approach, we demonstrate that doing so does not correlate with human preference when the estimated lighting is used to relight a virtual scene into a real photograph. To study this, we design a controlled psychophysical experiment where human observers must choose their preference amongst rendered scenes lit using a set of lighting estimation algorithms selected from the recent literature, and use it to analyse how these algorithms perform according to human perception. Then, we demonstrate that none of the most popular IQA metrics from the literature, taken individually, correctly represents human perception. Finally, we show that by learning a combination of existing IQA metrics, we can more accurately represent human preference. This provides a new perceptual framework to help evaluate future lighting estimation algorithms.  
  Address Seattle; USA; June 2024  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPR  
  Notes MACO; CIC Approved no  
  Call Number Admin @ si @ GDH2024 Serial 3999  
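The abstract's final step, learning a combination of IQA metrics that better predicts human preference, can be cast as logistic regression over metric-score differences on pairwise choices. Everything below (the metric count, the synthetic observers) is an illustrative assumption, not the paper's data or metric set.

```python
# Sketch: learn weights over IQA metric scores from pairwise human choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pairs, n_metrics = 200, 4                  # e.g. PSNR/SSIM/LPIPS-like scores
scores_a = rng.random((n_pairs, n_metrics))  # metric scores for render A
scores_b = rng.random((n_pairs, n_metrics))  # metric scores for render B
# Synthetic "observers" who weight metric 2 heavily, for demonstration only.
true_w = np.array([0.2, 0.1, 2.0, 0.3])
prefers_a = ((scores_a - scores_b) @ true_w + rng.normal(0, .2, n_pairs)) > 0

model = LogisticRegression().fit(scores_a - scores_b, prefers_a)
print(model.coef_)   # learned metric weights; predict_proba gives preference odds
```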
 

 
Author Arnau Baro; Jialuo Chen; Alicia Fornes; Beata Megyesi
Title Towards a generic unsupervised method for transcription of encoded manuscripts Type Conference Article
  Year 2019 Publication 3rd International Conference on Digital Access to Textual Cultural Heritage Abbreviated Journal  
  Volume Issue Pages 73-78  
  Keywords  
  Abstract Historical ciphers, a special type of manuscript, contain encrypted information that is important for the interpretation of our history. The first step towards decipherment is to transcribe the images, either manually or by automatic image processing techniques. Despite the improvements in handwritten text recognition (HTR) thanks to deep learning methodologies, the need for labelled training data is an important limitation. Given that ciphers often use symbol sets drawn from various alphabets, as well as unique symbols without any transcription scheme available, these supervised HTR techniques are not suitable for transcribing ciphers. In this paper we propose an unsupervised method for transcribing encrypted manuscripts based on clustering and label propagation, which has been successfully applied to community detection in networks. We analyze the performance on ciphers with various symbol sets, and discuss the advantages and drawbacks compared to supervised HTR methods.  
  Address Brussels; May 2019  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference DATeCH  
  Notes DAG; 600.097; 600.140; 600.121 Approved no  
  Call Number Admin @ si @ BCF2019 Serial 3276  
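The clustering-plus-label-propagation pipeline outlined in the abstract above can be sketched with off-the-shelf components; sklearn's LabelPropagation stands in for the paper's community-detection-based variant, and the symbol descriptors are synthetic blobs rather than real cipher glyph features.

```python
# Sketch: spread a handful of transcribed labels through a similarity
# graph over symbol descriptors, leaving most symbols unlabelled (-1).
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
# Three "symbol classes" as well-separated blobs in a toy descriptor space.
X = np.vstack([rng.normal(i * 5, 1.0, (30, 8)) for i in range(3)])
y = np.full(90, -1)             # -1 marks unlabelled symbols
y[[0, 30, 60]] = [0, 1, 2]      # one transcribed seed per symbol class

model = LabelPropagation(kernel="rbf", gamma=0.1).fit(X, y)
print((model.transduction_ == np.repeat([0, 1, 2], 30)).mean())  # fraction recovered
```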
 

 
Author C. Alejandro Parraga; Robert Benavente; Maria Vanrell
Title Towards a general model of colour categorization which considers context Type Journal Article
  Year 2010 Publication Perception. ECVP Abstract Supplement Abbreviated Journal PER  
  Volume 39 Issue Pages 86  
  Keywords  
  Abstract In two previous experiments [Parraga et al, 2009 J. of Im. Sci. and Tech 53(3) 031106; Benavente et al, 2009 Perception 38 ECVP Supplement, 36] the boundaries of basic colour categories were measured. In the first experiment, samples were presented in isolation (i.e. on a dark background) and boundaries were measured using a yes/no paradigm. In the second, subjects adjusted the chromaticity of a sample presented on a random Mondrian background to find the boundary between pairs of adjacent colours. Results from these experiments showed significant differences, but it was not possible to conclude whether this discrepancy was due to the absence/presence of a colourful background or to the differences in the paradigms used. In this work, we settle this question by repeating the first experiment (i.e. samples presented on a dark background) using the second paradigm. A comparison of results shows that although boundary locations are very similar, boundaries measured in context are significantly different (more diffuse) than those measured in isolation (confirmed by a Student’s t-test analysis of the statistical distributions of the subjects’ answers). In addition, we completed the mapping of colour name space by measuring the boundaries between chromatic colours and the achromatic centre. With these results we completed our parametric fuzzy-sets model of colour naming space.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ PBV2010b Serial 1326  
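A parametric fuzzy-sets colour-naming model of the kind mentioned at the end of the abstract can be illustrated with a single sigmoid membership function across a boundary between two adjacent categories; the functional form and all numbers below are placeholders, not the authors' fitted model.

```python
# Illustrative fuzzy membership: category membership falls off smoothly
# across a measured colour boundary; adjacent memberships sum to 1.
import numpy as np

def fuzzy_membership(hue_deg, boundary_deg, softness):
    """Membership in the category on the lower-hue side of the boundary."""
    return 1.0 / (1.0 + np.exp((hue_deg - boundary_deg) / softness))

hues = np.array([20., 35., 50.])       # e.g. samples between "red" and "orange"
m_lower = fuzzy_membership(hues, boundary_deg=35.0, softness=4.0)
print(m_lower, 1 - m_lower)            # memberships of the two adjacent categories
# A more diffuse boundary, as measured in context, means a larger `softness`.
```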
 

 
Author Patrick Brandao; O. Zisimopoulos; E. Mazomenos; G. Ciuti; Jorge Bernal; M. Visentini-Scarzanella; A. Menciassi; P. Dario; A. Koulaouzidis; A. Arezzo; D.J. Hawkes; D. Stoyanov
Title Towards a computer-aided diagnosis system in colonoscopy: Automatic polyp segmentation using convolution neural networks Type Journal Article
  Year 2018 Publication Journal of Medical Robotics Research Abbreviated Journal JMRR  
  Volume 3 Issue 2 Pages  
  Keywords convolutional neural networks; colonoscopy; computer aided diagnosis  
  Abstract Early diagnosis is essential for the successful treatment of bowel cancers including colorectal cancer (CRC), and capsule endoscopic imaging with robotic actuation can be a valuable diagnostic tool when combined with automated image analysis. We present a deep learning rooted detection and segmentation framework for recognizing lesions in colonoscopy and capsule endoscopy images. We restructure established convolution architectures, such as VGG and ResNets, by converting them into fully convolutional networks (FCNs), fine-tune them and study their capabilities for polyp segmentation and detection. We additionally use Shape-from-Shading (SfS) to recover depth and provide a richer representation of the tissue's structure in colonoscopy images. Depth is incorporated into our network models as an additional input channel to the RGB information, and we demonstrate that the resulting network yields improved performance. Our networks are tested on publicly available datasets and the most accurate segmentation model achieved a mean segmentation IU of 47.78% and 56.95% on the ETIS-Larib and CVC-Colon datasets, respectively. For polyp detection, the top performing models we propose surpass the current state of the art with detection recalls superior to 90% for all datasets tested. To our knowledge, we present the first work to use FCNs for polyp segmentation, in addition to proposing a novel combination of SfS and RGB that boosts performance.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MV; no menciona Approved no  
  Call Number BZM2018 Serial 2976  
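The RGB-plus-depth input described in the abstract can be illustrated by widening an off-the-shelf FCN's first convolution from 3 to 4 channels, so a Shape-from-Shading depth map rides along with the colour image. This generic torchvision adaptation is an assumption for illustration, not the authors' exact VGG/ResNet setup or training.

```python
# Sketch: adapt a stock FCN segmentation network to a 4-channel RGB-D input.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(weights=None, num_classes=2)    # polyp vs background
old = model.backbone.conv1                           # original 3-channel stem
model.backbone.conv1 = nn.Conv2d(4, old.out_channels,
                                 kernel_size=old.kernel_size,
                                 stride=old.stride,
                                 padding=old.padding,
                                 bias=False)

rgbd = torch.randn(1, 4, 224, 224)                   # RGB stacked with depth
out = model(rgbd)["out"]                             # per-pixel class logits
print(out.shape)                                     # (1, 2, 224, 224)
```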
 

 
Author Cesar Isaza; Joaquin Salas; Bogdan Raducanu
Title Toward the Detection of Urban Infrastructures Edge Shadows Type Conference Article
  Year 2010 Publication 12th International Conference on Advanced Concepts for Intelligent Vision Systems Abbreviated Journal  
  Volume 6474 Issue I Pages 30–37  
  Keywords  
  Abstract In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. Those edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges. The remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene and the results show that the approach is sound and promising.  
  Address Sydney, Australia  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor Blanc-Talon et al (eds.)  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-17687-6 Medium  
  Area Expedition Conference ACIVS  
  Notes OR;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ ISR2010 Serial 1458  
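The two parallel background models in the abstract, one for edges and one for intensity, can be sketched with OpenCV running averages. The thresholds and update rates below are illustrative assumptions, and the paper's training procedure is not reproduced.

```python
# Sketch: maintain an edge background model and an intensity background
# model; edges that are neither static-background edges nor inside moving
# regions are the candidates cast by fixed infrastructure.
import cv2
import numpy as np

def process(frames, alpha=0.02):
    intensity_bg = frames[0].astype(np.float32)
    edge_bg = np.zeros(frames[0].shape, np.float32)
    candidate = np.zeros(frames[0].shape, bool)
    for f in frames:
        edges = cv2.Canny(f, 80, 160).astype(np.float32) / 255.0
        moving = cv2.absdiff(f.astype(np.float32), intensity_bg) > 25  # moving mask
        static_edges = (edges > 0) & (edge_bg > 0.5)                   # background edges
        candidate = (edges > 0) & ~static_edges & ~moving              # shadow-edge candidates
        cv2.accumulateWeighted(edges, edge_bg, alpha)                  # update edge model
        cv2.accumulateWeighted(f.astype(np.float32), intensity_bg, alpha)
    return candidate

rng = np.random.default_rng(0)
frames = [rng.integers(0, 255, (120, 160), dtype=np.uint8) for _ in range(10)]
print(process(frames).sum())   # candidate shadow-edge pixels in the last frame
```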
 

 
Author Carlo Gatta; Juan Diego Gomez; Francesco Ciompi; Oriol Rodriguez-Leor; Petia Radeva
Title Toward robust myocardial blush grade estimation in contrast angiography Type Conference Article
  Year 2009 Publication 4th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal  
  Volume 5524 Issue Pages 249–256  
  Keywords  
  Abstract The assessment of Myocardial Blush Grade (MBG) after primary angioplasty is a valuable diagnostic tool for deciding whether the patient needs further medication or specific drugs. Unfortunately, the assessment of MBG is difficult for non-highly-specialized staff. Experimental data show poor correlation between the MBG assessments of less and highly specialized staff, which reduces its applicability. This paper proposes a method able to achieve an objective measure of MBG, or a set of parameters that correlates with the MBG. The method tracks the blush area starting from a single frame tagged by the physician. As a consequence, the blush area is kept isolated from contaminating phenomena such as diaphragm and artery movements. We also present a method to extract four parameters that are expected to correlate with the MBG. Preliminary results show that the method is capable of extracting interesting information regarding the behavior of myocardial perfusion.  
  Address Póvoa de Varzim, Portugal  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-02171-8 Medium  
  Area Expedition Conference IbPRIA  
  Notes MILAB Approved no  
  Call Number BCNPCL @ bcnpcl @ GGC2009 Serial 1161  
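The single-tagged-frame tracking idea in the abstract above can be sketched with pyramidal Lucas-Kanade point tracking, which here stands in for the paper's region-tracking method; the frames and seed points below are synthetic.

```python
# Sketch: follow points inside a physician-tagged blush region across the
# angiography sequence, so perfusion measurements stay on the same tissue.
import cv2
import numpy as np

def track_roi(frames, roi_pts):
    """roi_pts: (N, 1, 2) float32 points inside the tagged blush area."""
    trajectories = [roi_pts]
    prev = frames[0]
    for cur in frames[1:]:
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, cur, trajectories[-1], None)
        lost = status.ravel() == 0
        nxt[lost] = trajectories[-1][lost]   # keep last position for lost points
        trajectories.append(nxt)
        prev = cur
    # Per-frame point positions; averaging intensity inside their hull
    # would give a blush time-signal for the four parameters.
    return trajectories

rng = np.random.default_rng(0)
frames = [rng.integers(0, 255, (128, 128), dtype=np.uint8) for _ in range(5)]
pts = np.array([[[40., 40.]], [[50., 45.]], [[45., 55.]]], np.float32)
print(track_roi(frames, pts)[-1].squeeze())
```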
 

 
Author Juan Diego Gomez
Title Toward Robust Myocardial Blush Grade Estimation in Contrast Angiography Type Report
  Year 2009 Publication CVC Technical Report Abbreviated Journal  
  Volume 134 Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Computer Vision Center Thesis Master's thesis  
  Publisher Place of Publication Bellaterra, Barcelona Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ Gom2009 Serial 2393  