Author Adriana Romero; Simeon Petkov; Carlo Gatta; M. Sabate; Petia Radeva
Title Efficient automatic segmentation of vessels Type Conference Article
Year 2012 Publication 16th Conference on Medical Image Understanding and Analysis Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Swansea, United Kingdom
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MIUA
Notes MILAB Approved no
Call Number Admin @ si @ Serial 2137
 

 
Author Adriana Romero; Petia Radeva; Carlo Gatta
Title No more meta-parameter tuning in unsupervised sparse feature learning Type Miscellaneous
Year 2014 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract CoRR abs/1402.5766
We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity. Experiments on STL-10 show that the method achieves state-of-the-art performance and provides discriminative features that generalize well.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; LAMP; 600.079 Approved no
Call Number Admin @ si @ RRG2014 Serial 2471
 

 
Author Adriana Romero; Petia Radeva; Carlo Gatta
Title Meta-parameter free unsupervised sparse feature learning Type Journal Article
Year 2015 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 37 Issue 8 Pages 1716-1722
Keywords
Abstract We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity. Experiments on CIFAR-10, STL-10 and UCMerced show that the method achieves state-of-the-art performance, providing discriminative features that generalize well.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; 600.068; 600.079; 601.160 Approved no
Call Number Admin @ si @ RRG2014b Serial 2594
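The sparsity optimization referred to in the abstract above is not spelled out in this record, so the following is only an illustrative NumPy sketch of the general idea: build a one-hot target per sample (population sparsity) while balancing how often each unit is selected across the batch (lifetime sparsity). The usage-penalty heuristic and all names are assumptions for illustration, not the authors' algorithm.

    import numpy as np

    def sparse_targets(H):
        """Build a one-hot target per sample from activations H (n_samples x n_units):
        one active unit per sample (population sparsity) and roughly balanced unit
        usage across the batch (lifetime sparsity)."""
        n_samples, n_units = H.shape
        usage = np.zeros(n_units)                 # how often each unit has been selected
        T = np.zeros_like(H)
        for i in range(n_samples):
            # penalize units already selected often, so every unit ends up being used
            scores = H[i] - usage / max(1.0, n_samples / n_units)
            j = int(np.argmax(scores))
            T[i, j] = 1.0
            usage[j] += 1.0
        return T

    # toy usage: random activations for 6 samples and 4 hidden units
    H = np.random.default_rng(0).random((6, 4))
    print(sparse_targets(H))

A feature-learning step would then fit the weights so that activations regress toward such targets; the published method is meta-parameter free, whereas this toy keeps an explicit penalty scale.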
 

 
Author Adriana Romero; Nicolas Ballas; Samira Ebrahimi Kahou; Antoine Chassang; Carlo Gatta; Yoshua Bengio
Title FitNets: Hints for Thin Deep Nets Type Conference Article
Year 2015 Publication 3rd International Conference on Learning Representations ICLR2015 Abbreviated Journal
Volume Issue Pages
Keywords Computer Science; Learning; Computer Science; Neural and Evolutionary Computing
Abstract While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network could imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
Address San Diego; CA; May 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICLR
Notes MILAB Approved no
Call Number Admin @ si @ RBK2015 Serial 2593
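As a rough illustration of the hint-based training described in the abstract above, the sketch below regresses a thin student's intermediate representation toward a wider teacher's hint layer through an added mapping. The layer sizes and the plain linear regressor are assumptions for illustration; the paper works on convolutional feature maps.

    import torch
    import torch.nn as nn

    teacher_hint_dim, student_guided_dim = 256, 64      # assumed toy dimensions

    # Extra parameters that map the thinner student layer to the teacher's hint space.
    regressor = nn.Linear(student_guided_dim, teacher_hint_dim)

    def hint_loss(student_feat, teacher_feat):
        # L2 distance between the teacher hint and the regressed student representation
        return 0.5 * ((regressor(student_feat) - teacher_feat) ** 2).mean()

    # toy usage with random features for a batch of 8 samples
    s = torch.randn(8, student_guided_dim)
    t = torch.randn(8, teacher_hint_dim)
    print(hint_loss(s, t).item())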
 

 
Author Adriana Romero; Carlo Gatta; Gustavo Camps-Valls
Title Unsupervised Deep Feature Extraction Of Hyperspectral Images Type Conference Article
Year 2014 Publication 6th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing Abbreviated Journal
Volume Issue Pages
Keywords Convolutional networks; deep learning; sparse learning; feature extraction; hyperspectral image classification
Abstract This paper presents an effective unsupervised sparse feature learning algorithm to train deep convolutional networks on hyperspectral images. Deep convolutional hierarchical representations are learned and then used for pixel classification. Features in lower layers present less abstract representations of the data, while higher layers represent more abstract and complex characteristics. We successfully illustrate the performance of the extracted representations on a challenging AVIRIS hyperspectral image classification problem, compared to standard dimensionality reduction methods like principal component analysis (PCA) and its kernel counterpart (kPCA). The proposed method largely outperforms the previous state-of-the-art results in the same experimental setting. Results show that single-layer networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels. Regarding the deep architecture, we can conclude that: (1) additional layers in a deep architecture significantly improve the performance w.r.t. single-layer variants; (2) the max-pooling step in each layer is mandatory to achieve satisfactory results; and (3) the performance gain w.r.t. the number of layers is upper bounded, since the spatial resolution is reduced at each pooling, resulting in output features that are too spatially coarse.
Address Lausanne; Switzerland; June 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WHISPERS
Notes MILAB; LAMP; 600.079 Approved no
Call Number Admin @ si @ RGC2014 Serial 2513
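The stacking of unsupervised layers with pooling mentioned in the abstract above can be pictured with the toy sketch below: each layer applies a learned non-linear feature map followed by max-pooling, and layers are stacked greedily. Here random-projection filters and pooling over pairs of units stand in for the paper's sparse learning and spatial pooling; all shapes are made up.

    import numpy as np

    def unsupervised_layer(X, n_units, rng):
        # Random-projection filters stand in for the unsupervised sparse learning step;
        # ReLU gives the non-linear feature map, followed by coarse max-pooling over unit pairs.
        W = rng.standard_normal((X.shape[1], n_units)) / np.sqrt(X.shape[1])
        H = np.maximum(X @ W, 0.0)
        return H.reshape(H.shape[0], -1, 2).max(axis=2)

    rng = np.random.default_rng(0)
    features = rng.random((100, 32))          # toy "pixels x spectral bands" matrix
    for depth in range(3):                    # greedy layerwise stacking
        features = unsupervised_layer(features, 16, rng)
        print("layer", depth + 1, "output shape:", features.shape)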
 

 
Author Adriana Romero; Carlo Gatta; Gustavo Camps-Valls
Title Unsupervised Deep Feature Extraction for Remote Sensing Image Classification Type Journal Article
Year 2016 Publication IEEE Transaction on Geoscience and Remote Sensing Abbreviated Journal TGRS
Volume 54 Issue 3 Pages 1349 - 1362
Keywords
Abstract This paper introduces the use of single-layer and deep convolutional networks for remote sensing data analysis. Direct application to multi- and hyperspectral imagery of supervised (shallow or deep) convolutional networks is very challenging given the high input data dimensionality and the relatively small amount of available labeled data. Therefore, we propose the use of greedy layerwise unsupervised pretraining coupled with a highly efficient algorithm for unsupervised learning of sparse features. The algorithm is rooted on sparse representations and enforces both population and lifetime sparsity of the extracted features, simultaneously. We successfully illustrate the expressive power of the extracted representations in several scenarios: classification of aerial scenes, as well as land-use classification in very high resolution or land-cover classification from multi- and hyperspectral images. The proposed algorithm clearly outperforms standard principal component analysis (PCA) and its kernel counterpart (kPCA), as well as current state-of-the-art algorithms of aerial classification, while being extremely computationally efficient at learning representations of data. Results show that single-layer convolutional networks can extract powerful discriminative features only when the receptive field accounts for neighboring pixels and are preferred when the classification requires high resolution and detailed results. However, deep architectures significantly outperform single-layer variants, capturing increasing levels of abstraction and complexity throughout the feature hierarchy.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0196-2892 ISBN Medium
Area Expedition Conference
Notes LAMP; 600.079;MILAB Approved no
Call Number Admin @ si @ RGC2016 Serial 2723
 

 
Author Adriana Romero; Carlo Gatta
Title Do We Really Need All These Neurons? Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 460-467
Keywords Restricted Boltzmann Machine; hidden units; unsupervised learning; classification
Abstract Restricted Boltzmann Machines (RBMs) are generative neural networks that have received much attention recently. In particular, choosing the appropriate number of hidden units is important, since an unsuitable choice might hinder their representative power. According to the literature, RBMs require numerous hidden units to approximate any distribution properly. In this paper, we present an experiment to determine whether such an amount of hidden units is required in a classification context. We then propose an incremental algorithm that trains RBMs by reusing the previously trained parameters, using a trade-off measure to determine the appropriate number of hidden units. Results on the MNIST and OCR letters databases show that using a number of hidden units one order of magnitude smaller than the literature estimate suffices to achieve similar performance. Moreover, the proposed algorithm allows estimating the required number of hidden units without the need to train many RBMs from scratch.
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes MILAB; 600.046 Approved no
Call Number Admin @ si @ RoG2013 Serial 2311
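A rough sketch of the stopping idea in the abstract above: grow the number of hidden units until the gain from extra capacity becomes negligible. The paper reuses previously trained parameters and a specific trade-off measure; this toy simply retrains a scikit-learn RBM from scratch at each size and uses mean pseudo-likelihood as a stand-in criterion.

    import numpy as np
    from sklearn.neural_network import BernoulliRBM

    rng = np.random.default_rng(0)
    X = (rng.random((200, 64)) > 0.5).astype(float)   # toy binary data (the paper uses MNIST / OCR letters)

    prev_score, n_hidden = -np.inf, 4
    while True:
        rbm = BernoulliRBM(n_components=n_hidden, n_iter=10, random_state=0).fit(X)
        score = rbm.score_samples(X).mean()           # pseudo-likelihood as a cheap proxy
        if score - prev_score < 1e-2 or n_hidden >= 64:
            break                                     # extra hidden units no longer pay off
        prev_score, n_hidden = score, n_hidden * 2

    print("hidden units retained:", n_hidden)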
 

 
Author Adriana Romero
Title Assisting the training of deep neural networks with applications to computer vision Type Book Whole
Year 2015 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Deep learning has recently been enjoying increasing popularity due to its success in solving challenging tasks. In particular, deep learning has proven to be effective in a large variety of computer vision tasks, such as image classification, object recognition and image parsing. Contrary to previous research, which required engineered feature representations designed by experts in order to succeed, deep learning attempts to learn representation hierarchies automatically from data. More recently, the trend has been to go deeper with representation hierarchies.
Learning (very) deep representation hierarchies is a challenging task, which involves the optimization of highly non-convex functions. Therefore, the search for algorithms to ease the learning of (very) deep representation hierarchies from data is extensive and ongoing.
In this thesis, we tackle the challenging problem of easing the learning of (very) deep representation hierarchies. We present a hyper-parameter free, off-the-shelf, simple and fast unsupervised algorithm to discover hidden structure from the input data by enforcing a very strong form of sparsity. We study the applicability and potential of the algorithm to learn representations of varying depth in a handful of applications and domains, highlighting its ability to provide discriminative feature representations that achieve top performance.
Yet, while unsupervised learning methods are of great value when labeled data is scarce, the recent industrial success of deep learning has revolved around supervised learning. Supervised learning is currently the focus of many recent research advances, which have been shown to excel at many computer vision tasks. Top performing systems often involve very large and deep models, which are not well suited to applications with time or memory limitations. More in line with current trends, we engage in making top performing models more efficient by designing very deep and thin models. Since training such very deep models still appears to be challenging, we introduce a novel algorithm that guides the training of very thin and deep models by hinting at their intermediate representations.
Very deep and thin models trained by the proposed algorithm extract feature representations that perform comparably to, or even better than, those extracted by large state-of-the-art models, while compellingly reducing the model's time and memory consumption.
Address October 2015
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Editor Carlo Gatta; Petia Radeva
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ Rom2015 Serial 2707
 

 
Author Adrian Galdran; Aitor Alvarez-Gila; Alessandro Bria; Javier Vazquez; Marcelo Bertalmio
Title On the Duality Between Retinex and Image Dehazing Type Conference Article
Year 2018 Publication 31st IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 8212–8221
Keywords Image color analysis; Task analysis; Atmospheric modeling; Computer vision; Computational modeling; Lighting
Abstract Image dehazing deals with the removal of undesired loss of visibility in outdoor images due to the presence of fog. Retinex is a color vision model mimicking the ability of the Human Visual System to robustly discount varying illuminations when observing a scene under different spectral lighting conditions. Retinex has been widely explored in the computer vision literature for image enhancement and other related tasks. While these two problems are apparently unrelated, the goal of this work is to show that they can be connected by a simple linear relationship. Specifically, most Retinex-based algorithms have the characteristic feature of always increasing image brightness, which turns them into ideal candidates for effective image dehazing by directly applying Retinex to a hazy image whose intensities have been inverted. In this paper, we give theoretical proof that Retinex on inverted intensities is a solution to the image dehazing problem. Comprehensive qualitative and quantitative results indicate that several classical and modern implementations of Retinex can be transformed into competing image dehazing algorithms performing on par with more complex fog removal methods, and can overcome some of the main challenges associated with this problem.
Address Salt Lake City; USA; June 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ GAB2018 Serial 3146
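The duality stated in the abstract above can be written in one line: a dehazed image is obtained by applying a Retinex operator to the inverted hazy image and inverting the result back, with intensities normalized to [0, 1]. The sketch below assumes that relationship; the single-scale Retinex stand-in is a crude placeholder, not one of the implementations evaluated in the paper.

    import numpy as np

    def dehaze_via_retinex(image, retinex):
        # Duality: dehazing(I) = 1 - Retinex(1 - I), for intensities in [0, 1]
        return 1.0 - retinex(1.0 - image)

    def ssr(img, eps=1e-6):
        # very crude single-scale Retinex stand-in: log image minus log of a global mean "surround"
        blur = img.mean(axis=(0, 1), keepdims=True)
        out = np.log(img + eps) - np.log(blur + eps)
        return (out - out.min()) / (out.max() - out.min() + eps)

    hazy = np.clip(np.random.default_rng(0).random((8, 8, 3)) * 0.5 + 0.4, 0, 1)
    print(dehaze_via_retinex(hazy, ssr).shape)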
 

 
Author Adria Ruiz; Joost Van de Weijer; Xavier Binefa
Title Regularized Multi-Concept MIL for weakly-supervised facial behavior categorization Type Conference Article
Year 2014 Publication 25th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We address the problem of estimating high-level semantic labels for videos of recorded people by means of analysing their facial expressions. This problem, to which we refer as facial behavior categorization, is a weakly-supervised learning problem where we do not have access to frame-by-frame facial gesture annotations; only weak labels at the video level are available. Therefore, the goal is to learn a set of discriminative expressions and how they determine the video weak-labels. Facial behavior categorization can be posed as a Multi-Instance-Learning (MIL) problem, and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected-Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement compared to existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection.
Address Nottingham; UK; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes LAMP; CIC; 600.074; 600.079 Approved no
Call Number Admin @ si @ RWB2014 Serial 2508
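To make the Multi-Concept assumption in the abstract above concrete, the sketch below scores a video by letting each concept respond to its best-matching frame and then weighting the concept responses. The linear concept models, the hard max over instances and all dimensions are illustrative assumptions; the paper learns these parameters jointly under structured sparsity with a Projected-Quasi-Newton solver.

    import numpy as np

    rng = np.random.default_rng(0)
    D, K, T = 16, 3, 40                  # descriptor size, number of concepts, frames per video
    W = rng.standard_normal((K, D))      # one linear concept model per concept
    v = rng.standard_normal(K)           # contribution of each concept to the video label

    def video_score(frames):
        # each concept fires on its best-matching frame (max over instances),
        # and concept responses are combined into a video-level score
        concept_resp = (W @ frames.T).max(axis=1)
        return float(v @ concept_resp)

    frames = rng.standard_normal((T, D))  # toy facial descriptors for one video
    print(video_score(frames))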
 

 
Author Adria Ruiz; Joost Van de Weijer; Xavier Binefa
Title From emotions to action units with hidden and semi-hidden-task learning Type Conference Article
Year 2015 Publication 16th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 3703-3711
Keywords
Abstract Limited annotated training data is a challenging problem in Action Unit recognition. In this paper, we investigate how the use of large databases labelled according to the 6 universal facial expressions can increase the generalization ability of Action Unit classifiers. For this purpose, we propose a novel learning framework: Hidden-Task Learning (HTL). HTL aims to learn a set of Hidden-Tasks (Action Units) for which samples are not available but, in contrast, training data is easier to obtain from a set of related Visible-Tasks (Facial Expressions). To that end, HTL is able to exploit prior knowledge about the relation between Hidden and Visible-Tasks. In our case, we base this prior knowledge on empirical psychological studies providing statistical correlations between Action Units and universal facial expressions. Additionally, we extend HTL to Semi-Hidden-Task Learning (SHTL), assuming that Action Unit training samples are also provided. Performing exhaustive experiments over four different datasets, we show that HTL and SHTL improve the generalization ability of AU classifiers by training them with additional facial expression data. Additionally, we show that SHTL achieves competitive performance compared with state-of-the-art Transductive Learning approaches, which face the problem of limited training data by using unlabelled test samples during training.
Address Santiago de Chile; Chile; December 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes LAMP; 600.068; 600.079 Approved no
Call Number Admin @ si @ RWB2015 Serial 2671
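One simple way to use the kind of prior knowledge mentioned in the abstract above (statistical correlations between Action Units and universal expressions) is to combine expression posteriors through a correlation matrix, as sketched below. The matrix values, the two Action Units and the linear combination are illustrative assumptions; HTL itself exploits such priors inside a learning framework rather than as a fixed test-time mapping.

    import numpy as np

    # Hypothetical prior: rows = Action Units, columns = the 6 universal expressions.
    # In the paper this prior comes from psychological studies; the values here are made up.
    prior = np.array([
        [0.9, 0.0, 0.1, 0.0, 0.0, 0.0],   # AU12 (lip corner puller), mostly tied to happiness
        [0.0, 0.8, 0.0, 0.1, 0.0, 0.2],   # AU4 (brow lowerer), tied to anger and related expressions
    ])

    def au_scores(expr_probs):
        # combine Visible-Task (expression) posteriors through the prior correlations
        return prior @ expr_probs

    expr_probs = np.array([0.7, 0.05, 0.1, 0.05, 0.05, 0.05])   # toy expression classifier output
    print(au_scores(expr_probs))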
 

 
Author Adria Rico; Alicia Fornes
Title Camera-based Optical Music Recognition using a Convolutional Neural Network Type Conference Article
Year 2017 Publication 12th IAPR International Workshop on Graphics Recognition Abbreviated Journal
Volume Issue Pages 27-28
Keywords optical music recognition; document analysis; convolutional neural network; deep learning
Abstract Optical Music Recognition (OMR) consists in recognizing images of music scores. Contrary to expectation, current OMR systems usually fail when recognizing images of scores captured by digital cameras and smartphones. In this work, we propose a camera-based OMR system based on Convolutional Neural Networks, showing promising preliminary results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference GREC
Notes DAG;600.097; 600.121 Approved no
Call Number Admin @ si @ RiF2017 Serial 3059
 

 
Author Adria Molina; Pau Riba; Lluis Gomez; Oriol Ramos Terrades; Josep Llados
Title Date Estimation in the Wild of Scanned Historical Photos: An Image Retrieval Approach Type Conference Article
Year 2021 Publication 16th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume 12822 Issue Pages 306-320
Keywords
Abstract This paper presents a novel method for date estimation of historical photographs from archival sources. The main contribution is to formulate date estimation as a retrieval task, where, given a query, the retrieved images are ranked in terms of the estimated date similarity: the closer their embedded representations are, the closer their dates are. Contrary to traditional models that design a neural network that learns a classifier or a regressor, we propose a learning objective based on the nDCG ranking metric. We have experimentally evaluated the performance of the method on two different tasks, date estimation and date-sensitive image retrieval, using the public DEW database, outperforming the baseline methods.
Address Lausanne; Switzerland; September 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.121; 600.140; 110.312 Approved no
Call Number Admin @ si @ MRG2021b Serial 3571
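Read as a retrieval problem, date estimation amounts to ranking an archive of dated photographs by embedding similarity and reading the date off the closest neighbours. The sketch below assumes cosine similarity and a similarity-weighted mean of the top-k dates; the embeddings, archive and aggregation rule are illustrative, while the paper trains the embedding with an nDCG-based objective.

    import numpy as np

    def estimate_date(query_emb, archive_embs, archive_dates, k=10):
        # rank the archive by cosine similarity and average the dates of the top-k neighbours
        sims = archive_embs @ query_emb / (
            np.linalg.norm(archive_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
        top = np.argsort(-sims)[:k]
        weights = np.clip(sims[top], 0, None) + 1e-8
        return float(np.average(archive_dates[top], weights=weights))

    rng = np.random.default_rng(0)
    archive_embs = rng.standard_normal((500, 64))     # toy embeddings of dated archive photos
    archive_dates = rng.integers(1880, 1990, 500)     # toy ground-truth years
    print(round(estimate_date(rng.standard_normal(64), archive_embs, archive_dates)))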
 

 
Author Adria Molina; Lluis Gomez; Oriol Ramos Terrades; Josep Llados
Title A Generic Image Retrieval Method for Date Estimation of Historical Document Collections Type Conference Article
Year 2022 Publication Document Analysis Systems. 15th IAPR International Workshop (DAS 2022) Abbreviated Journal
Volume 13237 Issue Pages 583–597
Keywords Date estimation; Document retrieval; Image retrieval; Ranking loss; Smooth-nDCG
Abstract Date estimation of historical document images is a challenging problem, with several contributions in the literature that lack the ability to generalize from one dataset to others. This paper presents a robust date estimation system based on a retrieval approach that generalizes well across heterogeneous collections. We use a ranking loss function named smooth-nDCG to train a Convolutional Neural Network that learns an ordering of documents for each problem. One of the main uses of the presented approach is as a tool for historical contextual retrieval: scholars can perform comparative analysis of historical images from big datasets in terms of the period in which they were produced. We provide experimental evaluation on different types of documents from real datasets of manuscript and newspaper images.
Address La Rochelle, France; May 22–25, 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DAS
Notes DAG; 600.140; 600.121 Approved no
Call Number Admin @ si @ MGR2022 Serial 3694
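The smooth-nDCG loss named in the abstract above replaces the hard rank inside nDCG with a differentiable, sigmoid-smoothed rank so the metric can be optimized by gradient descent. The PyTorch sketch below is a generic version of that idea under an assumed temperature and relevance definition, not the authors' exact formulation.

    import torch

    def smooth_ndcg_loss(scores, relevance, tau=0.1):
        # diff[j, i] = (s_j - s_i) / tau; the smoothed rank of item i counts items scoring above it
        diff = (scores.unsqueeze(1) - scores.unsqueeze(0)) / tau
        smooth_rank = 1.0 + torch.sigmoid(diff).sum(dim=0) - 0.5   # subtract the self term, sigmoid(0)
        dcg = (relevance / torch.log2(1.0 + smooth_rank)).sum()
        ideal_rank = torch.arange(1, len(scores) + 1, dtype=scores.dtype)
        idcg = (relevance.sort(descending=True).values / torch.log2(1.0 + ideal_rank)).sum()
        return 1.0 - dcg / idcg                                    # minimize 1 - nDCG

    scores = torch.randn(8, requires_grad=True)    # toy similarity scores for 8 retrieved items
    relevance = torch.rand(8)                      # toy graded relevance (e.g. date closeness)
    smooth_ndcg_loss(scores, relevance).backward()
    print(scores.grad.shape)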
 

 
Author Adela Barbulescu; Wenjuan Gong; Jordi Gonzalez; Thomas B. Moeslund; Xavier Roca
Title 3D Human Pose Estimation Using 2D Body Part Detectors Type Conference Article
Year 2012 Publication 21st International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 2484 - 2487
Keywords
Abstract Automatic 3D reconstruction of human poses from monocular images is a challenging and popular topic in the computer vision community, which provides a wide range of applications in multiple areas. Solutions for 3D pose estimation involve various learning approaches, such as support vector machines and Gaussian processes, but many encounter difficulties in cluttered scenarios and require additional input data, such as silhouettes or controlled camera settings. We present a framework that is capable of estimating the 3D pose of a person from single images or monocular image sequences without requiring background information and which is robust to camera variations. The framework models the non-linearity present in human pose estimation as it benefits from flexible learning approaches, including a highly customizable 2D detector. Results on the HumanEva benchmark show how these components perform and how they influence the quality of the 3D pose estimates.
Address Tsukuba, Japan
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1051-4651 ISBN 978-1-4673-2216-4 Medium
Area Expedition Conference ICPR
Notes ISE Approved no
Call Number Admin @ si @ BGG2012 Serial 2172
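The pipeline in the abstract above ends with a non-linear regressor that maps 2D body part detections to a 3D pose. The sketch below uses kernel ridge regression on random stand-in data as one such flexible learner; the joint count, feature layout and regressor choice are assumptions for illustration (the paper evaluates approaches such as Gaussian processes on HumanEva).

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge

    rng = np.random.default_rng(0)
    J, n_train = 14, 300
    X_train = rng.standard_normal((n_train, J * 2))   # 2D detections (J joints x 2), flattened
    Y_train = rng.standard_normal((n_train, J * 3))   # ground-truth 3D poses, flattened

    # Non-linear regressor from 2D detections to 3D pose (a stand-in for the learned mapping).
    model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1).fit(X_train, Y_train)
    pose_3d = model.predict(rng.standard_normal((1, J * 2))).reshape(J, 3)
    print(pose_3d.shape)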