Records
Author Ozan Caglayan; Walid Aransa; Yaxing Wang; Marc Masana; Mercedes Garcia-Martinez; Fethi Bougares; Loic Barrault; Joost Van de Weijer
Title Does Multimodality Help Human and Machine for Translation and Image Captioning? Type Conference Article
Year 2016 Publication 1st Conference on Machine Translation Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
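For reference, corpus-level BLEU scoring of the kind used in the automatic evaluation above can be reproduced with off-the-shelf tooling. The sketch below is a minimal illustration using sacrebleu on toy sentences, not the WMT16 task data, and it omits METEOR, which would come from a separate package such as nltk.

```python
# Hedged sketch: corpus-level BLEU, as used in the automatic evaluation.
# The sentences are toy placeholders, not WMT16 Multimodal Task data.
import sacrebleu

hypotheses = ["a man rides a bicycle down the street"]
references = [["a man is riding a bike down the street"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)  # corpus BLEU on a 0-100 scale
```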
Address Berlin; Germany; August 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WMT
Notes LAMP; 600.106 ; 600.068 Approved no
Call Number Admin @ si @ CAW2016 Serial 2761
 

 
Author Andreea Glavan; Alina Matei; Petia Radeva; Estefania Talavera
Title Does our social life influence our nutritional behaviour? Understanding nutritional habits from egocentric photo-streams Type Journal Article
Year 2021 Publication Expert Systems with Applications Abbreviated Journal ESWA
Volume 171 Issue Pages 114506
Keywords
Abstract Nutrition and social interactions are both key aspects of the daily lives of humans. In this work, we propose a system to evaluate the influence of social interaction on the nutritional habits of a person from a first-person perspective. In order to detect the routine of an individual, we construct a nutritional behaviour pattern discovery model, which outputs routines over a number of days. Our method evaluates similarity of routines with respect to visited food-related scenes over the collected days, making use of Dynamic Time Warping, as well as considering social engagement and its correlation with food-related activities. The nutritional and social descriptors of the collected days are evaluated and encoded using an LSTM Autoencoder. Later, the obtained latent space is clustered to find similar days unaffected by outliers using the Isolation Forest method. Moreover, we introduce a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100k egocentric images gathered by 7 users. Several different visualizations are evaluated for the understanding of the findings. Our results demonstrate good performance and applicability of our proposed model for social-related nutritional behaviour understanding. Finally, relevant applications of the model are discussed by analysing the discovered routines of particular individuals.
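As a rough illustration of two building blocks named in this abstract, Dynamic Time Warping between the scene sequences of two days and Isolation Forest filtering of per-day embeddings, consider the Python sketch below. All data and names (day_embeddings, day_a, day_b) are hypothetical placeholders, not the authors' code or dataset.

```python
# Hedged sketch: DTW between two days' food-related scene sequences, plus
# Isolation Forest filtering of day embeddings before clustering.
import numpy as np
from sklearn.ensemble import IsolationForest

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic-time-warping cost between 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

rng = np.random.default_rng(0)
# Hypothetical per-day descriptors, e.g. latent codes from an LSTM autoencoder.
day_embeddings = rng.normal(size=(104, 16))   # 104 days, 16-D latent space

# Keep only the days the Isolation Forest marks as inliers (label == 1).
inlier_mask = IsolationForest(random_state=0).fit_predict(day_embeddings) == 1
clean_days = day_embeddings[inlier_mask]

# DTW similarity between two hypothetical per-time-step scene label streams.
day_a = rng.integers(0, 5, size=40).astype(float)
day_b = rng.integers(0, 5, size=55).astype(float)
print(dtw_distance(day_a, day_b), clean_days.shape)
```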
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ GMR2021 Serial 3634
 

 
Author Matthias S. Keil; Jordi Vitria
Title Does the brain generate representations of smooth brightness gradients? A novel account for Mach bands, Chevreul’s illusion, and a variant of the Ehrenstein disk Type Miscellaneous
Year 2005 Publication European Conference on Visual Perception Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ KeV2005b Serial 607
 

 
Author Matthias S. Keil; Jordi Vitria
Title Does the brain generate representations of smooth brightness gradients? A novel account for Mach bands, Chevreul’s illusion, and a variant of the Ehrenstein disk Type Journal
Year 2005 Publication Perception 34:209–210 Suppl. S (IF: 1.391) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ KeV2005a Serial 608
 

 
Author Angel Sappa; Patricia Suarez; Henry Velesaca; Dario Carpio
Title Domain Adaptation in Image Dehazing: Exploring the Usage of Images from Virtual Scenarios Type Conference Article
Year 2022 Publication 16th International Conference on Computer Graphics, Visualization, Computer Vision and Image Processing Abbreviated Journal
Volume Issue Pages 85-92
Keywords Domain adaptation; Synthetic hazed dataset; Dehazing
Abstract This work presents a novel domain adaptation strategy for deep learning-based approaches to the image dehazing problem. First, a large set of synthetic images is generated using a realistic 3D graphic simulator; these synthetic images contain different densities of haze and are used to train a model that is later adapted to any real scenario. The adaptation process requires just a few images to fine-tune the model parameters. The proposed strategy thus overcomes the limitation of training a given model with few images; in other words, it adapts a haze removal model trained on synthetic images to real scenarios. It should be noted that it is quite difficult, if not impossible, to obtain large sets of pairs of real-world images (with and without haze) to train dehazing algorithms in a supervised way. Experimental results show the validity of the proposed domain adaptation strategy.
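A minimal sketch of the adaptation recipe the abstract describes, pre-train on many synthetic hazy/clean pairs and then briefly fine-tune on a handful of real pairs, might look as follows. The DehazeNet class, checkpoint name, and toy data are placeholders, not the authors' actual architecture or dataset.

```python
# Hedged sketch: synthetic-to-real fine-tuning of a dehazing network.
# Network, checkpoint path, and data are illustrative placeholders.
import torch
import torch.nn as nn

class DehazeNet(nn.Module):
    """Tiny stand-in for a dehazing encoder-decoder."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

model = DehazeNet()
# In practice, load weights pre-trained on the large synthetic hazy set, e.g.:
# model.load_state_dict(torch.load("synthetic_pretrained.pt"))

# Fine-tune on just a few real hazy/clear pairs with a small learning rate,
# so the synthetic-domain knowledge is adapted rather than overwritten.
few_real_pairs = [(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
                  for _ in range(10)]
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
loss_fn = nn.L1Loss()
for epoch in range(50):
    for hazy, clear in few_real_pairs:
        optimizer.zero_grad()
        loss_fn(model(hazy), clear).backward()
        optimizer.step()
```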
Address Lisboa; Portugal; July 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CGVCVIP
Notes MSIAU; no proj Approved no
Call Number Admin @ si @ SSV2022 Serial 3804
 

 
Author Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
Title Domain Adaptation of Deformable Part-Based Models Type Journal Article
Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 36 Issue 12 Pages 2367-2380
Keywords Domain Adaptation; Pedestrian Detection
Abstract The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors.
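As a toy illustration of the A-SSVM idea summarized above, adapting a pre-learned linear classifier with a few target-domain samples while penalizing deviation from the source weights rather than revisiting source data, a numpy sketch could look like this. It is a plain linear hinge-loss version under stated assumptions, not the parts-aware SA-SSVM from the paper.

```python
# Hedged sketch of adaptive SSVM (A-SSVM): regularize toward the source
# weights w_src instead of toward zero, using only target-domain samples.
import numpy as np

def adapt_assvm(w_src, X_tgt, y_tgt, C=1.0, lr=0.01, epochs=200):
    """min_w 0.5*||w - w_src||^2 + C * sum_i hinge(y_i * <w, x_i>), by subgradient descent."""
    w = w_src.copy()
    for _ in range(epochs):
        margins = y_tgt * (X_tgt @ w)
        active = margins < 1.0  # samples violating the hinge margin
        grad = (w - w_src) - C * (y_tgt[active, None] * X_tgt[active]).sum(axis=0)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
w_source = rng.normal(size=8)        # classifier trained on the source domain
X_target = rng.normal(size=(20, 8))  # a small set of target-domain samples
y_target = np.sign(X_target @ w_source + rng.normal(scale=2.0, size=20))
w_adapted = adapt_assvm(w_source, X_target, y_target)
```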
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.057; 600.054; 601.217; 600.076 Approved no
Call Number ADAS @ adas @ XRV2014b Serial 2436
 

 
Author Jiaolong Xu
Title Domain Adaptation of Deformable Part-based Models Type Book Whole
Year 2015 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract On-board pedestrian detection is crucial for Advanced Driver Assistance Systems (ADAS). Accurate classification is fundamental for vision-based pedestrian detection. The underlying assumption for learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution over the features used by the classifiers. However, in practice, different factors can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this thesis we focus on the domain adaptation of deformable part-based models (DPMs) for pedestrian detection. As a proof of concept, we use a computer-graphics-based synthetic dataset, i.e. a virtual world, as the source domain, and adapt the virtual-world trained DPM detector to various real-world datasets.
We start by exploiting the maximum detection accuracy of the virtual-world trained DPM. Even so, when operating on various real-world datasets, the virtual-world trained detector still suffers from accuracy degradation due to the domain gap between virtual and real worlds. We then focus on domain adaptation of the DPM. As a first step, we consider single-source, single-target domain adaptation and propose two batch learning methods, namely A-SSVM and SA-SSVM. Later, we further consider leveraging multiple target (sub-)domains for progressive domain adaptation and propose a hierarchical adaptive structured SVM (HA-SSVM) for optimization. Finally, we extend HA-SSVM to the challenging online domain adaptation problem, aiming to make the detector adapt to the target domain online, without any human intervention. None of the methods proposed in this thesis require revisiting source-domain data. The evaluations are done on the Caltech pedestrian detection benchmark. Results show that SA-SSVM slightly outperforms A-SSVM and avoids accuracy drops as high as 15 points compared with a non-adapted detector. The hierarchical model learned by HA-SSVM further boosts the domain adaptation performance. Finally, the online domain adaptation method achieves accuracy comparable to the batch-learned models while not requiring manually labelled target-domain examples. Domain adaptation for pedestrian detection is of paramount importance and a relatively unexplored area. We humbly hope the work in this thesis can provide foundations for future work in this area.
Address April 2015
Corporate Author Thesis Ph.D. thesis
Publisher Place of Publication Editor Antonio Lopez
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-943427-1-4 Medium
Area Expedition Conference
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ Xu2015 Serial 2631
 

 
Author David Vazquez
Title Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection Type Book Whole
Year 2013 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume 1 Issue 1 Pages 1-105
Keywords Pedestrian Detection; Domain Adaptation
Abstract Pedestrian detection is of paramount interest for many applications, e.g. Advanced Driver Assistance Systems, Intelligent Video Surveillance and Multimedia systems. The most promising pedestrian detectors rely on appearance-based classifiers trained with annotated data. However, the required annotation step represents an intensive and subjective task for humans, which makes it worth minimizing their intervention in this process by using computational tools like realistic virtual worlds. The reason to use this kind of tool lies in the fact that it allows the automatic generation of precise and rich annotations of visual information. Nevertheless, the use of this kind of data comes with the following question: can a pedestrian appearance model learnt with virtual-world data work successfully for pedestrian detection in real-world scenarios? To answer this question, we conduct different experiments that suggest a positive answer. However, pedestrian classifiers trained with virtual-world data can suffer from the so-called dataset shift problem, as real-world based classifiers do. Accordingly, we have designed different domain adaptation techniques to face this problem, all of them integrated in the same framework (V-AYLA). We have explored different methods to train domain-adapted pedestrian classifiers by collecting a few pedestrian samples from the target domain (real world) and combining them with many samples from the source domain (virtual world). The extensive experiments we present show that pedestrian detectors developed within the V-AYLA framework do achieve domain adaptation. Ideally, we would like to adapt our system without any human intervention. Therefore, as a first proof of concept we also propose an unsupervised domain adaptation technique that avoids human intervention during the adaptation process. To the best of our knowledge, this Thesis is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. Last but not least, we also assessed a different strategy to avoid the dataset shift, which consists of collecting real-world samples and retraining with them in such a way that no bounding boxes of real-world pedestrians have to be provided. We show that the generated classifier is competitive with respect to the counterpart trained with samples collected by manually annotating pedestrian bounding boxes. The results presented in this Thesis not only end with a proposal for adapting a virtual-world pedestrian detector to the real world, but also point out a new methodology that would allow the system to adapt to different situations, which we hope will provide the foundations for future research in this unexplored area.
Address Barcelona
Corporate Author Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey Place of Publication Barcelona Editor Antonio Lopez;Daniel Ponsa
Language English Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-940530-1-6 Medium
Area Expedition Conference
Notes adas Approved yes
Call Number ADAS @ adas @ Vaz2013 Serial 2276
 

 
Author Marc Masana; Joost Van de Weijer; Luis Herranz; Andrew Bagdanov; Jose Manuel Alvarez
Title Domain-adaptive deep network compression Type Conference Article
Year 2017 Publication 17th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Deep Neural Networks trained on large datasets can be easily transferred to new domains with far fewer labeled examples by a process called fine-tuning. This has the advantage that representations learned in the large source domain can be exploited on smaller target domains. However, networks designed to be optimal for the source task are often prohibitively large for the target task. In this work we address the compression of networks after domain transfer.
We focus on compression algorithms based on low-rank matrix decomposition. Existing methods base compression solely on learned network weights and ignore the statistics of network activations. We show that domain transfer leads to large shifts in network activations and that it is desirable to take this into account when compressing.
We demonstrate that considering activation statistics when compressing weights leads to a rank-constrained regression problem with a closed-form solution. Because our method takes into account the target domain, it can more optimally remove the redundancy in the weights. Experiments show that our Domain Adaptive Low Rank (DALR) method significantly outperforms existing low-rank compression techniques. With our approach, the fc6 layer of VGG19 can be compressed more than 4x more than using truncated SVD alone, with only a minor or no loss in accuracy. When applied to domain-transferred networks it allows for compression down to only 5-20% of the original number of parameters with only a minor drop in performance.
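The activation-aware, closed-form compression the abstract describes can be illustrated with a small numpy sketch: approximate the layer responses X @ W (rather than the weights alone) with a rank-k factorization obtained from an SVD plus a least-squares pull-back. Shapes and names are illustrative, and this shows only the core rank-constrained regression step under stated assumptions, not the full DALR pipeline.

```python
# Hedged sketch: rank-constrained compression that accounts for the
# target-domain activations X entering the layer.
import numpy as np

def dalr_like_compress(X, W, k):
    """Rank-k W_hat minimizing ||X @ W - X @ W_hat||_F via SVD + least squares."""
    Y = X @ W                                    # target-domain responses
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    Y_k = U[:, :k] * s[:k] @ Vt[:k]              # best rank-k approx of responses
    W_hat = np.linalg.pinv(X) @ Y_k              # pull back to weight space
    A, B = W_hat @ Vt[:k].T, Vt[:k]              # factor into two thin matrices
    return A, B                                  # layer becomes x -> (x @ A) @ B

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))                 # activations entering the layer
W = rng.normal(size=(256, 128))                  # original dense weights
A, B = dalr_like_compress(X, W, k=32)
print(np.linalg.norm(X @ W - X @ (A @ B)))       # response reconstruction error
```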
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes LAMP; 601.305; 600.106; 600.120 Approved no
Call Number Admin @ si @ Serial 3034
 

 
Author Sergio Escalera; R. M. Martinez; Jordi Vitria; Petia Radeva; Maria Teresa Anguera
Title Dominance Detection in Face-to-face Conversations Type Conference Article
Year 2009 Publication 2nd IEEE Workshop on CVPR for Human communicative Behavior analysis Abbreviated Journal
Volume Issue Pages 97–102
Keywords
Abstract Dominance refers to the level of influence a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers by categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, the considered indicators are automatically extracted from video sequences and learnt by using binary classifiers. Results from the three analyses show a high correlation and allow the categorization of dominant people in public discussion video sequences.
Address Miami, USA
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2160-7508 ISBN 978-1-4244-3994-2 Medium
Area Expedition Conference CVPR
Notes HuPBA; OR; MILAB;MV Approved no
Call Number BCNPCL @ bcnpcl @ EMV2009 Serial 1227
 

 
Author Jianzhu Guo; Zhen Lei; Jun Wan; Egils Avots; Noushin Hajarolasvadi; Boris Knyazev; Artem Kuharenko; Julio C. S. Jacques Junior; Xavier Baro; Hasan Demirel; Sergio Escalera; Juri Allik; Gholamreza Anbarjafari
Title Dominant and Complementary Emotion Recognition from Still Images of Faces Type Journal Article
Year 2018 Publication IEEE Access Abbreviated Journal ACCESS
Volume 6 Issue Pages 26391 - 26403
Keywords
Abstract Emotion recognition has a key role in affective computing. Recently, fine-grained emotion analysis, such as compound facial expression of emotions, has attracted high interest of researchers working on affective computing. A compound facial emotion includes dominant and complementary emotions (e.g., happily-disgusted and sadly-fearful), which is more detailed than the seven classical facial emotions (e.g., happy, disgust, and so on). Current studies on compound emotions are limited to data sets with a small number of categories and unbalanced data distributions, with labels obtained automatically by machine learning-based algorithms, which could lead to inaccuracies. To address these problems, we released the iCV-MEFED data set, which includes 50 classes of compound emotions and labels assessed by psychologists. The task is challenging due to high similarities of compound facial emotions from different categories. In addition, we have organized a challenge based on the proposed iCV-MEFED data set, held at the FG 2017 workshop. In this paper, we analyze the top three winner methods and perform further detailed experiments on the proposed data set. Experiments indicate that pairs of compound emotions (e.g., surprisingly-happy vs happily-surprised) are more difficult to recognize than the seven basic emotions. However, we hope the proposed data set can help to pave the way for further research on compound facial emotion recognition.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ GLW2018 Serial 3122
 

 
Author Christer Loob; Pejman Rasti; Iiris Lusi; Julio C. S. Jacques Junior; Xavier Baro; Sergio Escalera; Tomasz Sapinski; Dorota Kaminska; Gholamreza Anbarjafari
Title Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support Vector Classification Type Conference Article
Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We propose a new facial expression recognition model which introduces more than 30 detailed facial expressions recognisable by any artificial intelligence interacting with a human. Throughout this research, we introduce two categories for the emotions, namely, dominant emotions and complementary emotions. In this paper the complementary emotion is recognised by using the eye region if the dominant emotion is angry, fearful or sad, while if the dominant emotion is disgust or happiness the complementary emotion is mainly conveyed by the mouth. In order to verify the tagged dominant and complementary emotions, randomly chosen people voted for the recognised multi-emotional facial expressions. The average voting results show that 73.88% of the voters agree on the correctness of the recognised multi-emotional facial expressions.
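For the classification stage named in the title, a C-support vector classifier over facial-region features, a minimal scikit-learn sketch might be as follows. The random features, dimensionality, and label count are placeholders, not the authors' descriptors or data.

```python
# Hedged sketch: C-SVC over region descriptors. Per the abstract, eye-region
# features would disambiguate complementary emotions for angry/fearful/sad
# dominants, and mouth-region features for disgust/happiness dominants.
# All features and labels below are random placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
eye_features = rng.normal(size=(300, 64))   # stand-in eye-region descriptors
labels = rng.integers(0, 6, size=300)       # stand-in complementary-emotion ids

clf = SVC(C=1.0, kernel="rbf")              # C-support vector classification
clf.fit(eye_features[:250], labels[:250])
print(clf.score(eye_features[250:], labels[250:]))
```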
Address Washington; DC; USA; May 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference FG
Notes HUPBA; no menciona Approved no
Call Number Admin @ si @ LRL2017 Serial 2925
 

 
Author Arka Ujjal Dey; Suman Ghosh; Ernest Valveny
Title Don't only Feel Read: Using Scene text to understand advertisements Type Conference Article
Year 2018 Publication IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We propose a framework for automated classification of advertisement images, using not just visual features but also textual cues extracted from embedded text. Our approach takes inspiration from the assumption that ad images contain meaningful textual content that can provide a discriminative semantic interpretation and can thus aid in classification tasks. To this end, we develop a framework using off-the-shelf components and demonstrate the effectiveness of textual cues in semantic classification tasks.
Address Salt Lake City; Utah; USA; June 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes DAG; 600.121; 600.129 Approved no
Call Number Admin @ si @ DGV2018 Serial 3551
 

 
Author Sounak Dey; Pau Riba; Anjan Dutta; Josep Llados; Yi-Zhe Song
Title Doodle to Search: Practical Zero-Shot Sketch-Based Image Retrieval Type Conference Article
Year 2019 Publication IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 2179-2188
Keywords
Abstract In this paper, we investigate the problem of zero-shot sketch-based image retrieval (ZS-SBIR), where human sketches are used as queries to conduct retrieval of photos from unseen categories. We importantly advance prior arts by proposing a novel ZS-SBIR scenario that represents a firm step forward in its practical application. The new setting uniquely recognizes two important yet often neglected challenges of practical ZS-SBIR, (i) the large domain gap between amateur sketch and photo, and (ii) the necessity for moving towards large-scale retrieval. We first contribute to the community a novel ZS-SBIR dataset, QuickDraw-Extended, that consists of 330,000 sketches and 204,000 photos spanning across 110 categories. Highly abstract amateur human sketches are purposefully sourced to maximize the domain gap, instead of ones included in existing datasets that can often be semi-photorealistic. We then formulate a ZS-SBIR framework to jointly model sketches and photos into a common embedding space. A novel strategy to mine the mutual information among domains is specifically engineered to alleviate the domain gap. External semantic knowledge is further embedded to aid semantic transfer. We show that, rather surprisingly, retrieval performance significantly outperforms that of state-of-the-art on existing datasets that can already be achieved using a reduced version of our model. We further demonstrate the superior performance of our full model by comparing with a number of alternatives on the newly proposed dataset. The new dataset, plus all training and testing code of our model, will be publicly released to facilitate future research.
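The common-embedding idea at the core of this abstract can be illustrated with a short PyTorch sketch: two small encoders map sketch and photo features into one space, trained with a triplet loss so a sketch lands nearer its matching photo than a mismatched one. Architectures, dimensions, and the random features are placeholders; the full model also includes mutual-information and semantic-knowledge components not shown here.

```python
# Hedged sketch: joint sketch/photo embedding trained with a triplet loss.
# Encoders, feature sizes, and data are illustrative placeholders.
import torch
import torch.nn as nn

embed_sketch = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
embed_photo = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 128))
triplet = nn.TripletMarginLoss(margin=0.2)
params = list(embed_sketch.parameters()) + list(embed_photo.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)

# One hypothetical training step on pre-extracted CNN features.
sketch_f = torch.rand(32, 512)    # anchors: sketch features
photo_pos = torch.rand(32, 512)   # positives: matching photos
photo_neg = torch.rand(32, 512)   # negatives: photos from other categories
loss = triplet(embed_sketch(sketch_f), embed_photo(photo_pos), embed_photo(photo_neg))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```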
Address Long Beach; CA; USA; June 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes DAG; 600.140; 600.121; 600.097 Approved no
Call Number Admin @ si @ DRD2019 Serial 3462
 

 
Author Thanh Ha Do; Oriol Ramos Terrades; Salvatore Tabbone
Title DSD: document sparse-based denoising algorithm Type Journal Article
Year 2019 Publication Pattern Analysis and Applications Abbreviated Journal PAA
Volume 22 Issue 1 Pages 177–186
Keywords Document denoising; Sparse representations; Sparse dictionary learning; Document degradation models
Abstract In this paper, we present a sparse-based denoising algorithm for scanned documents. This method can be applied to any kind of scanned document with satisfactory results. Unlike other approaches, the proposed approach encodes noisy documents through sparse representation and visual dictionary learning techniques without any prior noise model. Moreover, we propose a precision parameter estimator. Experiments on several datasets demonstrate the robustness of the proposed approach compared to state-of-the-art methods on document denoising.
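A generic sparse-representation denoising loop in the spirit of this abstract, learn a patch dictionary, sparse-code each patch, and reconstruct, can be sketched with scikit-learn as below. This is not the authors' DSD algorithm or its precision-parameter estimator, and the stand-in "document" is random noise rather than a real scan.

```python
# Hedged sketch: patch-based sparse coding with a learned visual dictionary,
# then reconstruction from the sparsely coded patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d, reconstruct_from_patches_2d)

rng = np.random.default_rng(0)
noisy_doc = np.clip(rng.normal(0.9, 0.2, size=(128, 128)), 0, 1)  # stand-in scan

patches = extract_patches_2d(noisy_doc, (8, 8))   # all overlapping 8x8 patches
flat = patches.reshape(len(patches), -1)
mean = flat.mean(axis=1, keepdims=True)           # remove per-patch DC component

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
dico.fit((flat - mean)[::10])                     # learn dictionary on a subsample
codes = dico.transform(flat - mean)               # sparse code for every patch

denoised_patches = (codes @ dico.components_ + mean).reshape(patches.shape)
denoised = reconstruct_from_patches_2d(denoised_patches, noisy_doc.shape)
```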
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.097; 600.140; 600.121 Approved no
Call Number Admin @ si @ DRT2019 Serial 3254