Author: Laura Lopez-Fuentes; Andrew Bagdanov; Joost Van de Weijer; Harald Skinnemoen
Title: Bandwidth Limited Object Recognition in High Resolution Imagery
Type: Conference Article
Year: 2017
Publication: IEEE Winter Conference on Applications of Computer Vision
Abstract: This paper proposes a novel method to optimize bandwidth usage for object detection in critical communication scenarios. We develop two operating models of active information seeking. The first model identifies promising regions in low resolution imagery and progressively requests higher resolution regions on which to perform recognition of higher semantic quality. The second model identifies promising regions in low resolution imagery while simultaneously predicting the approximate location of the object of higher semantic quality. From this general framework, we develop a car recognition system via identification of its license plate and evaluate the performance of both models on a car dataset that we introduce. Results are compared with traditional JPEG compression and demonstrate that our system saves up to one order of magnitude of bandwidth while sacrificing little in terms of recognition performance.
Address: Santa Rosa, CA, USA; March 2017
Conference: WACV
Notes: LAMP; 600.068; 600.109; 600.084; 600.106; 600.079; 600.120
Call Number: Admin @ si @ LBW2017
Serial: 2973
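Both operating models above reduce to the same client-side pattern: detect cheaply at low resolution, then spend bandwidth only on the promising regions. Below is a minimal sketch of that loop; detect_regions, request_hires_crop, and recognize are hypothetical stand-ins for the paper's detector, transmission channel, and recognizer, and the byte budget is an illustrative knob, not part of the published method.

```python
# Sketch of the progressive-resolution strategy described in the abstract.
# detect_regions, request_hires_crop and recognize are hypothetical
# placeholders for the paper's detector, channel and recognizer.

def recognize_with_budget(lowres_image, budget_bytes,
                          detect_regions, request_hires_crop, recognize):
    """Spend bandwidth only on regions that look promising at low resolution."""
    results, spent = [], 0
    # Stage 1: find candidate regions in the cheap low-resolution image.
    candidates = sorted(detect_regions(lowres_image),
                        key=lambda r: r[1], reverse=True)  # (box, score) pairs
    for box, _score in candidates:
        # Stage 2: fetch the same region at high resolution, if budget allows.
        crop, cost = request_hires_crop(box)
        if spent + cost > budget_bytes:
            break
        spent += cost
        results.append(recognize(crop))  # recognition of higher semantic quality
    return results
```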
 

 
Author: Karim Lekadir; Alfiia Galimzianova; Angels Betriu; Maria del Mar Vila; Laura Igual; Daniel L. Rubin; Elvira Fernandez-Giraldez; Petia Radeva; Sandy Napel
Title: A Convolutional Neural Network for Automatic Characterization of Plaque Composition in Carotid Ultrasound
Type: Journal Article
Year: 2017
Publication: IEEE Journal of Biomedical and Health Informatics
Abbreviated Journal: J-BHI
Volume: 21
Issue: 1
Pages: 48-55
Abstract: Characterization of carotid plaque composition, more specifically the amount of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low costs and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents. In this paper, we propose to address this challenging problem by exploiting the unique capabilities of the emerging deep learning framework. More specifically, and unlike existing works which require a priori definition of specific imaging features or thresholding values, we propose to build a convolutional neural network (CNN) that will automatically extract from the images the information that is optimal for the identification of the different plaque constituents. We used approximately 90 000 patches extracted from a database of images and corresponding expert plaque characterizations to train and to validate the proposed CNN. The results of cross-validation experiments show a correlation of about 0.90 with the clinical assessment for the estimation of lipid core, fibrous cap, and calcified tissue areas, indicating the potential of deep learning for the challenging task of automatic characterization of plaque composition in carotid ultrasound.
Notes: MILAB; not mentioned
Call Number: Admin @ si @ LGB2017
Serial: 2931
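The patch-based pipeline above maps naturally onto a small image classifier. A minimal PyTorch sketch follows; the architecture, the 32x32 patch size, and the single-channel input are illustrative assumptions, not the network described in the paper.

```python
# Minimal PyTorch sketch of a patch-level classifier for the three plaque
# constituents (lipid core, fibrous tissue, calcified tissue). Architecture
# and 32x32 single-channel patches are illustrative assumptions.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):          # x: (batch, 1, 32, 32) ultrasound patches
        return self.classifier(self.features(x).flatten(1))

model = PatchCNN()
logits = model(torch.randn(4, 1, 32, 32))  # four dummy patches
print(logits.shape)                        # torch.Size([4, 3])
```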
 

 
Author: Antonio Lopez; Atsushi Imiya; Tomas Pajdla; Jose Manuel Alvarez
Title: Computer Vision in Vehicle Technology: Land, Sea & Air
Type: Book Whole
Year: 2017
Pages: 161-163
Abstract: This chapter examines different vision-based commercial solutions for real-life problems related to vehicles. It is worth mentioning the recent astonishing performance of deep convolutional neural networks (DCNNs) in difficult visual tasks such as image classification, object recognition/localization/detection, and semantic segmentation. In fact, different DCNN architectures are already being explored for low-level tasks such as optical flow and disparity computation, and for higher-level ones such as place recognition.
Publisher: John Wiley & Sons, Ltd
ISBN: 978-1-118-86807-2
Notes: ADAS; 600.118
Call Number: Admin @ si @ LIP2017a
Serial: 2937
 

 
Author: Iiris Lusi; Julio C. S. Jacques Junior; Jelena Gorbova; Xavier Baro; Sergio Escalera; Hasan Demirel; Juri Allik; Cagri Ozcinar; Gholamreza Anbarjafari
Title: Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation: Databases
Type: Conference Article
Year: 2017
Publication: 12th IEEE International Conference on Automatic Face and Gesture Recognition
Abstract: In this work, two databases for the Joint Challenge on Dominant and Complementary Emotion Recognition Using Micro Emotion Features and Head-Pose Estimation are introduced. Head pose estimation paired with detailed emotion recognition has become very important in relation to human-computer interaction. The 3D head pose database, SASE, was acquired with a Microsoft Kinect 2 camera and includes RGB and depth information for different head poses; it comprises a total of 30000 frames with annotated markers, covering 32 male and 18 female subjects. The dominant and complementary emotion database, iCVMEFED, includes 31250 images with different emotions from 115 subjects whose gender distribution is almost uniform, with 5 samples per subject. The emotions comprise 7 basic emotions plus neutral, defined as complementary and dominant pairs. The emotions associated with the images were labeled with the support of psychologists.
Address: Washington, DC, USA; May 2017
Conference: FG
Notes: HUPBA; not mentioned
Call Number: Admin @ si @ LJG2017
Serial: 2924
 

 
Author: Laura Lopez-Fuentes; Sebastia Massanet; Manuel Gonzalez-Hidalgo
Title: Image vignetting reduction via a maximization of fuzzy entropy
Type: Conference Article
Year: 2017
Publication: IEEE International Conference on Fuzzy Systems
Abstract: In many computer vision applications, vignetting is an undesirable effect which must be removed in a pre-processing step. Recently, an algorithm for image vignetting correction has been presented based on a minimization of log-intensity entropy. This method relies on the fact that the entropy of an image increases when it is affected by vignetting. In this paper, we propose a novel algorithm to reduce image vignetting via a maximization of the fuzzy entropy of the image. Fuzzy entropy quantifies the degree of fuzziness of a fuzzy set, and its value is also modified by the presence of vignetting. The experimental results show that this novel algorithm outperforms, in most cases, the algorithm based on the minimization of log-intensity entropy from both the qualitative and the quantitative points of view.
Address: Naples, Italy; July 2017
Conference: FUZZ-IEEE
Notes: LAMP; 600.120
Call Number: Admin @ si @ LMG2017
Serial: 2972
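The core quantity here is the De Luca-Termini fuzzy entropy of the image, maximized over a vignetting-correction parameter. A minimal sketch under stated assumptions: intensities normalized to [0, 1] serve as the membership function, and a quadratic radial gain stands in for whatever correction model the paper actually uses.

```python
# Vignetting reduction by maximizing De Luca-Termini fuzzy entropy.
# Assumptions: grayscale uint8 input, intensity/255 as membership function,
# and a quadratic radial gain model; the paper's exact model is not reproduced.
import numpy as np

def fuzzy_entropy(img):
    mu = np.clip(img.astype(float) / 255.0, 1e-6, 1 - 1e-6)  # membership degrees
    return -np.mean(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))

def correct_vignetting(img):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / ((h / 2) ** 2 + (w / 2) ** 2)
    # Grid-search the gain strength that maximizes the fuzzy entropy.
    best = max(np.linspace(0.0, 2.0, 41),
               key=lambda a: fuzzy_entropy(np.clip(img * (1 + a * r2), 0, 255)))
    return np.clip(img * (1 + best * r2), 0, 255).astype(np.uint8)
```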
 

 
Author: Christer Loob; Pejman Rasti; Iiris Lusi; Julio C. S. Jacques Junior; Xavier Baro; Sergio Escalera; Tomasz Sapinski; Dorota Kaminska; Gholamreza Anbarjafari
Title: Dominant and Complementary Multi-Emotional Facial Expression Recognition Using C-Support Vector Classification
Type: Conference Article
Year: 2017
Publication: 12th IEEE International Conference on Automatic Face and Gesture Recognition
Abstract: We propose a new facial expression recognition model which introduces 30+ detailed facial expressions recognisable by any artificial intelligence interacting with a human. Throughout this research, we introduce two categories of emotions, namely dominant emotions and complementary emotions. In this paper, the complementary emotion is recognised using the eye region if the dominant emotion is angry, fearful or sad, and if the dominant emotion is disgust or happiness, the complementary emotion is mainly conveyed by the mouth. In order to verify the tagged dominant and complementary emotions, randomly chosen people voted on the recognised multi-emotional facial expressions. The average voting results show that 73.88% of the voters agree on the correctness of the recognised multi-emotional facial expressions.
Address: Washington, DC, USA; May 2017
Conference: FG
Notes: HUPBA; not mentioned
Call Number: Admin @ si @ LRL2017
Serial: 2925
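The classification stage named in the title is standard C-Support Vector Classification. A minimal scikit-learn sketch: one SVC for the dominant emotion from whole-face features and one for the complementary emotion from region (eye or mouth) features. The feature extraction itself is out of scope here, so random vectors stand in for it.

```python
# Sketch of the C-SVC stage: dominant emotion from face features,
# complementary emotion from region features. Random data stands in
# for the paper's actual features; C and kernel are default choices.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_face, y_dom = rng.normal(size=(200, 64)), rng.integers(0, 7, 200)
X_region, y_comp = rng.normal(size=(200, 32)), rng.integers(0, 7, 200)

dominant_clf = SVC(C=1.0, kernel="rbf").fit(X_face, y_dom)
complementary_clf = SVC(C=1.0, kernel="rbf").fit(X_region, y_comp)

print(dominant_clf.predict(X_face[:1]), complementary_clf.predict(X_region[:1]))
```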
 

 
Author: Laura Lopez-Fuentes; Claudio Rossi; Harald Skinnemoen
Title: River segmentation for flood monitoring
Type: Conference Article
Year: 2017
Publication: Data Science for Emergency Management at Big Data 2017
Abstract: Floods are major natural disasters which cause deaths and material damage every year. Monitoring these events is crucial in order to reduce both the number of people affected and the economic losses. In this work, we train and test three different deep learning segmentation algorithms to estimate the water area from river images and compare their performance. We discuss the implementation of a novel data chain aimed at monitoring river water levels by automatically processing data collected from surveillance cameras, and at giving alerts in case of sharp increases of the water level or flooding. We also create and openly publish the first image dataset for river water segmentation.
Notes: LAMP; 600.084; 600.120
Call Number: Admin @ si @ LRS2017
Serial: 3078
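The alerting step of the data chain can be sketched independently of the segmentation networks: estimate the water fraction of each frame from the predicted binary mask and flag sudden rises. The 0.15 rise threshold is an illustrative assumption, not a value from the paper.

```python
# Sketch of the alerting step: water fraction from a binary segmentation
# mask, alert on sudden rises. Threshold is an illustrative assumption.
import numpy as np

def water_fraction(mask):
    """mask: HxW binary array, 1 = water pixel (output of the segmentation net)."""
    return float(mask.mean())

def flood_alert(fractions, rise_threshold=0.15):
    """Alert when the water fraction jumps by more than rise_threshold
    between consecutive frames from the surveillance camera."""
    return any(b - a > rise_threshold for a, b in zip(fractions, fractions[1:]))

masks = [np.zeros((240, 320)),
         (np.random.rand(240, 320) < 0.4).astype(float)]  # toy masks
print(flood_alert([water_fraction(m) for m in masks]))    # True: 0.0 -> ~0.4
```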
 

 
Author: Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari
Title: Training my car to see using virtual worlds
Type: Journal Article
Year: 2017
Publication: Image and Vision Computing
Abbreviated Journal: IMAVIS
Volume: 38
Pages: 102-118
Abstract: Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e., to detect and track relevant driving information in a reliable manner (e.g., pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e., on-board images), manually annotating the data (e.g., drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g., a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such a representation with the acquired data. In the last two decades, most research efforts focused on representation learning (first designing descriptors and learning classifiers, later doing it end-to-end). Hence, collecting data and, especially, annotating it is essential for learning good representations. While this has been the case from the very beginning, only after the disruptive appearance of deep convolutional neural networks did it become a serious issue, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work prone to errors. Accordingly, in the late 2000s we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize that work and show how it has become a new trend with increasing acceptance.
Notes: ADAS; 600.118
Call Number: Admin @ si @ LVS2017
Serial: 2985
 

 
Author: Laura Lopez-Fuentes; Joost Van de Weijer; Marc Bolaños; Harald Skinnemoen
Title: Multi-modal Deep Learning Approach for Flood Detection
Type: Conference Article
Year: 2017
Publication: MediaEval Benchmarking Initiative for Multimedia Evaluation
Abstract: In this paper, we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain some metadata and/or visual information, so we use this information to detect the floods. The model is based on a convolutional neural network, which extracts the visual features, and a bidirectional Long Short-Term Memory network, which extracts the semantic features from the textual metadata. We validate the method on images extracted from Flickr which contain both visual information and metadata, and compare the results when using both, visual information only, or metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task.
Address: Dublin, Ireland; September 2017
Conference: MediaEval
Notes: LAMP; 600.084; 600.109; 600.120
Call Number: Admin @ si @ LWB2017a
Serial: 2974
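A minimal PyTorch sketch of the fusion described in the abstract: a small CNN encodes the image, a bidirectional LSTM encodes the embedded metadata tokens, and a linear head combines both into a flood logit. All layer sizes and the vocabulary size are illustrative assumptions, not the paper's architecture.

```python
# Sketch of CNN + bidirectional LSTM fusion for flood detection.
# Layer sizes and vocabulary are illustrative assumptions.
import torch
import torch.nn as nn

class FloodNet(nn.Module):
    def __init__(self, vocab_size=1000, emb=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # -> (B, 8) visual features
        )
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, 16, bidirectional=True, batch_first=True)
        self.head = nn.Linear(8 + 32, 1)                # visual + 2x16 text

    def forward(self, image, tokens):
        v = self.cnn(image)                             # (B, 8)
        _, (h, _) = self.lstm(self.embed(tokens))       # h: (2, B, 16), fwd + bwd
        t = torch.cat([h[0], h[1]], dim=1)              # (B, 32) text features
        return self.head(torch.cat([v, t], dim=1))      # flood / no-flood logit

net = FloodNet()
logit = net(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
print(logit.shape)  # torch.Size([2, 1])
```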
 

 
Author: Xialei Liu; Joost Van de Weijer; Andrew Bagdanov
Title: RankIQA: Learning from Rankings for No-reference Image Quality Assessment
Type: Conference Article
Year: 2017
Publication: 17th IEEE International Conference on Computer Vision
Abstract: We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). To address the problem of limited IQA dataset size, we train a Siamese Network to rank images in terms of image quality by using synthetically generated distortions for which relative image quality is known. These ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese Network to a traditional CNN that estimates absolute image quality from single images. We demonstrate how our approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch. Experiments on the TID2013 benchmark show that we improve the state-of-the-art by over 5%. Furthermore, on the LIVE benchmark we show that our approach is superior to existing NR-IQA techniques and that we even outperform the state-of-the-art in full-reference IQA (FR-IQA) methods without having to resort to high-quality reference images to infer IQA.
Address: Venice, Italy; October 2017
Conference: ICCV
Notes: LAMP; 600.106; 600.109; 600.120
Call Number: Admin @ si @ LWB2017b
Serial: 3036
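The training signal is pairwise: the same network scores two versions of an image, and a margin ranking loss pushes the less distorted one to score higher. A minimal sketch with a placeholder scorer; the paper's efficiency trick of forwarding a whole batch through a single network and back-propagating gradients from all pairs is omitted for brevity.

```python
# Pairwise ranking sketch for RankIQA-style training. The scorer is a
# placeholder for the quality-prediction CNN; weights are literally shared
# because both branches call the same module.
import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
loss_fn = nn.MarginRankingLoss(margin=1.0)

better = torch.randn(8, 3, 32, 32)   # lower synthetic distortion level
worse = torch.randn(8, 3, 32, 32)    # stronger distortion of the same images
target = torch.ones(8, 1)            # "first input should score higher"

loss = loss_fn(scorer(better), scorer(worse), target)
loss.backward()
```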
 

 
Author: Meysam Madadi
Title: Human Segmentation, Pose Estimation and Applications
Type: Book Whole
Year: 2017
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Automatically analyzing humans in photographs or videos has great potential applications in computer vision, including medical diagnosis, sports, entertainment, movie editing and surveillance, just to name a few. The body, face and hand are the most studied components of humans. The body has great variability in shape and clothing, along with high degrees of freedom in pose. The face has many muscles, causing many visible deformations, besides variable shape and hair styles. The hand is a small object which moves fast and has high degrees of freedom. Adding human characteristics to all the aforementioned variabilities makes human analysis quite a challenging task.
In this thesis, we developed human segmentation in different modalities. In a first scenario, we segmented the human body and hand in depth images using example-based shape warping. We developed a shape descriptor based on shape context and class probabilities of shape regions to extract nearest neighbors. We then considered rigid affine alignment vs. non-rigid iterative shape warping. In a second scenario, we segmented faces in RGB images using convolutional neural networks (CNNs). We modeled a conditional random field with recurrent neural networks. In our model, pairwise kernels are not fixed but learned during training. We trained the network end-to-end using adversarial networks, which improved hair segmentation by a high margin.
We also worked on 3D hand pose estimation in depth images. In a generative approach, we fitted a finger model separately for each finger based on our example-based rigid hand segmentation. We minimized an energy function based on overlapping area, depth discrepancy and finger collisions. We also applied linear models in joint trajectory space to refine occluded joints based on the error of visible joints and the trajectory smoothness of invisible joints. In a CNN-based approach, we developed a tree-structured network to train specific features for each finger and fused them for global pose consistency. We also formulated physical and appearance constraints as loss functions.
Finally, we developed a number of applications consisting of human soft-biometrics measurement and garment retexturing. We also generated several datasets in this thesis, covering human segmentation, synthetic hand poses, garment retexturing and Italian gestures.
Address: October 2017
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Sergio Escalera; Jordi Gonzalez
ISBN: 978-84-945373-3-2
Notes: HUPBA
Call Number: Admin @ si @ Mad2017
Serial: 3017
 

 
Author: Meysam Madadi; Sergio Escalera; Alex Carruesco; Carlos Andujar; Xavier Baro; Jordi Gonzalez
Title: Occlusion Aware Hand Pose Recovery from Sequences of Depth Images
Type: Conference Article
Year: 2017
Publication: 12th IEEE International Conference on Automatic Face and Gesture Recognition
Abstract: State-of-the-art approaches to hand pose estimation from depth images have reported promising results under fairly controlled conditions. In this paper we propose a two-step pipeline for recovering the hand pose from a sequence of depth images. The pipeline has been designed to deal with images taken from any viewpoint and exhibiting a high degree of finger occlusion. In a first step we initialize the hand pose using a part-based model, fitting a set of hand components in the depth images. In a second step we consider temporal data and estimate the parameters of a trained bilinear model consisting of shape and trajectory bases. Results on a synthetic, highly occluded dataset demonstrate that the proposed method outperforms most recent pose recovery approaches, including those based on CNNs.
Conference: FG
Notes: HUPBA; ISE; 602.143; 600.098; 600.119
Call Number: Admin @ si @ MEC2017
Serial: 2970
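The temporal step fits a trained bilinear model of shape and trajectory bases. As a simplified, hedged illustration of the trajectory half only, the sketch below projects a partially observed 1-D joint trajectory onto a low-rank smooth basis (a DCT basis standing in for the learned one) and reconstructs the occluded frames by least squares.

```python
# Simplified trajectory-space refinement: project a partially observed joint
# coordinate onto a low-rank smooth basis and reconstruct occluded frames.
# A DCT basis stands in for the paper's learned trajectory basis.
import numpy as np
from scipy.fft import dct

F, K = 30, 5                                   # frames, basis size (assumed)
basis = dct(np.eye(F), norm="ortho")[:, :K]    # columns: smooth cosine modes

traj = np.cumsum(np.random.randn(F))           # toy 1-D joint coordinate
visible = np.ones(F, dtype=bool)
visible[10:18] = False                         # simulated finger occlusion

# Least-squares fit on visible frames only, then evaluate on all frames.
coef, *_ = np.linalg.lstsq(basis[visible], traj[visible], rcond=None)
refined = basis @ coef                         # includes the occluded frames
```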
 

 
Author: H. Martin Kjer; Jens Fagertun; Sergio Vera; Debora Gil
Title: Medial structure generation for registration of anatomical structures
Type: Book Chapter
Year: 2017
Publication: Skeletonization: Theory, Methods and Applications
Volume: 11
Notes: IAM; 600.096; 600.075; 600.145
Call Number: Admin @ si @ MFV2017a
Serial: 2935
 

 
Author: Lasse Martensson; Anders Hast; Alicia Fornes
Title: Word Spotting as a Tool for Scribal Attribution
Type: Conference Article
Year: 2017
Publication: 2nd Conference of the Association of Digital Humanities in the Nordic Countries
Pages: 87-89
Address: Gothenburg, Sweden; March 2017
ISBN: 978-91-88348-83-8
Conference: DHN
Notes: DAG; 600.097; 600.121
Call Number: Admin @ si @ MHF2017
Serial: 2954
 

 
Author: Weiqing Min; Shuqiang Jiang; Jitao Sang; Huayang Wang; Xinda Liu; Luis Herranz
Title: Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration
Type: Journal Article
Year: 2017
Publication: IEEE Transactions on Multimedia
Abbreviated Journal: TMM
Volume: 19
Issue: 5
Pages: 1100-1113
Abstract: This paper considers the problem of recipe-oriented image-ingredient correlation learning with multiple attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn a joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both a mid-level visual representation between images and visible ingredients and a non-visual representation. Furthermore, in order to utilize different attributes to improve the inter-modality correlation, M3TDBN incorporates multitask learning to make different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations for three novel extended applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset, and the evaluation results validate its effectiveness.
Notes: LAMP; 600.120
Call Number: Admin @ si @ MJS2017
Serial: 2964
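The multitask regularization idea (several attributes shaping one shared image-ingredient representation) can be illustrated with a generic shared encoder and one classification head per attribute. This is a stand-in sketch, not the paper's deep belief network; all sizes and the two attributes are illustrative.

```python
# Multitask sketch: a shared representation regularized by per-attribute
# heads. Generic stand-in for the M3TDBN idea; sizes are illustrative.
import torch
import torch.nn as nn

shared = nn.Sequential(nn.Linear(512, 128), nn.ReLU())   # joint representation
heads = nn.ModuleDict({
    "cuisine": nn.Linear(128, 10),
    "course": nn.Linear(128, 5),
})

x = torch.randn(16, 512)                  # fused image+ingredient features
targets = {"cuisine": torch.randint(0, 10, (16,)),
           "course": torch.randint(0, 5, (16,))}

h = shared(x)
loss = sum(nn.functional.cross_entropy(head(h), targets[name])
           for name, head in heads.items())
loss.backward()                           # each attribute shapes the shared layer
```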