Records
Author Laura Lopez-Fuentes; Andrew Bagdanov; Joost Van de Weijer; Harald Skinnemoen
Title Bandwidth Limited Object Recognition in High Resolution Imagery Type Conference Article
Year 2017 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper proposes a novel method to optimize bandwidth usage for object detection in critical communication scenarios. We develop two operating models of active information seeking. The first model identifies promising regions in low resolution imagery and progressively requests higher resolution regions on which to perform recognition of higher semantic quality. The second model identifies promising regions in low resolution imagery while simultaneously predicting the approximate location of the object of higher semantic quality. From this general framework, we develop a car recognition system via identification of its license plate and evaluate the performance of both models on a car dataset that we introduce. Results are compared with traditional JPEG compression and demonstrate that our system saves up to one order of magnitude of bandwidth while sacrificing little in terms of recognition performance.
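The first operating model in this abstract — score regions of a low-resolution image, then progressively request higher-resolution crops until recognition is confident or the bandwidth budget is spent — can be sketched roughly as follows. The variance-based scorer, the recognizer callback, and all cost figures are illustrative stand-ins, not the paper's learned components:

```python
# Sketch of a progressive, bandwidth-aware recognition loop. In the paper the
# scorer and recognizer are learned detectors operating on license plates;
# here toy callables make the control flow concrete.

def score_regions(low_res_regions):
    """Rank regions by a toy 'promisingness' score (here: pixel variance)."""
    def variance(pixels):
        m = sum(pixels) / len(pixels)
        return sum((p - m) ** 2 for p in pixels) / len(pixels)
    return sorted(low_res_regions, key=lambda r: variance(r["pixels"]), reverse=True)

def progressive_recognition(low_res_regions, request_high_res, recognize,
                            budget_bytes, confidence_threshold=0.9):
    """Request high-resolution crops for promising regions until the
    recognizer is confident enough or the bandwidth budget runs out."""
    spent = 0
    best = (0.0, None)  # (confidence, label)
    for region in score_regions(low_res_regions):
        cost = region["high_res_cost"]
        if spent + cost > budget_bytes:
            break
        spent += cost
        label, confidence = recognize(request_high_res(region))
        if confidence > best[0]:
            best = (confidence, label)
        if best[0] >= confidence_threshold:
            break  # confident enough: stop requesting bandwidth
    return best[1], best[0], spent
```

The loop stops as soon as recognition succeeds, which is where the order-of-magnitude bandwidth saving over transmitting the whole high-resolution image would come from.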
Address Santa Rosa; CA; USA; March 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WACV
Notes LAMP; 600.068; 600.109; 600.084; 600.106; 600.079; 600.120 Approved no
Call Number Admin @ si @ LBW2017 Serial 2973
 

 
Author Laura Lopez-Fuentes; Joost Van de Weijer; Marc Bolaños; Harald Skinnemoen
Title Multi-modal Deep Learning Approach for Flood Detection Type Conference Article
Year 2017 Publication MediaEval Benchmarking Initiative for Multimedia Evaluation Abbreviated Journal
Volume Issue Pages
Keywords
Abstract In this paper we propose a multi-modal deep learning approach to detect floods in social media posts. Social media posts normally contain metadata and/or visual information, and we exploit both sources for flood detection. The model is based on a Convolutional Neural Network, which extracts the visual features, and a bidirectional Long Short-Term Memory network, which extracts semantic features from the textual metadata. We validate the method on images extracted from Flickr that contain both visual information and metadata, and compare the results when using both modalities, visual information only, or metadata only. This work has been done in the context of the MediaEval Multimedia Satellite Task.
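The late-fusion step the abstract describes — concatenating a visual feature vector (from the CNN) with a textual feature vector (from the bidirectional LSTM) before classification — reduces to something like the sketch below. The feature dimensions and the logistic-classifier weights are hypothetical; the paper learns these end to end:

```python
# Minimal late-fusion sketch: concatenate per-modality feature vectors and
# apply a logistic classifier. The feature extractors themselves (CNN, biLSTM)
# are outside this sketch; their outputs are assumed precomputed.
import math

def fuse_and_classify(visual_features, text_features, weights, bias):
    """Concatenate modality features and return P(flood) from a logistic model."""
    fused = visual_features + text_features
    assert len(fused) == len(weights), "one weight per fused feature"
    z = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

When one modality is missing (metadata-only or image-only posts, as compared in the paper), its feature slice can simply be zeroed out under this formulation.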
Address Dublin; Ireland; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MediaEval
Notes LAMP; 600.084; 600.109; 600.120 Approved no
Call Number Admin @ si @ LWB2017a Serial 2974
 

 
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
Title Towards Real-Time Polyp Detection in Colonoscopy Videos: Adapting Still Frame-Based Methodologies for Video Sequences Analysis Type Conference Article
Year 2017 Publication 4th International Workshop on Computer Assisted and Robotic Endoscopy Abbreviated Journal
Volume Issue Pages 29-41
Keywords Polyp detection; colonoscopy; real time; spatio temporal coherence
Abstract Colorectal cancer is the second leading cause of cancer death in the United States: detection of precursor lesions (polyps) is key for patient survival. Though colonoscopy is the gold-standard screening tool, some polyps are still missed. Several computational systems have been proposed, but none is used in the clinical room, mainly due to computational constraints. Besides, most of them are built over still-frame databases, which decreases their performance on video analysis due to the lack of output stability and the associated variability in image quality and polyp appearance. We propose a strategy to adapt these methods to video analysis by adding a spatio-temporal stability module and studying a combination of features to capture polyp appearance variability. We validate our strategy, incorporated into a real-time detection method, on a public video database. The resulting method detects all polyps under real-time constraints, with improved performance owing to our adaptation strategy.
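A spatio-temporal stability module in the spirit of the one described above can be sketched as follows: a per-frame detection is only reported once it has been observed at a nearby location in enough recent frames, suppressing the frame-to-frame flicker of still-frame detectors. The window size and thresholds are illustrative, not the paper's values:

```python
# Toy temporal-coherence filter over per-frame polyp detections.
from collections import deque

def stable_detections(frame_detections, history=5, min_hits=3, max_dist=20.0):
    """frame_detections: iterable of per-frame lists of (x, y) detection centers.
    Yields, per frame, only the centers confirmed by temporal coherence, i.e.
    seen within max_dist pixels in at least min_hits of the recent frames."""
    past = deque(maxlen=history)
    for dets in frame_detections:
        confirmed = []
        for (x, y) in dets:
            hits = sum(
                any((x - px) ** 2 + (y - py) ** 2 <= max_dist ** 2
                    for (px, py) in frame)
                for frame in past)
            if hits + 1 >= min_hits:  # +1 counts the current frame
                confirmed.append((x, y))
        past.append(dets)
        yield confirmed
```

A filter like this trades a few frames of detection latency for output stability, which is the behavior the adaptation strategy targets.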
Address Quebec; Canada; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CARE
Notes MV; 600.096; 600.075 Approved no
Call Number Admin @ si @ ABS2017b Serial 2977
 

 
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Maroua Hammami; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
Title Clinical Usability Quantification Of a Real-Time Polyp Detection Method In Videocolonoscopy Type Conference Article
Year 2017 Publication 25th United European Gastroenterology Week Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ESGE
Notes MV; no menciona Approved no
Call Number Admin @ si @ ABS2017c Serial 2978
 

 
Author Cristina Sanchez Montes; F. Javier Sanchez; Cristina Rodriguez de Miguel; Henry Cordova; Jorge Bernal; Maria Lopez Ceron; Josep Llach; Gloria Fernandez Esparrach
Title Histological Prediction Of Colonic Polyps By Computer Vision. Preliminary Results Type Conference Article
Year 2017 Publication 25th United European Gastroenterology Week Abbreviated Journal
Volume Issue Pages
Keywords polyps; histology; computer vision
Abstract During colonoscopy, clinicians perform visual inspection of the polyps to predict histology. Kudo’s pit pattern classification is one of the most commonly used for optical diagnosis. These surface patterns present a contrast with respect to their neighboring regions and can be considered bright regions in the image that attract the attention of computational methods.
Address Barcelona; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ESGE
Notes MV; no menciona Approved no
Call Number Admin @ si @ SSR2017 Serial 2979
 

 
Author Pierdomenico Fiadino; Victor Ponce; Juan Antonio Torrero-Gonzalez; Marc Torrent-Moreno
Title Call Detail Records for Human Mobility Studies: Taking Stock of the Situation in the “Always Connected Era” Type Conference Article
Year 2017 Publication Workshop on Big Data Analytics and Machine Learning for Data Communication Networks Abbreviated Journal
Volume Issue Pages 43-48
Keywords mobile networks; call detail records; human mobility
Abstract The exploitation of cellular network data for studying human mobility has been a popular research topic in the last decade. Indeed, mobile terminals can be considered ubiquitous sensors that allow the observation of human movements on a large scale without relying on non-scalable techniques, such as surveys, or on dedicated and expensive monitoring infrastructures. In particular, Call Detail Records (CDRs), collected by operators for billing purposes, have been extensively employed due to their rather large availability compared to other types of cellular data (e.g., signaling). Despite the interest aroused around this topic, the research community has generally agreed on the scarcity of information provided by CDRs: the position of a mobile terminal is logged only when some kind of activity (calls, SMS, data connections) occurs, which translates into a picture of mobility biased by the activity degree of users. By studying two datasets collected by a nationwide operator in 2014 and 2016, we show that the situation has drastically changed in terms of data volume and quality. The increase of flat data plans and the higher penetration of “always connected” terminals have driven up the number of recorded CDRs, providing higher temporal accuracy for users’ locations.
Address UCLA; USA; August 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-5054-9 Medium
Area Expedition Conference ACMW (SIGCOMM)
Notes HuPBA; no menciona Approved no
Call Number Admin @ si @ FPT2017 Serial 2980
 

 
Author Maryam Asadi-Aghbolaghi; Albert Clapes; Marco Bellantonio; Hugo Jair Escalante; Victor Ponce; Xavier Baro; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera
Title Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey Type Book Chapter
Year 2017 Publication Gesture Recognition Abbreviated Journal
Volume Issue Pages 539-578
Keywords Action recognition; Gesture recognition; Deep learning architectures; Fusion strategies
Abstract Interest in automatic action and gesture recognition has grown considerably in the last few years. This is due in part to the large number of application domains for this type of technology. As in many other computer vision areas, deep learning based methods have quickly become a reference methodology for obtaining state-of-the-art performance in both tasks. This chapter is a survey of current deep learning based methodologies for action and gesture recognition in sequences of images. The survey reviews both fundamental and cutting-edge methodologies reported in the last few years. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. Details of the proposed architectures, fusion strategies, main datasets, and competitions are reviewed. We also summarize and discuss the main works proposed so far, with particular interest in how they treat the temporal dimension of data, their highlighting features, and opportunities and challenges for future research. To the best of our knowledge, this is the first survey on the topic. We foresee this survey will become a reference in this ever-dynamic field of research.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ ACB2017a Serial 2981
 

 
Author Maryam Asadi-Aghbolaghi; Albert Clapes; Marco Bellantonio; Hugo Jair Escalante; Victor Ponce; Xavier Baro; Isabelle Guyon; Shohreh Kasaei; Sergio Escalera
Title A survey on deep learning based approaches for action and gesture recognition in image sequences Type Conference Article
Year 2017 Publication 12th IEEE International Conference on Automatic Face and Gesture Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Interest in action and gesture recognition has grown considerably in recent years. In this paper, we present a survey of current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far, with particular interest in how they treat the temporal dimension of data, discussing their main features and identifying opportunities and challenges for future research.
Address Washington; USA; May 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference FG
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ ACB2017b Serial 2982
 

 
Author Ivet Rafegas; Maria Vanrell
Title Color representation in CNNs: parallelisms with biological vision Type Conference Article
Year 2017 Publication ICCV Workshop on Mutual Benefits of Cognitive and Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Convolutional Neural Networks (CNNs) trained for object recognition tasks present representational capabilities approaching those of primate visual systems [1]. This provides a computational framework to explore how image features are efficiently represented. Here, we dissect a trained CNN [2] to study how color is represented. We use a classical methodology from physiology: measuring the selectivity index of individual neurons to specific features. We use ImageNet [20] images and synthetic versions of them to quantify the color tuning properties of artificial neurons and to provide a classification of the network population. We identify three main levels of color representation showing parallelisms with biological visual systems: (a) a decomposition in a circular hue space to represent single-color regions, with a wider hue sampling beyond the first layer (V2); (b) the emergence of opponent low-dimensional spaces in early stages to represent color edges (V1); and (c) a strong entanglement between color and shape patterns representing object parts (e.g. wheel of a car), object shapes (e.g. faces) or object-surround configurations (e.g. blue sky surrounding an object) in deeper layers (V4 or IT).
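The physiology-style selectivity index mentioned in the abstract can be illustrated with one common definition: peak response versus the mean response over the remaining stimulus classes. The exact index used in the paper may differ; this is a sketch of the idea:

```python
# One common selectivity index from the physiology literature, for
# illustration: 1.0 when a unit responds to a single stimulus class only,
# 0.0 when its responses are uniform across classes.

def selectivity_index(responses):
    """responses: non-negative mean responses of one neuron, one per
    stimulus class (e.g. one per hue bin); at least two classes."""
    peak = max(responses)
    rest = (sum(responses) - peak) / (len(responses) - 1)
    return (peak - rest) / (peak + rest) if peak + rest > 0 else 0.0
```

Applied over hue bins, an index like this separates strongly color-tuned units from color-agnostic ones, which is the population classification the abstract refers to.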
Address Venice; Italy; October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV-MBCC
Notes CIC; 600.087; 600.051 Approved no
Call Number Admin @ si @ RaV2017 Serial 2984
 

 
Author Hana Jarraya; Oriol Ramos Terrades; Josep Llados
Title Learning structural loss parameters on graph embedding applied on symbolic graphs Type Conference Article
Year 2017 Publication 12th IAPR International Workshop on Graphics Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We propose an improvement of the Graph Embedding (GEM) method introduced in previous work, which takes advantage of structural pattern representation and structured distortion. It models an Attributed Graph (AG) as a Probabilistic Graphical Model (PGM), then learns the parameters of this PGM, represented as a vector, as a new signature of the AG in a lower-dimensional vectorial space. We adapt the structured learning algorithm via the 1-slack formulation with a suitable risk function, the Graph Edit Distance (GED), which defines the dissimilarity between the ground-truth and predicted graph labels and is computed by error-tolerant graph matching using a bipartite graph matching algorithm. We apply Structured Support Vector Machines (SSVM) for the classification task. We report experimental results on the GREC dataset.
Address Kyoto; Japan; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference GREC
Notes DAG; 600.097; 600.121 Approved no
Call Number Admin @ si @ JRL2017b Serial 3073
 

 
Author Xavier Soria; Angel Sappa; Arash Akbarinia
Title Multispectral Single-Sensor RGB-NIR Imaging: New Challenges and Opportunities Type Conference Article
Year 2017 Publication 7th International Conference on Image Processing Theory, Tools & Applications Abbreviated Journal
Volume Issue Pages
Keywords Color restoration; Neural networks; Single-sensor cameras; Multispectral images; RGB-NIR dataset
Abstract Multispectral images captured with a single-sensor camera have become an attractive alternative for numerous computer vision applications. However, in order to fully exploit their potential, the color restoration problem (RGB representation) must be addressed. This problem is most evident in outdoor scenarios containing vegetation, living beings, or specular materials. The color distortion emerges from the sensor's sensitivity to the overlap of the visible and near infrared spectral bands. This paper empirically evaluates the variability of the near infrared (NIR) information with respect to changes of light throughout the day. A tiny neural network is proposed to restore the RGB color representation from the given RGBN (Red, Green, Blue, NIR) images. In order to evaluate the proposed algorithm, different experiments on an RGBN outdoor dataset are conducted, including various challenging cases. The obtained results show the challenge and the importance of addressing color restoration in single-sensor multispectral images.
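The color-restoration problem the abstract poses — undoing the NIR contamination of the R, G, and B measurements — can be illustrated with a per-pixel linear sketch. The paper uses a tiny neural network; the leak coefficients below are hypothetical and only show the RGBN-to-RGB shape of the mapping:

```python
# Toy per-pixel RGBN -> RGB restoration. The NIR band leaks into the visible
# channels of a single-sensor camera; a learned mapping can undo it. Here a
# fixed linear model with made-up leak fractions stands in for the network.

def restore_rgb(rgbn_pixel, leak=(0.30, 0.20, 0.10)):
    """rgbn_pixel: (R, G, B, NIR) raw values in [0, 255]; leak: assumed
    fraction of the NIR signal contaminating each visible channel."""
    r, g, b, nir = rgbn_pixel
    restored = (r - leak[0] * nir, g - leak[1] * nir, b - leak[2] * nir)
    return tuple(max(0.0, min(255.0, c)) for c in restored)
```

The paper's point is precisely that fixed coefficients like these fail as illumination changes through the day, which motivates learning the mapping instead.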
Address Montreal; Canada; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IPTA
Notes NEUROBIT; MSIAU; 600.122 Approved no
Call Number Admin @ si @ SSA2017 Serial 3074
 

 
Author Alexey Dosovitskiy; German Ros; Felipe Codevilla; Antonio Lopez; Vladlen Koltun
Title CARLA: An Open Urban Driving Simulator Type Conference Article
Year 2017 Publication 1st Annual Conference on Robot Learning. Proceedings of Machine Learning Research Abbreviated Journal
Volume 78 Issue Pages 1-16
Keywords Autonomous driving; sensorimotor control; simulation
Abstract We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform’s utility for autonomous driving research.
Address Mountain View; CA; USA; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CORL
Notes ADAS; 600.085; 600.118 Approved no
Call Number Admin @ si @ DRC2017 Serial 2988
 

 
Author Arash Akbarinia; Raquel Gil Rodriguez; C. Alejandro Parraga
Title Colour Constancy: Biologically-inspired Contrast Variant Pooling Mechanism Type Conference Article
Year 2017 Publication 28th British Machine Vision Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Pooling is a ubiquitous operation in image processing algorithms that allows higher-level processes to collect relevant low-level features from a region of interest. Currently, max-pooling is one of the most commonly used operators in the computational literature. However, it can lack robustness to outliers because it relies merely on the peak of a function. Pooling mechanisms are also present in the primate visual cortex, where neurons of higher cortical areas pool signals from lower ones. The receptive fields of these neurons have been shown to vary according to contrast, aggregating signals over a larger region in the presence of low-contrast stimuli. We hypothesise that this contrast-variant-pooling mechanism can address some of the shortcomings of max-pooling. We modelled this contrast variation through a histogram clipping in which the percentage of pooled signal is inversely proportional to the local contrast of an image. We tested our hypothesis by applying it to the phenomenon of colour constancy, where a number of popular algorithms utilise a max-pooling step (e.g. White-Patch, Grey-Edge and Double-Opponency). For each of these methods, we investigated the consequences of replacing their original max-pooling by the proposed contrast-variant-pooling. Our experiments on three colour constancy benchmark datasets suggest that previous results can be significantly improved by adopting a contrast-variant-pooling mechanism.
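The histogram-clipping idea described above — pool a percentage of the signal that is inversely proportional to local contrast — can be sketched as follows. The mapping from contrast to pooled percentage is illustrative; the paper derives its own:

```python
# Contrast-variant pooling sketch: instead of taking only the maximum,
# average the top-p fraction of values, where p shrinks as local contrast
# grows. The p_min/p_max bounds here are made-up illustration values.

def contrast_variant_pool(values, local_contrast, p_min=0.01, p_max=0.30):
    """Average the top-p fraction of `values`, with p inversely
    proportional to `local_contrast` (clamped to [0, 1])."""
    c = max(0.0, min(1.0, local_contrast))
    p = p_max - (p_max - p_min) * c
    k = max(1, round(p * len(values)))
    top = sorted(values, reverse=True)[:k]
    return sum(top) / len(top)
```

At maximal contrast this approaches plain max-pooling (only the top 1% of values); at minimal contrast it averages the top 30%, which is what buys robustness to single-pixel outliers.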
Address London; September 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference BMVC
Notes NEUROBIT; 600.068; 600.072 Approved no
Call Number Admin @ si @ AGP2017 Serial 2992
 

 
Author Arash Akbarinia; C. Alejandro Parraga; Marta Exposito; Bogdan Raducanu; Xavier Otazu
Title Can biological solutions help computers detect symmetry? Type Conference Article
Year 2017 Publication 40th European Conference on Visual Perception Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Berlin; Germany; August 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes NEUROBIT Approved no
Call Number Admin @ si @ APE2017 Serial 2995
 

 
Author J. Chazalon; P. Gomez-Kramer; Jean-Christophe Burie; M. Coustaty; S. Eskenazi; Muhammad Muzzamil Luqman; Nibal Nayef; Marçal Rusiñol; N. Sidere; Jean-Marc Ogier
Title SmartDoc 2017 Video Capture: Mobile Document Acquisition in Video Mode Type Conference Article
Year 2017 Publication 1st International Workshop on Open Services and Tools for Document Analysis Abbreviated Journal
Volume Issue Pages
Keywords
Abstract As mobile document acquisition using smartphones is getting more and more common, along with the continuous improvement of mobile devices (both in terms of computing power and image quality), we can wonder to which extent mobile phones can replace desktop scanners. Modern applications can cope with perspective distortion and normalize the contrast of a document page captured with a smartphone, and in some cases like bottle labels or posters, smartphones even have the advantage of allowing the acquisition of non-flat or large documents. However, several cases remain hard to handle, such as reflective documents (identity cards, badges, glossy magazine covers, etc.) or large documents for which some regions require an important amount of detail. This paper introduces the SmartDoc 2017 benchmark (named “SmartDoc Video Capture”), which aims at assessing whether capturing documents using the video mode of a smartphone could solve those issues. The task under evaluation is both a stitching and a reconstruction problem, as the user can move the device over different parts of the document to capture details or try to erase highlights. The material released consists of a dataset, an evaluation method and the associated tool, a sample method, and the tools required to extend the dataset. All the components are released publicly under very permissive licenses, and we particularly cared about maximizing the ease of understanding, usage and improvement.
Address Kyoto; Japan; November 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR-OST
Notes DAG; 600.084; 600.121 Approved no
Call Number Admin @ si @ CGB2017 Serial 2997