Records
Author Arnau Baro; Pau Riba; Alicia Fornes
Title A Starting Point for Handwritten Music Recognition Type Conference Article
Year 2018 Publication 1st International Workshop on Reading Music Systems Abbreviated Journal
Volume Issue Pages 5-6
Keywords Optical Music Recognition; Long Short-Term Memory; Convolutional Neural Networks; MUSCIMA++; CVCMUSCIMA
Abstract In recent years, interest in Optical Music Recognition (OMR) has reawakened, especially since the advent of deep learning. However, very few works address handwritten scores. In this work we describe a full OMR pipeline for handwritten music scores, based on Convolutional and Recurrent Neural Networks, that could serve as a baseline for the research community.
Address Paris; France; September 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WORMS
Notes DAG; 600.097; 601.302; 601.330; 600.121 Approved no
Call Number Admin @ si @ BRF2018 Serial 3223
 

 
Author Laura Lopez-Fuentes; Alessandro Farasin; Harald Skinnemoen; Paolo Garza
Title Deep Learning models for passability detection of flooded roads Type Conference Article
Year 2018 Publication MediaEval 2018 Multimedia Benchmark Workshop Abbreviated Journal
Volume 2283 Issue Pages
Keywords
Abstract In this paper we study and compare several approaches for detecting floods, and evidence of whether roads remain passable by conventional means, from Twitter data. We focus on tweets containing both visual information (a picture shared by the user) and metadata, a combination of text and related extra information intrinsic to the Twitter API. This work was done in the context of the MediaEval 2018 Multimedia Satellite Task.
Address Sophia Antipolis; France; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MediaEval
Notes LAMP; 600.084; 600.109; 600.120 Approved no
Call Number Admin @ si @ LFS2018 Serial 3224
 

 
Author Anjan Dutta; Hichem Sahbi
Title Stochastic Graphlet Embedding Type Journal Article
Year 2018 Publication IEEE Transactions on Neural Networks and Learning Systems Abbreviated Journal TNNLS
Volume Issue Pages 1-14
Keywords Stochastic graphlets; Graph embedding; Graph classification; Graph hashing; Betweenness centrality
Abstract Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semi-structured data as graphs, where nodes correspond to primitives (parts, interest points, segments, etc.) and edges characterize the relationships between these primitives. However, such non-vectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of explicit or implicit graph vectorization and embedding. This embedding process should be resilient to intra-class graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding (SGE) that maps graphs into vector spaces. Our main contribution is a new stochastic search procedure that efficiently parses a given graph and extracts/samples graphlets of unboundedly high order. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets within a given graph, using particular hash functions that efficiently assign sampled graphlets to isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have a positive impact on the performance of pattern comparison and recognition, as corroborated through extensive experiments on standard benchmark databases.
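The general idea described in this abstract can be illustrated with the minimal, purely hypothetical sketch below (not the authors' implementation): random graphlets of increasing order are sampled from an adjacency dict, hashed, and accumulated into a normalized histogram. The degree-sequence hash is only a stand-in for the more robust hash functions used in the paper, and all names and parameters are illustrative assumptions.

```python
import random
from collections import Counter

def sample_graphlet(adj, n_edges, rng):
    """Grow a connected subgraph by repeatedly adding a random incident edge.
    Assumes the graph has at least n_edges edges reachable from the start node."""
    u = rng.choice(sorted(adj))
    nodes, edges = {u}, set()
    while len(edges) < n_edges:
        u = rng.choice(sorted(nodes))
        v = rng.choice(sorted(adj[u]))
        nodes.add(v)
        edges.add(frozenset((u, v)))
    return nodes, edges

def graphlet_hash(nodes, edges):
    """Cheap isomorphism-aware code: the sorted degree sequence of the graphlet
    (a stand-in for the paper's hash functions)."""
    deg = Counter()
    for e in edges:
        for node in e:
            deg[node] += 1
    return tuple(sorted(deg.values()))

def stochastic_graphlet_embedding(adj, max_order=3, samples=200, seed=0):
    """Normalized histogram of hashed graphlets of increasing order."""
    rng = random.Random(seed)
    hist = Counter()
    for order in range(1, max_order + 1):
        for _ in range(samples):
            nodes, edges = sample_graphlet(adj, order, rng)
            hist[(order, graphlet_hash(nodes, edges))] += 1
    total = sum(hist.values())
    return {key: count / total for key, count in hist.items()}

# Toy usage on a 4-cycle given as an adjacency dict.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(stochastic_graphlet_embedding(adj))
```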
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 602.167; 602.168; 600.097; 600.121 Approved no
Call Number Admin @ si @ DuS2018 Serial 3225
 

 
Author Xim Cerda-Company; Xavier Otazu
Title Color induction in equiluminant flashed stimuli Type Journal Article
Year 2019 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A
Volume 36 Issue 1 Pages 22-31
Keywords
Abstract Color induction is the influence of the surrounding color (inducer) on the perceived color of a central region. There are two different types of color induction: color contrast (the color of the central region shifts away from that of the inducer) and color assimilation (the color shifts towards the color of the inducer). Several studies on these effects have used uniform and striped surrounds, reporting color contrast and color assimilation, respectively. Other authors [J. Vis. 12(1), 22 (2012)] have studied color induction using flashed uniform surrounds, reporting that the contrast is higher for shorter flash durations. Extending their study, we present new psychophysical results using both flashed and static (i.e., non-flashed) equiluminant stimuli for both striped and uniform surrounds. Like them, for uniform surround stimuli we observed color contrast, but the maximum contrast was obtained not for the shortest (10 ms) flashed stimuli but for 40 ms ones. We only observed this maximum contrast for red, green, and lime inducers, while for a purple inducer we obtained an asymptotic profile along the flash duration. For striped stimuli, we observed color assimilation only for the static (infinite flash duration) red-green surround inducers (red first inducer, green second inducer). For the other inducer configurations, we observed color contrast or no induction. Since other studies showed that non-equiluminant striped static stimuli induce color assimilation, our results also suggest that luminance differences could be a key factor in inducing it.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes NEUROBIT; 600.120; 600.128 Approved no
Call Number Admin @ si @ CeO2019 Serial 3226
 

 
Author Arnau Baro; Pau Riba; Jorge Calvo-Zaragoza; Alicia Fornes
Title Optical Music Recognition by Long Short-Term Memory Networks Type Book Chapter
Year 2018 Publication Graphics Recognition. Current Trends and Evolutions Abbreviated Journal
Volume 11009 Issue Pages 81-95
Keywords Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
Abstract Optical Music Recognition refers to the task of transcribing the image of a music score into a machine-readable format. Many music scores are written on a single staff and can therefore be treated as a sequence. This work explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level. The experimental results are promising, showing the benefits of our approach.
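As a rough illustration of the sequential reading idea described in the abstract (and not the authors' network), the PyTorch sketch below feeds the pixel columns of a staff image to a bidirectional LSTM and predicts one symbol class per column. The image height, hidden size and symbol vocabulary are placeholder assumptions.

```python
import torch
import torch.nn as nn

class ColumnLSTMReader(nn.Module):
    """Reads a staff image as a left-to-right sequence of pixel columns."""
    def __init__(self, img_height=128, hidden=256, n_symbols=60):
        super().__init__()
        self.lstm = nn.LSTM(input_size=img_height, hidden_size=hidden,
                            num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_symbols)

    def forward(self, img):               # img: (batch, height, width)
        cols = img.permute(0, 2, 1)       # -> (batch, width, height): one step per column
        out, _ = self.lstm(cols)          # the LSTM carries context along the staff
        return self.head(out)             # per-column symbol logits

# Toy forward pass on a random 128x800 "staff" image.
model = ColumnLSTMReader()
logits = model(torch.rand(1, 128, 800))
print(logits.shape)                       # torch.Size([1, 800, 60])
```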
Address
Corporate Author Thesis
Publisher Springer Place of Publication Editor A. Fornes, B. Lamiroy
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-3-030-02283-9 Medium
Area Expedition Conference GREC
Notes DAG; 600.097; 601.302; 601.330; 600.121 Approved no
Call Number Admin @ si @ BRC2018 Serial 3227
 

 
Author Lichao Zhang; Abel Gonzalez-Garcia; Joost Van de Weijer; Martin Danelljan; Fahad Shahbaz Khan
Title Synthetic Data Generation for End-to-End Thermal Infrared Tracking Type Journal Article
Year 2019 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 28 Issue 4 Pages 1837 - 1850
Keywords
Abstract The usage of both off-the-shelf and end-to-end trained deep networks has significantly improved the performance of visual tracking on RGB videos. However, the lack of large labeled datasets hampers the usage of convolutional neural networks for tracking in thermal infrared (TIR) images. Therefore, most state-of-the-art methods for tracking on TIR data are still based on handcrafted features. To address this problem, we propose to use image-to-image translation models. These models allow us to translate the abundantly available labeled RGB data to synthetic TIR data. We explore the usage of both paired and unpaired image translation models for this purpose. These methods provide us with a large labeled dataset of synthetic TIR sequences, on which we can train end-to-end optimal features for tracking. To the best of our knowledge, we are the first to train end-to-end features for TIR tracking. We perform extensive experiments on the VOT-TIR2017 dataset. We show that a network trained on a large dataset of synthetic TIR data obtains better performance than one trained on the available real TIR data. Combining both data sources leads to further improvement. In addition, when we combine the network with motion features, we outperform the state of the art with a relative gain of over 10%, clearly showing the efficiency of using synthetic data to train end-to-end TIR trackers.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.141; 600.120 Approved no
Call Number Admin @ si @ YGW2019 Serial 3228
 

 
Author Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari
Title Objects as context for detecting their semantic parts Type Conference Article
Year 2018 Publication 31st IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 6907 - 6916
Keywords Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection
Abstract We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both PASCAL-Part and CUB200-2011 datasets.
Address Salt Lake City; USA; June 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes LAMP; 600.109; 600.120 Approved no
Call Number Admin @ si @ GMF2018 Serial 3229
 

 
Author Antonio Lopez
Title Pedestrian Detection Systems Type Book Chapter
Year 2018 Publication Wiley Encyclopedia of Electrical and Electronics Engineering Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Pedestrian detection is a highly relevant topic for both advanced driver assistance systems (ADAS) and autonomous driving. In this entry, we review the ideas behind pedestrian detection systems from the point of view of perception based on computer vision and machine learning.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ Lop2018 Serial 3230
 

 
Author Simone Balocco; Francesco Ciompi; Juan Rigla; Xavier Carrillo; Josefina Mauri; Petia Radeva
Title Assessment of intracoronary stent location and extension in intravascular ultrasound sequences Type Journal Article
Year 2019 Publication Medical Physics Abbreviated Journal MEDPHYS
Volume 46 Issue 2 Pages 484-493
Keywords IVUS; malapposition; stent; ultrasound
Abstract PURPOSE: An intraluminal coronary stent is a metal scaffold deployed in a stenotic artery during percutaneous coronary intervention (PCI). For an effective deployment, a stent should be optimally placed with regard to anatomical structures such as bifurcations and stenoses. Intravascular ultrasound (IVUS) is a catheter-based imaging technique generally used for guiding PCI and assessing the correct placement of the stent. We present a novel approach that automatically detects the boundaries and the position of the stent along the IVUS pullback, aiming at optimizing stent deployment.
METHODS: The method requires the identification of the stable frames of the sequence and the reliable detection of stent struts. Using these data, a measure of the likelihood that a frame contains a stent is computed. A robust binary representation of the presence of the stent along the pullback is then obtained by applying an iterative, multiscale quantization of this signal to symbols using the Symbolic Aggregate approXimation (SAX) algorithm.
RESULTS: The technique was extensively validated on a set of 103 IVUS sequences of in vivo coronary arteries containing metallic and bioabsorbable stents, acquired through an international multicentric collaboration across five clinical centers. The method detected the stent position with an overall F-measure of 86.4%, a Jaccard index of 75% and a mean distance of 2.5 mm from the manually annotated stent boundaries, and in bioabsorbable stents with an overall F-measure of 88.6%, a Jaccard index of 77.7% and a mean distance of 1.5 mm. Additionally, a map indicating the distance between the lumen and the stent along the pullback is created to show the angular sectors of the sequence in which malapposition is present.
CONCLUSIONS: Comparing the automatic results against the manual annotations of two observers shows that the method approaches the interobserver variability. Similar performance is obtained on both metallic and bioabsorbable stents, showing the flexibility and robustness of the method.
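The SAX step mentioned in METHODS can be illustrated with the minimal sketch below: a per-frame stent-likelihood signal is piecewise-averaged and quantized into a small symbol alphabet, and the symbols are then thresholded into a binary stent-present profile. This is only a hedged approximation of the idea; the segment count, alphabet, binarisation rule and the toy signal are assumptions, not values from the paper.

```python
import numpy as np

def paa(signal, n_segments):
    """Piecewise Aggregate Approximation: the mean of each of n_segments chunks."""
    return np.array([chunk.mean() for chunk in np.array_split(signal, n_segments)])

def sax_symbols(signal, n_segments=20):
    """Map a 1-D signal to integer symbols 0..3 (4-letter alphabet, hard-coded)."""
    z = (signal - signal.mean()) / (signal.std() + 1e-8)      # z-normalise
    segments = paa(z, n_segments)
    breakpoints = np.array([-0.6745, 0.0, 0.6745])            # standard-normal quartiles
    return np.digitize(segments, breakpoints)                 # symbols in 0..3

# Toy likelihood profile: the stent occupies roughly the middle of the pullback.
rng = np.random.default_rng(0)
likelihood = np.concatenate([0.3 * rng.random(40),
                             0.7 + 0.3 * rng.random(60),
                             0.3 * rng.random(40)])
symbols = sax_symbols(likelihood)
stent_present = (symbols >= 2).astype(int)                    # crude binarisation
print(symbols)
print(stent_present)
```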
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ BCR2019 Serial 3231
 

 
Author Gema Rotger; Felipe Lumbreras; Francesc Moreno-Noguer; Antonio Agudo
Title 2D-to-3D Facial Expression Transfer Type Conference Article
Year 2018 Publication 24th International Conference on Pattern Recognition Abbreviated Journal
Volume Issue Pages 2008 - 2013
Keywords
Abstract Automatically changing the expression and physical features of a face from an input image is a topic that has traditionally been tackled in the 2D domain. In this paper, we bring this problem to 3D and propose a framework that, given an input RGB video of a human face under a neutral expression, initially computes his/her 3D shape and then performs a transfer to a new and potentially non-observed expression. For this purpose, we parameterize the rest shape (obtained from standard factorization approaches over the input video) using a triangular mesh, which is further clustered into larger macro-segments. The expression transfer problem is then posed as a direct mapping between this shape and a source shape, such as the blend shapes of an off-the-shelf 3D dataset of human facial expressions. The mapping is resolved to be geometrically consistent between 3D models by requiring points in specific regions to map onto semantically equivalent regions. We validate the approach on several synthetic and real examples of input faces that largely differ from the source shapes, yielding very realistic expression transfers even in cases with topology changes, such as a synthetic video sequence of a single-eyed cyclops.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPR
Notes ADAS; 600.086; 600.130; 600.118 Approved no
Call Number Admin @ si @ RLM2018 Serial 3232
 

 
Author Lasse Martensson; Ekta Vats; Anders Hast; Alicia Fornes
Title In Search of the Scribe: Letter Spotting as a Tool for Identifying Scribes in Large Handwritten Text Corpora Type Journal
Year 2019 Publication Journal for Information Technology Studies as a Human Science Abbreviated Journal HUMAN IT
Volume 14 Issue 2 Pages 95-120
Keywords Scribal attribution/ writer identification; digital palaeography; word spotting; mediaeval charters; mediaeval manuscripts
Abstract In this article, a form of the so-called word spotting method is used on a large set of handwritten documents in order to identify those that contain script of similar execution. The point of departure for the investigation is the mediaeval Swedish manuscript Cod. Holm. D 3. The main scribe of this manuscript has not yet been identified in other documents. The current attempt aims at localising other documents that display a large degree of similarity in the characteristics of the script, these being candidates for having been executed by the same hand. For this purpose, the method of word spotting has been employed, focusing on individual letters, and the process is therefore referred to as letter spotting in the article. A set of ‘g’s, ‘h’s and ‘k’s was selected as templates, and a search was then made for close matches among the mediaeval Swedish charters. The search resulted in a number of charters that display great similarities with the manuscript D 3. The letter spotting method thus proved to be a very efficient sorting tool for localising similar script samples.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.097; 600.140; 600.121 Approved no
Call Number Admin @ si @ MVH2019 Serial 3234
 

 
Author Md.Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Antonio Moreno; Petia Radeva; Domenec Puig
Title CuisineNet: Food Attributes Classification using Multi-scale Convolution Network. Type Miscellaneous
Year 2018 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords
Abstract The diversity of food and its attributes represents the culinary habits of people from different countries. This paper addresses the problem of identifying the food culture of people around the world, and its flavor, by classifying two main food attributes: cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used to weight the features resulting from the different scales. In addition, a joint loss function based on the Negative Log-Likelihood (NLL) is used to fit the model probability to the multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset of food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 score on the validation and test sets, respectively, outperforming the state-of-the-art models.
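As a hedged illustration of the two ideas mentioned in the abstract (multi-scale convolutions with different kernel sizes and a joint NLL loss over two attribute heads), and not the actual CuisineNet architecture, a minimal PyTorch sketch might look as follows. All channel counts, class counts and names are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFoodNet(nn.Module):
    def __init__(self, n_cuisines=10, n_flavours=5):
        super().__init__()
        # One convolution branch per kernel size; padding keeps spatial sizes aligned.
        self.branches = nn.ModuleList(
            [nn.Conv2d(3, 16, kernel_size=k, padding=k // 2) for k in (3, 5, 7)])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.cuisine_head = nn.Linear(3 * 16, n_cuisines)
        self.flavour_head = nn.Linear(3 * 16, n_flavours)

    def forward(self, x):                                    # x: (batch, 3, H, W)
        feats = torch.cat([F.relu(b(x)) for b in self.branches], dim=1)
        feats = self.pool(feats).flatten(1)                  # (batch, 48)
        return (F.log_softmax(self.cuisine_head(feats), dim=1),
                F.log_softmax(self.flavour_head(feats), dim=1))

def joint_nll(log_p_cuisine, log_p_flavour, y_cuisine, y_flavour):
    """Joint loss: the sum of the two negative log-likelihood terms."""
    return F.nll_loss(log_p_cuisine, y_cuisine) + F.nll_loss(log_p_flavour, y_flavour)

# Toy step on random data.
model = MultiScaleFoodNet()
lp_c, lp_f = model(torch.rand(4, 3, 224, 224))
loss = joint_nll(lp_c, lp_f, torch.randint(0, 10, (4,)), torch.randint(0, 5, (4,)))
print(loss.item())
```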
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ KJR2018 Serial 3235
 

 
Author Eduardo Aguilar; Beatriz Remeseiro; Marc Bolaños; Petia Radeva
Title Grab, Pay, and Eat: Semantic Food Detection for Smart Restaurants Type Journal Article
Year 2018 Publication IEEE Transactions on Multimedia Abbreviated Journal
Volume 20 Issue 12 Pages 3266 - 3275
Keywords
Abstract The increase in people's awareness of their nutritional habits has drawn considerable attention to the field of automatic food analysis. Focusing on self-service restaurant environments, automatic food analysis is not only useful for extracting nutritional information from the foods selected by customers; it is also of high interest for speeding up service by solving the bottleneck produced at the cashiers in times of high demand. In this paper, we address the problem of automatic food tray analysis in canteen and restaurant environments, which consists in predicting the multiple foods placed on a tray image. We propose a new approach to food analysis based on convolutional neural networks, which we name Semantic Food Detection, that integrates food localization, recognition and segmentation in the same framework. We demonstrate that our method improves state-of-the-art food detection by a considerable margin on the public UNIMIB2016 dataset, achieving about 90% in terms of F-measure, and thus provides a significant technological advance towards automatic billing in restaurant environments.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; no proj Approved no
Call Number Admin @ si @ ARB2018 Serial 3236
 

 
Author Simone Balocco; Mauricio Gonzalez; Ricardo Ñancule; Petia Radeva; Gabriel Thomas
Title Calcified Plaque Detection in IVUS Sequences: Preliminary Results Using Convolutional Nets Type Conference Article
Year 2018 Publication International Workshop on Artificial Intelligence and Pattern Recognition Abbreviated Journal
Volume 11047 Issue Pages 34-42
Keywords Intravascular ultrasound images; Convolutional nets; Deep learning; Medical image analysis
Abstract The manual inspection of intravascular ultrasound (IVUS) images to detect clinically relevant patterns is a difficult and laborious task performed routinely by physicians. In this paper, we present a framework based on convolutional nets for the quick selection of IVUS frames containing arterial calcification, a pattern whose detection plays a vital role in the diagnosis of atherosclerosis. Preliminary experiments on a dataset acquired from eighty patients show that convolutional architectures improve on the detections of a shallow classifier in terms of F1-measure, precision and recall.
Address Cuba; September 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IWAIPR
Notes MILAB; no menciona Approved no
Call Number Admin @ si @ BGÑ2018 Serial 3237
 

 
Author Marçal Rusiñol; Lluis Gomez
Title Avances en clasificación de imágenes en los últimos diez años. Perspectivas y limitaciones en el ámbito de archivos fotográficos históricos Type Journal
Year 2018 Publication Revista anual de la Asociación de Archiveros de Castilla y León Abbreviated Journal
Volume 21 Issue Pages 161-174
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.121; 600.129 Approved no
Call Number Admin @ si @ RuG2018 Serial 3239