Author Asma Bensalah; Pau Riba; Alicia Fornes; Josep Llados
Title Shoot less and Sketch more: An Efficient Sketch Classification via Joining Graph Neural Networks and Few-shot Learning Type Conference Article
Year 2019 Publication 13th IAPR International Workshop on Graphics Recognition Abbreviated Journal
Volume Issue Pages 80-85
Keywords Sketch classification; Convolutional Neural Network; Graph Neural Network; Few-shot learning
Abstract With the emergence of touchpad devices and drawing tablets, a new era of sketching has begun. However, sketch recognition is still a tough task due to the variability of drawing styles. Moreover, in some application scenarios little labelled data is available for training, which imposes a limitation on deep learning architectures. In addition, in many cases models must be able to adapt to new classes. To cope with these limitations, we propose a method based on few-shot learning and graph neural networks for classifying sketches, aiming for an efficient neural model. We test our approach on several sketch databases, showing promising results.
Address Sydney; Australia; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference GREC
Notes DAG; 600.140; 601.302; 600.121 Approved no
Call Number Admin @ si @ BRF2019 Serial 3354
Permanent link to this record
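The following editorial sketch illustrates the kind of combination described in the abstract above: a toy graph neural network encodes a sketch (sampled points as nodes) and a query is classified by its nearest class prototype computed from a few support examples. It is not the authors' architecture; the layer sizes, the mean-neighbour aggregation, and the prototype rule are assumptions made for illustration.

# Minimal sketch (PyTorch), assuming sketches are given as point features plus an adjacency matrix.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of mean-neighbour message passing followed by mean pooling."""
    def __init__(self, in_dim=2, hid_dim=32):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: (num_nodes, in_dim) point features; adj: (num_nodes, num_nodes) adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = torch.relu(self.lin(adj @ x / deg))   # aggregate neighbours, then transform
        return h.mean(dim=0)                      # graph-level embedding

def classify_query(encoder, support, query_graph):
    """support: dict class -> list of (x, adj) graphs; query_graph: (x, adj)."""
    protos = {c: torch.stack([encoder(x, a) for x, a in graphs]).mean(0)
              for c, graphs in support.items()}
    q = encoder(*query_graph)
    return min(protos, key=lambda c: torch.dist(q, protos[c]).item())

# Toy usage: two classes with one 5-node support sketch each, plus a query.
g = lambda: (torch.rand(5, 2), torch.eye(5))
print(classify_query(TinyGNN(), {"cat": [g()], "car": [g()]}, g()))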
 

 
Author Pau Riba; Anjan Dutta; Lutz Goldmann; Alicia Fornes; Oriol Ramos Terrades; Josep Llados
Title Table Detection in Invoice Documents by Graph Neural Networks Type Conference Article
Year 2019 Publication 15th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 122-127
Keywords
Abstract Tabular structures in documents offer a complementary dimension to the raw textual data, representing logical or quantitative relationships among pieces of information. In digital mailroom applications, where a large amount of administrative documents must be processed with reasonable accuracy, the detection and interpretation of tables is crucial. Table recognition has gained interest in document image analysis, in particular for unconstrained formats (absence of ruling lines, unknown number of rows and columns). In this work, we propose a graph-based approach for detecting tables in document images. Instead of using the raw content (recognized text), we make use of the location, context and content type; it is thus a purely structural perception approach, independent of the language and of the quality of the text recognition. Our framework uses Graph Neural Networks (GNNs) to describe the local repetitive structural information of tables in invoice documents. The proposed model has been experimentally validated on two invoice datasets and achieved encouraging results. Additionally, given the scarcity of benchmark datasets for this task, we contribute a novel dataset derived from the RVL-CDIP invoice data, which will be publicly released to facilitate future research.
Address Sydney; Australia; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.140; 601.302; 602.167; 600.121; 600.141 Approved no
Call Number Admin @ si @ RDG2019 Serial 3355
Permanent link to this record
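As a rough illustration of the structure-only representation described in the abstract above, the sketch below builds a k-nearest-neighbour graph over word bounding boxes, using only geometry and a coarse content type as node features. The feature layout, the value of k, and the content-type codes are assumptions for the sketch, not the paper's actual graph construction.

# Illustrative layout-graph construction (NumPy), independent of the recognised text.
import numpy as np

def build_layout_graph(boxes, content_types, k=4):
    """boxes: (N, 4) array of [x, y, w, h]; content_types: length-N list of
    ints (e.g. 0=word, 1=number, 2=other). Returns node features and edges."""
    centers = boxes[:, :2] + boxes[:, 2:] / 2.0
    feats = np.hstack([boxes, np.eye(3)[content_types]])   # geometry + type one-hot
    edges = []
    for i, c in enumerate(centers):
        d = np.linalg.norm(centers - c, axis=1)
        for j in np.argsort(d)[1:k + 1]:                    # k nearest neighbours, skipping self
            edges.append((i, int(j)))
    return feats, edges

# Example: three boxes, one word label and two numeric cells.
boxes = np.array([[10, 10, 40, 12], [60, 10, 40, 12], [10, 30, 40, 12]], float)
feats, edges = build_layout_graph(boxes, content_types=[0, 1, 1], k=2)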
 

 
Author Ekta Vats; Anders Hast; Alicia Fornes
Title Training-Free and Segmentation-Free Word Spotting using Feature Matching and Query Expansion Type Conference Article
Year 2019 Publication 15th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 1294-1299
Keywords Word spotting; Segmentation-free; Trainingfree; Query expansion; Feature matching
Abstract Historical handwritten text recognition is an interesting yet challenging problem. In recent times, deep learning based methods have achieved significant performance in handwritten text recognition. However, handwriting recognition using deep learning needs training data, and often the text must first be segmented into lines (or even words). These limitations constrain the application of HTR techniques to document collections for which training data or segmented words are not available. Therefore, this paper proposes a training-free and segmentation-free word spotting approach that can be applied in unconstrained scenarios. The proposed word spotting framework is based on document query word expansion and a relaxed feature matching algorithm, which can easily be parallelised. Since handwritten words possess distinct shapes and characteristics, this work uses a combination of different keypoint detectors and Fourier-based descriptors to obtain a sufficient degree of relaxed matching. The effectiveness of the proposed method is empirically evaluated on well-known benchmark datasets using standard evaluation measures. The use of informative features along with query expansion contributes significantly to the efficient performance of the proposed method.
Address Sydney; Australia; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.140; 600.121 Approved no
Call Number Admin @ si @ VHF2019 Serial 3356
Permanent link to this record
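The abstract above mentions Fourier-based shape descriptors as one of the cues used for relaxed matching. The sketch below shows a generic translation- and scale-invariant Fourier descriptor of a closed contour; it is an illustrative baseline, not the authors' descriptor, and the number of coefficients and the normalisation are assumptions.

# Generic Fourier descriptor of a closed contour (NumPy).
import numpy as np

def fourier_descriptor(contour, n_coeffs=16):
    """contour: (N, 2) array of ordered boundary points of a closed shape."""
    z = contour[:, 0] + 1j * contour[:, 1]           # complex boundary signal
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                   # drop the DC term -> translation invariance
    mags = np.abs(coeffs[1:n_coeffs + 1])
    return mags / (mags[0] + 1e-8)                    # normalise -> scale invariance

def descriptor_distance(d1, d2):
    return np.linalg.norm(d1 - d2)

# Toy usage: a circle and a scaled copy of it have (almost) identical descriptors.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
c1 = np.stack([np.cos(t), np.sin(t)], axis=1)
print(descriptor_distance(fourier_descriptor(c1), fourier_descriptor(3 * c1)))  # ~0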
 

 
Author Debora Gil; Carles Sanchez; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell
Title Segmentation of Distal Airways using Structural Analysis Type Journal Article
Year 2019 Publication PLoS ONE Abbreviated Journal Plos
Volume 14 Issue 12 Pages
Keywords
Abstract Segmentation of airways in Computed Tomography (CT) scans is a must for accurate support of diagnosis and intervention in many pulmonary disorders. In particular, lung cancer diagnosis would benefit from segmentations reaching the most distal airways. We present a method that combines descriptors of local bronchial appearance with global graph structural analysis to fine-tune thresholds on the descriptors, adapted to each bronchial level. We have compared our method to the top performers of the EXACT09 challenge and to a commercial software for biopsy planning, evaluated on our own database of high-resolution CT scans acquired under different breathing conditions. Results on EXACT09 data show that our method provides a high leakage reduction with minimal loss in airway detection. Results on our database show reliability across varying breathing conditions and a performance for biopsy planning competitive with the commercial solution.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.139; 600.145 Approved no
Call Number Admin @ si @ GSB2019 Serial 3357
Permanent link to this record
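The abstract above describes fine-tuning descriptor thresholds per bronchial level. The toy function below illustrates that idea only in spirit: an appearance-score threshold that is relaxed with the bronchial generation. The numeric values and the linear relaxation rule are hypothetical and not taken from the paper.

# Toy level-adaptive pruning of candidate airway branches.
def prune_airway_tree(branches, base_thr=0.5, relax_per_level=0.05):
    """branches: list of dicts with 'generation' (int) and 'score' (float in [0, 1]).
    Deeper generations get a more permissive threshold, down to a floor of 0.2."""
    kept = []
    for b in branches:
        thr = max(0.2, base_thr - relax_per_level * b["generation"])
        if b["score"] >= thr:
            kept.append(b)
    return kept

print(prune_airway_tree([{"generation": 1, "score": 0.6},
                         {"generation": 6, "score": 0.3}]))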
 

 
Author Marta Ligero; Guillermo Torres; Carles Sanchez; Katerine Diaz; Raquel Perez; Debora Gil
Title Selection of Radiomics Features based on their Reproducibility Type Conference Article
Year 2019 Publication 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society Abbreviated Journal
Volume Issue Pages 403-408
Keywords
Abstract Dimensionality reduction is key to alleviating machine learning artifacts in clinical applications with Small Sample Size (SSS) unbalanced datasets. Existing methods rely on either the probabilistic distribution of the training data or the discriminant power of the reduced space, disregarding the impact of repeatability and uncertainty in the features. In the present study we propose the use of the reproducibility of radiomics features to select features with a high intraclass correlation coefficient (ICC). The reproducibility covers the variability introduced during image acquisition, such as scan acquisition parameters and convolution kernels, which affects intensity-based features, as well as the tumor annotations made by physicians, which influence morphological descriptors of the lesion. For the reproducibility of radiomics features, three studies were conducted on cases collected at the Vall d'Hebron Oncology Institute (VHIO) on responders to oncology treatment. The studies focused on the variability due to the convolution kernel, the image acquisition parameters, and the inter-observer lesion identification. The selected features were those with an ICC higher than 0.7 in all three studies. The features selected on the basis of reproducibility were then evaluated for lesion malignancy classification using a different database. Results show better performance compared to several state-of-the-art methods, including Principal Component Analysis (PCA), Kernel Discriminant Analysis via QR decomposition (KDAQR), LASSO, and our own Convolutional Neural Network.
Address Berlin; Germany; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference EMBC
Notes IAM; 600.139; 600.145 Approved no
Call Number Admin @ si @ LTS2019 Serial 3358
Permanent link to this record
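As a concrete reading of the selection rule in the abstract above (keep features with ICC above 0.7 in all three reproducibility studies), here is a minimal sketch of a one-way random-effects ICC and the corresponding filter. The specific ICC formulation is an assumption; the record does not state which ICC variant was used.

# One-way random-effects ICC and reproducibility-based feature selection (NumPy).
import numpy as np

def icc_one_way(measurements):
    """measurements: (n_subjects, k_repeats) array of one feature measured k times."""
    n, k = measurements.shape
    subj_means = measurements.mean(axis=1)
    grand_mean = measurements.mean()
    msb = k * np.sum((subj_means - grand_mean) ** 2) / (n - 1)                # between-subject mean square
    msw = np.sum((measurements - subj_means[:, None]) ** 2) / (n * (k - 1))   # within-subject mean square
    return (msb - msw) / (msb + (k - 1) * msw)

def select_reproducible(studies, threshold=0.7):
    """studies: list of (n_subjects, k_repeats, n_features) arrays, one per study.
    Returns indices of features whose ICC exceeds the threshold in every study."""
    n_features = studies[0].shape[2]
    keep = []
    for f in range(n_features):
        if all(icc_one_way(s[:, :, f]) > threshold for s in studies):
            keep.append(f)
    return keep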
 

 
Author Debora Gil; Antonio Esteban Lansaque; Sebastian Stefaniga; Mihail Gaianu; Carles Sanchez
Title Data Augmentation from Sketch Type Conference Article
Year 2019 Publication International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging Abbreviated Journal
Volume 11840 Issue Pages 155-162
Keywords Data augmentation; cycleGANs; Multi-objective optimization
Abstract State-of-the-art machine learning methods need huge amounts of data with unambiguous annotations for their training. In the context of medical imaging this is, in general, a very difficult task due to limited access to clinical data, the time required for manual annotation, and the variability across experts. Simulated data could serve for data augmentation provided that its appearance is comparable to the actual appearance of intra-operative acquisitions. Generative Adversarial Networks (GANs) are a powerful tool for artistic style transfer, but lack a criterion for selecting epochs that also ensures preservation of intra-operative content.

We propose a multi-objective optimization strategy for selecting cycleGAN epochs that ensures a mapping between virtual images and the intra-operative domain while preserving anatomical content. Our approach has been applied to simulate intra-operative bronchoscopic videos and chest CT scans from virtual sketches generated using simple graphical primitives.
Address Shenzhen; China; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CLIP
Notes IAM; 600.145; 601.337; 600.139; 600.145 Approved no
Call Number Admin @ si @ GES2019 Serial 3359
Permanent link to this record
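The abstract above proposes a multi-objective selection of cycleGAN epochs. A minimal way to realise such a selection is to keep the epochs on the Pareto front of two per-epoch scores, as sketched below; the two scores (target-domain appearance and anatomical-content preservation, both higher-is-better) and the dominance rule are assumptions, not the authors' exact criterion.

# Pareto-front selection over per-epoch scores.
def pareto_epochs(scores):
    """scores: dict epoch -> (appearance_score, content_score), both higher-is-better."""
    front = []
    for e, (a, c) in scores.items():
        dominated = any((a2 >= a and c2 >= c and (a2, c2) != (a, c))
                        for e2, (a2, c2) in scores.items() if e2 != e)
        if not dominated:
            front.append(e)
    return sorted(front)

print(pareto_epochs({10: (0.4, 0.9), 20: (0.7, 0.8), 30: (0.9, 0.3), 40: (0.6, 0.6)}))
# -> [10, 20, 30]; epoch 40 is dominated by epoch 20.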
 

 
Author Carles Sanchez; Miguel Viñas; Coen Antens; Agnes Borras; Debora Gil
Title Back to Front Architecture for Diagnosis as a Service Type Conference Article
Year 2018 Publication 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing Abbreviated Journal
Volume Issue Pages 343-346
Keywords
Abstract Software as a Service (SaaS) is a cloud computing model in which a provider hosts applications on a server that customers use via the internet. Since SaaS does not require installing applications on customers' own computers, it allows multiple users to work with highly specialized software without extra expenses for hardware acquisition or licensing. A SaaS tailored for clinical needs would not only alleviate licensing costs, but also facilitate easy access to new methods for diagnosis assistance. This paper presents a SaaS client-server architecture for Diagnosis as a Service (DaaS). The server is based on Docker technology in order to allow the execution of software implemented in different languages with the highest portability and scalability. The client is a content management system that allows the design of websites with multimedia content and interactive, user-editable visualization of results. We describe a usage case in which our DaaS is used as a crowdsourcing platform in a multicentric pilot study carried out to evaluate the clinical benefits of a software for the assessment of central airway obstruction.
Address Timisoara; Romania; September 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference SYNASC
Notes IAM; 600.145 Approved no
Call Number Admin @ si @ SVA2018 Serial 3360
Permanent link to this record
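To make the client-server split in the abstract above concrete, here is a minimal sketch of the kind of HTTP endpoint a DaaS back end could expose from inside a Docker container. The endpoint name, the upload field, and the returned score are hypothetical; the actual system is not described at this level of detail.

# Hypothetical DaaS-style endpoint (Flask): receive an uploaded scan, return a JSON result.
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_analysis(image_bytes):
    # Placeholder for the containerised analysis software.
    return {"obstruction_score": 0.42, "bytes_received": len(image_bytes)}

@app.route("/analyze", methods=["POST"])
def analyze():
    uploaded = request.files["scan"]            # multipart field named 'scan' (assumed)
    return jsonify(run_analysis(uploaded.read()))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)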
 

 
Author Debora Gil; Antoni Rosell
Title Advances in Artificial Intelligence – How Lung Cancer CT Screening Will Progress? Type Abstract
Year 2019 Publication World Lung Cancer Conference Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Invited speaker
Address Barcelona; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IASLC WCLC
Notes IAM; 600.139; 600.145 Approved no
Call Number Admin @ si @ GiR2019 Serial 3361
Permanent link to this record
 

 
Author Rada Deeb; Joost Van de Weijer; Damien Muselet; Mathieu Hebert; Alain Tremeau
Title Deep spectral reflectance and illuminant estimation from self-interreflections Type Journal Article
Year 2019 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A
Volume 36 Issue 1 Pages 105-114
Keywords
Abstract In this work, we propose a convolutional neural network based approach to estimate the spectral reflectance of a surface and the spectral power distribution of the light from a single RGB image of a V-shaped surface. Interreflections occurring in a concave surface lead to gradients of RGB values over its area. These gradients carry substantial information about the physical properties of the surface and the illuminant. Our network is trained only on simulated data constructed using a physics-based interreflection model. Coupling interreflection effects with deep learning helps to retrieve the spectral reflectance under an unknown light and to estimate the spectral power distribution of this light as well. In addition, the approach is more robust to image noise than classical methods. Our results show that the proposed approach outperforms state-of-the-art learning-based approaches on simulated data, and it gives better results on real data than other interreflection-based approaches.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.120 Approved no
Call Number Admin @ si @ DWM2019 Serial 3362
Permanent link to this record
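The sketch below shows, in broad strokes, a CNN that maps an RGB patch of the V-shaped surface to a concatenated reflectance and illuminant-SPD vector, in the spirit of the abstract above. The layer sizes and the choice of 31 spectral samples are assumptions; the published architecture differs in its details.

# Toy CNN regressor from an RGB patch to [reflectance, illuminant SPD] (PyTorch).
import torch
import torch.nn as nn

class ReflectanceNet(nn.Module):
    def __init__(self, n_wavelengths=31):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2 * n_wavelengths)  # reflectance + illuminant SPD

    def forward(self, rgb_patch):
        h = self.features(rgb_patch).flatten(1)
        out = self.head(h)
        n = out.shape[1] // 2
        return out[:, :n], out[:, n:]                  # (reflectance, illuminant)

pred_refl, pred_illum = ReflectanceNet()(torch.rand(4, 3, 64, 64))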
 

 
Author Yaxing Wang; Abel Gonzalez-Garcia; Joost Van de Weijer; Luis Herranz
Title SDIT: Scalable and Diverse Cross-domain Image Translation Type Conference Article
Year 2019 Publication 27th ACM International Conference on Multimedia Abbreviated Journal
Volume Issue Pages 1267–1276
Keywords
Abstract Recently, image-to-image translation research has witnessed remarkable progress. Although current approaches successfully generate diverse outputs or perform scalable image transfer, these properties have not been combined into a single method. To address this limitation, we propose SDIT: Scalable and Diverse image-to-image translation, which combines both properties in a single generator. Diversity is determined by a latent variable randomly sampled from a normal distribution, while scalability is obtained by conditioning the network on the domain attributes. Additionally, we exploit an attention mechanism that allows the generator to focus on the domain-specific attributes. We empirically demonstrate the performance of the proposed method on face mapping and on other datasets beyond faces.
Address Nice; France; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACM-MM
Notes LAMP; 600.106; 600.109; 600.141; 600.120 Approved no
Call Number Admin @ si @ WGW2019 Serial 3363
Permanent link to this record
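As an illustration of the conditioning scheme summarised in the abstract above (a latent code for diversity, domain attributes for scalability, and attention on domain-specific regions), the block below injects both signals into a feature map and gates the update with a spatial attention map. The dimensions and the exact fusion are assumptions, not the SDIT generator.

# Toy conditional block with latent code, domain attributes and spatial attention (PyTorch).
import torch
import torch.nn as nn

class CondBlock(nn.Module):
    def __init__(self, ch=64, attr_dim=5, z_dim=8):
        super().__init__()
        self.cond = nn.Linear(attr_dim + z_dim, ch)
        self.attn = nn.Conv2d(ch, 1, 1)
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, feat, attr, z):
        cond = self.cond(torch.cat([attr, z], dim=1))[:, :, None, None]
        a = torch.sigmoid(self.attn(feat))                 # spatial attention in [0, 1]
        return feat + a * torch.relu(self.conv(feat + cond))

out = CondBlock()(torch.rand(2, 64, 32, 32), attr=torch.rand(2, 5), z=torch.randn(2, 8))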
 

 
Author Arka Ujjal Dey; Suman Ghosh; Ernest Valveny; Gaurav Harit
Title Beyond Visual Semantics: Exploring the Role of Scene Text in Image Understanding Type Journal Article
Year 2021 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 149 Issue Pages 164-171
Keywords
Abstract Images with visual and scene text content are ubiquitous in everyday life. However, current image interpretation systems are mostly limited to using only the visual features, neglecting to leverage the scene text content. In this paper, we propose to jointly use scene text and visual channels for robust semantic interpretation of images. We not only extract and encode visual and scene text cues, but also model their interplay to generate a contextual joint embedding with richer semantics. The contextual embedding thus generated is applied to retrieval and classification tasks on multimedia images with scene text content to demonstrate its effectiveness. In the retrieval framework, we augment our learned text-visual semantic representation with scene text cues to mitigate vocabulary misses that may have occurred during the semantic embedding. To deal with irrelevant or erroneous recognition of scene text, we also apply query-based attention to our text channel. We show how this multi-channel approach, involving visual semantics and scene text, improves upon the state of the art.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ DGV2021 Serial 3364
Permanent link to this record
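A minimal sketch of the fusion idea in the abstract above: scene-text word embeddings are pooled with query-based attention, so irrelevant or misrecognised words receive low weight, and the result is concatenated with the visual embedding. The embedding dimension and the concatenation-based fusion are assumptions for illustration.

# Query-attended pooling of scene-text embeddings plus visual embedding (PyTorch).
import torch
import torch.nn.functional as F

def joint_embedding(visual_emb, text_embs, query_emb):
    """visual_emb: (d,), text_embs: (n_words, d), query_emb: (d,)."""
    attn = F.softmax(text_embs @ query_emb, dim=0)       # relevance of each word to the query
    text_pooled = attn @ text_embs                        # (d,) attended scene-text summary
    joint = torch.cat([visual_emb, text_pooled])          # simple fusion by concatenation
    return F.normalize(joint, dim=0)

emb = joint_embedding(torch.rand(256), torch.rand(7, 256), torch.rand(256))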
 

 
Author Mohammed Al Rawi; Ernest Valveny
Title Compact and Efficient Multitask Learning in Vision, Language and Speech Type Conference Article
Year 2019 Publication IEEE International Conference on Computer Vision Workshops Abbreviated Journal
Volume Issue Pages 2933-2942
Keywords
Abstract Across-domain multitask learning is a challenging area of computer vision and machine learning due to the intra-similarities among class distributions. Addressing this problem in a way that is consistent with the human cognition system, by considering inter- and intra-class categorization and recognition, complicates it even further. In this work we propose an effective holistic and hierarchical learning approach that uses a text embedding layer on top of a deep learning model. We also propose a novel sensory discriminator to resolve collisions between different tasks and domains. We then train the model concurrently on textual sentiment analysis, speech recognition, image classification, action recognition from video, and handwritten word spotting in two different scripts (Arabic and English). The proposed model successfully learned the different tasks across multiple domains.
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes DAG; 600.121; 600.129 Approved no
Call Number Admin @ si @ RaV2019 Serial 3365
Permanent link to this record
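The abstract above mentions a text embedding layer on top of a shared deep model. One common way to realise that idea, sketched below under that assumption, is to project features from any task into a pretrained label-embedding space and classify by cosine similarity to the label vectors; this is not necessarily the authors' exact design.

# Shared features projected into a label text-embedding space (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedToTextSpace(nn.Module):
    def __init__(self, feat_dim=512, text_dim=300):
        super().__init__()
        self.proj = nn.Linear(feat_dim, text_dim)

    def forward(self, features, label_embeddings):
        """features: (B, feat_dim); label_embeddings: (n_labels, text_dim)."""
        z = F.normalize(self.proj(features), dim=1)
        labels = F.normalize(label_embeddings, dim=1)
        return z @ labels.t()                         # (B, n_labels) cosine scores

scores = SharedToTextSpace()(torch.rand(4, 512), torch.rand(20, 300))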
 

 
Author Eduardo Aguilar; Petia Radeva
Title Class-Conditional Data Augmentation Applied to Image Classification Type Conference Article
Year 2019 Publication 18th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 11679 Issue Pages 182-192
Keywords CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition
Abstract Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided the best results. When data is scarce, CNN models tend to overfit. To deal with this, traditional data augmentation techniques are often applied, such as affine transformations or adjustments of the color balance. However, we argue that some data augmentation techniques may be more appropriate for some classes than for others. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty of the samples within each class. From our experiments, we observe that when data augmentation is applied class-conditionally, we improve accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that obtains better results and improves the robustness of the classification in the face of model uncertainty.
Address Salerno; Italy; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CAIP
Notes MILAB; no proj Approved no
Call Number Admin @ si @ AgR2019 Serial 3366
Permanent link to this record
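To make the use of epistemic uncertainty in the abstract above concrete, the sketch below estimates it with Monte Carlo dropout (mutual information between predictions and model weights) and, for each class, keeps the augmentation whose samples yield the lowest estimate. The MC-dropout estimator and the "pick the least uncertain augmentation" rule are assumptions about how such a selection could be implemented, not the paper's stated procedure.

# Per-class augmentation selection by MC-dropout epistemic uncertainty (PyTorch).
import torch

def mc_epistemic_uncertainty(model, x, n_samples=20):
    """Mutual-information estimate of epistemic uncertainty for a batch x.
    model: any classifier containing dropout layers."""
    model.train()                                      # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean_p = probs.mean(dim=0)
    entropy_of_mean = -(mean_p * torch.log(mean_p + 1e-12)).sum(dim=1)
    mean_entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=2).mean(dim=0)
    return (entropy_of_mean - mean_entropy).mean().item()

def pick_augmentation_per_class(model, class_batches, augmentations):
    """class_batches: dict class -> tensor batch; augmentations: dict name -> fn(batch)."""
    return {c: min(augmentations,
                   key=lambda name: mc_epistemic_uncertainty(model, augmentations[name](x)))
            for c, x in class_batches.items()}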
 

 
Author Estefania Talavera; Nicolai Petkov; Petia Radeva
Title Unsupervised Routine Discovery in Egocentric Photo-Streams Type Conference Article
Year 2019 Publication 18th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 11678 Issue Pages 576-588
Keywords Routine discovery; Lifestyle; Egocentric vision; Behaviour analysis
Abstract The routine of a person is defined by the occurrence of activities throughout different days and can directly affect the person's health. In this work, we address the recognition of routine-related days. To do so, we rely on egocentric images, which are recorded by a wearable camera and allow the life of the user to be monitored from a first-person-view perspective. We propose an unsupervised model that identifies routine-related days following an outlier detection approach. We test the proposed framework on a total of 72 days of photo-streams, covering around 2 weeks of the lives of 5 different camera wearers. Our model achieves an average of 76% accuracy and 68% weighted F-score over all users. Thus, we show that our framework is able to recognise routine-related days, opening the door to understanding people's behaviour.
Address Salerno; Italy; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CAIP
Notes MILAB; no proj Approved no
Call Number Admin @ si @ TPR2019a Serial 3367
Permanent link to this record
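The abstract above frames routine discovery as outlier detection over days. As a minimal stand-in for the authors' model, the sketch below runs an Isolation Forest on per-day descriptors and treats inliers as routine-related days; the descriptor, the detector, and the contamination value are assumptions.

# Routine days as inliers of an outlier detector (scikit-learn).
import numpy as np
from sklearn.ensemble import IsolationForest

def find_routine_days(day_descriptors, contamination=0.2):
    """day_descriptors: (n_days, n_features) array, e.g. time spent per activity."""
    labels = IsolationForest(contamination=contamination,
                             random_state=0).fit_predict(day_descriptors)
    return np.where(labels == 1)[0]                  # +1 = inlier = routine-related day

rng = np.random.default_rng(0)
days = np.vstack([rng.normal(0, 0.1, size=(10, 5)), rng.normal(3, 0.1, size=(2, 5))])
print(find_routine_days(days))                       # mostly the first ten days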
 

 
Author Md.Mostafa Kamal Sarker; Syeda Furruka Banu; Hatem A. Rashwan; Mohamed Abdel-Nasser; Vivek Kumar Singh; Sylvie Chambon; Petia Radeva; Domenec Puig
Title Food Places Classification in Egocentric Images Using Siamese Neural Networks Type Conference Article
Year 2019 Publication 22nd International Conference of the Catalan Association of Artificial Intelligence Abbreviated Journal
Volume Issue Pages 145-151
Keywords
Abstract Wearable cameras have become more popular in recent years for capturing the unscripted moments of the first person, which helps to analyse the user's lifestyle. In this work, we aim to recognize the food-related places visited during the day in egocentric images, in order to identify the daily food patterns of the first person. Such a system can thus help users improve their eating behaviour and protect them against food-related diseases. In this paper, we use Siamese Neural Networks to learn the similarity between pairs of images for one-shot food place classification. We tested our proposed method on 'MiniEgoFoodPlaces', which contains 15 food-related places. The proposed Siamese Neural Network model with MobileNet achieved an overall classification accuracy of 76.74% and 77.53% on the validation and test sets of the 'MiniEgoFoodPlaces' dataset, respectively, outperforming base models such as ResNet50, InceptionV3, and InceptionResNetV2.
Address Illes Balears; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes MILAB; no proj Approved no
Call Number Admin @ si @ SBR2019 Serial 3368
Permanent link to this record
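The sketch below shows the general shape of a Siamese similarity model like the one described in the abstract above: a shared encoder, an absolute-difference layer, and a sigmoid similarity score. A tiny CNN stands in for the MobileNet backbone, and all sizes are assumptions rather than the paper's configuration.

# Toy Siamese similarity network for place images (PyTorch).
import torch
import torch.nn as nn

class SiamesePlaces(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, emb_dim),
        )
        self.head = nn.Linear(emb_dim, 1)                 # scores the |e1 - e2| difference

    def forward(self, img1, img2):
        e1, e2 = self.encoder(img1), self.encoder(img2)
        return torch.sigmoid(self.head(torch.abs(e1 - e2)))  # similarity in (0, 1)

sim = SiamesePlaces()(torch.rand(2, 3, 224, 224), torch.rand(2, 3, 224, 224))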