Author Joan M. Nuñez; Jorge Bernal; Miquel Ferrer; Fernando Vilariño
Title Impact of Keypoint Detection on Graph-based Characterization of Blood Vessels in Colonoscopy Videos Type Conference Article
Year 2014 Publication CARE workshop Abbreviated Journal
Volume Issue Pages
Keywords Colonoscopy; Graph Matching; Biometrics; Vessel; Intersection
Abstract We explore the potential of blood vessels as anatomical landmarks for developing image registration methods for colonoscopy images. An unequivocal representation of blood vessels could be used to guide follow-up methods that track lesions across different interventions. We propose a graph-based representation to characterize network structures, such as blood vessels, based on the use of intersections and endpoints. We present a study assessing the minimal performance a keypoint detector should achieve so that the structure can still be recognized. Experimental results show that, even with a loss of 35% of the keypoints, the descriptive power of the graphs associated with the vessel pattern is still high enough to recognize blood vessels. (See the sketch after this record.)
Address Boston; USA; September 2014
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CARE
Notes MV; DAG; 600.060; 600.047; 600.077; SIAI Approved no
Call Number Admin @ si @ NBF2014 Serial 2504
Permanent link to this record
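A minimal Python sketch of the generic keypoint idea from the record above — finding endpoints and intersections on a binarized vessel skeleton by counting 8-connected neighbours. It is an illustration of the approach under standard morphological assumptions, not the authors' implementation:

    import numpy as np
    from scipy.ndimage import convolve

    def vessel_keypoints(skeleton):
        # Count the 8-connected neighbours of every skeleton pixel.
        kernel = np.array([[1, 1, 1],
                           [1, 0, 1],
                           [1, 1, 1]])
        neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
        # Standard heuristic: one neighbour marks an endpoint,
        # three or more mark an intersection.
        endpoints = np.argwhere((skeleton > 0) & (neighbours == 1))
        intersections = np.argwhere((skeleton > 0) & (neighbours >= 3))
        return endpoints, intersections

The detected points would then serve as graph nodes, with edges following the skeleton branches between them.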
 

 
Author A. Nicolaou; Andrew Bagdanov; Marcus Liwicki; Dimosthenis Karatzas
Title Sparse Radial Sampling LBP for Writer Identification Type Conference Article
Year 2015 Publication 13th International Conference on Document Analysis and Recognition ICDAR2015 Abbreviated Journal
Volume Issue Pages 716-720
Keywords
Abstract In this paper we present the use of Sparse Radial Sampling Local Binary Patterns, a variant of Local Binary Patterns (LBP), for text-as-texture classification. By adapting and extending the standard LBP operator to the particularities of text, we obtain a generic text-as-texture classification scheme and apply it to writer identification. In experiments on the CVL and ICDAR 2013 datasets, the proposed feature set demonstrates state-of-the-art (SOA) performance. Among the SOA methods, the proposed one is the only one based on dense extraction of a single local feature descriptor. This makes it fast and applicable at the earliest stages of a DIA pipeline without the need for segmentation, binarization, or extraction of multiple features. (See the sketch after this record.)
Address Nancy; France; August 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ NBL2015 Serial 2692
Permanent link to this record
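A minimal sketch of plain multi-radius uniform LBP as a text-as-texture descriptor, using scikit-image. The paper's sparse radial sampling variant additionally subsamples the circular neighbourhood, which is not reproduced here:

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_texture_descriptor(gray_image, radii=(1, 2, 3), points=8):
        # Concatenate normalized uniform-LBP histograms over several radii.
        feats = []
        for r in radii:
            codes = local_binary_pattern(gray_image, points, r, method="uniform")
            # 'uniform' codes take values 0..points+1, hence points+2 bins.
            hist, _ = np.histogram(codes, bins=points + 2,
                                   range=(0, points + 2), density=True)
            feats.append(hist)
        return np.concatenate(feats)

Writer identification would then compare such descriptors, e.g. with a nearest-neighbour classifier over the writers' reference samples.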
 

 
Author Khanh Nguyen; Ali Furkan Biten; Andres Mafla; Lluis Gomez; Dimosthenis Karatzas
Title Show, Interpret and Tell: Entity-Aware Contextualised Image Captioning in Wikipedia Type Conference Article
Year 2023 Publication Proceedings of the 37th AAAI Conference on Artificial Intelligence Abbreviated Journal
Volume 37 Issue 2 Pages 1940-1948
Keywords
Abstract Humans exploit prior knowledge to describe images, and are able to adapt their explanation to specific contextual information, even to the extent of inventing plausible explanations when contextual information and images do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images and their associated descriptions to produce contextualized captions. The same Wikimedia image can be used to illustrate different articles, and the produced caption needs to be adapted to the specific context, allowing us to explore the limits of the model to adjust captions to different contextual information. Dealing with out-of-dictionary words and Named Entities is a challenging task in this domain. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task results in significantly improved models (see the sketch after this record). Furthermore, we verify that a model pre-trained on Wikipedia generalizes well to news captioning datasets. We further define two test splits according to the difficulty of the captioning task. We offer insights on the role and importance of each modality and highlight the limitations of our model.
Address Washington; USA; February 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference AAAI
Notes DAG Approved no
Call Number Admin @ si @ NBM2023 Serial 3860
Permanent link to this record
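A toy sketch of an MNEM-style masking step. The function and its parameters are hypothetical, not the paper's code; entity spans are assumed to come from an off-the-shelf NER tagger, and the model would be trained to reconstruct the masked entities:

    import random

    MASK = "[MASK]"

    def mask_named_entities(tokens, entity_spans, p=0.5):
        # entity_spans: list of (start, end) token indices of named entities.
        out = list(tokens)
        for start, end in entity_spans:
            if random.random() < p:  # mask each entity with probability p
                for i in range(start, end):
                    out[i] = MASK
        return out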
 

 
Author Fatemeh Noroozi; Ciprian Corneanu; Dorota Kamińska; Tomasz Sapiński; Sergio Escalera; Gholamreza Anbarjafari
Title Survey on Emotional Body Gesture Recognition Type Journal Article
Year 2021 Publication IEEE Transactions on Affective Computing Abbreviated Journal TAC
Volume 12 Issue 2 Pages 505 - 523
Keywords
Abstract Automatic emotion recognition has become a trending research topic in the past decade. While works based on facial expressions or speech abound, recognizing affect from body gestures remains a less explored topic. We present a new comprehensive survey hoping to boost research in the field. We first introduce emotional body gestures as a component of what is commonly known as “body language” and discuss general aspects such as gender differences and culture dependence. We then define a complete framework for automatic emotional body gesture recognition. We introduce person detection and discuss static and dynamic body pose estimation methods, both in RGB and 3D. We then review the recent literature on representation learning and emotion recognition from images of emotionally expressive gestures. We also discuss multi-modal approaches that combine speech or face with body gestures for improved emotion recognition. While pre-processing methodologies (e.g. human detection and pose estimation) are nowadays mature technologies fully developed for robust large-scale analysis, we show that, for emotion recognition, labelled data is scarce, there is no agreement on clearly defined output spaces, and the representations are shallow and largely based on naive geometrical descriptors.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ NCK2021 Serial 3657
Permanent link to this record
 

 
Author Anguelos Nicolaou; Sounak Dey; V. Christlein; A. Maier; Dimosthenis Karatzas
Title Non-deterministic Behavior of Ranking-based Metrics when Evaluating Embeddings Type Conference Article
Year 2018 Publication International Workshop on Reproducible Research in Pattern Recognition Abbreviated Journal
Volume 11455 Issue Pages 71-82
Keywords
Abstract Embedding data into vector spaces is a very popular strategy in pattern recognition methods. When distances between embeddings are quantized, performance metrics become ambiguous. In this paper, we present an analysis of the ambiguity that quantized distances introduce and provide bounds on its effect. We demonstrate that it can have a measurable effect on empirical data in state-of-the-art systems. We also approach the phenomenon from a computer-security perspective and demonstrate how someone being evaluated by a third party can exploit this ambiguity and greatly outperform a random predictor without even having access to the input data. Finally, we suggest a simple solution that makes ranking-based performance metrics fully deterministic and impervious to such exploits. (See the sketch after this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.121; 600.129 Approved no
Call Number Admin @ si @ NDC2018 Serial 3178
Permanent link to this record
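A small, self-contained demonstration of the ambiguity discussed above: when quantized distances produce ties, average precision depends on how the ties happen to be ordered (illustrative numbers, not the paper's data):

    import numpy as np

    def average_precision(ranked_relevance):
        # AP of a fully ranked list of 0/1 relevance labels.
        rel = np.asarray(ranked_relevance, dtype=float)
        hits = np.cumsum(rel)
        ranks_of_hits = np.flatnonzero(rel == 1) + 1  # 1-based ranks
        return (hits[rel == 1] / ranks_of_hits).mean()

    # Four retrieved items; the last three share one quantized distance,
    # and one of them is relevant. Only the tie-breaking order differs:
    print(average_precision([1, 1, 0, 0]))  # tied relevant item first: 1.0
    print(average_precision([1, 0, 0, 1]))  # tied relevant item last: 0.75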
 

 
Author Kamal Nasrollahi; Sergio Escalera; P. Rasti; Gholamreza Anbarjafari; Xavier Baro; Hugo Jair Escalante; Thomas B. Moeslund
Title Deep Learning based Super-Resolution for Improved Action Recognition Type Conference Article
Year 2015 Publication 5th International Conference on Image Processing Theory, Tools and Applications IPTA2015 Abbreviated Journal
Volume Issue Pages 67 - 72
Keywords
Abstract Action recognition systems mostly work with videos of proper quality and resolution. Even the most challenging benchmark databases for action recognition hardly include low-resolution videos from, e.g., surveillance cameras. In videos recorded by such cameras, due to the distance between people and the cameras, people appear very small and hence challenge action recognition algorithms. Simple upsampling methods, like bicubic interpolation, cannot retrieve all the detailed information that could help recognition. To deal with this problem, in this paper we combine the results of bicubic interpolation with the results of a state-of-the-art deep learning-based super-resolution algorithm through an alpha-blending approach (see the sketch after this record). The experimental results obtained on a down-sampled version of a large subset of the Hollywood2 benchmark database show the value of the proposed system in increasing the recognition rate of a state-of-the-art action recognition system on low-resolution videos.
Address Orleans; France; November 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IPTA
Notes HuPBA; MV Approved no
Call Number Admin @ si @ NER2015 Serial 2648
Permanent link to this record
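A minimal sketch of the alpha-blending step described above, using OpenCV. The super-resolved frame is assumed to come from any deep SR model at the same scale, and alpha is a tunable parameter, not a value taken from the paper:

    import cv2

    def blended_upsample(lr_frame, sr_frame, alpha=0.5, scale=4):
        # Bicubic upsample of the low-resolution frame.
        h, w = lr_frame.shape[:2]
        bicubic = cv2.resize(lr_frame, (w * scale, h * scale),
                             interpolation=cv2.INTER_CUBIC)
        # Alpha-blend with the super-resolved frame (same size and dtype
        # assumed); the result feeds the action recognition system.
        return cv2.addWeighted(bicubic, alpha, sr_frame, 1.0 - alpha, 0.0)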
 

 
Author Francesc Net; Marc Folia; Pep Casals; Lluis Gomez
Title Transductive Learning for Near-Duplicate Image Detection in Scanned Photo Collections Type Conference Article
Year 2023 Publication 17th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume 14191 Issue Pages 3-17
Keywords Image deduplication; Near-duplicate images detection; Transductive Learning; Photographic Archives; Deep Learning
Abstract This paper presents a comparative study of near-duplicate image detection techniques in a real-world use case scenario, where a document management company is commissioned to manually annotate a collection of scanned photographs. Detecting duplicate and near-duplicate photographs can reduce the time archivists spend on manual annotation. This real use case differs from laboratory settings in that the deployment dataset is available in advance, allowing the use of transductive learning. We propose a transductive learning approach that leverages state-of-the-art deep learning architectures such as convolutional neural networks (CNNs) and Vision Transformers (ViTs). Our approach involves pre-training a deep neural network on a large dataset and then fine-tuning it on the unlabeled target collection with self-supervised learning. The results show that the proposed approach outperforms the baseline methods in the task of near-duplicate image detection on UKBench and an in-house private dataset. (See the sketch after this record.)
Address San Jose; CA; USA; August 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG Approved no
Call Number Admin @ si @ NFC2023 Serial 3859
Permanent link to this record
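A minimal sketch of the retrieval step for near-duplicate detection: embeddings from a CNN/ViT backbone (ideally after the transductive, self-supervised fine-tuning on the target collection described above) are compared by cosine distance. The threshold is an assumed, dataset-dependent value:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def near_duplicate_pairs(embeddings, threshold=0.15):
        # embeddings: (N, D) array, one row per scanned photograph.
        nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(embeddings)
        dist, idx = nn.kneighbors(embeddings)
        # Column 0 is each point itself; column 1 is its nearest other image.
        return [(i, int(idx[i, 1]))
                for i in range(len(embeddings)) if dist[i, 1] < threshold]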
 

 
Author Bhalaji Nagarajan; Ricardo Marques; Marcos Mejia; Petia Radeva
Title Class-conditional Importance Weighting for Deep Learning with Noisy Labels Type Conference Article
Year 2022 Publication 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications Abbreviated Journal
Volume 5 Issue Pages 679-686
Keywords Noisy Labeling; Loss Correction; Class-conditional Importance Weighting; Learning with Noisy Labels
Abstract Large-scale, accurate labels are essential for training Deep Neural Networks and ensuring high performance. However, creating a clean dataset is very expensive, since it usually relies on human annotation. To reduce this cost, the labelling process is made cheap at the price of noisy labels. Learning with Noisy Labels is an active and very challenging area of research. Recent advances in self-supervised learning and robust loss functions have helped push noisy-label research forward. In this paper, we propose a loss correction method that relies on dynamic weights computed from the model during training. We extend the existing Contrast to Divide algorithm coupled with DivideMix using a new class-conditional weighting scheme (see the sketch after this record). We validate the method using standard noise experiments and achieve encouraging results.
Address Virtual; February 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference VISAPP
Notes MILAB; not mentioned Approved no
Call Number Admin @ si @ NMM2022 Serial 3798
Permanent link to this record
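A minimal PyTorch sketch of the general loss-weighting pattern: per-sample cross-entropy scaled by importance weights. In the spirit of the paper, the weights would be computed dynamically per class from the model's training behaviour; here they are simply taken as given:

    import torch.nn.functional as F

    def weighted_ce_loss(logits, labels, sample_weights):
        # Per-sample losses, then an importance-weighted average.
        per_sample = F.cross_entropy(logits, labels, reduction="none")
        return (sample_weights * per_sample).mean()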
 

 
Author Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari
Title Fusion of Classifier Predictions for Audio-Visual Emotion Recognition Type Conference Article
Year 2016 Publication 23rd International Conference on Pattern Recognition Workshops Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This paper presents a novel multimodal emotion recognition system based on the analysis of audio and visual cues. MFCC-based features are extracted from the audio channel, and facial-landmark geometric relations are computed from visual data. Both sets of features are learnt separately using state-of-the-art classifiers. In addition, we summarise each emotion video into a reduced set of key-frames, which are learnt in order to visually discriminate emotions by means of a Convolutional Neural Network. Finally, the confidence outputs of all classifiers from all modalities are used to define a new feature space to be learnt for final emotion prediction, in a late fusion/stacking fashion (see the sketch after this record). The experiments conducted on the eNTERFACE’05 database show significant performance improvements of the proposed system over state-of-the-art approaches.
Address Cancun; Mexico; December 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICPRW
Notes HuPBA; MILAB Approved no
Call Number Admin @ si @ NMN2016 Serial 2839
Permanent link to this record
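A minimal sketch of the late fusion/stacking step: per-class confidence outputs of the modality-specific classifiers are concatenated, and a meta-classifier is learnt on top. Logistic regression is an assumed choice for illustration, not necessarily the paper's meta-learner:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def stack_modalities(audio_conf, visual_conf, labels):
        # audio_conf, visual_conf: (N, C) per-class confidences from the
        # modality-specific classifiers; stacking learns on their union.
        meta_features = np.hstack([audio_conf, visual_conf])  # (N, 2C)
        return LogisticRegression(max_iter=1000).fit(meta_features, labels)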
 

 
Author Fatemeh Noroozi; Marina Marjanovic; Angelina Njegus; Sergio Escalera; Gholamreza Anbarjafari
Title Audio-Visual Emotion Recognition in Video Clips Type Journal Article
Year 2019 Publication IEEE Transactions on Affective Computing Abbreviated Journal TAC
Volume 10 Issue 1 Pages 60-75
Keywords
Abstract This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies and prosodic features are extracted. For the visual part, two strategies are considered. First, facial landmarks’ geometric relations, i.e. distances and angles, are computed. Second, we summarize each emotional video into a reduced set of key-frames, which are learnt in order to visually discriminate between the emotions; to do so, a convolutional neural network is applied to the key-frames summarizing the videos. Finally, the confidence outputs of all the classifiers from all the modalities are used to define a new feature space to be learned for final emotion label prediction, in a late fusion/stacking fashion. The experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements by our proposed system in comparison to current alternatives, defining the current state-of-the-art on all three databases.
Address 1 Jan.-March 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; 602.143; 602.133 Approved no
Call Number Admin @ si @ NMN2017 Serial 3011
Permanent link to this record
 

 
Author Farshad Nourbakhsh
Title Colour logo recognition Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal
Volume 145 Issue Pages
Keywords
Abstract
Address
Corporate Author Computer Vision Center Thesis Master's thesis
Publisher Place of Publication Bellaterra, Barcelona Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG Approved no
Call Number Admin @ si @ Nou2009 Serial 2399
Permanent link to this record
 

 
Author Nibal Nayef; Yash Patel; Michal Busta; Pinaki Nath Chowdhury; Dimosthenis Karatzas; Wafa Khlif; Jiri Matas; Umapada Pal; Jean-Christophe Burie; Cheng-lin Liu; Jean-Marc Ogier
Title ICDAR2019 Robust Reading Challenge on Multi-lingual Scene Text Detection and Recognition — RRC-MLT-2019 Type Conference Article
Year 2019 Publication 15th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 1582-1587
Keywords
Abstract With the growing cosmopolitan culture of modern cities, the need for robust Multi-Lingual scene Text (MLT) detection and recognition systems has never been greater. With the goal of systematically benchmarking and pushing the state-of-the-art forward, the proposed competition builds on top of RRC-MLT-2017 with an additional end-to-end task, an additional language in the real-images dataset, a large-scale multi-lingual synthetic dataset to assist training, and a baseline end-to-end recognition method. The real dataset consists of 20,000 images containing text in 10 languages. The challenge has 4 tasks covering various aspects of multi-lingual scene text: (a) text detection, (b) cropped word script classification, (c) joint text detection and script classification and (d) end-to-end detection and recognition. In total, the competition received 60 submissions from the research and industrial communities. This paper presents the dataset, the tasks and the findings of the RRC-MLT-2019 challenge.
Address Sydney; Australia; September 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG; 600.121; 600.129 Approved no
Call Number Admin @ si @ NPB2019 Serial 3341
Permanent link to this record
 

 
Author F. Negin; Pau Rodriguez; M. Koperski; A. Kerboua; Jordi Gonzalez; J. Bourgeois; E. Chapoulie; P. Robert; F. Bremond
Title PRAXIS: Towards automatic cognitive assessment using gesture recognition Type Journal Article
Year 2018 Publication Expert Systems with Applications Abbreviated Journal ESWA
Volume 106 Issue Pages 21-35
Keywords
Abstract The Praxis test is a gesture-based diagnostic test which has been accepted as diagnostically indicative of cortical pathologies such as Alzheimer’s disease. Despite being simple, this test is oftentimes skipped by clinicians. In this paper, we propose a novel framework to investigate static and dynamic upper-body gestures based on the Praxis test and their potential in a medical framework to automatize the test procedures for computer-assisted cognitive assessment of older adults.

In order to carry out gesture recognition as well as correctness assessment of the performances, we have collected a novel, challenging RGB-D gesture video dataset recorded with Kinect v2, which contains 29 specific gestures suggested by clinicians and recorded from both experts and patients performing the gesture set. Moreover, we propose a framework to learn the dynamics of upper-body gestures, considering the videos as sequences of short-term clips of gestures. Our approach first uses body-part detection to extract image patches surrounding the hands and then, by means of a fine-tuned convolutional neural network (CNN) model, learns deep hand features which are then linked to a long short-term memory network to capture the temporal dependencies between video frames (see the sketch after this record).
We report the results of four developed methods using different modalities. The experiments show the effectiveness of our deep learning-based approach in gesture recognition and performance assessment tasks. Clinicians' satisfaction with the assessment reports indicates the potential impact of the framework on diagnosis.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ NRK2018 Serial 3669
Permanent link to this record
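A minimal PyTorch sketch of the CNN-plus-LSTM pattern described above. Per-frame hand-patch features are assumed to be precomputed by the fine-tuned CNN; the dimensions are placeholders, not the paper's values:

    import torch.nn as nn

    class GestureRNN(nn.Module):
        def __init__(self, feat_dim=4096, hidden=256, n_gestures=29):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_gestures)

        def forward(self, frame_feats):           # (batch, time, feat_dim)
            _, (h_n, _) = self.lstm(frame_feats)  # final hidden state
            return self.head(h_n[-1])             # (batch, n_gestures)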
 

 
Author Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez
Title Obstacle mapping module for quadrotors on outdoor Search and Rescue operations Type Conference Article
Year 2013 Publication International Micro Air Vehicle Conference and Flight Competition Abbreviated Journal
Volume Issue Pages
Keywords UAV
Abstract Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), due to their limited payload capacity for advanced sensors. Unlike larger vehicles, MAVs can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse the performance of PADE and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for enabling MAVs to perform safe autonomous navigation in unknown and unstructured environments. (See the sketch after this record.)
Address Toulouse; France; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IMAV
Notes ADAS; 600.054; 600.057; IAM Approved no
Call Number Admin @ si @ NSH2013 Serial 2371
Permanent link to this record
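For reference, the classic looming-based TTC estimate against which PADE is compared: if an object's image size s grows as the camera approaches, TTC ≈ s / (ds/dt). A textbook sketch, not the paper's code; the sizes could come from tracked feature spread or a bounding box:

    def time_to_collision(size_prev, size_curr, dt):
        # Rate of image-size expansion between two frames.
        growth = (size_curr - size_prev) / dt
        if growth <= 0:            # object not approaching
            return float("inf")
        return size_curr / growth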
 

 
Author Marta Nuñez-Garcia; Sonja Simpraga; M. Angeles Jurado; Maite Garolera; Roser Pueyo; Laura Igual
Title FADR: Functional-Anatomical Discriminative Regions for rest fMRI Characterization Type Conference Article
Year 2015 Publication Machine Learning in Medical Imaging, Proceedings of 6th International Workshop, MLMI 2015, Held in Conjunction with MICCAI 2015 Abbreviated Journal
Volume Issue Pages 61-68
Keywords
Abstract
Address Munich; Germany; October 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MLMI
Notes MILAB Approved no
Call Number Admin @ si @ NSJ2015 Serial 2674
Permanent link to this record