Author Jorge Bernal; Aymeric Histace; Marc Masana; Quentin Angermann; Cristina Sanchez Montes; Cristina Rodriguez de Miguel; Maroua Hammami; Ana Garcia Rodriguez; Henry Cordova; Olivier Romain; Gloria Fernandez Esparrach; Xavier Dray; F. Javier Sanchez
  Title Polyp Detection Benchmark in Colonoscopy Videos using GTCreator: A Novel Fully Configurable Tool for Easy and Fast Annotation of Image Databases Type Conference Article
  Year 2018 Publication 32nd International Congress and Exhibition on Computer Assisted Radiology & Surgery Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CARS  
  Notes ISE; MV; 600.119 Approved no  
  Call Number Admin @ si @ BHM2018 Serial 3089
Permanent link to this record
 

 
Author Jorge Bernal; Aymeric Histace; Marc Masana; Quentin Angermann; Cristina Sanchez Montes; Cristina Rodriguez de Miguel; Maroua Hammami; Ana Garcia Rodriguez; Henry Cordova; Olivier Romain; Gloria Fernandez Esparrach; Xavier Dray; F. Javier Sanchez
  Title GTCreator: a flexible annotation tool for image-based datasets Type Journal Article
  Year 2019 Publication International Journal of Computer Assisted Radiology and Surgery Abbreviated Journal IJCAR  
  Volume 14 Issue 2 Pages 191–201  
  Keywords Annotation tool; Validation Framework; Benchmark; Colonoscopy; Evaluation  
  Abstract Purpose: Methodology evaluation for decision support systems for health is a time-consuming task. To assess the performance of polyp detection methods in colonoscopy videos, clinicians have to deal with the annotation of thousands of images. Current existing tools could be improved in terms of flexibility and ease of use. Methods: We introduce GTCreator, a flexible annotation tool for providing image and text annotations to image-based datasets. It keeps the main basic functionalities of other similar tools while extending other capabilities, such as allowing multiple annotators to work simultaneously on the same task, enhanced dataset browsing and easy annotation transfer, aiming to speed up annotation processes in large datasets. Results: The comparison with other similar tools shows that GTCreator allows fast and precise annotation of image datasets, being the only one which offers full annotation editing and browsing capabilities. Conclusions: Our proposed annotation tool has been proven to be efficient for large image dataset annotation, as well as showing potential for use in other stages of method evaluation such as experimental setup or results analysis.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MV; 600.096; 600.109; 600.119; 601.305 Approved no  
  Call Number Admin @ si @ BHM2019 Serial 3163
Permanent link to this record
 

 
Author Miguel Angel Bautista; Antonio Hernandez; Victor Ponce; Xavier Perez Sala; Xavier Baro; Oriol Pujol; Cecilio Angulo; Sergio Escalera
  Title Probability-based Dynamic Time Warping for Gesture Recognition on RGB-D data Type Conference Article
  Year 2012 Publication 21st International Conference on Pattern Recognition International Workshop on Depth Image Analysis Abbreviated Journal  
  Volume 7854 Issue Pages 126-135  
  Keywords  
  Abstract Dynamic Time Warping (DTW) is commonly used in gesture recognition tasks in order to tackle the temporal length variability of gestures. In the DTW framework, a set of gesture patterns are compared one by one to a maybe infinite test sequence, and a query gesture category is recognized if a warping cost below a certain threshold is found within the test sequence. Nevertheless, either taking one single sample per gesture category or a set of isolated samples may not encode the variability of such gesture category. In this paper, a probability-based DTW for gesture recognition is proposed. Different samples of the same gesture pattern obtained from RGB-Depth data are used to build a Gaussian-based probabilistic model of the gesture. Finally, the cost of DTW has been adapted accordingly to the new model. The proposed approach is tested in a challenging scenario, showing better performance of the probability-based DTW in comparison to state-of-the-art approaches for gesture recognition on RGB-D data.  
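
Illustrative sketch (not from the paper): the core idea described above can be summarized as replacing the usual frame-to-frame DTW distance with the negative log-likelihood of each test frame under a Gaussian model estimated from several executions of the same gesture. The sketch assumes per-frame feature vectors already resampled to a common length; all names are hypothetical.

import numpy as np

def fit_gesture_model(aligned_samples):
    # aligned_samples: (n_samples, T, D) feature sequences of the same gesture,
    # already resampled to a common length T (the alignment step is not shown).
    mean = aligned_samples.mean(axis=0)            # (T, D)
    var = aligned_samples.var(axis=0) + 1e-6       # (T, D), diagonal covariance
    return mean, var

def frame_cost(x, mu, var):
    # Negative log-likelihood of a test frame under one model frame's Gaussian,
    # used as the local DTW cost instead of a plain Euclidean distance.
    return 0.5 * np.sum((x - mu) ** 2 / var + np.log(2.0 * np.pi * var))

def probabilistic_dtw(test_seq, mean, var):
    # Subsequence DTW: the gesture may start anywhere in the test sequence.
    T, N = mean.shape[0], test_seq.shape[0]
    D = np.full((T + 1, N + 1), np.inf)
    D[0, :] = 0.0
    for i in range(1, T + 1):
        for j in range(1, N + 1):
            c = frame_cost(test_seq[j - 1], mean[i - 1], var[i - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T, 1:]   # a gesture is detected where this cost drops below a threshold
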
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-40302-6 Medium  
  Area Expedition Conference WDIA  
  Notes MILAB; OR;HuPBA;MV Approved no  
  Call Number Admin @ si @ BHP2012 Serial 2120
Permanent link to this record
 

 
Author Marco Bellantonio; Mohammad A. Haque; Pau Rodriguez; Kamal Nasrollahi; Taisi Telve; Sergio Escalera; Jordi Gonzalez; Thomas B. Moeslund; Pejman Rasti; Golamreza Anbarjafari
  Title Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images Type Conference Article
  Year 2016 Publication 23rd International Conference on Pattern Recognition Abbreviated Journal  
  Volume 10165 Issue Pages  
  Keywords  
  Abstract Automatic pain detection is a long-expected solution to a prevalent medical problem of pain management. This is more relevant when the subject of pain is young children or patients with limited ability to communicate about their pain experience. Computer vision-based analysis of facial pain expression provides a way of efficient pain detection. When deep machine learning methods came into the scene, automatic pain detection exhibited even better performance. In this paper, we figured out three important factors to exploit in automatic pain detection: spatial information available regarding pain in each of the facial video frames, temporal axis information regarding the pain expression pattern in a subject video sequence, and variation of face resolution. We employed a combination of convolutional neural network and recurrent neural network to set up a deep hybrid pain detection framework that is able to exploit both spatial and temporal pain information from facial video. In order to analyze the effect of different facial resolutions, we introduce a super-resolution algorithm to generate facial video frames with different resolution setups. We investigated the performance on the publicly available UNBC-McMaster Shoulder Pain database. As a contribution, the paper provides novel and important information regarding the performance of a hybrid deep learning framework for pain detection in facial images of different resolution.
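
Illustrative sketch (not the paper's architecture): a hybrid CNN-RNN of the kind described, where a per-frame CNN captures spatial information and an LSTM captures the temporal pain pattern over a clip of face crops. Layer sizes and names are placeholders.

import torch
import torch.nn as nn

class CNNRNNPainDetector(nn.Module):
    # Per-frame CNN features (spatial cue) fed to an LSTM (temporal cue),
    # followed by a binary pain / no-pain classifier over the clip.
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, clips):                    # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))    # (B*T, feat_dim)
        feats = feats.view(b, t, -1)
        _, (h, _) = self.rnn(feats)
        return self.head(h[-1])                  # one pain logit per clip
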
  Address Cancun; Mexico; December 2016  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPR  
  Notes HuPBA; ISE; 600.098; 600.119 Approved no  
  Call Number Admin @ si @ BHR2016 Serial 2902
Permanent link to this record
 

 
Author Ali Furkan Biten
  Title A Bitter-Sweet Symphony on Vision and Language: Bias and World Knowledge Type Book Whole
  Year 2022 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Vision and Language are broadly regarded as cornerstones of intelligence. Even though language and vision have different aims – language having the purpose of communication and transmission of information, and vision having the purpose of constructing mental representations around us to navigate and interact with objects – they cooperate and depend on one another in many tasks we perform effortlessly. This reliance is actively being studied in various Computer Vision tasks, e.g. image captioning, visual question answering, image-sentence retrieval, phrase grounding, just to name a few. All of these tasks share the inherent difficulty of aligning the two modalities while being robust to language priors and the various biases existing in the datasets. One of the ultimate goals of vision and language research is to be able to inject world knowledge while getting rid of the biases that come with the datasets. In this thesis, we mainly focus on two vision and language tasks, namely Image Captioning and Scene-Text Visual Question Answering (STVQA). In both domains, we start by defining a new task that requires the utilization of world knowledge, and in both tasks we find that the models commonly employed are prone to biases that exist in the data. Concretely, we introduce new tasks and discover several problems that impede performance at each level and provide remedies or possible solutions in each chapter: i) We define a new task to move beyond Image Captioning to Image Interpretation that can utilize Named Entities in the form of world knowledge. ii) We study the object hallucination problem in classic Image Captioning systems and develop an architecture-agnostic solution. iii) We define a sub-task of Visual Question Answering that requires reading the text in the image (STVQA), where we highlight the limitations of current models. iv) We propose an architecture for the STVQA task that can point to the answer in the image and show how to combine it with classic VQA models. v) We show how far language can get us in STVQA and discover yet another bias which causes the models to disregard the image while doing Visual Question Answering.
 
  Address  
  Corporate Author Thesis Ph.D. thesis  
  Publisher IMPRIMA Place of Publication Editor Dimosthenis Karatzas;Lluis Gomez  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-124793-5-5 Medium  
  Area Expedition Conference  
  Notes DAG Approved no  
  Call Number Admin @ si @ Bit2022 Serial 3755
Permanent link to this record
 

 
Author Dena Bazazian; Dimosthenis Karatzas; Andrew Bagdanov
  Title Soft-PHOC Descriptor for End-to-End Word Spotting in Egocentric Scene Images Type Conference Article
  Year 2018 Publication International Workshop on Egocentric Perception, Interaction and Computing at ECCV Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Word spotting in natural scene images has many applications in scene understanding and visual assistance. We propose Soft-PHOC, an intermediate representation of images based on character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We show how to use our descriptors for word spotting tasks in egocentric camera streams through an efficient text line proposal algorithm. This is based on the Hough Transform over character attribute maps followed by scoring using Dynamic Time Warping (DTW). We evaluate our results on ICDAR 2015 Challenge 4 dataset of incidental scene text captured by an egocentric camera.  
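
Illustrative sketch (not from the paper): one way to score a text-line proposal against a query word with DTW over character probability columns, assuming the Soft-PHOC-style per-pixel character maps have already been sampled along the proposal (e.g. along a Hough-transform line). The alphabet and names are placeholders.

import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def line_score(char_prob_columns, query):
    # char_prob_columns: (W, len(ALPHABET)) character probabilities along the
    # proposal; query: the word being spotted (assumed to use only ALPHABET
    # characters). Local cost = 1 - P(expected character at that column).
    idx = [ALPHABET.index(c) for c in query.lower()]
    Q, W = len(idx), char_prob_columns.shape[0]
    D = np.full((Q + 1, W + 1), np.inf)
    D[0, :] = 0.0                                # the word may start at any column
    for i in range(1, Q + 1):
        for j in range(1, W + 1):
            c = 1.0 - char_prob_columns[j - 1, idx[i - 1]]
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Q, 1:].min() / Q                    # best length-normalized matching cost
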
  Address Munich; Germany; September 2018
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECCVW  
  Notes DAG; 600.129; 600.121; Approved no  
  Call Number Admin @ si @ BKB2018b Serial 3174
Permanent link to this record
 

 
Author Fernando Barrera; Felipe Lumbreras; Cristhian Aguilera; Angel Sappa
  Title Planar-Based Multispectral Stereo Type Conference Article
  Year 2012 Publication 11th Quantitative InfraRed Thermography Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Naples, Italy  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference QIRT  
  Notes ADAS Approved no  
  Call Number Admin @ si @ BLA2012 Serial 2016
Permanent link to this record
 

 
Author Sumit K. Banchhor; Narendra D. Londhe; Tadashi Araki; Luca Saba; Petia Radeva; Narendra N. Khanna; Jasjit S. Suri
  Title Calcium detection, its quantification, and grayscale morphology-based risk stratification using machine learning in multimodality big data coronary and carotid scans: A review. Type Journal Article
  Year 2018 Publication Computers in Biology and Medicine Abbreviated Journal CBM  
  Volume 101 Issue Pages 184-198  
  Keywords Heart disease; Stroke; Atherosclerosis; Intravascular; Coronary; Carotid; Calcium; Morphology; Risk stratification  
  Abstract Purpose of review: Atherosclerosis is the leading cause of cardiovascular disease (CVD) and stroke. Typically, atherosclerotic calcium is found during the mature stage of the atherosclerosis disease. It is therefore often a challenge to identify and quantify the calcium. This is due to the presence of multiple components of plaque buildup in the arterial walls. The American College of Cardiology/American Heart Association guidelines point to the importance of calcium in the coronary and carotid arteries and further recommend its quantification for the prevention of heart disease. It is therefore essential to stratify the CVD risk of the patient into low- and high-risk bins. Recent findings: Calcium formation in the artery walls is multifocal in nature with sizes at the micrometer level. Thus, its detection requires high-resolution imaging. Clinical experience has shown that even though optical coherence tomography offers better resolution, intravascular ultrasound still remains an important imaging modality for coronary wall imaging. For a computer-based analysis system to be complete, it must be scientifically and clinically validated. This study presents a state-of-the-art review (condensation of 152 publications after examining 200 articles) covering the methods for calcium detection and its quantification for coronary and carotid arteries, the pros and cons of these methods, and the risk stratification strategies. The review also presents different kinds of statistical models and gold standard solutions for the evaluation of software systems useful for calcium detection and quantification. Finally, the review concludes with a possible vision for designing the next-generation system for better clinical outcomes.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ BLA2018 Serial 3188
Permanent link to this record
 

 
Author Ricard Borras; Agata Lapedriza; Laura Igual
  Title Depth Information in Human Gait Analysis: An Experimental Study on Gender Recognition Type Conference Article
  Year 2012 Publication 9th International Conference on Image Analysis and Recognition Abbreviated Journal  
  Volume 7325 Issue II Pages 98-105  
  Keywords  
  Abstract This work presents DGait, a new gait database acquired with a depth camera. This database contains videos from 53 subjects walking in different directions. The intent of this database is to provide a public set to explore whether the depth can be used as an additional information source for gait classification purposes. Each video is labelled according to subject, gender and age. Furthermore, for each subject and view point, we provide initial and final frames of an entire walk cycle. On the other hand, we perform gait-based gender classification experiments with DGait database, in order to illustrate the usefulness of depth information for this purpose. In our experiments, we extract 2D and 3D gait features based on shape descriptors, and compare the performance of these features for gender identification, using a Kernel SVM. The obtained results show that depth can be an information source of great relevance for gait classification problems.  
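
Illustrative sketch (not the paper's exact setup): gait-based gender classification with a kernel SVM on precomputed 2D/3D gait descriptors, using scikit-learn; the file names are hypothetical.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one precomputed gait descriptor per walk sequence (e.g. concatenated
# 2D/3D shape features); y: gender labels. File names are hypothetical.
X = np.load("dgait_features.npy")
y = np.load("dgait_gender_labels.npy")

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("gender classification accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
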
  Address Aveiro, Portugal  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-31297-7 Medium  
  Area Expedition Conference ICIAR  
  Notes OR; MILAB;MV Approved no  
  Call Number Admin @ si @ BLI2012 Serial 2009
Permanent link to this record
 

 
Author Federico Bartoli; Giuseppe Lisanti; Svebor Karaman; Andrew Bagdanov; Alberto del Bimbo
  Title Unsupervised scene adaptation for faster multi-scale pedestrian detection Type Conference Article
  Year 2014 Publication 22nd International Conference on Pattern Recognition Abbreviated Journal  
  Volume Issue Pages 3534 - 3539  
  Keywords  
  Abstract  
  Address Stockholm; Sweden; August 2014  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ICPR  
  Notes LAMP; 600.079 Approved no  
  Call Number Admin @ si @ BLK2014 Serial 2519
Permanent link to this record
 

 
Author Fernando Barrera; Felipe Lumbreras; Angel Sappa
  Title Multimodal Stereo Vision System: 3D Data Extraction and Algorithm Evaluation Type Journal Article
  Year 2012 Publication IEEE Journal of Selected Topics in Signal Processing Abbreviated Journal J-STSP  
  Volume 6 Issue 5 Pages 437-446  
  Keywords  
  Abstract This paper proposes an imaging system for computing sparse depth maps from multispectral images. A special stereo head consisting of an infrared and a color camera defines the proposed multimodal acquisition system. The cameras are rigidly attached so that their image planes are parallel. Details about the calibration and image rectification procedure are provided. Sparse disparity maps are obtained by the combined use of mutual information enriched with gradient information. The proposed approach is evaluated using a Receiver Operating Characteristics curve. Furthermore, a multispectral dataset, color and infrared images, together with their corresponding ground truth disparity maps, is generated and used as a test bed. Experimental results in real outdoor scenarios are provided showing its viability and that the proposed approach is not restricted to a specific domain.  
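
Illustrative sketch (not from the paper): a mutual-information matching cost between co-located visible and infrared patches, the kind of cue that this work enriches with gradient information; the window size and binning are arbitrary here.

import numpy as np

def mutual_information(patch_a, patch_b, bins=16):
    # Mutual information between two co-located patches (e.g. grayscale visible
    # vs. infrared), usable as a multimodal stereo matching cost.
    hist_2d, _, _ = np.histogram2d(patch_a.ravel(), patch_b.ravel(), bins=bins)
    pxy = hist_2d / hist_2d.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Higher mutual information means a better multispectral correspondence between
# the candidate patches; disparities are kept where this score is high.
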
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1932-4553 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ BLS2012b Serial 2155
Permanent link to this record
 

 
Author Fernando Barrera; Felipe Lumbreras; Angel Sappa
  Title Multispectral Piecewise Planar Stereo using Manhattan-World Assumption Type Journal Article
  Year 2013 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 34 Issue 1 Pages 52-61  
  Keywords Multispectral stereo rig; Dense disparity maps from multispectral stereo; Color and infrared images  
  Abstract This paper proposes a new framework for extracting dense disparity maps from a multispectral stereo rig. The system is constructed with an infrared and a color camera. It is intended to explore novel multispectral stereo matching approaches that will allow further extraction of semantic information. The proposed framework consists of three stages. Firstly, an initial sparse disparity map is generated by using a cost function based on feature matching in a multiresolution scheme. Then, by looking at the color image, a set of planar hypotheses is defined to describe the surfaces on the scene. Finally, the previous stages are combined by reformulating the disparity computation as a global minimization problem. The paper has two main contributions. The first contribution combines mutual information with a shape descriptor based on gradient in a multiresolution scheme. The second contribution, which is based on the Manhattan-world assumption, extracts a dense disparity representation using the graph cut algorithm. Experimental results in outdoor scenarios are provided showing the validity of the proposed framework.  
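
Illustrative sketch (not from the paper): a planar surface hypothesis expressed directly in disparity space, d(x, y) = a*x + b*y + c, which is the ingredient that lets a piecewise-planar labeling (solved with graph cuts in the paper) produce a dense disparity map. The fitting step shown is a plain least squares; all names are placeholders.

import numpy as np

def fit_plane(xs, ys, ds):
    # Least-squares fit of a disparity plane d = a*x + b*y + c to the sparse
    # disparities (xs, ys, ds) found inside one color-image segment.
    A = np.column_stack([xs, ys, np.ones_like(xs, dtype=float)])
    coeffs, *_ = np.linalg.lstsq(A, ds, rcond=None)
    return coeffs                                   # (a, b, c)

def plane_disparity(plane, height, width):
    # Dense disparity predicted by one planar hypothesis over the whole image;
    # the best hypothesis per pixel is then chosen by the global labeling step.
    a, b, c = plane
    ys, xs = np.mgrid[0:height, 0:width]
    return a * xs + b * ys + c
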
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.054; 600.055; 605.203 Approved no  
  Call Number Admin @ si @ BLS2013 Serial 2245
Permanent link to this record
 

 
Author Francisco Blanco; Felipe Lumbreras; Joan Serrat; Roswitha Siener; Silvia Serranti; Giuseppe Bonifazi; Montserrat Lopez Mesas; Manuel Valiente
  Title Taking advantage of Hyperspectral Imaging classification of urinary stones against conventional IR Spectroscopy Type Journal Article
  Year 2014 Publication Journal of Biomedical Optics Abbreviated Journal JBiO  
  Volume 19 Issue 12 Pages 126004-1 - 126004-9  
  Keywords  
  Abstract The analysis of urinary stones is mandatory for the best management of the disease after the stone passage in order to prevent further stone episodes. Thus, the use of an appropriate methodology for an individualized stone analysis becomes a key factor for giving the patient the most suitable treatment. A recently developed hyperspectral imaging methodology, based on pixel-to-pixel analysis of near-infrared spectral images, is compared to the reference technique in stone analysis, infrared (IR) spectroscopy. The developed classification model yields >90% correct classification rate when compared to IR and is able to precisely locate stone components within the structure of the stone with a 15 µm resolution. Owing to the minimal sample pretreatment, low analysis time, good performance of the model, and the automation of the measurements, which make the analysis analyst-independent, this methodology is a candidate to become a routine analysis for clinical laboratories.
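
Illustrative sketch (not the paper's model): pixel-to-pixel classification of a near-infrared hyperspectral cube with a generic classifier trained on reference spectra; file names and the choice of classifier are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# cube: (H, W, B) near-infrared hyperspectral image of a stone section;
# reference_spectra / reference_labels: spectra of known stone components
# (e.g. from IR-characterized standards).
cube = np.load("stone_cube.npy")
reference_spectra = np.load("reference_spectra.npy")
reference_labels = np.load("reference_labels.npy")

clf = RandomForestClassifier(n_estimators=200).fit(reference_spectra, reference_labels)
h, w, b = cube.shape
component_map = clf.predict(cube.reshape(-1, b)).reshape(h, w)
# component_map locates each stone component pixel by pixel within the section.
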
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; 600.076 Approved no  
  Call Number Admin @ si @ BLS2014 Serial 2563
Permanent link to this record
 

 
Author Souhail Bakkali; Zuheng Ming; Mickael Coustaty; Marçal Rusiñol; Oriol Ramos Terrades
  Title VLCDoC: Vision-Language Contrastive Pre-Training Model for Cross-Modal Document Classification Type Journal Article
  Year 2023 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 139 Issue Pages 109419  
  Keywords  
  Abstract Multimodal learning from document data has achieved great success lately as it allows to pre-train semantically meaningful features as a prior into a learnable downstream approach. In this paper, we approach the document classification problem by learning cross-modal representations through language and vision cues, considering intra- and inter-modality relationships. Instead of merging features from different modalities into a common representation space, the proposed method exploits high-level interactions and learns relevant semantic information from effective attention flows within and across modalities. The proposed learning objective is devised between intra- and inter-modality alignment tasks, where the similarity distribution per task is computed by contracting positive sample pairs while simultaneously contrasting negative ones in the common feature representation space. Extensive experiments on public document classification datasets demonstrate the effectiveness and the generalization capacity of our model on both low-scale and large-scale datasets.
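
Illustrative sketch (not the paper's exact objective): a symmetric InfoNCE-style inter-modality alignment loss of the kind described, which pulls matching vision/language pairs together and contrasts them against the other pairs in the batch; names and the temperature value are placeholders.

import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(vision_feats, text_feats, temperature=0.07):
    # Symmetric contrastive objective between L2-normalized vision and language
    # features of the same documents: matching (positive) pairs are pulled
    # together while all other pairs in the batch act as negatives.
    v = F.normalize(vision_feats, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits = v @ t.T / temperature                 # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
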
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0031-3203 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.140; 600.121 Approved no  
  Call Number Admin @ si @ BMC2023 Serial 3826
Permanent link to this record
 

 
Author Hugo Bertiche; Meysam Madadi; Sergio Escalera
  Title CLOTH3D: Clothed 3D Humans Type Conference Article
  Year 2020 Publication 16th European Conference on Computer Vision Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract This work presents CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences. CLOTH3D contains a large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. We provide the dataset with a generative model for cloth generation. We propose a Conditional Variational Auto-Encoder (CVAE) based on graph convolutions (GCVAE) to learn garment latent spaces. This allows for realistic generation of 3D garments on top of the SMPL model for any pose and shape.
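
Illustrative sketch (not the paper's model): the conditional VAE objective underlying a GCVAE, i.e. reconstruction of garment vertices plus a KL term on the garment latent space. The graph-convolutional encoder/decoder are omitted and the loss weight is a placeholder.

import torch
import torch.nn.functional as F

def reparameterize(mu, logvar):
    # Sampling step of the encoder: z = mu + sigma * eps.
    return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

def cvae_loss(recon_vertices, target_vertices, mu, logvar, kl_weight=1e-3):
    # Conditional VAE objective: reconstruct garment vertex positions given a
    # latent code conditioned on body pose/shape, plus a KL term that keeps the
    # garment latent space close to a unit Gaussian prior.
    recon = F.l1_loss(recon_vertices, target_vertices)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl_weight * kl
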
  Address Virtual; August 2020  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference ECCV  
  Notes HUPBA Approved no  
  Call Number Admin @ si @ BME2020 Serial 3519
Permanent link to this record