Records
Author Zhen Xu; Sergio Escalera; Adrien Pavao; Magali Richard; Wei-Wei Tu; Quanming Yao; Huan Zhao; Isabelle Guyon
Title Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform Type Journal Article
Year 2022 Publication Patterns Abbreviated Journal PATTERNS
Volume 3 Issue 7 Pages 100543
Keywords Machine learning; data science; benchmark platform; reproducibility; competitions
Abstract Obtaining a standardized benchmark of computational methods is a major issue in data-science communities. Dedicated frameworks enabling fair benchmarking in a unified environment are yet to be developed. Here, we introduce Codabench, a meta-benchmark platform that is open sourced and community driven for benchmarking algorithms or software agents versus datasets or tasks. A public instance of Codabench is open to everyone free of charge and allows benchmark organizers to fairly compare submissions under the same setting (software, hardware, data, algorithms), with custom protocols and data formats. Codabench has unique features facilitating easy organization of flexible and reproducible benchmarks, such as the possibility of reusing templates of benchmarks and supplying compute resources on demand. Codabench has been used internally and externally on various applications, receiving more than 130 users and 2,500 submissions. As illustrative use cases, we introduce four diverse benchmarks covering graph machine learning, cancer heterogeneity, clinical diagnosis, and reinforcement learning.
Address June 24, 2022
Corporate Author Thesis
Publisher Science Direct Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA Approved no
Call Number Admin @ si @ XEP2022 Serial 3764
Permanent link to this record
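The Codabench record above describes benchmarks in which organizers define custom protocols for scoring submissions against reference data under a shared environment. As a hedged illustration only (not taken from the paper or the platform documentation), the sketch below shows the general shape an organizer-supplied scoring script might take; the directory layout, file names, and metric are assumptions.

```python
# Hypothetical scoring script for a Codabench-style benchmark.
# Directory names and the scores-file format are illustrative assumptions,
# not the platform's documented API.
import json
import sys
from pathlib import Path

def accuracy(reference, prediction):
    """Fraction of items whose predicted label matches the reference."""
    matched = sum(1 for r, p in zip(reference, prediction) if r == p)
    return matched / len(reference) if reference else 0.0

def main(reference_dir, prediction_dir, output_dir):
    ref = json.loads((Path(reference_dir) / "labels.json").read_text())
    pred = json.loads((Path(prediction_dir) / "labels.json").read_text())
    scores = {"accuracy": accuracy(ref, pred)}
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "scores.json").write_text(json.dumps(scores, indent=2))

if __name__ == "__main__":
    # e.g. python score.py reference/ submission/ output/
    main(sys.argv[1], sys.argv[2], sys.argv[3])
```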
 

 
Author Angel Sappa; M.A. Garcia
Title Coarse-to-Fine Approximation of Range Images with Bounded Error Adaptive Triangular Meshes Type Journal
Year 2007 Publication Journal of Electronic Imaging, 16(2), 023010(11 pages) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ SaG2007b Serial 802
Permanent link to this record
 

 
Author Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
Title Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models Type Journal Article
Year 2023 Publication Sensors – Special Issue on “Machine Learning for Autonomous Driving Perception and Prediction” Abbreviated Journal SENS
Volume 23 Issue 2 Pages 621
Keywords Domain adaptation; semi-supervised learning; Semantic segmentation; Autonomous driving
Abstract Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training entails a costly human-based image-labeling effort, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., neither modifying loss functions nor explicit feature alignment is required. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ∼13 to ∼26 mIoU points over baselines, thus establishing new state-of-the-art results.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; no proj Approved no
Call Number Admin @ si @ GVL2023 Serial 3705
Permanent link to this record
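The abstract of the record above describes a co-training procedure that treats two domain-adapted segmentation models as black boxes and lets them collaborate at the level of pseudo-labeled target images. A minimal sketch of such a collaboration loop follows; the helper callables (train, pseudo_label, select_most_confident) are hypothetical stand-ins for components the abstract does not specify, not the paper's implementation.

```python
# Sketch of black-box co-training at the pseudo-label level (illustrative only;
# helper functions are hypothetical stand-ins, not the paper's actual components).
def cotrain_segmentation(model_a, model_b, labeled_synth, unlabeled_real,
                         train, pseudo_label, select_most_confident, cycles=5):
    """labeled_synth: list of (image, label_map); unlabeled_real: list of images."""
    pool_a, pool_b = [], []
    for _ in range(cycles):
        # Each model pseudo-labels the unlabeled real-world images.
        labels_a = [pseudo_label(model_a, img) for img in unlabeled_real]
        labels_b = [pseudo_label(model_b, img) for img in unlabeled_real]
        # Cross-exchange: images confidently labeled by one model are
        # added to the other model's training pool.
        pool_b += select_most_confident(unlabeled_real, labels_a)
        pool_a += select_most_confident(unlabeled_real, labels_b)
        # Retrain each model on synthetic ground truth plus its collaborator's picks.
        model_a = train(model_a, labeled_synth + pool_a)
        model_b = train(model_b, labeled_synth + pool_b)
    return model_a, model_b
```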
 

 
Author Gabriel Villalonga; Antonio Lopez
Title Co-Training for On-Board Deep Object Detection Type Journal Article
Year 2020 Publication IEEE Access Abbreviated Journal ACCESS
Volume Issue Pages 194441 - 194456
Keywords
Abstract Providing ground truth supervision to train visual models has been a bottleneck over the years, exacerbated by domain shifts which degrade the performance of such models. This was already the case when visual tasks relied on handcrafted features and shallow machine learning, and, despite its unprecedented performance gains, the problem remains open within the deep learning paradigm due to its data-hungry nature. Best-performing deep vision-based object detectors are trained in a supervised manner by relying on human-labeled bounding boxes which localize class instances (i.e. objects) within the training images. Thus, object detection is one such task for which human labeling is a major bottleneck. In this article, we assess co-training as a semi-supervised learning method for self-labeling objects in unlabeled images, so reducing the human-labeling effort for developing deep object detectors. Our study pays special attention to a scenario involving domain shift; in particular, when we have automatically generated virtual-world images with object bounding boxes and we have real-world images which are unlabeled. Moreover, we are particularly interested in using co-training for deep object detection in the context of driver assistance systems and/or self-driving vehicles. Thus, using well-established datasets and protocols for object detection in these application contexts, we show that co-training is a paradigm worth pursuing for alleviating object labeling, working both alone and together with task-agnostic domain adaptation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ ViL2020 Serial 3488
Permanent link to this record
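The record above assesses co-training for self-labeling objects in unlabeled images. One concrete sub-step such a procedure needs is deciding which self-labeled bounding boxes are trustworthy enough to feed back as training data. The sketch below illustrates one plausible selection rule, keeping boxes that are confident for one detector and independently confirmed by the other; the thresholds and agreement criterion are assumptions, not the paper's values.

```python
# Illustrative selection of self-labeled bounding boxes for co-training
# (confidence and agreement thresholds are assumptions, not the paper's).
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def select_pseudo_boxes(dets_a, dets_b, min_conf=0.8, min_iou=0.5):
    """dets_*: lists of (box, score) from the two detectors on one unlabeled image.
    Keep confident boxes from detector A that detector B independently confirms."""
    kept = []
    for box_a, score_a in dets_a:
        if score_a < min_conf:
            continue
        if any(iou(box_a, box_b) >= min_iou and score_b >= min_conf
               for box_b, score_b in dets_b):
            kept.append(box_a)
    return kept
```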
 

 
Author Volkmar Frinken; Andreas Fischer; Horst Bunke; Alicia Fornes
Title Co-training for Handwritten Word Recognition Type Conference Article
Year 2011 Publication 11th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume Issue Pages 314-318
Keywords
Abstract To cope with the tremendous variations of writing styles encountered between different individuals, unconstrained automatic handwriting recognition systems need to be trained on large sets of labeled data. Traditionally, the training data has to be labeled manually, which is a laborious and costly process. Semi-supervised learning techniques offer methods to utilize unlabeled data, which can be obtained cheaply in large amounts, in order to reduce the need for labeled data. In this paper, we propose the use of Co-Training for improving the recognition accuracy of two weakly trained handwriting recognition systems. The first one is based on Recurrent Neural Networks while the second one is based on Hidden Markov Models. On the IAM off-line handwriting database we demonstrate that a significant increase in recognition accuracy can be achieved with Co-Training for single word recognition.
Address Beijing, China
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG Approved no
Call Number Admin @ si @ FFB2011 Serial 1789
Permanent link to this record
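The record above applies classic co-training between two weakly trained recognizers (a recurrent neural network and a hidden Markov model). As a hedged sketch of the general scheme, assuming each recognizer exposes hypothetical fit() and recognize()-with-confidence interfaces, the loop below lets each recognizer label its most confident unlabeled words as training data for the other; it is not the paper's implementation.

```python
# Generic two-recognizer co-training loop (illustrative sketch; the interfaces
# fit()/recognize() and the confidence threshold are assumptions).
def cotrain(rec_a, rec_b, labeled, unlabeled, rounds=10, threshold=0.9):
    """labeled: list of (word_image, transcription); unlabeled: list of word images."""
    train_a = list(labeled)
    train_b = list(labeled)
    remaining = list(unlabeled)
    for _ in range(rounds):
        rec_a.fit(train_a)
        rec_b.fit(train_b)
        still_unlabeled = []
        for word_image in remaining:
            text_a, conf_a = rec_a.recognize(word_image)
            text_b, conf_b = rec_b.recognize(word_image)
            # A confident output of one recognizer becomes training data for the other.
            if conf_a >= threshold:
                train_b.append((word_image, text_a))
            elif conf_b >= threshold:
                train_a.append((word_image, text_b))
            else:
                still_unlabeled.append(word_image)
        remaining = still_unlabeled
    return rec_a, rec_b
```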
 

 
Author Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
Title Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches Type Journal Article
Year 2021 Publication Sensors Abbreviated Journal SENS
Volume 21 Issue 9 Pages 3185
Keywords co-training; multi-modality; vision-based object detection; ADAS; self-driving
Abstract Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT to train deep object detectors. In particular, we assess the goodness of multi-modal co-training by relying on two different views of an image, namely, appearance (RGB) and estimated depth (D). Moreover, we compare appearance-based single-modal co-training with multi-modal. Our results suggest that in a standard SSL setting (no domain shift, a few human-labeled data) and under virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data) multi-modal co-training outperforms single-modal. In the latter case, by performing GAN-based domain translation both co-training modalities are on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; 600.118 Approved no
Call Number Admin @ si @ GVL2021 Serial 3562
Permanent link to this record
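The multi-modal co-training described in the record above relies on two views of the same image: appearance (RGB) and estimated depth (D). The snippet below is a hedged sketch of how such paired views could be prepared before entering a co-training loop like the ones sketched earlier; the depth-estimation callable is a hypothetical placeholder, not the off-the-shelf model used in the paper.

```python
# Building RGB and depth "views" for multi-modal co-training (illustrative only).
import numpy as np

def make_views(rgb_image, depth_estimator):
    """Return the two co-training views of one image.

    rgb_image: HxWx3 uint8 array.
    depth_estimator: callable mapping an RGB image to an HxW depth map
                     (hypothetical placeholder for an off-the-shelf model).
    """
    depth = depth_estimator(rgb_image)                              # HxW float map
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    depth_view = np.repeat(depth[..., None], 3, axis=2)             # 3-channel copy
    return rgb_image, (255 * depth_view).astype(np.uint8)

# One detector is then trained per view; their confident, agreeing detections
# on unlabeled images are exchanged as self-labeled ground truth.
```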
 

 
Author A. Martinez; Jordi Vitria
Title Clustering in Image Space for Place Recognition and Visual Annotations for Human-Robot Interaction. Type Journal
Year 2001 Publication IEEE Trans. on Systems, Man, and Cybernetics–Part B: Cybernetics, 31(5):669–682 (IF: 0.789) Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ MVi2001 Serial 141
Permanent link to this record
 

 
Author Hugo Bertiche; Meysam Madadi; Sergio Escalera
Title CLOTH3D: Clothed 3D Humans Type Conference Article
Year 2020 Publication 16th European Conference on Computer Vision Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This work presents CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences. CLOTH3D contains a large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. We provide the dataset with a generative model for cloth generation. We propose a Conditional Variational Auto-Encoder (CVAE) based on graph convolutions (GCVAE) to learn garment latent spaces. This allows for realistic generation of 3D garments on top of the SMPL model for any pose and shape.
Address Virtual; August 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECCV
Notes HUPBA Approved no
Call Number Admin @ si @ BME2020 Serial 3519
Permanent link to this record
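The record above mentions a conditional variational auto-encoder built on graph convolutions (GCVAE) that learns garment latent spaces conditioned on pose and shape. Below is a compact, hedged PyTorch sketch of that idea, assuming a fixed mesh topology with a precomputed normalized adjacency matrix; layer sizes and the conditioning vector are illustrative, not the CLOTH3D architecture.

```python
# Minimal conditional VAE with simple graph convolutions (illustrative sketch;
# sizes and conditioning are assumptions, not the paper's architecture).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """x' = A_hat @ x @ W, with A_hat a fixed, normalized mesh adjacency."""
    def __init__(self, a_hat, in_dim, out_dim):
        super().__init__()
        self.register_buffer("a_hat", a_hat)      # (V, V)
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x):                          # x: (B, V, in_dim)
        return self.lin(self.a_hat @ x)

class GCVAE(nn.Module):
    def __init__(self, a_hat, n_verts, cond_dim, latent_dim=32, hidden=64):
        super().__init__()
        self.enc = GraphConv(a_hat, 3 + cond_dim, hidden)
        self.to_mu = nn.Linear(n_verts * hidden, latent_dim)
        self.to_logvar = nn.Linear(n_verts * hidden, latent_dim)
        self.from_z = nn.Linear(latent_dim + cond_dim, n_verts * hidden)
        self.dec = GraphConv(a_hat, hidden, 3)     # decode back to per-vertex offsets

    def forward(self, verts, cond):                # verts: (B, V, 3), cond: (B, C)
        b, v, _ = verts.shape
        c = cond[:, None, :].expand(b, v, cond.shape[-1])
        h = torch.relu(self.enc(torch.cat([verts, c], dim=-1))).reshape(b, -1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterize
        h_dec = self.from_z(torch.cat([z, cond], dim=-1)).reshape(b, v, -1)
        return self.dec(h_dec), mu, logvar
```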
 

 
Author Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Maroua Hammami; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace
Title Clinical Usability Quantification Of a Real-Time Polyp Detection Method In Videocolonoscopy Type Conference Article
Year 2017 Publication 25th United European Gastroenterology Week Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona, October 2017
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ESGE
Notes MV; no menciona Approved no
Call Number Admin @ si @ ABS2017c Serial 2978
Permanent link to this record
 

 
Author Pierluigi Casale; Oriol Pujol; Petia Radeva
Title Classifying Agitation in Sedated ICU Patients Type Conference Article
Year 2010 Publication Medical Image Computing in Catalunya: Graduate Student Workshop Abbreviated Journal
Volume Issue Pages 19–20
Keywords
Abstract Agitation is a serious problem in sedated intensive care unit (ICU) patients. In this work, standard machine learning techniques working on wearable accelerometer data have been used to classify agitation levels, achieving very good classification performance.
Address Girona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference MICCAT
Notes MILAB;HUPBA Approved no
Call Number BCNPCL @ bcnpcl @ COR2010 Serial 1467
Permanent link to this record
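The record above reports that standard machine learning on wearable accelerometer data was used to classify agitation levels. The sketch below shows one plausible form of such a pipeline, with a hypothetical window length, feature set, and classifier; the study's actual protocol is not specified in the abstract.

```python
# Illustrative accelerometer-windowing pipeline (window size, features and
# classifier are assumptions, not the study's protocol).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, window=128):
    """signal: (N, 3) accelerometer samples; returns one feature row per window."""
    feats = []
    for start in range(0, len(signal) - window + 1, window):
        w = signal[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     np.abs(np.diff(w, axis=0)).mean(axis=0)]))
    return np.array(feats)

# Usage sketch: X_train holds windowed features, y_train an agitation level per window.
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# predictions = clf.predict(window_features(new_recording))
```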
 

 
Author Eloi Puertas; Sergio Escalera; Oriol Pujol
Title Classifying Objects at Different Sizes with Multi-Scale Stacked Sequential Learning Type Conference Article
Year 2010 Publication 13th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 220 Issue Pages 193–200
Keywords
Abstract Sequential learning is the discipline of machine learning that deals with dependent data. In this paper, we use the Multi-scale Stacked Sequential Learning approach (MSSL) to solve the task of pixel-wise classification based on contextual information. The main contribution of this work is a shifting technique applied during the testing phase that makes it possible, thanks to template images, to classify objects at different sizes. The results show that the proposed method robustly classifies such objects, capturing their spatial relationships.
Address
Corporate Author Thesis
Publisher Place of Publication Editor R. Alquezar, A. Moreno, J. Aguilar
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-60750-642-3 Medium
Area Expedition Conference CCIA
Notes HUPBA;MILAB Approved no
Call Number BCNPCL @ bcnpcl @ PEP2010 Serial 1448
Permanent link to this record
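The MSSL approach referenced above stacks two classifiers: the first produces a per-pixel label (probability) field, multi-scale poolings of that field become contextual features, and a second classifier consumes both. The sketch below illustrates that two-stage idea in the binary case; the scales, pooling operator, and classifiers are illustrative assumptions, and the paper's shifting technique with template images is not reproduced here.

```python
# Two-stage multi-scale stacked sequential learning sketch (illustrative only;
# binary labels assumed for simplicity).
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def multiscale_context(prob_map, scales=(3, 9, 27)):
    """Stack mean-pooled versions of the stage-1 probability map at several scales."""
    return np.stack([uniform_filter(prob_map, size=s) for s in scales], axis=-1)

def mssl_fit_predict(X_img, y_img, X_test):
    """X_img, X_test: (H, W, F) per-pixel features; y_img: (H, W) binary labels."""
    h, w, f = X_img.shape
    stage1 = LogisticRegression(max_iter=1000).fit(X_img.reshape(-1, f),
                                                   y_img.reshape(-1))
    # Stage-1 probability of the positive class, reshaped back to the image grid.
    p_train = stage1.predict_proba(X_img.reshape(-1, f))[:, 1].reshape(h, w)
    p_test = stage1.predict_proba(X_test.reshape(-1, f))[:, 1].reshape(h, w)
    # Contextual features: multi-scale poolings of the predicted label field.
    ctx_train = multiscale_context(p_train).reshape(h * w, -1)
    ctx_test = multiscale_context(p_test).reshape(h * w, -1)
    stage2 = LogisticRegression(max_iter=1000).fit(
        np.hstack([X_img.reshape(-1, f), ctx_train]), y_img.reshape(-1))
    return stage2.predict(np.hstack([X_test.reshape(-1, f), ctx_test])).reshape(h, w)
```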
 

 
Author David Guillamet; Jordi Vitria
Title Classifying Faces with Non-negative Matrix Factorization. Type Miscellaneous
Year 2002 Publication Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ GuV2002b Serial 312
Permanent link to this record
 

 
Author David Masip; Jordi Vitria
Title Classifier Combination Applied to Real Time Face Detection and Classification. Type Book Chapter
Year 2004 Publication Recerca Automatica, Visio i Robotica, Ed. UPC, A. Grau, V. Puig (Eds.), 345–353, ISBN 84–7653–844–8 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ MBV2004b Serial 449
Permanent link to this record
 

 
Author David Masip; M. Bressan; Jordi Vitria
Title Classifier Combination Applied to Real Time Face Detection and Classification Type Miscellaneous
Year 2004 Publication AVR2004 Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address Barcelona
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;MV Approved no
Call Number BCNPCL @ bcnpcl @ MBV2004a Serial 448
Permanent link to this record
 

 
Author Carolina Malagelada; Michal Drozdzal; Santiago Segui; Sara Mendez; Jordi Vitria; Petia Radeva; Javier Santos; Anna Accarino; Juan R. Malagelada; Fernando Azpiroz
Title Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis Type Journal Article
Year 2015 Publication American Journal of Physiology-Gastrointestinal and Liver Physiology Abbreviated Journal AJPGI
Volume 309 Issue 6 Pages G413–G419
Keywords capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning
Abstract We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after overnight fast and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited less luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
Address
Corporate Author Thesis
Publisher American Physiological Society Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR;MV Approved no
Call Number Admin @ si @ MDS2015 Serial 2666
Permanent link to this record
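The study in the record above first defines a normal range from healthy subjects and then clusters the patients falling outside it into hypo- and hyperdynamic subgroups. The sketch below illustrates that logic on top of pre-computed per-subject motility descriptors; the features, the normality criterion, and the clustering method are illustrative assumptions, not the paper's methodology.

```python
# Illustrative normal-range detection plus clustering of abnormal cases.
import numpy as np
from sklearn.covariance import EmpiricalCovariance
from sklearn.cluster import KMeans

def classify_motility(healthy_feats, patient_feats, percentile=99):
    """healthy_feats, patient_feats: (n_subjects, n_features) motility descriptors."""
    # Normal range: Mahalanobis-distance envelope fitted on healthy subjects.
    cov = EmpiricalCovariance().fit(healthy_feats)
    threshold = np.percentile(cov.mahalanobis(healthy_feats), percentile)
    abnormal = cov.mahalanobis(patient_feats) > threshold
    # Cluster abnormal patients into two subgroups (e.g., hypo- vs hyperdynamic).
    labels = np.full(len(patient_feats), -1)          # -1 = within normal range
    if abnormal.sum() >= 2:
        labels[abnormal] = KMeans(n_clusters=2, n_init=10).fit_predict(
            patient_feats[abnormal])
    return labels
```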