Author: Xavier Soria; Angel Sappa; Riad I. Hammoud
Title: Wide-Band Color Imagery Restoration for RGB-NIR Single Sensor Images
Type: Journal Article
Year: 2018
Publication: Sensors (SENS)
Volume: 18; Issue: 7; Pages: 2059
Keywords: RGB-NIR sensor; multispectral imaging; deep learning; CNNs
Abstract: Multi-spectral RGB-NIR sensors have become ubiquitous in recent years. These sensors allow the visible and near-infrared spectral bands of a given scene to be captured at the same time. With such cameras, the acquired imagery has a compromised RGB color representation due to near-infrared bands (700–1100 nm) cross-talking with the visible bands (400–700 nm). This paper proposes two deep learning-based architectures to recover the full RGB color images, thus removing the NIR information from the visible bands. The proposed approaches directly restore the high-resolution RGB image by means of convolutional neural networks. They are evaluated on several outdoor images; both architectures reach similar performance when evaluated in different scenarios and with different similarity metrics, and both improve on state-of-the-art approaches.
Notes: ADAS; MSIAU; 600.086; 600.130; 600.122; 600.118
Call Number: Admin @ si @ SSH2018; Serial: 3145
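A minimal sketch of the kind of CNN-based restoration the abstract describes: a small residual network that maps an NIR-contaminated RGB image to a restored RGB image. The architecture (channel width, depth, residual connection) is an illustrative assumption, not the authors' exact models.

import torch
import torch.nn as nn

class RGBRestorationNet(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 3, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        # Predict a residual correction so the network only has to model
        # the NIR contamination, not the whole image content.
        return x + self.body(x)

net = RGBRestorationNet()
contaminated = torch.rand(1, 3, 256, 256)  # placeholder RGB frame with NIR cross-talk
restored = net(contaminated)               # full-color RGB estimate, same resolution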
 

 
Author: Jaume Amores
Title: Multiple Instance Classification: review, taxonomy and comparative study
Type: Journal Article
Year: 2013
Publication: Artificial Intelligence (AI)
Volume: 201; Pages: 81-105
Keywords: Multi-instance learning; Codebook; Bag-of-Words
Abstract: Multiple Instance Learning (MIL) has become an important topic in the pattern recognition community, and many solutions to this problem have been proposed to date. Despite this, there is a lack of comparative studies that shed light on the characteristics and behavior of the different methods. In this work we provide such an analysis focused on the classification task (i.e., leaving out other learning tasks such as regression). To perform our study, we implemented fourteen methods grouped into three different families. We analyze the performance of the approaches across a variety of well-known databases, and we also study their behavior in synthetic scenarios in order to highlight their characteristics. As a result of this analysis, we conclude that methods that extract global bag-level information show a clearly superior performance in general. In this sense, the analysis allows us to understand why some types of methods are more successful than others, and to establish guidelines for the design of new MIL methods.
Publisher: Elsevier Science Publishers Ltd., Essex, UK
ISSN: 0004-3702
Notes: ADAS; 601.042; 600.057
Call Number: Admin @ si @ Amo2013; Serial: 2273
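A hedged sketch of the family of methods the survey finds strongest: each bag of instances is summarized by a single global bag-level descriptor (here a simple mean of instance features, a crude stand-in for the codebook/Bag-of-Words encodings the paper covers) and then classified with a standard learner. Data, features, and the SVM choice are synthetic placeholders.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def embed_bag(instances):
    # Global bag-level summary: mean-pool the instance feature vectors.
    return instances.mean(axis=0)

# Toy bags of 3-9 instances each: positive bags centered at +1, negative at -1.
bags = [rng.normal(loc=label, size=(rng.integers(3, 10), 16))
        for label in (1, -1) * 50]
labels = [1, -1] * 50

X = np.stack([embed_bag(b) for b in bags])   # one 16-d vector per bag
clf = SVC(kernel="rbf").fit(X, labels)       # ordinary single-instance classifier
print("training accuracy:", clf.score(X, labels))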
 

 
Author: Gabriel Villalonga; Joost Van de Weijer; Antonio Lopez
Title: Recognizing new classes with synthetic data in the loop: application to traffic sign recognition
Type: Journal Article
Year: 2020
Publication: Sensors (SENS)
Volume: 20; Issue: 3; Pages: 583
Abstract: On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples of known classes to perform synthetic-to-real domain adaptation, but is applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a ratio of ∼1/4 new/known classes; even for more challenging ratios such as ∼4/1, the results are very positive.
Notes: LAMP; ADAS; 600.118; 600.120
Call Number: Admin @ si @ VWL2020; Serial: 3405
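A sketch of the data pipeline the abstract describes: synthetic samples of the new sign classes are passed through a synthetic-to-real translator (a trained CycleGAN generator in the paper; here an untrained placeholder convolution), then mixed with real samples of the known classes to train the classifier. All modules, shapes, and class counts below are illustrative assumptions.

import torch
import torch.nn as nn

translator = nn.Conv2d(3, 3, 3, padding=1)   # stand-in for CycleGAN G: synthetic -> real
classifier = nn.Sequential(                  # stand-in for the ResNet101-based classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

real_known = torch.rand(8, 3, 64, 64)        # real images of known sign classes
synth_new = torch.rand(8, 3, 64, 64)         # synthetic images of new sign classes
labels = torch.cat([torch.randint(0, 6, (8,)),     # known-class labels
                    torch.randint(6, 10, (8,))])   # new-class labels

with torch.no_grad():
    fake_real_new = translator(synth_new)    # domain-adapted new-class samples

batch = torch.cat([real_known, fake_real_new])
loss = nn.CrossEntropyLoss()(classifier(batch), labels)
loss.backward()                              # one illustrative classifier training step
print(float(loss))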
 

 
Author: Miguel Oliveira; Victor Santos; Angel Sappa
Title: Multimodal Inverse Perspective Mapping
Type: Journal Article
Year: 2015
Publication: Information Fusion (IF)
Volume: 24; Pages: 108–121
Keywords: Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles
Abstract: Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free-space estimation. There is, however, a significant limitation in inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.
Notes: ADAS; 600.055; 600.076
Call Number: Admin @ si @ OSS2015c; Serial: 2532
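A minimal sketch of inverse perspective mapping with an obstacle mask, in the spirit of the paper: pixels flagged as obstacles (by a laser range finder in the paper; here a hand-made mask) are excluded before the image is warped into a bird's-eye ground-plane view. The homography comes from four illustrative image/ground correspondences, not a calibrated rig.

import cv2
import numpy as np

img = np.full((480, 640, 3), 128, np.uint8)   # placeholder camera frame
obstacle_mask = np.zeros((480, 640), np.uint8)
obstacle_mask[200:300, 250:400] = 1           # region the range finder flagged as obstacle

# Four image points (road trapezoid) mapped to a ground-plane rectangle (pixels).
src = np.float32([[220, 300], [420, 300], [620, 470], [20, 470]])
dst = np.float32([[100, 0], [300, 0], [300, 400], [100, 400]])
H = cv2.getPerspectiveTransform(src, dst)

img_no_obstacles = img.copy()
img_no_obstacles[obstacle_mask == 1] = 0      # keep obstacle pixels out of the mapping
birdseye = cv2.warpPerspective(img_no_obstacles, H, (400, 400))
print(birdseye.shape)                         # (400, 400, 3) ground-plane view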
 

 
Author: David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo
Title: Virtual and Real World Adaptation for Pedestrian Detection
Type: Journal Article
Year: 2014
Publication: IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
Volume: 36; Issue: 4; Pages: 797-809
Keywords: Domain Adaptation; Pedestrian Detection
Abstract: Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world-based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world-based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
ISSN: 0162-8828
Notes: ADAS; 600.057; 600.054; 600.076
Call Number: ADAS @ adas @ VML2014; Serial: 2275
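A hedged sketch of the mixed-domain training idea behind V-AYLA: a pedestrian classifier is trained on many virtual-world samples plus a few real-world ones, with the scarce target samples up-weighted. The features, domain shift, labels, weights, and linear model are all synthetic stand-ins for the paper's descriptors and detectors.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_virtual = rng.normal(0.0, 1.0, (2000, 64))    # abundant source (virtual-world) features
y_virtual = (X_virtual[:, 0] > 0).astype(int)   # toy pedestrian/background labels
X_real = rng.normal(0.3, 1.0, (50, 64))         # few target (real-world) features, shifted
y_real = (X_real[:, 0] > 0.3).astype(int)

X = np.vstack([X_virtual, X_real])
y = np.concatenate([y_virtual, y_real])
weights = np.concatenate([np.ones(2000), np.full(50, 20.0)])  # emphasize scarce real samples

clf = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)
print("accuracy on the real domain:", clf.score(X_real, y_real))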
 

 
Author: Enric Marti; Carme Julia; Debora Gil
Title: A PBL Experience in the Teaching of Computer Graphics
Type: Journal Article
Year: 2006
Publication: Computer Graphics Forum (CGF)
Volume: 25; Issue: 1; Pages: 95-103
Abstract: Project-Based Learning (PBL) is an educational strategy to improve students' learning capability that, in recent years, has gained progressive acceptance in undergraduate studies. This methodology is based on solving a problem or project in a student working group. In this way, PBL focuses on learning the tools needed to correctly find a solution to given problems. Since the learning initiative is transferred to the student, the PBL method promotes students' own abilities. This allows a better assessment of the true workload the student carries out in the subject. It follows that the methodology conforms to the guidelines of the Bologna document, which quantifies the student workload in a subject by means of the European Credit Transfer System (ECTS). PBL is currently applied in undergraduate studies needing strong practical training, such as medicine, nursing or law. Although this is also the case in engineering studies, surprisingly few experiences have been reported. In this paper we propose to use PBL in the educational organization of the Computer Graphics subjects in the Computer Science degree. Our PBL project focuses on the development of a C++ graphical environment based on the OpenGL libraries for visualization and handling of different graphical objects. The starting point is a basic skeleton that already includes lighting functions, perspective projection with mouse interaction to change the point of view, and three predefined objects. Students have to complete this skeleton by adding their own functions to solve the project. A total of 10 projects have been proposed and successfully solved. The exercises range from human face rendering to articulated objects, such as robot arms or puppets. In the present paper we report in detail the statement and educational objectives of two of the projects: a solar system visualization and a chess game. We describe our earlier educational experience based on the standard classroom theory, problem and practice sessions, and the reasons that motivated our search for other learning methods. We mainly chose PBL because it improves the students' learning initiative. We have applied the PBL educational model since the beginning of the second semester. Student feedback shows an increased interest in the subject. We present a comparative study of teachers' and students' workload between PBL and the classic teaching approach, which suggests that the workload increase in PBL is not as high as it seems.
Publisher: Computer Graphics Forum
Place of Publication: Computer Vision Center, Computer Science Department, Escola Tècnica Superior d'Enginyeria (UAB)
Notes: IAM; ADAS
Call Number: IAM @ iam @ MJG2006a; Serial: 1607
 

 
Author: Gabriel Villalonga; Antonio Lopez
Title: Co-Training for On-Board Deep Object Detection
Type: Journal Article
Year: 2020
Publication: IEEE Access (ACCESS)
Pages: 194441-194456
Abstract: Providing ground-truth supervision to train visual models has been a bottleneck over the years, exacerbated by domain shifts which degrade the performance of such models. This was the case when visual tasks relied on handcrafted features and shallow machine learning and, despite its unprecedented performance gains, the problem remains open within the deep learning paradigm due to its data-hungry nature. The best-performing deep vision-based object detectors are trained in a supervised manner by relying on human-labeled bounding boxes which localize class instances (i.e., objects) within the training images. Thus, object detection is one of the tasks for which human labeling is a major bottleneck. In this article, we assess co-training as a semi-supervised learning method for self-labeling objects in unlabeled images, thus reducing the human-labeling effort for developing deep object detectors. Our study pays special attention to a scenario involving domain shift; in particular, when we have automatically generated virtual-world images with object bounding boxes and we have real-world images which are unlabeled. Moreover, we are particularly interested in using co-training for deep object detection in the context of driver assistance systems and/or self-driving vehicles. Thus, using well-established datasets and protocols for object detection in these application contexts, we show how co-training is a paradigm worth pursuing for alleviating object labeling, working both alone and together with task-agnostic domain adaptation.
Notes: ADAS; 600.118
Call Number: Admin @ si @ ViL2020; Serial: 3488
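A toy sketch of the co-training self-labeling loop the article studies: two classifiers trained on different feature views label unlabeled samples for each other, keeping only confident predictions. The paper co-trains object detectors; the two-view linear classifiers, confidence threshold, and synthetic data below are simplifying assumptions to show the loop itself.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 10] > 0).astype(int)          # each view is partially informative
labeled, unlabeled = np.arange(40), np.arange(40, 500)
view_a, view_b = slice(0, 10), slice(10, 20)       # two feature "views"

clf_a = LogisticRegression().fit(X[labeled, view_a], y[labeled])
clf_b = LogisticRegression().fit(X[labeled, view_b], y[labeled])

for _ in range(5):                                 # a few co-training rounds
    pa = clf_a.predict_proba(X[unlabeled, view_a])
    pb = clf_b.predict_proba(X[unlabeled, view_b])
    conf_a = unlabeled[pa.max(1) > 0.95]           # A's confident self-labels feed B
    conf_b = unlabeled[pb.max(1) > 0.95]           # and vice versa
    if len(conf_a) == 0 or len(conf_b) == 0:
        break
    clf_b.fit(X[np.r_[labeled, conf_a], view_b],
              np.r_[y[labeled], clf_a.predict(X[conf_a, view_a])])
    clf_a.fit(X[np.r_[labeled, conf_b], view_a],
              np.r_[y[labeled], clf_b.predict(X[conf_b, view_b])])

print("co-trained accuracy:", clf_a.score(X[:, view_a], y))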
 

 
Author: Jiaolong Xu; Liang Xiao; Antonio Lopez
Title: Self-supervised Domain Adaptation for Computer Vision Tasks
Type: Journal Article
Year: 2019
Publication: IEEE Access (ACCESS)
Volume: 7; Pages: 156694-156706
Abstract: Recent progress in self-supervised visual representation learning has achieved remarkable success on many challenging computer vision benchmarks. However, whether these techniques can be used for domain adaptation has not been explored. In this work, we propose a generic method for self-supervised domain adaptation, using object recognition and semantic segmentation of urban scenes as use cases. Focusing on simple pretext/auxiliary tasks (e.g., image rotation prediction), we assess different learning strategies to improve domain adaptation effectiveness through self-supervision. Additionally, we propose two complementary strategies to further boost the domain adaptation accuracy of semantic segmentation within our method, consisting of prediction-layer alignment and batch-normalization calibration. The experimental results show adaptation levels comparable to the most-studied domain adaptation methods, thus establishing self-supervision as a new alternative for reaching domain adaptation. The code is available at https://github.com/Jiaolong/self-supervised-da.
Notes: ADAS; 600.118
Call Number: Admin @ si @ XXL2019; Serial: 3302
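A minimal sketch of the rotation-prediction pretext task the abstract names: images from either domain are rotated by 0/90/180/270 degrees and the network predicts the rotation class, requiring no labels. The tiny backbone and tensor shapes are illustrative assumptions, not the paper's networks.

import torch
import torch.nn as nn

backbone = nn.Sequential(                        # stand-in feature extractor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())
rot_head = nn.Linear(32, 4)                      # 4-way rotation classifier

images = torch.rand(16, 3, 64, 64)               # unlabeled images from any domain
k = torch.randint(0, 4, (16,))                   # rotation class per image (x 90 degrees)
rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                       for img, r in zip(images, k)])

loss = nn.CrossEntropyLoss()(rot_head(backbone(rotated)), k)
loss.backward()                                  # one self-supervised update step
print(float(loss))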
 

 
Author: Lluis Pere de las Heras; Oriol Ramos Terrades; Sergi Robles; Gemma Sanchez
Title: CVC-FP and SGT: a new database for structural floor plan analysis and its groundtruthing tool
Type: Journal Article
Year: 2015
Publication: International Journal on Document Analysis and Recognition (IJDAR)
Volume: 18; Issue: 1; Pages: 15-30
Abstract: Recent results on structured learning methods have shown the impact of structural information in a wide range of pattern recognition tasks. In the field of document image analysis, there is long experience with structural methods for the analysis and information extraction of multiple types of documents. Yet the lack of conveniently annotated, freely accessible databases has hindered progress in some areas, such as technical drawing understanding. In this paper, we present a floor plan database, named CVC-FP, that is annotated for architectural objects and their structural relations. To construct this database, we have implemented a groundtruthing tool, the SGT tool, that allows this kind of information to be specified in a natural manner. The tool was built for general-purpose groundtruthing: it allows users to define their own object classes and properties, supports multiple labeling options, enables cooperative work, and provides user and version control. Finally, we have collected some of the recent work on floor plan interpretation and present a quantitative benchmark for this database. Both the CVC-FP database and the SGT tool are freely released to the research community to ease comparisons between methods and boost reproducible research.
Publisher: Springer Berlin Heidelberg
ISSN: 1433-2833
Notes: DAG; ADAS; 600.061; 600.076; 600.077
Call Number: Admin @ si @ HRR2015; Serial: 2567
 

 
Author: Idoia Ruiz; Joan Serrat
Title: Hierarchical Novelty Detection for Traffic Sign Recognition
Type: Journal Article
Year: 2022
Publication: Sensors (SENS)
Volume: 22; Issue: 12; Pages: 4389
Keywords: Novelty detection; hierarchical classification; deep learning; traffic sign recognition; autonomous driving; computer vision
Abstract: Recent works have made significant progress in novelty detection, i.e., the problem of detecting samples of novel classes, never seen during training, while classifying those that belong to known classes. However, the only information this task provides about novel samples is that they are unknown. In this work, we leverage hierarchical taxonomies of classes to provide informative outputs for samples of novel classes: we predict their closest class in the taxonomy, i.e., their parent class. We address this problem, known as hierarchical novelty detection, by proposing a novel loss, the Hierarchical Cosine Loss, designed to learn class prototypes along with an embedding of discriminative features consistent with the taxonomy. We apply it to traffic sign recognition, where we predict the parent-class semantics for new types of traffic signs. Our model beats state-of-the-art approaches on two large-scale traffic sign benchmarks, the Mapillary Traffic Sign Dataset (MTSD) and Tsinghua-Tencent 100K (TT100K), and performs similarly on natural image benchmarks (AWA2, CUB). For TT100K and MTSD, our approach detects novel samples at the correct nodes of the hierarchy with 81% and 36% accuracy, respectively, at 80% known-class accuracy.
Notes: ADAS; 600.154
Call Number: Admin @ si @; Serial: 3684
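A hedged sketch of a cosine-prototype loss in the spirit of the Hierarchical Cosine Loss: class prototypes and embeddings are L2-normalized onto the unit sphere and cross-entropy is applied to scaled cosine similarities. The taxonomy-consistency terms of the actual loss are omitted; the dimensions and scale factor are assumptions.

import torch
import torch.nn.functional as F

num_classes, dim, scale = 10, 128, 16.0
prototypes = torch.nn.Parameter(torch.randn(num_classes, dim))

def cosine_loss(embeddings, labels):
    # Cosine similarity between normalized embeddings and class prototypes.
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = scale * z @ p.t()
    return F.cross_entropy(logits, labels)

emb = torch.randn(32, dim, requires_grad=True)   # features from some backbone
labels = torch.randint(0, num_classes, (32,))
loss = cosine_loss(emb, labels)
loss.backward()                                   # gradients flow to prototypes and features
print(float(loss))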