Records | |||||
---|---|---|---|---|---|
Author | G. Lisanti; I. Masi; Andrew Bagdanov; Alberto del Bimbo | ||||
Title | Person Re-identification by Iterative Re-weighted Sparse Ranking | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 37 | Issue | 8 | Pages | 1629 - 1642 |
Keywords | |||||
Abstract | In this paper we introduce a method for person re-identification based on discriminative, sparse basis expansions of targets in terms of a labeled gallery of known individuals. We propose an iterative extension to sparse discriminative classifiers capable of ranking many candidate targets. The approach makes use of soft and hard re-weighting to redistribute energy among the most relevant contributing elements and to ensure that the best candidates are ranked at each iteration. Our approach also leverages a novel visual descriptor which we show to be discriminative while remaining robust to pose and illumination variations. An extensive comparative evaluation is given demonstrating that our approach achieves state-of-the-art performance on single- and multi-shot person re-identification scenarios on the VIPeR, i-LIDS, ETHZ, and CAVIAR4REID datasets. The combination of our descriptor and iterative sparse basis expansion improves state-of-the-art rank-1 performance by six percentage points on VIPeR and by 20 on CAVIAR4REID compared to other methods with a single gallery image per person. With multiple gallery and probe images per person our approach improves the state-of-the-art at rank-1 by 17 percentage points on i-LIDS and by 72 on CAVIAR4REID. The approach is also quite efficient, capable of single-shot person re-identification over galleries containing hundreds of individuals at about 30 re-identifications per second. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; 601.240; 600.079 | Approved | no | ||
Call Number | Admin @ si @ LMB2015 | Serial | 2557 | ||
Permanent link to this record | |||||
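To give a concrete feel for the iterative sparse ranking described in the abstract above, here is a minimal Python sketch. It is not the authors' exact soft/hard re-weighting scheme; the unit-norm random gallery, the `alpha` value, and the hard removal of ranked atoms are all illustrative assumptions.

```python
# Minimal sketch of iterative sparse ranking for re-identification.
# Flavor only: expand the probe sparsely over the gallery, rank the
# strongest atom, suppress it, and re-solve for the next rank.
import numpy as np
from sklearn.linear_model import Lasso

def iterative_sparse_ranking(gallery, probe, n_ranks=3, alpha=0.005):
    """gallery: (d, N), one l2-normalised column per identity; probe: (d,)."""
    active = list(range(gallery.shape[1]))
    ranking = []
    for _ in range(min(n_ranks, len(active))):
        lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
        lasso.fit(gallery[:, active], probe)
        best = int(np.argmax(np.abs(lasso.coef_)))
        ranking.append(active[best])
        active.pop(best)  # crude "hard re-weighting": drop the ranked atom
    return ranking

rng = np.random.default_rng(0)
G = rng.standard_normal((64, 20))
G /= np.linalg.norm(G, axis=0)                 # unit-norm gallery columns
p = G[:, 7] + 0.05 * rng.standard_normal(64)   # probe close to identity 7
print(iterative_sparse_ranking(G, p))          # identity 7 should rank first
```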
Author | Adriana Romero; Petia Radeva; Carlo Gatta | ||||
Title | Meta-parameter free unsupervised sparse feature learning | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 37 | Issue | 8 | Pages | 1716-1722 |
Keywords | |||||
Abstract | We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised feature learning algorithm, which exploits a new way of optimizing for sparsity. Experiments on CIFAR-10, STL-10 and UCMerced show that the method achieves state-of-the-art performance, providing discriminative features that generalize well. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; 600.068; 600.079; 601.160 | Approved | no | ||
Call Number | Admin @ si @ RRG2014b | Serial | 2594 | ||
Permanent link to this record | |||||
Author | Marçal Rusiñol; Josep Llados | ||||
Title | Flowchart Recognition in Patent Information Retrieval | Type | Book Chapter | ||
Year | 2017 | Publication | Current Challenges in Patent Information Retrieval | Abbreviated Journal | |
Volume | 37 | Issue | Pages | 351-368 | |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | M. Lupu; K. Mayer; N. Kando; A.J. Trippe | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RuL2017 | Serial | 2896 | ||
Permanent link to this record | |||||
Author | Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell | ||||
Title | Deep intrinsic decomposition trained on surreal scenes yet with realistic light effects | Type | Journal Article | ||
Year | 2020 | Publication | Journal of the Optical Society of America A | Abbreviated Journal | JOSA A |
Volume | 37 | Issue | 1 | Pages | 1-15 |
Keywords | |||||
Abstract | Estimation of intrinsic images remains a challenging task due to weaknesses of ground-truth datasets, which are either too small or insufficiently realistic. On the other hand, end-to-end deep learning architectures are starting to achieve interesting results that we believe could be improved if important physical hints were not ignored. In this work, we present a twofold framework: (a) a flexible generation of images that overcomes some classical dataset problems, providing larger size jointly with coherent lighting appearance; and (b) a flexible architecture tying physical properties together through intrinsic losses. Our proposal is versatile, presents low computation time, and achieves state-of-the-art results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC; 600.140; 600.12; 600.118 | Approved | no | ||
Call Number | Admin @ si @ SBV2019 | Serial | 3311 | ||
Permanent link to this record | |||||
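As a toy illustration of the kind of physics the abstract's "intrinsic losses" can encode (a sketch under our own assumptions, not the paper's architecture or training code): intrinsic decomposition factors an image into reflectance and shading, I = R * S, and a loss can penalize predictions that violate that reconstruction.

```python
# Toy "intrinsic loss": penalise decompositions that break I = R * S.
# Illustrative only; the paper's networks and losses are not reproduced.
import numpy as np

def intrinsic_loss(image, reflectance, shading):
    """Mean squared error of the physical reconstruction I ~ R * S."""
    return float(np.mean((image - reflectance * shading) ** 2))

rng = np.random.default_rng(2)
R = rng.uniform(0.2, 1.0, (16, 16))   # albedo / reflectance
S = rng.uniform(0.5, 1.5, (16, 16))   # shading
I = R * S                             # a physically consistent image
print(intrinsic_loss(I, R, S))        # ~0 for a consistent decomposition
```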
Author | Mohamed Ali Souibgui; Sanket Biswas; Andres Mafla; Ali Furkan Biten; Alicia Fornes; Yousri Kessentini; Josep Llados; Lluis Gomez; Dimosthenis Karatzas | ||||
Title | Text-DIAE: a self-supervised degradation invariant autoencoder for text recognition and document enhancement | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 37th AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | 37 | Issue | 2 | Pages | |
Keywords | Representation Learning for Vision; CV Applications; CV Language and Vision; ML Unsupervised; Self-Supervised Learning | ||||
Abstract | In this paper, we propose a Text-Degradation Invariant Auto Encoder (Text-DIAE), a self-supervised model designed to tackle two tasks: text recognition (handwritten or scene text) and document image enhancement. We start by employing a transformer-based architecture that incorporates three pretext tasks as learning objectives to be optimized during pre-training without the use of labelled data. Each of the pretext objectives is specifically tailored for the final downstream tasks. We conduct several ablation experiments that confirm the design choice of the selected pretext tasks. Importantly, the proposed model does not exhibit the limitations of previous state-of-the-art methods based on contrastive losses, while at the same time requiring substantially fewer data samples to converge. Finally, we demonstrate that our method surpasses the state-of-the-art in existing supervised and self-supervised settings in handwritten and scene text recognition and document image enhancement. Our code and trained models will be made publicly available at https://github.com/dali92002/SSL-OCR | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ SBM2023 | Serial | 3848 | ||
Permanent link to this record | |||||
Author | Khanh Nguyen; Ali Furkan Biten; Andres Mafla; Lluis Gomez; Dimosthenis Karatzas | ||||
Title | Show, Interpret and Tell: Entity-Aware Contextualised Image Captioning in Wikipedia | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 37th AAAI Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | 37 | Issue | 2 | Pages | 1940-1948 |
Keywords | |||||
Abstract | Humans exploit prior knowledge to describe images, and are able to adapt their explanation to specific contextual information, even to the extent of inventing plausible explanations when contextual information and images do not match. In this work, we propose the novel task of captioning Wikipedia images by integrating contextual knowledge. Specifically, we produce models that jointly reason over Wikipedia articles, Wikimedia images and their associated descriptions to produce contextualized captions. The same Wikimedia image can be used to illustrate different articles, and the produced caption needs to be adapted to the specific context, which allows us to explore the limits of the model in adjusting captions to different contextual information. Dealing with out-of-dictionary words and Named Entities is a challenging task in this domain. To address this, we propose a pre-training objective, Masked Named Entity Modeling (MNEM), and show that this pretext task results in significantly improved models. Furthermore, we verify that a model pre-trained on Wikipedia generalizes well to News Captioning datasets. We further define two different test splits according to the difficulty of the captioning task. We offer insights on the role and the importance of each modality and highlight the limitations of our model. | ||||
Address | Washington; USA; February 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AAAI | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ NBM2023 | Serial | 3860 | ||
Permanent link to this record | |||||
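To make the Masked Named Entity Modeling (MNEM) pretext task concrete, here is a toy masking step. The entity spans are given by hand and every name here is hypothetical; the paper's tokenizer, model, and training loop are not reproduced.

```python
# Toy MNEM-style masking: hide entity tokens and keep them as targets
# for the model to recover. Hand-given spans; purely illustrative.
def mask_named_entities(tokens, entity_spans, mask_token="[MASK]"):
    """Replace entity tokens with mask_token; return masked tokens and targets."""
    masked = list(tokens)
    targets = {}
    for start, end in entity_spans:
        for i in range(start, end):
            targets[i] = masked[i]
            masked[i] = mask_token
    return masked, targets

tokens = "The Eiffel Tower rises above Paris".split()
masked, targets = mask_named_entities(tokens, [(1, 3), (5, 6)])
print(masked)   # ['The', '[MASK]', '[MASK]', 'rises', 'above', '[MASK]']
print(targets)  # {1: 'Eiffel', 2: 'Tower', 5: 'Paris'}
```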
Author | Hugo Berti; Angel Sappa; Osvaldo Agamennoni | ||||
Title | Improved Dynamic Window Approach by Using Lyapunov Stability Criteria | Type | Journal | ||
Year | 2008 | Publication | Latin American Applied Research | Abbreviated Journal | |
Volume | 38 | Issue | 4 | Pages | 289–298 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ BSA2008 | Serial | 1056 | ||
Permanent link to this record | |||||
Author | Javier Vazquez; C. Alejandro Parraga; Maria Vanrell | ||||
Title | Ordinal pairwise method for natural images comparison | Type | Journal Article | ||
Year | 2009 | Publication | Perception | Abbreviated Journal | PER |
Volume | 38 | Issue | Pages | 180 | |
Keywords | |||||
Abstract | [Perception 38 (Suppl.), ECVP Abstract Supplement] We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the most natural of them. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing on a 6-button IR box, which allowed observers not only to choose the most natural image but also to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (eg, much more, definitely more, slightly more), which gave us valuable extra information on the selection process. The results were analysed considering the selection both as a binary choice (using Thurstone's law of comparative judgement) and using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used in colour constancy algorithm comparisons, its uses are much wider, eg to compare algorithms for image compression, rendering, recolouring, etc. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ VPV2009b | Serial | 1191 | ||
Permanent link to this record | |||||
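The Bradley-Terry analysis mentioned in the abstract can be sketched in a few lines: given a matrix of pairwise "preferred over" counts, the classic Zermelo (minorization-maximization) iteration recovers a strength scale for the compared items. The wins matrix below is invented for illustration.

```python
# Minimal Bradley-Terry fit via the Zermelo/MM iteration.
import numpy as np

def bradley_terry(wins, n_iter=200):
    """wins[i, j] = number of times item i was preferred over item j."""
    n = wins.shape[0]
    p = np.ones(n)
    comparisons = wins + wins.T          # n_ij: comparisons between i and j
    for _ in range(n_iter):
        for i in range(n):
            denom = sum(comparisons[i, j] / (p[i] + p[j])
                        for j in range(n) if j != i and comparisons[i, j] > 0)
            if denom > 0:
                p[i] = wins[i].sum() / denom
        p /= p.sum()                     # fix the arbitrary overall scale
    return p

wins = np.array([[0, 8, 9],
                 [2, 0, 6],
                 [1, 4, 0]])             # hypothetical counts, 3 algorithms
print(bradley_terry(wins))               # strengths; item 0 comes out highest
```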
Author | Robert Benavente; C. Alejandro Parraga; Maria Vanrell | ||||
Title | Colour categories boundaries are better defined in contextual conditions | Type | Journal Article | ||
Year | 2009 | Publication | Perception | Abbreviated Journal | PER |
Volume | 38 | Issue | Pages | 36 | |
Keywords | |||||
Abstract | In a previous experiment [Parraga et al, 2009 Journal of Imaging Science and Technology 53(3)] the boundaries between basic colour categories were measured by asking subjects to categorize colour samples presented in isolation (ie on a dark background) using a YES/NO paradigm. Results showed that some boundaries (eg green – blue) were very diffuse and the subjects' answers presented bimodal distributions, which were attributed to the emergence of non-basic categories in those regions (eg turquoise). To confirm these results we performed a new experiment focussed on the boundaries where bimodal distributions were more evident. In this new experiment rectangular colour samples were presented surrounded by random colour patches to simulate contextual conditions on a calibrated CRT monitor. The names of two neighbouring colours were shown at the bottom of the screen and subjects selected the boundary between these colours by controlling the chromaticity of the central patch, sliding it across these categories' frontier. Results show that in this new experimental paradigm, the formerly uncertain inter-colour category boundaries are better defined and the dispersions (ie the bimodal distributions) that occurred in the previous experiment disappear. These results may provide further support to Berlin and Kay's basic colour terms theory. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ BPV2009 | Serial | 1192 | ||
Permanent link to this record | |||||
Author | Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez | ||||
Title | Determining the Best Suited Semantic Events for Cognitive Surveillance | Type | Journal Article | ||
Year | 2011 | Publication | Expert Systems with Applications | Abbreviated Journal | EXSY |
Volume | 38 | Issue | 4 | Pages | 4068–4079 |
Keywords | Cognitive surveillance; Event modeling; Content-based video retrieval; Ontologies; Advanced user interfaces | ||||
Abstract | State-of-the-art systems on cognitive surveillance identify and describe complex events in selected domains, thus providing end-users with tools to easily access the contents of massive video footage. Nevertheless, as the complexity of events increases in semantics and the types of indoor/outdoor scenarios diversify, it becomes difficult to assess which events describe better the scene, and how to model them at a pixel level to fulfill natural language requests. We present an ontology-based methodology that guides the identification, step-by-step modeling, and generalization of the most relevant events to a specific domain. Our approach considers three steps: (1) end-users provide textual evidence from surveilled video sequences; (2) transcriptions are analyzed top-down to build the knowledge bases for event description; and (3) the obtained models are used to generalize event detection to different image sequences from the surveillance domain. This framework produces user-oriented knowledge that improves on existing advanced interfaces for video indexing and retrieval, by determining the best suited events for video understanding according to end-users. We have conducted experiments with outdoor and indoor scenes showing thefts, chases, and vandalism, demonstrating the feasibility and generalization of this proposal. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ FBR2011a | Serial | 1722 | ||
Permanent link to this record | |||||
Author | Kaida Xiao; Chenyang Fu; D.Mylonas; Dimosthenis Karatzas; S. Wuerger | ||||
Title | Unique Hue Data for Colour Appearance Models. Part ii: Chromatic Adaptation Transform | Type | Journal Article | ||
Year | 2013 | Publication | Color Research & Application | Abbreviated Journal | CRA |
Volume | 38 | Issue | 1 | Pages | 22-29 |
Keywords | |||||
Abstract | Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models with no clear difference using the current Commission Internationale de l'Éclairage (CIE) recommendation for mixed chromatic adaptation ratio. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction. © 2011 Wiley Periodicals, Inc. Col Res Appl, 2013 | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ XFM2013 | Serial | 1822 | ||
Permanent link to this record | |||||
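For context, the chromatic adaptation transform under evaluation has this general shape: map XYZ into the CAT02 sharpened RGB space, apply a von Kries-style gain with a degree-of-adaptation parameter D (the kind of mixed-adaptation ratio the paper optimises), and map back. The matrix follows the published CAT02 form; the whites and the D value below are illustrative, and the rest of CIECAM02 is omitted.

```python
# Von Kries-style CAT02 adaptation sketch with partial adaptation D.
import numpy as np

M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def adapt_cat02(xyz, white_src, white_dst, D=0.8):
    """Adapt an XYZ colour from white_src to white_dst; D in [0, 1]."""
    gain = D * ((M_CAT02 @ white_dst) / (M_CAT02 @ white_src)) + (1.0 - D)
    return np.linalg.solve(M_CAT02, gain * (M_CAT02 @ xyz))

A   = np.array([109.85, 100.0,  35.585])   # illuminant A white point (XYZ)
D65 = np.array([ 95.047, 100.0, 108.883])  # illuminant D65 white point (XYZ)
print(adapt_cat02(np.array([50.0, 50.0, 50.0]), A, D65, D=0.8))
```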
Author | Simone Balocco; Carlo Gatta; Francesco Ciompi; A. Wahle; Petia Radeva; S. Carlier; G. Unal; E. Sanidas; J. Mauri; X. Carillo; T. Kovarnik; C. Wang; H. Chen; T. P. Exarchos; D. I. Fotiadis; F. Destrempes; G. Cloutier; Oriol Pujol; Marina Alberti; E. G. Mendizabal-Ruiz; M. Rivera; T. Aksoy; R. W. Downe; I. A. Kakadiaris | ||||
Title | Standardized evaluation methodology and reference database for evaluating IVUS image segmentation | Type | Journal Article | ||
Year | 2014 | Publication | Computerized Medical Imaging and Graphics | Abbreviated Journal | CMIG |
Volume | 38 | Issue | 2 | Pages | 70-90 |
Keywords | IVUS (intravascular ultrasound); Evaluation framework; Algorithm comparison; Image segmentation | ||||
Abstract | This paper describes an evaluation framework that allows a standardized and quantitative comparison of IVUS lumen and media segmentation algorithms. This framework was introduced at the MICCAI 2011 Computing and Visualization for (Intra)Vascular Imaging (CVII) workshop, comparing the results of the eight teams that participated. We describe the available database, comprising multi-center, multi-vendor and multi-frequency IVUS datasets, their acquisition, the creation of the reference standard, and the evaluation measures. The approaches address segmentation of the lumen, the media, or both borders; semi- or fully-automatic operation; and 2-D vs. 3-D methodology. Three performance measures for quantitative analysis have been proposed. The results of the evaluation indicate that segmentation of the vessel lumen and media is possible with an accuracy comparable to manual annotation when semi-automatic methods are used, and that encouraging results can also be obtained with fully-automatic segmentation. The analysis performed in this paper also highlights the challenges in IVUS segmentation that remain to be solved. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; LAMP; HuPBA; 600.046; 600.063; 600.079 | Approved | no | ||
Call Number | Admin @ si @ BGC2013 | Serial | 2314 | ||
Permanent link to this record | |||||
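The paper defines its three performance measures precisely in the text; as a hedged illustration of the kind of area- and contour-based agreement scores commonly used to compare a segmentation against a reference standard, consider the following sketch.

```python
# Two common segmentation agreement measures (illustrative, not the
# paper's exact definitions): Jaccard overlap and Hausdorff distance.
import numpy as np

def jaccard(mask_a, mask_b):
    """Area overlap of two boolean masks (1.0 = perfect agreement)."""
    union = np.logical_or(mask_a, mask_b).sum()
    inter = np.logical_and(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def hausdorff(contour_a, contour_b):
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    d = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True   # automatic result
b = np.zeros((64, 64), bool); b[20:52, 16:48] = True   # reference standard
print(jaccard(a, b))   # ~0.78 for these two shifted squares
```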
Author | Simeon Petkov; Xavier Carrillo; Petia Radeva; Carlo Gatta | ||||
Title | Diaphragm border detection in coronary X-ray angiographies: New method and applications | Type | Journal Article | ||
Year | 2014 | Publication | Computerized Medical Imaging and Graphics | Abbreviated Journal | CMIG |
Volume | 38 | Issue | 4 | Pages | 296-305 |
Keywords | |||||
Abstract | X-ray angiography is widely used in cardiac disease diagnosis during or prior to intravascular interventions. The diaphragm motion and the heart beating induce gray-level changes, which are one of the main obstacles in quantitative analysis of myocardial perfusion. In this paper we focus on detecting the diaphragm border in both single images and whole X-ray angiography sequences. We show that the proposed method outperforms state-of-the-art approaches. We extend a previous, publicly available data set, adding new ground truth data. We also compose another set of more challenging images, thus obtaining two separate data sets of increasing difficulty. Finally, we show three applications of our method: (1) a strategy to reduce false positives in vessel-enhanced images; (2) a digital diaphragm removal algorithm; (3) an improvement in Myocardial Blush Grade semi-automatic estimation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; LAMP; 600.079 | Approved | no | ||
Call Number | Admin @ si @ PCR2014 | Serial | 2468 | ||
Permanent link to this record | |||||
Author | Pejman Rasti; Salma Samiei; Mary Agoyi; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | Robust non-blind color video watermarking using QR decomposition and entropy analysis | Type | Journal Article | ||
Year | 2016 | Publication | Journal of Visual Communication and Image Representation | Abbreviated Journal | JVCIR |
Volume | 38 | Issue | Pages | 838-847 | |
Keywords | Video watermarking; QR decomposition; Discrete Wavelet Transformation; Chirp Z-transform; Singular value decomposition; Orthogonal–triangular decomposition | ||||
Abstract | Issues such as content identification, document and image security, audience measurement, and ownership and copyright, among others, can be settled by the use of digital watermarking. Many recent video watermarking methods show drops in the visual quality of the sequences. The present work addresses this issue by introducing a robust and imperceptible non-blind color video frame watermarking algorithm. The method divides frames into moving and non-moving parts. The non-moving part of each color channel is processed separately using a block-based watermarking scheme. Blocks with an entropy lower than the average entropy of all blocks undergo a further process for embedding the watermark image. Finally, the watermarked frame is generated by adding the moving parts back. In the experiments, several signal processing attacks are applied to each watermarked frame, and the method is compared with several recent algorithms. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA; MILAB | Approved | no ||
Call Number | Admin @ si @ RSA2016 | Serial | 2766 ||
Permanent link to this record | |||||
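A sketch of the entropy-based block selection step the abstract describes, assuming 8x8 blocks and 8-bit intensities: blocks whose entropy falls below the average over all blocks become embedding candidates. The QR call at the end only shows the decomposition of a selected block; the paper's actual embedding rule is not reproduced.

```python
# Entropy-guided block selection for watermark embedding (illustrative).
import numpy as np

def block_entropy(block, bins=256):
    """Shannon entropy (bits) of an 8-bit intensity block."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return float(-np.sum(hist * np.log2(hist)))

def low_entropy_blocks(channel, size=8):
    """Top-left corners of blocks below the average block entropy."""
    h, w = channel.shape
    corners = [(i, j) for i in range(0, h - size + 1, size)
                      for j in range(0, w - size + 1, size)]
    ent = {c: block_entropy(channel[c[0]:c[0] + size, c[1]:c[1] + size])
           for c in corners}
    avg = np.mean(list(ent.values()))
    return [c for c in corners if ent[c] < avg]

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64)).astype(float)
frame[:32, :32] = 128.0                   # a flat, low-entropy region
sel = low_entropy_blocks(frame)
i, j = sel[0]
Q, R = np.linalg.qr(frame[i:i + 8, j:j + 8])  # decompose one candidate block
print(len(sel), round(R[0, 0], 2))
```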
Author | Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari | ||||
Title | Training my car to see using virtual worlds | Type | Journal Article | ||
Year | 2017 | Publication | Image and Vision Computing | Abbreviated Journal | IMAVIS |
Volume | 38 | Issue | Pages | 102-118 | |
Keywords | |||||
Abstract | Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such representation with the acquired data. In the last two decades most of the research efforts focused on representation learning (first, designing descriptors and learning classifiers; later doing it end-to-end). Hence, collecting data and, especially, annotating it, is essential for learning good representations. While this has been the case from the very beginning, it only became a serious issue after the disruptive appearance of deep convolutional neural networks, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work prone to errors. Accordingly, in the late 2000s we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize this work and show how it has become a growing trend with increasing acceptance. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ LVS2017 | Serial | 2985 | ||
Permanent link to this record |