|
|
Author |
Jon Almazan; Lluis Gomez; Suman Ghosh; Ernest Valveny; Dimosthenis Karatzas |
|
|
Title |
WATTS: A common representation of word images and strings using embedded attributes for text recognition and retrieval |
Type |
Book Chapter |
|
Year |
2020 |
Publication |
Visual Text Interpretation – Algorithms and Applications in Scene Understanding and Document Analysis |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
|
Editor |
K. Alahari; C.V. Jawahar |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
Advances in Computer Vision and Pattern Recognition |
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ AGG2020 |
Serial |
3496 |
|
|
|
|
|
Author |
Jialuo Chen; M. A. Souibgui; Alicia Fornes; Beata Megyesi |
|
|
Title |
A Web-based Interactive Transcription Tool for Encrypted Manuscripts |
Type |
Conference Article |
|
Year |
2020 |
Publication |
3rd International Conference on Historical Cryptology |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
52-59 |
|
|
Keywords |
|
|
|
Abstract |
Manual transcription of handwritten text is a time-consuming task. In the case of encrypted manuscripts, the recognition is even more complex due to the huge variety of alphabets and symbol sets. To speed up and ease this process, we present a web-based tool aimed at (semi-)automatically transcribing the encrypted sources. The user uploads one or several images of the desired encrypted document(s) as input, and the system returns the transcription(s). This process is carried out interactively with the user to obtain more accurate results. The developed web tool is freely available for exploration and testing. |
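A minimal sketch of the interactive loop the abstract describes, assuming hypothetical callables: recognize, ask_user and retrain_on are placeholder names for illustration, not the tool's actual API.

    def interactive_transcription(images, recognizer, ask_user):
        """Propose a transcription per page, let the user correct it, and
        feed corrections back so later pages benefit."""
        transcriptions = []
        for image in images:
            proposal = recognizer.recognize(image)       # automatic first pass
            corrected = ask_user(image, proposal)        # user validates or fixes
            if corrected != proposal:
                recognizer.retrain_on(image, corrected)  # interactive feedback
            transcriptions.append(corrected)
        return transcriptions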
|
|
Address |
Virtual; June 2020 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
HistoCrypt |
|
|
Notes |
DAG; 600.140; 602.230; 600.121 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CSF2020 |
Serial |
3447 |
|
|
|
|
|
Author |
Ivet Rafegas; Maria Vanrell; Luis A. Alexandre; G. Arias |
|
|
Title |
Understanding trained CNNs by indexing neuron selectivity |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
|
|
Volume |
136 |
Issue |
|
Pages |
318-325 |
|
|
Keywords |
|
|
|
Abstract |
The impressive performance of Convolutional Neural Networks (CNNs) when solving different vision problems is shadowed by their black-box nature and our consequent lack of understanding of the representations they build and how these representations are organized. To help understand these issues, we propose to describe the activity of individual neurons by their Neuron Feature visualization and to quantify their inherent selectivity with two specific properties. We explore selectivity indexes for an image feature (color) and an image label (class membership). Our contribution is a framework to seek or classify neurons by indexing on these selectivity properties. It helps to find color-selective neurons, such as a red-mushroom neuron in layer Conv4, or class-selective neurons, such as dog-face neurons in layer Conv5 of VGG-M, and it establishes a methodology to derive other selectivity properties. Indexing on neuron selectivity can statistically outline how features and classes are represented through the layers, at a moment when the size of trained nets is growing and automatic tools to index neurons can be helpful. |
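As an illustration of indexing neurons by selectivity, here is a minimal sketch of a class-selectivity index in the common (mu_max - mu_rest) / (mu_max + mu_rest) form; the paper's own index definitions may differ.

    import numpy as np

    def class_selectivity(activations, labels):
        """activations: (N,) mean activation of one neuron over N images.
        labels: (N,) class label per image. Returns a value in [0, 1]:
        0 = no class preference, 1 = responds to a single class only."""
        classes = np.unique(labels)
        means = np.array([activations[labels == c].mean() for c in classes])
        mu_max = means.max()
        mu_rest = np.delete(means, means.argmax()).mean()
        return (mu_max - mu_rest) / (mu_max + mu_rest + 1e-12)

    # A neuron responding mostly to class 2 gets a high index:
    acts = np.array([0.10, 0.20, 0.90, 1.10, 0.15, 1.00])
    labs = np.array([0, 1, 2, 2, 0, 2])
    print(class_selectivity(acts, labs))  # ~0.72, fairly class-selective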
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC; 600.087; 600.140; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RVL2019 |
Serial |
3310 |
|
|
|
|
|
Author |
Idoia Ruiz; Joan Serrat |
|
|
Title |
Rank-based ordinal classification |
Type |
Conference Article |
|
Year |
2020 |
Publication |
25th International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
8069-8076 |
|
|
Keywords |
|
|
|
Abstract |
Unlike the regular classification task, in ordinal classification there is an order in the classes. As a consequence, not all classification errors matter the same: predicting a class close to the groundtruth one is better than predicting a farther-away class. To account for this, most previous works employ loss functions based on the absolute difference between the predicted and groundtruth class labels. We argue that there are many cases in ordinal classification where label values are arbitrary (for instance 1…C, with C the number of classes) and thus such loss functions may not be the best choice. We instead propose a network architecture that produces not a single class prediction but an ordered vector, or ranking, of all the possible classes from most to least likely. This is achieved thanks to a loss function that compares groundtruth and predicted rankings of these class labels, not the labels themselves. Another advantage of this new formulation is that we can enforce consistency in the predictions, namely, predicted rankings come from some unimodal vector of scores with mode at the groundtruth class. We compare with the state-of-the-art ordinal classification methods, showing that ours attains equal or better performance, as measured by common ordinal classification metrics, on three benchmark datasets. Furthermore, it is also suitable for a new task on image aesthetics assessment, i.e., most voted score prediction. Finally, we also apply it to building damage assessment from satellite images, providing an analysis of its performance depending on the degree of imbalance of the dataset. |
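A toy sketch of the key idea of comparing rankings instead of label values; this illustrative pairwise hinge formulation is an assumption, not necessarily the authors' exact loss.

    import numpy as np

    def pairwise_rank_loss(scores, gt, margin=0.1):
        """Penalize every pair where a class closer to the groundtruth gt
        fails to out-score (by `margin`) a class farther from gt."""
        dist = np.abs(np.arange(len(scores)) - gt)
        loss = 0.0
        for a in range(len(scores)):
            for b in range(len(scores)):
                if dist[a] < dist[b]:  # a should rank above b
                    loss += max(0.0, margin - (scores[a] - scores[b]))
        return loss

    # Unimodal scores peaked at the groundtruth class 1 incur no loss:
    print(pairwise_rank_loss(np.array([0.5, 0.9, 0.6, 0.2]), gt=1))  # 0.0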
|
|
Address |
Virtual; January 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
ADAS; 600.118; 600.124 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RuS2020 |
Serial |
3549 |
|
|
|
|
|
Author |
Idoia Ruiz; Bogdan Raducanu; Rakesh Mehta; Jaume Amores |
|
|
Title |
Optimizing speed/accuracy trade-off for person re-identification via knowledge distillation |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Engineering Applications of Artificial Intelligence |
Abbreviated Journal |
EAAI |
|
|
Volume |
87 |
Issue |
|
Pages |
103309 |
|
|
Keywords |
Person re-identification; Network distillation; Image retrieval; Model compression; Surveillance |
|
|
Abstract |
Finding a person across a camera network plays an important role in video surveillance. For a real-world person re-identification application, in order to guarantee an optimal time response, it is crucial to find the balance between accuracy and speed. We analyse this trade-off, comparing a classical method that comprises hand-crafted feature description and metric learning (in particular, LOMO and XQDA) to deep learning-based techniques using image classification networks, ResNet and MobileNets. Additionally, we propose and analyse network distillation as a learning strategy to reduce the computational cost of the deep learning approach at test time. We evaluate both methods on the Market-1501 and DukeMTMC-reID large-scale datasets, showing that distillation helps reduce the computational cost at inference time while even increasing the accuracy performance. |
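For context, a minimal sketch of Hinton-style network distillation, the technique the abstract names for trading model size against accuracy; the paper's exact objective may differ.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          T=4.0, alpha=0.5):
        """Small/fast student mimics the tempered soft outputs of the
        large/accurate teacher, plus the usual hard-label term."""
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1.0 - alpha) * hard

    # Toy batch: 8 images, 751 identities (Market-1501 training set size).
    s = torch.randn(8, 751, requires_grad=True)   # student (e.g. MobileNet)
    t = torch.randn(8, 751)                       # teacher (e.g. ResNet)
    y = torch.randint(0, 751, (8,))
    distillation_loss(s, t, y).backward()         # gradients reach the student only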
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; 600.109; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RRM2020 |
Serial |
3401 |
|
|
|
|
|
Author |
Hugo Bertiche; Meysam Madadi; Sergio Escalera |
|
|
Title |
CLOTH3D: Clothed 3D Humans |
Type |
Conference Article |
|
Year |
2020 |
Publication |
16th European Conference on Computer Vision |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This work presents CLOTH3D, the first large-scale synthetic dataset of 3D clothed human sequences. CLOTH3D contains large variability in garment type, topology, shape, size, tightness and fabric. Clothes are simulated on top of thousands of different pose sequences and body shapes, generating realistic cloth dynamics. We provide the dataset with a generative model for cloth generation. We propose a Conditional Variational Auto-Encoder (CVAE) based on graph convolutions (GCVAE) to learn garment latent spaces. This allows for realistic generation of 3D garments on top of the SMPL model for any pose and shape. |
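A bare-bones conditional VAE sketch in the spirit of the GCVAE, with graph convolutions replaced by linear layers for brevity; the dimensions are invented for illustration (82 = 72 SMPL pose + 10 shape parameters).

    import torch
    import torch.nn as nn

    class TinyCVAE(nn.Module):
        """Encode garment vertices into a latent space conditioned on the
        body; decode back to vertex positions for any pose/shape."""
        def __init__(self, n_verts=300, cond_dim=82, z_dim=32):
            super().__init__()
            d = n_verts * 3
            self.enc = nn.Linear(d + cond_dim, 2 * z_dim)   # -> (mu, logvar)
            self.dec = nn.Linear(z_dim + cond_dim, d)

        def forward(self, verts, cond):
            h = self.enc(torch.cat([verts.flatten(1), cond], dim=1))
            mu, logvar = h.chunk(2, dim=1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam.
            recon = self.dec(torch.cat([z, cond], dim=1))
            kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
            return recon, kl

    model = TinyCVAE()
    verts, cond = torch.randn(4, 300, 3), torch.randn(4, 82)
    recon, kl = model(verts, cond)
    loss = (recon - verts.flatten(1)).pow(2).mean() + 1e-3 * kl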
|
|
Address |
Virtual; August 2020 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCV |
|
|
Notes |
HUPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ BME2020 |
Serial |
3519 |
|
|
|
|
|
Author |
Henry Velesaca; Steven Araujo; Patricia Suarez; Angel Sanchez; Angel Sappa |
|
|
Title |
Off-the-Shelf Based System for Urban Environment Video Analytics |
Type |
Conference Article |
|
Year |
2020 |
Publication |
27th International Conference on Systems, Signals and Image Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
greenhouse gases; carbon footprint; object detection; object tracking; website framework; off-the-shelf video analytics |
|
|
Abstract |
This paper presents the design and implementation details of a system built from off-the-shelf algorithms for urban video analytics. The system connects to public video surveillance camera networks to obtain the information needed to generate statistics from urban scenarios (e.g., number of vehicles, types of cars, directions, number of persons, etc.). The obtained information can be used not only for traffic management but also to estimate the carbon footprint of urban scenarios. As a case study, a university campus is selected to evaluate the performance of the proposed system. The system is implemented in a modular way so that it can be used as a testbed to evaluate different algorithms. Implementation results are provided, showing the validity and utility of the proposed approach. |
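A small sketch of the statistics stage only: aggregating the outputs of any off-the-shelf COCO-trained detector into per-category counts of the kind the system reports. The detector itself is assumed given; the category map below uses standard COCO class ids.

    from collections import Counter

    COCO_NAMES = {1: "person", 2: "bicycle", 3: "car", 6: "bus", 8: "truck"}

    def frame_statistics(detections, score_thr=0.5):
        """detections: list of (coco_class_id, confidence) for one frame.
        Returns counts of the urban categories of interest."""
        counts = Counter()
        for class_id, score in detections:
            name = COCO_NAMES.get(class_id)
            if name is not None and score >= score_thr:
                counts[name] += 1
        return counts

    # Three confident cars, one bus, two pedestrians, one low-score car:
    dets = [(3, .9), (3, .8), (3, .7), (6, .6), (1, .95), (1, .9), (3, .3)]
    print(frame_statistics(dets))  # Counter({'car': 3, 'person': 2, 'bus': 1})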
|
|
Address |
Virtual IWSSIP |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IWSSIP |
|
|
Notes |
MSIAU; 600.130; 601.349; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ VAS2020 |
Serial |
3429 |
|
|
|
|
|
Author |
Henry Velesaca; Raul Mira; Patricia Suarez; Christian X. Larrea; Angel Sappa |
|
|
Title |
Deep Learning Based Corn Kernel Classification |
Type |
Conference Article |
|
Year |
2020 |
Publication |
1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning-based approach, the Mask R-CNN architecture, while the classification is performed through a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed, and comparisons with other approaches are provided, showing improvements with the proposed pipeline. |
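A sketch of the segmentation-then-classification scheme; `segment` (standing in for the Mask R-CNN module) and `classify` (the lightweight CNN) are assumed pre-trained callables, not real APIs.

    import numpy as np

    CATEGORIES = ["good", "defective", "impurity"]

    def classify_sample(image, segment, classify):
        """Split a multi-kernel image into instances and label each crop."""
        results = []
        for mask in segment(image):          # one boolean mask per kernel
            ys, xs = np.nonzero(mask)
            crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
            results.append(CATEGORIES[classify(crop)])   # index 0/1/2
        return results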
|
|
Address |
Virtual CVPR |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
MSIAU; 600.130; 600.122 |
Approved |
no |
|
|
Call Number |
Admin @ si @ VMS2020 |
Serial |
3430 |
|
|
|
|
|
Author |
Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras |
|
|
Title |
Light Direction and Color Estimation from Single Image with Deep Regression |
Type |
Conference Article |
|
Year |
2020 |
Publication |
London Imaging Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes. |
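A plausible form of the two-branch regression objective (cosine error on the unit light direction, MSE on the RGB color); this is an assumption for illustration, and the paper's exact losses may differ.

    import torch
    import torch.nn.functional as F

    def light_loss(pred_dir, pred_color, gt_dir, gt_color):
        d_pred = F.normalize(pred_dir, dim=1)     # unit direction vectors
        d_gt = F.normalize(gt_dir, dim=1)
        angular = (1.0 - (d_pred * d_gt).sum(dim=1)).mean()  # 0 when aligned
        color = F.mse_loss(pred_color, gt_color)
        return angular + color

    pd = torch.randn(8, 3, requires_grad=True)    # predicted directions
    pc = torch.rand(8, 3, requires_grad=True)     # predicted RGB colors
    light_loss(pd, pc, torch.randn(8, 3), torch.rand(8, 3)).backward()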
|
|
Address |
Virtual; September 2020 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
LIM |
|
|
Notes |
CIC; 600.118; 600.140 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBV2020 |
Serial |
3460 |
|
|
|
|
|
Author |
Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell |
|
|
Title |
Deep intrinsic decomposition trained on surreal scenes yet with realistic light effects |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Journal of the Optical Society of America A |
Abbreviated Journal |
JOSA A |
|
|
Volume |
37 |
Issue |
1 |
Pages |
1-15 |
|
|
Keywords |
|
|
|
Abstract |
Estimation of intrinsic images still remains a challenging task due to weaknesses of ground-truth datasets, which are either too small or not realistic enough. On the other hand, end-to-end deep learning architectures start to achieve interesting results that we believe could be improved if important physical hints were not ignored. In this work, we present a twofold framework: (a) a flexible generation of images overcoming some classical dataset problems, such as larger size jointly with coherent lighting appearance; and (b) a flexible architecture tying physical properties through intrinsic losses. Our proposal is versatile, presents low computation time, and achieves state-of-the-art results. |
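One way to read "tying physical properties through intrinsic losses" is a Lambertian consistency term forcing reflectance times shading to reproduce the input image; a minimal sketch under that assumption, with illustrative weights.

    import torch
    import torch.nn.functional as F

    def intrinsic_loss(R, S, I, R_gt, S_gt, w_rec=1.0):
        """Fit predicted reflectance R and shading S to groundtruth, and
        tie them physically: the product R * S must reconstruct image I."""
        data = F.mse_loss(R, R_gt) + F.mse_loss(S, S_gt)
        reconstruction = F.mse_loss(R * S, I)     # Lambertian consistency
        return data + w_rec * reconstruction

    R = torch.rand(2, 3, 64, 64, requires_grad=True)
    S = torch.rand(2, 3, 64, 64, requires_grad=True)
    I = R.detach() * S.detach()                   # toy "input" image
    intrinsic_loss(R, S, I, torch.rand_like(I), torch.rand_like(I)).backward()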
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC; 600.140; 600.12; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBV2019 |
Serial |
3311 |
|
|
|
|
|
Author |
Hannes Mueller; Andre Groger; Jonathan Hersh; Andrea Matranga; Joan Serrat |
|
|
Title |
Monitoring War Destruction from Space: A Machine Learning Approach |
Type |
Miscellaneous |
|
Year |
2020 |
Publication |
arXiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes them generally scarce, incomplete and potentially biased. This lack of reliable data imposes severe limitations for media reporting, humanitarian relief efforts, human rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method of measuring destruction in high-resolution satellite images using deep learning techniques combined with data augmentation to expand training samples. We apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. The approach allows generating destruction data with unprecedented scope, resolution, and frequency, limited only by the available satellite imagery, which can alleviate data limitations decisively. |
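A typical label-preserving augmentation policy for nadir satellite patches, of the kind the abstract refers to for expanding scarce training samples; the paper's exact policy is not given here, so this is a common choice.

    from torchvision import transforms

    augment = transforms.Compose([
        transforms.RandomHorizontalFlip(),
        transforms.RandomVerticalFlip(),      # nadir imagery has no canonical "up"
        transforms.RandomRotation(degrees=90),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])
    # Applied on-the-fly, so each annotated patch yields a different
    # variant every epoch, multiplying the effective training set.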
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ MGH2020 |
Serial |
3489 |
|
|
|
|
|
Author |
Guillermo Torres; Debora Gil |
|
|
Title |
A multi-shape loss function with adaptive class balancing for the segmentation of lung structures |
Type |
Journal Article |
|
Year |
2020 |
Publication |
International Journal of Computer Assisted Radiology and Surgery |
Abbreviated Journal |
IJCAR |
|
|
Volume |
15 |
Issue |
1 |
Pages |
S154-S155 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
IAM |
Approved |
no |
|
|
Call Number |
Admin @ si @ ToG2020 |
Serial |
3590 |
|
|
|
|
|
Author |
Giovanni Maria Farinella; Petia Radeva; Jose Braz |
|
|
Title |
Proceedings of the 15th International Joint Conference on Computer Vision; Imaging and Computer Graphics Theory and Applications |
Type |
Book Whole |
|
Year |
2020 |
Publication |
Proceedings of the 15th International Joint Conference on Computer Vision; Imaging and Computer Graphics Theory and Applications; VISIGRAPP 2020 |
Abbreviated Journal |
|
|
|
Volume |
4 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ FRB2020a |
Serial |
3546 |
|
|
|
|
|
Author |
Giovanni Maria Farinella; Petia Radeva; Jose Braz |
|
|
Title |
Proceedings of the 15th International Joint Conference on Computer Vision; Imaging and Computer Graphics Theory and Applications |
Type |
Book Whole |
|
Year |
2020 |
Publication |
Proceedings of the 15th International Joint Conference on Computer Vision; Imaging and Computer Graphics Theory and Applications; VISIGRAPP 2020 |
Abbreviated Journal |
|
|
|
Volume |
5 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ FRB2020b |
Serial |
3547 |
|
|
|
|
|
Author |
Gabriel Villalonga; Joost Van de Weijer; Antonio Lopez |
|
|
Title |
Recognizing new classes with synthetic data in the loop: application to traffic sign recognition |
Type |
Journal Article |
|
Year |
2020 |
Publication |
Sensors |
Abbreviated Journal |
SENS |
|
|
Volume |
20 |
Issue |
3 |
Pages |
583 |
|
|
Keywords |
|
|
|
Abstract |
On-board vision systems may need to increase the number of classes that can be recognized in a relatively short period. For instance, a traffic sign recognition system may suddenly be required to recognize new signs. Since collecting and annotating samples of such new classes may need more time than we wish, especially for uncommon signs, we propose a method to generate these samples by combining synthetic images and Generative Adversarial Network (GAN) technology. In particular, the GAN is trained on synthetic and real-world samples from known classes to perform synthetic-to-real domain adaptation, but applied to synthetic samples of the new classes. Using the Tsinghua dataset with a synthetic counterpart, SYNTHIA-TS, we have run an extensive set of experiments. The results show that the proposed method is indeed effective, provided that we use a proper Convolutional Neural Network (CNN) to perform the traffic sign recognition (classification) task as well as a proper GAN to transform the synthetic images. Here, a ResNet101-based classifier and domain adaptation based on CycleGAN performed extremely well for a ratio of ∼1/4 new/known classes; even for more challenging ratios such as ∼4/1, the results are also very positive. |
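A sketch of the generation step described above: a CycleGAN generator G, trained for synthetic-to-real translation on the known classes, is simply applied to synthetic renders of the new classes. G is assumed pre-trained; the surrounding training code is omitted.

    import torch

    @torch.no_grad()
    def adapt_new_class_samples(G, synthetic_batches):
        """Translate synthetic new-class images into the real domain."""
        G.eval()
        return [G(x) for x in synthetic_batches]

    # The translated images are then mixed with real samples of the known
    # classes to (re)train the ResNet101-based traffic-sign classifier.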
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
LAMP; ADAS; 600.118; 600.120 |
Approved |
no |
|
|
Call Number |
Admin @ si @ VWL2020 |
Serial |
3405 |
|