Records |
Author |
Siyang Song; Micol Spitale; Cheng Luo; German Barquero; Cristina Palmero; Sergio Escalera; Michel Valstar; Tobias Baur; Fabien Ringeval; Elisabeth Andre; Hatice Gunes |
Title |
REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge |
Type |
Conference Article |
Year |
2023 |
Publication |
Proceedings of the 31st ACM International Conference on Multimedia |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
9620–9624 |
Keywords |
|
Abstract |
The Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions. The goal of the challenge is to provide the first benchmark test set for multi-modal information processing and to foster collaboration among the audio, visual, and audio-visual behaviour analysis and behaviour generation (a.k.a. generative AI) communities, to compare the relative merits of the approaches to automatic appropriate facial reaction generation under different spontaneous dyadic interaction conditions. This paper presents: (i) the novelties, contributions and guidelines of the REACT2023 challenge; (ii) the dataset utilized in the challenge; and (iii) the performance of the baseline systems on the two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction Generation and Online Multiple Appropriate Facial Reaction Generation, respectively. The challenge baseline code is publicly available at https://github.com/reactmultimodalchallenge/baseline_react2023. |
Address |
Ottawa; Canada; October 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
MM |
Notes |
HUPBA |
Approved |
no |
Call Number |
Admin @ si @ SSL2023 |
Serial |
3931 |
Permanent link to this record |
|
|
|
Author |
Yael Tudela; Ana Garcia Rodriguez; Gloria Fernandez Esparrach; Jorge Bernal |
Title |
Towards Fine-Grained Polyp Segmentation and Classification |
Type |
Conference Article |
Year |
2023 |
Publication |
Workshop on Clinical Image-Based Procedures |
Abbreviated Journal |
|
Volume |
14242 |
Issue |
|
Pages |
32-42 |
Keywords |
Medical image segmentation; Colorectal Cancer; Vision Transformer; Classification |
Abstract |
Colorectal cancer is one of the main causes of cancer death worldwide. Colonoscopy is the gold standard screening tool as it allows lesion detection and removal during the same procedure. During the last decades, several efforts have been made to develop CAD systems to assist clinicians in lesion detection and classification. Regarding the latter, and in order to be used in the exploration room as part of resect-and-discard or leave-in-situ strategies, these systems must correctly identify all the different lesion types. This is a challenging task, as the data used to train these systems presents high inter-class similarity, high class imbalance, and low representation of clinically relevant histology classes such as serrated sessile adenomas.
In this paper, a new polyp segmentation and classification method, Swin-Expand, is introduced. Based on Swin-Transformer, it uses a simple and lightweight decoder. The performance of this method has been assessed on a novel dataset comprising 1126 high-definition images representing the three main histological classes. Results show a clear improvement in both segmentation and classification performance, also achieving competitive results when tested on public datasets. These results confirm that both the method and the data are important to obtain more accurate polyp representations. |
Address |
Vancouver; October 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
MICCAIW |
Notes |
ISE |
Approved |
no |
Call Number |
Admin @ si @ TGF2023 |
Serial |
3837 |
Permanent link to this record |
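
The record above describes Swin-Expand only at a high level: a Swin-Transformer encoder with a simple, lightweight decoder. The Python sketch below shows one plausible reading of the segmentation side of that design using the timm library; the class name, the 96-channel decoder width, and the additive skip fusion are illustrative assumptions, not the authors' implementation (the classification head is also omitted).

```python
# Minimal sketch: Swin-Transformer encoder + lightweight upsampling decoder
# for polyp segmentation. Assumes a recent timm release in which Swin models
# support features_only=True (features are returned NHWC).
import timm
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwinSegSketch(nn.Module):
    def __init__(self, num_classes=3):  # three histological classes
        super().__init__()
        self.encoder = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=True, features_only=True
        )
        channels = self.encoder.feature_info.channels()  # e.g. [96, 192, 384, 768]
        # Lightweight decoder: 1x1 projections + progressive bilinear upsampling.
        self.proj = nn.ModuleList(nn.Conv2d(c, 96, 1) for c in channels)
        self.head = nn.Conv2d(96, num_classes, 1)

    def forward(self, x):
        feats = [f.permute(0, 3, 1, 2) for f in self.encoder(x)]  # NHWC -> NCHW
        out = self.proj[-1](feats[-1])
        for f, p in zip(reversed(feats[:-1]), reversed(self.proj[:-1])):
            out = F.interpolate(out, size=f.shape[-2:], mode="bilinear",
                                align_corners=False)
            out = out + p(f)                       # additive skip fusion
        logits = self.head(out)                    # per-pixel class scores
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear",
                             align_corners=False)

mask_logits = SwinSegSketch()(torch.randn(1, 3, 224, 224))  # (1, 3, 224, 224)
```

Keeping the decoder to 1x1 projections plus bilinear upsampling is what would make such a design "lightweight": almost all capacity stays in the pretrained encoder.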
|
|
|
Author |
Debora Gil; Guillermo Torres; Carles Sanchez |
Title |
Transforming radiomic features into radiological words |
Type |
Conference Article |
Year |
2023 |
Publication |
IEEE International Symposium on Biomedical Imaging |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
Poster |
Address |
Cartagena de Indias; Colombia; April 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ISBI |
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ GTS2023 |
Serial |
3952 |
Permanent link to this record |
|
|
|
Author |
Pau Cano; Debora Gil; Eva Musulen |
Title |
Towards automatic detection of helicobacter pylori in histological samples of gastric tissue |
Type |
Conference Article |
Year |
2023 |
Publication |
IEEE International Symposium on Biomedical Imaging |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
Cartagena de Indias; Colombia; April 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ISBI |
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ CGM2023 |
Serial |
3953 |
Permanent link to this record |
|
|
|
Author |
Guillermo Torres; Debora Gil; Antonio Rosell; Sonia Baeza; Carles Sanchez |
Title |
A radiomic biopsy for virtual histology of pulmonary nodules |
Type |
Conference Article |
Year |
2023 |
Publication |
IEEE International Symposium on Biomedical Imaging |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
Poster |
Address |
Cartagena de Indias; Colombia; April 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ISBI |
Notes |
IAM |
Approved |
no |
Call Number |
Admin @ si @ TGR2023b |
Serial |
3954 |
Permanent link to this record |
|
|
|
Author |
Yi Xiao; Felipe Codevilla; Diego Porres; Antonio Lopez |
Title |
Scaling Vision-Based End-to-End Autonomous Driving with Multi-View Attention Learning |
Type |
Conference Article |
Year |
2023 |
Publication |
International Conference on Intelligent Robots and Systems |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
In end-to-end driving, human driving demonstrations are used to train perception-based driving models by imitation learning. This process is supervised on vehicle signals (e.g., steering angle, acceleration) but does not require extra costly supervision (human labeling of sensor data). As a representative of such vision-based end-to-end driving models, CILRS is commonly used as a baseline to compare with new driving models. So far, some recent models have achieved better performance than CILRS by using expensive sensor suites and/or large amounts of human-labeled data for training. Given the difference in performance, one may think that it is not worth pursuing vision-based pure end-to-end driving. However, we argue that this approach still has great value and potential considering cost and maintenance. In this paper, we present CIL++, which improves on CILRS both by processing higher-resolution images using a human-inspired horizontal field of view (HFOV) as an inductive bias and by incorporating a proper attention mechanism. CIL++ achieves competitive performance compared to models that are more costly to develop. We propose to replace CILRS with CIL++ as a strong vision-based pure end-to-end driving baseline supervised by only vehicle signals and trained by conditional imitation learning. |
Address |
Detroit; USA; October 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
IROS |
Notes |
ADAS |
Approved |
no |
Call Number |
Admin @ si @ XCP2023 |
Serial |
3930 |
Permanent link to this record |
|
|
|
Author |
Asma Bensalah; Antonio Parziale; Giuseppe De Gregorio; Angelo Marcelli; Alicia Fornes; Josep Llados |
Title |
I Can’t Believe It’s Not Better: In-air Movement for Alzheimer Handwriting Synthetic Generation |
Type |
Conference Article |
Year |
2023 |
Publication |
21st International Graphonomics Conference |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
136–148 |
Keywords |
|
Abstract |
In recent years, there has been a boom in the use of deep learning for handwriting analysis and recognition. One main application of handwriting analysis is early detection and diagnosis in the health field. Unfortunately, most real-world problems still suffer from a scarcity of data, which hinders the use of deep learning-based models. To alleviate this problem, some works resort to synthetic data generation. Lately, more works are directed towards guided synthetic data generation, which uses domain and data knowledge to generate realistic data that can be useful for training deep learning models. In this work, we combine domain knowledge about Alzheimer’s disease and handwriting and use it for more guided data generation. Concretely, we have explored the use of in-air movements for synthetic data generation. |
Address |
Evora; Portugal; October 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
IGS |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ BPG2023 |
Serial |
3838 |
Permanent link to this record |
|
|
|
Author |
Roberto Morales; Juan Quispe; Eduardo Aguilar |
Title |
Exploring multi-food detection using deep learning-based algorithms |
Type |
Conference Article |
Year |
2023 |
Publication |
13th International Conference on Pattern Recognition Systems |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1-7 |
Keywords |
|
Abstract |
People are becoming increasingly concerned about their diet, whether for disease prevention, medical treatment or other purposes. In meals served in restaurants, schools or public canteens, it is not easy to identify the ingredients and/or the nutritional information they contain. Currently, technological solutions based on deep learning models have facilitated the recording and tracking of food consumed based on the recognition of the main dish present in an image. Considering that sometimes there may be multiple foods served on the same plate, food analysis should be treated as a multi-class object detection problem. EfficientDet and YOLOv5 are object detection algorithms that have demonstrated high mAP and real-time performance on general domain data. However, these models have not been evaluated and compared on public food datasets. Unlike general domain objects, foods have more challenging features inherent in their nature that increase the complexity of detection. In this work, we evaluated the performance of EfficientDet and YOLOv5 on three public food datasets: UNIMIB2016, UECFood256 and ChileanFood64. The results show that YOLOv5 significantly outperforms EfficientDet in terms of both mAP and response time on all datasets. Furthermore, YOLOv5 outperforms the state of the art on UECFood256, achieving an improvement of more than 4% in terms of mAP@.50 over the best reported result. |
Address |
Guayaquil; Ecuador; July 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICPRS |
Notes |
MILAB |
Approved |
no |
Call Number |
Admin @ si @ MQA2023 |
Serial |
3843 |
Permanent link to this record |
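
For orientation, the sketch below shows roughly what a YOLOv5 detection pass looks like through the public ultralytics/yolov5 torch.hub interface used in comparisons like the one in the record above. The COCO-pretrained yolov5s weights and the image filename are placeholders; the paper fine-tunes on UNIMIB2016, UECFood256 and ChileanFood64, which is not reproduced here.

```python
# Minimal sketch: multi-object detection with YOLOv5 via torch.hub.
# COCO-pretrained weights stand in for the food-trained models of the paper.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.25  # confidence threshold for kept detections

results = model("meal_photo.jpg")          # hypothetical image path
for *box, conf, cls in results.xyxy[0]:    # one row per detected object
    print(model.names[int(cls)], float(conf), [float(v) for v in box])
```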
|
|
|
Author |
Gisel Bastidas-Guacho; Patricio Moreno; Boris X. Vintimilla; Angel Sappa |
Title |
Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches |
Type |
Conference Article |
Year |
2023 |
Publication |
13th International Conference on Pattern Recognition Systems |
Abbreviated Journal |
|
Volume |
14234 |
Issue |
|
Pages |
25–36 |
Keywords |
|
Abstract |
Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion. |
Address |
Guayaquil; Ecuador; July 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICPRS |
Notes |
MSIAU |
Approved |
no |
Call Number |
Admin @ si @ BMV2023 |
Serial |
3932 |
Permanent link to this record |
|
|
|
Author |
Simone Zini; Alex Gomez-Villa; Marco Buzzelli; Bartlomiej Twardowski; Andrew D. Bagdanov; Joost Van de Weijer |
Title |
Planckian Jitter: countering the color-crippling effects of color jitter on self-supervised training |
Type |
Conference Article |
Year |
2023 |
Publication |
11th International Conference on Learning Representations |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
Several recent works on self-supervised learning are trained by mapping different augmentations of the same image to the same feature representation. The data augmentations used are of crucial importance to the quality of learned feature representations. In this paper, we analyze how the color jitter traditionally used in data augmentation negatively impacts the quality of the color features in learned feature representations. To address this problem, we propose a more realistic, physics-based color data augmentation – which we call Planckian Jitter – that creates realistic variations in chromaticity and produces a model robust to illumination changes that can be commonly observed in real life, while maintaining the ability to discriminate image content based on color information. Experiments confirm that such a representation is complementary to the representations learned with the currently used color jitter augmentation and that a simple concatenation leads to significant performance gains on a wide range of downstream datasets. In addition, we present a color sensitivity analysis that documents the impact of different training methods on model neurons and shows that the performance of the learned features is robust with respect to illuminant variations. |
Address |
Kigali; Rwanda; 1-5 May 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICLR |
Notes |
LAMP; 600.147; 611.008; 5300006 |
Approved |
no |
Call Number |
Admin @ si @ ZGB2023 |
Serial |
3820 |
Permanent link to this record |
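
The core idea in the record above is to replace ad hoc color jitter with chromaticity shifts that follow physically plausible illuminants. The sketch below is a simplification, not the authors' code (which is linked from the paper): it samples a black-body temperature and rescales the RGB channels by the relative Planck radiances at nominal channel wavelengths.

```python
# Simplified Planckian-style color augmentation: sample a black-body color
# temperature, evaluate Planck's law at nominal R/G/B wavelengths, and scale
# the image channels by the resulting relative illuminant.
import torch

def planck(wavelength_m, temp_k):
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23  # Planck, speed of light, Boltzmann
    return (2 * h * c**2 / wavelength_m**5
            / (torch.exp(h * c / (wavelength_m * k * temp_k)) - 1))

def planckian_jitter(img, t_min=3000.0, t_max=15000.0):
    """img: float tensor (3, H, W) in [0, 1], RGB order."""
    temp = torch.empty(1).uniform_(t_min, t_max)      # random illuminant temp
    wl = torch.tensor([610e-9, 550e-9, 465e-9])       # nominal R, G, B wavelengths
    rgb = planck(wl, temp)
    rgb = rgb / rgb[1]                                # keep the green channel fixed
    return (img * rgb.view(3, 1, 1)).clamp(0, 1)

augmented = planckian_jitter(torch.rand(3, 224, 224))
```

For production use, kornia ships a RandomPlanckianJitter augmentation derived from this work, which is likely preferable to a hand-rolled version like the above.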
|
|
|
Author |
Patricia Suarez; Dario Carpio; Angel Sappa |
Title |
A Deep Learning Based Approach for Synthesizing Realistic Depth Maps |
Type |
Conference Article |
Year |
2023 |
Publication |
22nd International Conference on Image Analysis and Processing |
Abbreviated Journal |
|
Volume |
14234 |
Issue |
|
Pages |
369–380 |
Keywords |
|
Abstract |
This paper presents a novel cycle generative adversarial network (CycleGAN) architecture for synthesizing high-quality depth maps from a given monocular image. The proposed architecture uses multiple loss functions, including cycle consistency, contrastive, identity, and least-squares losses, to enable the generation of realistic and high-fidelity depth maps, and it synthesizes depth maps from RGB images without requiring paired training data. Comparisons with several state-of-the-art approaches show that the proposed approach outperforms them in terms of both quantitative metrics and visual quality. |
Address |
Udine; Italy; September 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICIAP |
Notes |
MSIAU |
Approved |
no |
Call Number |
Admin @ si @ SCS2023a |
Serial |
3968 |
Permanent link to this record |
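
The abstract above names four loss terms. As a rough orientation, a generator-side objective for an RGB-to-depth CycleGAN combines them along the lines below; the weights, the omission of the contrastive term, and the assumption that both domains share a channel count are simplifications, not the paper's settings.

```python
# Rough sketch of a combined CycleGAN generator objective: least-squares
# adversarial loss + cycle-consistency + identity losses. G and F_inv are the
# two generators, D_depth the depth-domain discriminator (all hypothetical).
import torch
import torch.nn.functional as F

def generator_objective(G, F_inv, D_depth, rgb, depth, w_cyc=10.0, w_idt=5.0):
    fake_depth = G(rgb)
    score = D_depth(fake_depth)
    # Least-squares GAN term: push discriminator scores on fakes toward 1.
    adv = F.mse_loss(score, torch.ones_like(score))
    # Cycle consistency: rgb -> depth -> rgb should reconstruct the input.
    cyc = F.l1_loss(F_inv(fake_depth), rgb)
    # Identity: a real depth map fed to G should pass through unchanged.
    idt = F.l1_loss(G(depth), depth)
    return adv + w_cyc * cyc + w_idt * idt
```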
|
|
|
Author |
Pau Torras; Mohamed Ali Souibgui; Sanket Biswas; Alicia Fornes |
Title |
Segmentation-Free Alignment of Arbitrary Symbol Transcripts to Images |
Type |
Conference Article |
Year |
2023 |
Publication |
Document Analysis and Recognition – ICDAR 2023 Workshops |
Abbreviated Journal |
|
Volume |
14193 |
Issue |
|
Pages |
83-93 |
Keywords |
Historical Manuscripts; Symbol Alignment |
Abstract |
Developing arbitrary symbol recognition systems is a challenging endeavour. Even with content-agnostic architectures such as few-shot models, performance can be substantially improved by providing a number of well-annotated examples for training. In some contexts, transcripts of the symbols are available without any position information associated with them, which enables the use of line-level recognition architectures. A way of providing this position information to detection-based architectures is to find systems that can align the input symbols with the transcription. In this paper we discuss symbol alignment techniques that are suitable for low-data scenarios and provide insight into their strengths and weaknesses. In particular, we study the use of Connectionist Temporal Classification (CTC) models and attention-based sequence-to-sequence models, and compare them with the results obtained by a few-shot recognition system. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ TSS2023 |
Serial |
3850 |
Permanent link to this record |
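
Of the alignment techniques the record above studies, CTC-based alignment is the most self-contained to illustrate. The sketch below runs a best-path (Viterbi) pass over the blank-extended transcript, given per-frame log probabilities from a hypothetical trained CTC recognizer, to recover which frames (hence which image columns) belong to each symbol. It illustrates the general technique, not the paper's system.

```python
# Best-path CTC forced alignment: dynamic programming over the blank-extended
# label sequence, then backtracking to assign each frame to a transcript symbol.
import numpy as np

def ctc_align(log_probs, labels, blank=0):
    """log_probs: (T, C) per-frame log-probs; labels: known transcript ids.
    Returns one index into `labels` per frame, or -1 for blank frames."""
    ext = [blank]
    for s in labels:
        ext += [s, blank]                      # blank-extended sequence
    T, S = len(log_probs), len(ext)
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_probs[0, ext[0]]            # start in blank or first symbol
    dp[0, 1] = log_probs[0, ext[1]]
    for t in range(1, T):
        for j in range(S):
            cands = [j, j - 1]
            if j >= 2 and ext[j] != blank and ext[j] != ext[j - 2]:
                cands.append(j - 2)            # skip the blank between symbols
            prev = max((c for c in cands if c >= 0), key=lambda c: dp[t - 1, c])
            dp[t, j] = dp[t - 1, prev] + log_probs[t, ext[j]]
            back[t, j] = prev
    j = int(np.argmax(dp[-1, S - 2:]) + S - 2)  # end in last symbol or blank
    path = []
    for t in range(T - 1, -1, -1):             # backtrack the best path
        path.append(-1 if ext[j] == blank else (j - 1) // 2)
        j = back[t, j]
    return path[::-1]

frames = ctc_align(np.log(np.random.dirichlet(np.ones(5), size=20)), [1, 3, 2])
```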
|
|
|
Author |
Mickael Coustaty; Alicia Fornes |
Title |
Document Analysis and Recognition – ICDAR 2023 Workshops |
Type |
Book Whole |
Year |
2023 |
Publication |
Document Analysis and Recognition – ICDAR 2023 Workshops |
Abbreviated Journal |
|
Volume |
14194 |
Issue |
2 |
Pages |
|
Keywords |
|
Abstract |
|
Address |
San Jose; USA; August 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ CoF2023 |
Serial |
3852 |
Permanent link to this record |
|
|
|
Author |
Francesc Net; Marc Folia; Pep Casals; Lluis Gomez |
Title |
Transductive Learning for Near-Duplicate Image Detection in Scanned Photo Collections |
Type |
Conference Article |
Year |
2023 |
Publication |
17th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
14191 |
Issue |
|
Pages |
3-17 |
Keywords |
Image deduplication; Near-duplicate images detection; Transductive Learning; Photographic Archives; Deep Learning |
Abstract |
This paper presents a comparative study of near-duplicate image detection techniques in a real-world use case scenario, where a document management company is commissioned to manually annotate a collection of scanned photographs. Detecting duplicate and near-duplicate photographs can reduce the time spent on manual annotation by archivists. This real use case differs from laboratory settings as the deployment dataset is available in advance, allowing the use of transductive learning. We propose a transductive learning approach that leverages state-of-the-art deep learning architectures such as convolutional neural networks (CNNs) and Vision Transformers (ViTs). Our approach involves pre-training a deep neural network on a large dataset and then fine-tuning the network on the unlabeled target collection with self-supervised learning. The results show that the proposed approach outperforms the baseline methods in the task of near-duplicate image detection on the UKBench dataset and an in-house private dataset. |
Address |
San Jose; CA; USA; August 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ NFC2023 |
Serial |
3859 |
Permanent link to this record |
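
A minimal sketch of the retrieval step behind near-duplicate detection as described in the record above: embed every image in the target collection with a pretrained backbone and flag pairs whose cosine similarity exceeds a threshold. The paper's transductive step (self-supervised fine-tuning on the unlabeled collection before embedding) is not reproduced here, and the threshold is an assumption.

```python
# Near-duplicate candidate retrieval by cosine similarity of deep embeddings.
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
model.eval()

@torch.no_grad()
def embed(batch):                            # batch: (N, 3, 224, 224), normalized
    return torch.nn.functional.normalize(model(batch), dim=1)

collection = torch.randn(32, 3, 224, 224)    # stand-in for the scanned photos
z = embed(collection)
sim = z @ z.T                                # pairwise cosine similarities
sim.fill_diagonal_(-1)                       # ignore self-matches
pairs = (sim > 0.9).nonzero()                # 0.9 threshold is an assumption
print(pairs[pairs[:, 0] < pairs[:, 1]])      # report each candidate pair once
```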
|
|
|
Author |
Ayan Banerjee; Sanket Biswas; Josep Llados; Umapada Pal |
Title |
SwinDocSegmenter: An End-to-End Unified Domain Adaptive Transformer for Document Instance Segmentation |
Type |
Conference Article |
Year |
2023 |
Publication |
17th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
14187 |
Issue |
|
Pages |
307–325 |
Keywords |
|
Abstract |
Instance-level segmentation of documents consists in assigning a class-aware and instance-aware label to each pixel of the image. It is a key step in document parsing and understanding. In this paper, we present a unified transformer encoder-decoder architecture for end-to-end instance segmentation of complex layouts in document images. The method adopts contrastive training with mixed query selection for anchor initialization in the decoder. It then performs a dot product between the obtained query embeddings and the pixel embedding map (coming from the encoder) for semantic reasoning. Extensive experimentation on competitive benchmarks like PubLayNet, PRIMA, Historical Japanese (HJ), and TableBank demonstrates that our model with a SwinL backbone achieves better segmentation performance than the existing state-of-the-art approaches, with average precisions of 93.72, 54.39, 84.65 and 98.04, respectively, with under one billion parameters. The code is publicly available at: github.com/ayanban011/SwinDocSegmenter. |
Address |
San Jose; CA; USA; August 2023 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ BBL2023 |
Serial |
3893 |
Permanent link to this record |
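
The decoding step the abstract above describes (a dot product between query embeddings and the pixel embedding map) reduces to a single einsum. The sketch below uses illustrative shapes only; it is not the SwinDocSegmenter code, which the paper links.

```python
# Mask prediction by query-pixel dot product: each decoder query embedding is
# dotted with the encoder's per-pixel embedding map, giving one mask logit map
# per layout instance. Shapes and names are illustrative assumptions.
import torch

num_queries, dim, H, W = 100, 256, 128, 128
query_emb = torch.randn(num_queries, dim)     # one embedding per instance query
pixel_emb = torch.randn(dim, H, W)            # per-pixel embeddings from encoder

mask_logits = torch.einsum("qc,chw->qhw", query_emb, pixel_emb)
masks = mask_logits.sigmoid() > 0.5           # binary mask per instance
print(masks.shape)                            # torch.Size([100, 128, 128])
```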