Records |
Author |
Maya Dimitrova; Ch. Roumenin; Petia Radeva; David Rotger; Juan J. Villanueva |
Title |
Multimodal Intelligent System for Cardiovascular Diagnosis |
Type |
Miscellaneous |
Year |
2003 |
Publication |
Automation and Informatics, year XXXVII, no. 3 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ DRR2003 |
Serial |
374 |
Permanent link to this record |
|
|
|
Author |
Maya Dimitrova; Ch. Roumenin; Siya Lozanova; David Rotger; Petia Radeva |
Title |
An Interface System Based on Multimodal Principle for Cardiological Diagnosis Assistance |
Type |
Conference Article |
Year |
2007 |
Publication |
International Conference On Computer Systems And Technologies |
Abbreviated Journal |
|
Volume |
IIIB.4 |
Issue |
|
Pages |
1–6 |
Keywords |
|
Abstract |
|
Address |
Bulgaria |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CompSysTech’07 |
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ DRL2007 |
Serial |
833 |
Permanent link to this record |
|
|
|
Author |
Maya Dimitrova; I. Terziev; Petia Radeva; Juan J. Villanueva |
Title |
Java-Servlet Technology for Building New Web Document Classifiers |
Type |
Miscellaneous |
Year |
2004 |
Publication |
|
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
Varna (Bulgaria) |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ DTR2004 |
Serial |
476 |
Permanent link to this record |
|
|
|
Author |
Maya Dimitrova; N. Kushmerick; Petia Radeva; Juan J. Villanueva |
Title |
User Assessment of a Visual Genre Classifier |
Type |
Miscellaneous |
Year |
2003 |
Publication |
Proceedings of the 3rd IASTED Int. Conference Visualization, Imaging and Image Processing |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ DKR2003 |
Serial |
372 |
Permanent link to this record |
|
|
|
Author |
Maya Dimitrova; Petia Radeva; David Rotger; D. Boyadjiev; Juan J. Villanueva |
Title |
Advanced Cardiological Diagnosis via Intelligent Image Analysis |
Type |
Miscellaneous |
Year |
2004 |
Publication |
|
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
Varna (Bulgaria) |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ DRR2004 |
Serial |
477 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig |
Title |
Recognizing Food Places in Egocentric Photo-Streams Using Multi-Scale Atrous Convolutional Networks and Self-Attention Mechanism |
Type |
Journal Article |
Year |
2019 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
Volume |
7 |
Issue |
|
Pages |
39069-39082 |
Keywords |
|
Abstract |
Wearable sensors (e.g., lifelogging cameras) represent very useful tools to monitor people's daily habits and lifestyle. Wearable cameras are able to continuously capture different moments of the day of their wearers, their environment, and interactions with objects, people, and places reflecting their personal lifestyle. The food places where people eat, drink, and buy food, such as restaurants, bars, and supermarkets, can directly affect their daily dietary intake and behavior. Consequently, developing an automated monitoring system based on analyzing a person's food habits from daily recorded egocentric photo-streams of the food places can provide valuable means for people to improve their eating habits. This can be done by generating a detailed report of the time spent in specific food places by classifying the captured food place images into different groups. In this paper, we propose a self-attention mechanism with multi-scale atrous convolutional networks to generate discriminative features from image streams to recognize a predetermined set of food place categories. We apply our model on an egocentric food place dataset called “EgoFoodPlaces” that comprises 43,392 images captured by 16 individuals using a lifelogging camera. The proposed model achieved an overall classification accuracy of 80% on the “EgoFoodPlaces” dataset, outperforming baseline methods such as VGG16, ResNet50, and InceptionV3. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB; no menciona |
Approved |
no |
Call Number |
Admin @ si @ SRA2019 |
Serial |
3296 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Syeda Furruka Banu; Adel Saleh; Vivek Kumar Singh; Forhad U. H. Chowdhury; Saddam Abdulwahab; Santiago Romani; Petia Radeva; Domenec Puig |
Title |
SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks. |
Type |
Conference Article |
Year |
2018 |
Publication |
21st International Conference on Medical Image Computing & Computer Assisted Intervention |
Abbreviated Journal |
|
Volume |
2 |
Issue |
|
Pages |
21-29 |
Keywords |
|
Abstract |
Skin lesion segmentation (SLS) in dermoscopic images is a crucial task for automated diagnosis of melanoma. In this paper, we present a robust deep learning SLS model, called SLSDeep, which is represented as an encoder-decoder network. The encoder network is constructed of dilated residual layers, while the decoder consists of a pyramid pooling network followed by three convolution layers. Unlike traditional methods employing a cross-entropy loss, we investigated a loss function combining both Negative Log Likelihood (NLL) and End Point Error (EPE) to accurately segment the melanoma regions with sharp boundaries. The robustness of the proposed model was evaluated on two public databases: ISBI 2016 and 2017 for the skin lesion analysis towards melanoma detection challenge. The proposed model outperforms the state-of-the-art methods in terms of segmentation accuracy. Moreover, it is capable of segmenting more than 100 images of size 384x384 per second on a recent GPU. |
Address |
Granada; Spain; September 2018 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
MICCAI |
Notes |
MILAB; no proj |
Approved |
no |
Call Number |
Admin @ si @ SRA2018 |
Serial |
3112 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Vivek Kumar Singh; Syeda Furruka Banu; Forhad U H Chowdhury; Kabir Ahmed Choudhury; Sylvie Chambon; Petia Radeva; Domenec Puig; Mohamed Abdel-Nasser |
Title |
SLSNet: Skin lesion segmentation using a lightweight generative adversarial network |
Type |
Journal Article |
Year |
2021 |
Publication |
Expert Systems With Applications |
Abbreviated Journal |
ESWA |
Volume |
183 |
Issue |
|
Pages |
115433 |
Keywords |
|
Abstract |
The determination of precise skin lesion boundaries in dermoscopic images using automated methods faces many challenges, most importantly the presence of hair, inconspicuous lesion edges and low contrast in dermoscopic images, and variability in the color, texture and shapes of skin lesions. Existing deep learning-based skin lesion segmentation algorithms are expensive in terms of computational time and memory. Consequently, running such segmentation algorithms requires a powerful GPU and high bandwidth memory, which are not available in dermoscopy devices. Thus, this article aims to achieve precise skin lesion segmentation with minimal resources through a lightweight, efficient generative adversarial network (GAN) model called SLSNet, which combines 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms with a GAN model. The 1-D kernel factorized network reduces the computational cost of 2D filtering. The position and channel attention modules enhance the discriminative ability between the lesion and non-lesion feature representations in the spatial and channel dimensions, respectively. A multiscale block is also used to aggregate the coarse-to-fine features of input skin images and reduce the effect of artifacts. SLSNet is evaluated on two publicly available datasets: ISBI 2017 and ISIC 2018. Although SLSNet has only 2.35 million parameters, the experimental results demonstrate that it achieves segmentation results on a par with the state-of-the-art skin lesion segmentation methods, with an accuracy of 97.61%, and Dice and Jaccard similarity coefficients of 90.63% and 81.98%, respectively. SLSNet can run at more than 110 frames per second (FPS) on a single GTX 1080 Ti GPU, which is faster than well-known deep learning-based image segmentation models, such as FCN. Therefore, SLSNet can be used for practical dermoscopic applications. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB; no proj |
Approved |
no |
Call Number |
Admin @ si @ SRA2021 |
Serial |
3633 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig |
Title |
MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams |
Type |
Conference Article |
Year |
2018 |
Publication |
European Conference on Computer Vision workshops |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
423-433 |
Keywords |
|
Abstract |
A first-person (wearable) camera continually captures unscripted interactions of the camera user with objects, people, and scenes, reflecting his or her personal and relational tendencies. One such preference is people's interaction with food events. The regulation of food intake and its duration is of great importance for protecting against diseases. Consequently, this work aims to develop a smart model that is able to determine the recurrence of a person's visits to food places during a day. This model is based on a deep end-to-end model for automatic food place recognition by analyzing egocentric photo-streams. In this paper, we apply multi-scale atrous convolution networks to extract the key features related to food places from the input images. The proposed model is evaluated on an in-house private dataset called “EgoFoodPlaces”. Experimental results show promising performance on food place classification in egocentric photo-streams. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ECCVW |
Notes |
MILAB; no menciona |
Approved |
no |
Call Number |
Admin @ si @ SRR2018b |
Serial |
3185 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Mohamed Abdel-Nasser; Vivek Kumar Singh; Syeda Furruka Banu; Farhan Akram; Forhad U. H. Chowdhury; Kabir Ahmed Choudhury; Sylvie Chambon; Petia Radeva; Domenec Puig |
Title |
MobileGAN: Skin Lesion Segmentation Using a Lightweight Generative Adversarial Network |
Type |
Miscellaneous |
Year |
2019 |
Publication |
Arxiv |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
CoRR abs/1907.00856
Skin lesion segmentation in dermoscopic images is a challenge due to their blurry and irregular boundaries. Most segmentation approaches based on deep learning are time- and memory-consuming due to their hundreds of millions of parameters. Consequently, it is difficult to apply them to real dermatoscope devices with limited GPU and memory resources. In this paper, we propose a lightweight and efficient Generative Adversarial Network (GAN) model, called MobileGAN, for skin lesion segmentation. More precisely, MobileGAN combines 1D non-bottleneck factorization networks with position and channel attention modules in a GAN model. The proposed model is evaluated on the test dataset of the ISBI 2017 challenge and the validation dataset of the ISIC 2018 challenge. Although the proposed network has only 2.35 million parameters, it is still comparable with the state-of-the-art. The experimental results show that our MobileGAN obtains comparable performance with an accuracy of 97.61%. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB; no menciona |
Approved |
no |
Call Number |
Admin @ si @ MRA2019 |
Serial |
3384 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Antonio Moreno; Petia Radeva; Domenec Puig |
Title |
CuisineNet: Food Attributes Classification using Multi-scale Convolution Network. |
Type |
Miscellaneous |
Year |
2018 |
Publication |
Arxiv |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
The diversity of food and its attributes represents the culinary habits of people from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used to weight the features resulting from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to the multi-labeled classes of the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 scores on the validation and test sets, respectively, outperforming the state-of-the-art models. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB; no proj |
Approved |
no |
Call Number |
Admin @ si @ KJR2018 |
Serial |
3235 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Petia Radeva; Domenec Puig |
Title |
CuisineNet: Food Attributes Classification using Multi-scale Convolution Network |
Type |
Conference Article |
Year |
2018 |
Publication |
21st International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
365-372 |
Keywords |
|
Abstract |
The diversity of food and its attributes represents the culinary habits of people from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used to weight the features resulting from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to the multi-labeled classes of the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 scores on the validation and test sets, respectively, outperforming the state-of-the-art models. |
Address |
Roses; Catalonia; October 2018 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CCIA |
Notes |
MILAB; no menciona |
Approved |
no |
Call Number |
Admin @ si @ SJR2018 |
Serial |
3113 |
Permanent link to this record |
|
|
|
Author |
Md. Mostafa Kamal Sarker; Syeda Furruka Banu; Hatem A. Rashwan; Mohamed Abdel-Nasser; Vivek Kumar Singh; Sylvie Chambon; Petia Radeva; Domenec Puig |
Title |
Food Places Classification in Egocentric Images Using Siamese Neural Networks |
Type |
Conference Article |
Year |
2019 |
Publication |
22nd International Conference of the Catalan Association of Artificial Intelligence |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
145-151 |
Keywords |
|
Abstract |
Wearable cameras have become more popular in recent years for capturing the unscripted moments of the first person, which help to analyze the user's lifestyle. In this work, we aim to recognize the food-related places visited during a day in egocentric images in order to identify the daily food patterns of the first person. Such a system can thus help users improve their eating behavior and protect them against food-related diseases. In this paper, we use Siamese Neural Networks to learn the similarity between images from corresponding inputs for one-shot food place classification. We tested our proposed method on the “MiniEgoFoodPlaces” dataset with 15 food-related places. The proposed Siamese Neural Network model with MobileNet achieved overall classification accuracies of 76.74% and 77.53% on the validation and test sets of the “MiniEgoFoodPlaces” dataset, respectively, outperforming base models such as ResNet50, InceptionV3, and InceptionResNetV2. |
Address |
Illes Balears; October 2019 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CCIA |
Notes |
MILAB; no proj |
Approved |
no |
Call Number |
Admin @ si @ SBR2019 |
Serial |
3368 |
Permanent link to this record |
|
|
|
Author |
Mehdi Mirza-Mohammadi; Sergio Escalera; Petia Radeva |
Title |
Contextual-Guided Bag-of-Visual-Words Model for Multi-class Object Categorization |
Type |
Conference Article |
Year |
2009 |
Publication |
13th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
Volume |
5702 |
Issue |
|
Pages |
748–756 |
Keywords |
|
Abstract |
The bag-of-words model (BOW) is inspired by the text classification problem, where a document is represented by an unsorted set of contained words. Analogously, in the object categorization problem, an image is represented by an unsorted set of discrete visual words (BOVW). In these models, relations among visual words are considered only after dictionary construction. However, nearby object regions can have distant descriptions in the feature space, being grouped as different visual words. In this paper, we present a method for considering geometrical information of visual words in the dictionary construction step. Object interest regions are obtained by means of the Harris-Affine detector and then described using the SIFT descriptor. Afterward, a contextual space and a feature space are defined, and a merging process is used to fuse feature words based on their proximity in the contextual space. Moreover, we use the Error Correcting Output Codes framework to learn the new dictionary in order to perform multi-class classification. Results show significant classification improvements when spatial information is taken into account in the dictionary construction step. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
0302-9743 |
ISBN |
978-3-642-03766-5 |
Medium |
|
Area |
|
Expedition |
|
Conference |
CAIP |
Notes |
HuPBA; MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ MEP2009 |
Serial |
1185 |
Permanent link to this record |
|
|
|
Author |
Meritxell Vinyals; Arnau Ramisa; Ricardo Toledo |
Title |
An Evaluation of an Object Recognition Schema using Multiple Region Detectors |
Type |
Book Chapter |
Year |
2007 |
Publication |
Artificial Intelligence Research and Development, 163:213–222, ISBN: 978-1-58603-798-7, Proceedings of the 10th International Conference of the ACIA (CCIA’07) |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS |
Approved |
no |
Call Number |
Admin @ si @ VRT2007 |
Serial |
898 |
Permanent link to this record |