Records
Author Egils Avots; Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Baro; Paul Pallin; Gholamreza Anbarjafari
Title From 2D to 3D geodesic-based garment matching Type Journal Article
Year 2019 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 78 Issue 18 Pages 25829–25853
Keywords Shape matching; Geodesic distance; Texture mapping; RGBD image processing; Gaussian mixture model
Abstract A new approach for 2D to 3D garment retexturing is proposed based on Gaussian mixture models and thin plate splines (TPS). An automatically segmented garment of an individual is matched to a new source garment and rendered, resulting in augmented images in which the target garment has been retextured using the texture of the source garment. We divide the problem into two stages: garment boundary matching based on Gaussian mixture models, followed by interpolation of inner points using surface topology extracted through geodesic paths, which leads to more realistic results than standard approaches. We evaluated the system quantitatively by root mean square (RMS) error and qualitatively using the mean opinion score (MOS), showing the benefits of the proposed methodology on our gathered dataset. (A TPS warping sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; ISE; 600.098; 600.119; 602.133 Approved no
Call Number Admin @ si @ AME2019 Serial 3317
Permanent link to this record
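A minimal sketch of the boundary-to-interior TPS warp described in the record above, assuming SciPy; the matched boundary points below are random placeholders standing in for the GMM-based boundary matching, and the geodesic surface-topology refinement is omitted.

```python
# Hedged sketch: fit a thin-plate-spline warp from matched garment boundary
# points, then map interior texture points through it. Placeholder data only.
import numpy as np
from scipy.interpolate import RBFInterpolator  # TPS = RBF with r^2 log r kernel

rng = np.random.default_rng(0)

# Hypothetical boundary correspondences: source garment -> target garment.
src_boundary = rng.uniform(0.0, 1.0, size=(40, 2))
tgt_boundary = src_boundary + 0.05 * rng.standard_normal((40, 2))

tps = RBFInterpolator(src_boundary, tgt_boundary,
                      kernel="thin_plate_spline", smoothing=0.0)

# Interior points of the source garment carried into the target garment.
inner_pts = rng.uniform(0.2, 0.8, size=(200, 2))
warped = tps(inner_pts)
print(warped.shape)  # (200, 2)
```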
 

 
Author Andre Litvin; Kamal Nasrollahi; Sergio Escalera; Cagri Ozcinar; Thomas B. Moeslund; Gholamreza Anbarjafari
Title A Novel Deep Network Architecture for Reconstructing RGB Facial Images from Thermal for Face Recognition Type Journal Article
Year 2019 Publication Multimedia Tools and Applications Abbreviated Journal MTAP
Volume 78 Issue 18 Pages 25259–25271
Keywords Fully convolutional networks; FusionNet; Thermal imaging; Face recognition
Abstract This work proposes a fully convolutional network architecture for RGB face image generation from a given input thermal face image, to be applied in face recognition scenarios. The proposed method is based on the FusionNet architecture and increases robustness against overfitting using dropout after bridge connections, randomised leaky ReLUs (RReLUs), and orthogonal regularization. Furthermore, we propose to use a decoding block with resize convolution instead of transposed convolution to improve final RGB face image generation. To validate the proposed network architecture, we train a face classifier and compare its face recognition rate on RGB images reconstructed by the proposed architecture against images reconstructed by the original FusionNet, as well as against the original RGB images. As a result, we introduce a new architecture that leads to a more accurate network. (A decoder-block sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; no menciona Approved no
Call Number Admin @ si @ LNE2019 Serial 3318
Permanent link to this record
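A minimal sketch, assuming PyTorch, of the decoding block with resize convolution (fixed upsampling followed by a stride-1 convolution) that the record above proposes in place of transposed convolution; the channel widths and the placement of the RReLU and dropout are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ResizeConvBlock(nn.Module):
    """Decoder block: upsample then convolve, avoiding the checkerboard
    artifacts that transposed convolutions can introduce."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),   # resize step
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.RReLU(),        # randomized leaky ReLU, as in the abstract
            nn.Dropout2d(0.2), # dropout for robustness against overfitting
        )

    def forward(self, x):
        return self.block(x)

x = torch.randn(1, 64, 16, 16)
print(ResizeConvBlock(64, 32)(x).shape)  # torch.Size([1, 32, 32, 32])
```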
 

 
Author Ikechukwu Ofodile; Ahmed Helmi; Albert Clapes; Egils Avots; Kerttu Maria Peensoo; Sandhra Mirella Valdma; Andreas Valdmann; Heli Valtna Lukner; Sergey Omelkov; Sergio Escalera; Cagri Ozcinar; Gholamreza Anbarjafari
Title Action recognition using single-pixel time-of-flight detection Type Journal Article
Year 2019 Publication Entropy Abbreviated Journal ENTROPY
Volume 21 Issue 4 Pages 414
Keywords single pixel single photon image acquisition; time-of-flight; action recognition
Abstract Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find a method that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. The data trace recorded for one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves, on average, 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network. (A recurrent-classifier sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ OHC2019 Serial 3319
Permanent link to this record
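A minimal sketch, assuming PyTorch, of a recurrent classifier over single-pixel time-of-flight traces like those described above; the tensor shapes, GRU width and five-class head are hypothetical choices for illustration only.

```python
import torch
import torch.nn as nn

class TraceClassifier(nn.Module):
    """GRU over a sequence of 1-D voltage arrays -> action logits."""
    def __init__(self, bins_per_pulse=256, hidden=128, n_actions=5):
        super().__init__()
        self.rnn = nn.GRU(bins_per_pulse, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):        # x: (batch, n_pulses, bins_per_pulse)
        _, h = self.rnn(x)       # h: (1, batch, hidden), last hidden state
        return self.head(h[-1])  # logits for the five actions

logits = TraceClassifier()(torch.randn(4, 100, 256))
print(logits.shape)              # torch.Size([4, 5])
```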
 

 
Author Parichehr Behjati Ardakani; Pau Rodriguez; Armin Mehri; Isabelle Hupont; Carles Fernandez; Jordi Gonzalez
Title OverNet: Lightweight Multi-Scale Super-Resolution with Overscaling Network Type Conference Article
Year 2021 Publication IEEE Winter Conference on Applications of Computer Vision Abbreviated Journal
Volume Issue Pages 2693-2702
Keywords
Abstract Super-resolution (SR) has achieved great success due to the development of deep convolutional neural networks (CNNs). However, as the depth and width of the networks increase, CNN-based SR methods have faced the challenge of computational complexity in practice. Moreover, most SR methods train a dedicated model for each target resolution, losing generality and increasing memory requirements. To address these limitations, we introduce OverNet, a deep but lightweight convolutional network that solves single image super-resolution (SISR) at arbitrary scale factors with a single model. We make the following contributions: first, we introduce a lightweight feature extractor that enforces efficient reuse of information through a novel recursive structure of skip and dense connections. Second, to maximize the performance of the feature extractor, we propose a model-agnostic reconstruction module that generates accurate high-resolution images from overscaled feature maps obtained from any SR architecture. Third, we introduce a multi-scale loss function to achieve generalization across scales. Experiments show that our proposal outperforms previous state-of-the-art approaches in standard benchmarks, while maintaining relatively low computation and memory requirements. (A multi-scale loss sketch follows this record.)
Address Virtual; January 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference WACV
Notes ISE; 600.119; 600.098 Approved no
Call Number Admin @ si @ BRM2021 Serial 3512
Permanent link to this record
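A minimal sketch, assuming PyTorch, of a multi-scale L1 loss of the kind the record above uses to generalize across scale factors; `model` is a hypothetical network accepting a scale argument and `hr_targets` a hypothetical mapping from scale to ground truth, so OverNet's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def multi_scale_l1(model, lr_img, hr_targets, scales=(2, 3, 4)):
    """Average L1 reconstruction loss over several scale factors."""
    losses = [F.l1_loss(model(lr_img, scale=s), hr_targets[s])
              for s in scales]
    return torch.stack(losses).mean()
```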
 

 
Author Hamed H. Aghdam; Abel Gonzalez-Garcia; Joost Van de Weijer; Antonio Lopez
Title Active Learning for Deep Detection Neural Networks Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 3672-3680
Keywords
Abstract The cost of drawing object bounding boxes (i.e., labeling) for millions of images is prohibitively high. For instance, labeling pedestrians in a regular urban image could take 35 seconds on average. Active learning aims to reduce the cost of labeling by selecting only those images that are informative for improving the detection network's accuracy. In this paper, we propose a method to perform active learning of object detectors based on convolutional neural networks. We propose a new image-level scoring process to rank unlabeled images for their automatic selection, which clearly outperforms classical scores. The proposed method can be applied to videos and to sets of still images; in the former case, temporal selection rules can complement our scoring process. As a relevant use case, we extensively study the performance of our method on the task of pedestrian detection. Overall, the experiments show that the proposed method performs better than random selection. (A scoring-and-ranking sketch follows this record.)
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ADAS; LAMP; 600.124; 600.109; 600.141; 600.120; 600.118 Approved no
Call Number Admin @ si @ AGW2019 Serial 3321
Permanent link to this record
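A minimal sketch of one plausible image-level scoring scheme for the active-learning loop described above; the paper defines its own scoring process, so the mean per-pixel binary entropy used here is a labeled stand-in, not the method itself.

```python
import numpy as np

def image_score(prob_map: np.ndarray, eps: float = 1e-7) -> float:
    """Mean binary entropy of a detector's per-pixel confidence map."""
    p = np.clip(prob_map, eps, 1.0 - eps)
    entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
    return float(entropy.mean())

def select_for_labeling(prob_maps, budget):
    """Rank unlabeled images by score and pick the most informative ones."""
    scores = np.array([image_score(m) for m in prob_maps])
    return np.argsort(scores)[::-1][:budget]   # most uncertain first

maps = [np.random.rand(240, 320) for _ in range(10)]  # dummy confidence maps
print(select_for_labeling(maps, budget=3))
```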
 

 
Author Felipe Codevilla; Eder Santana; Antonio Lopez; Adrien Gaidon
Title Exploring the Limitations of Behavior Cloning for Autonomous Driving Type Conference Article
Year 2019 Publication 18th IEEE International Conference on Computer Vision Abbreviated Journal
Volume Issue Pages 9328-9337
Keywords
Abstract Driving requires reacting to a wide variety of complex environment conditions and agent behaviors. Explicitly modeling each possible scenario is unrealistic. In contrast, imitation learning can, in theory, leverage data from large fleets of human-driven cars. Behavior cloning in particular has been successfully used to learn simple visuomotor policies end-to-end, but scaling to the full spectrum of driving behaviors remains an unsolved problem. In this paper, we propose a new benchmark to experimentally investigate the scalability and limitations of behavior cloning. We show that behavior cloning leads to state-of-the-art results, executing complex lateral and longitudinal maneuvers, even in unseen environments, without being explicitly programmed to do so. However, we confirm some limitations of the behavior cloning approach: some well-known limitations (e.g., dataset bias and overfitting), new generalization issues (e.g., dynamic objects and the lack of causal modeling), and training instabilities, all requiring further research before behavior cloning can graduate to real-world driving. The code, dataset, benchmark, and agent studied in this paper can be found at GitHub. (A behavior-cloning training sketch follows this record.)
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCV
Notes ADAS; 600.124; 600.118 Approved no
Call Number Admin @ si @ CSL2019 Serial 3322
Permanent link to this record
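A minimal sketch, assuming PyTorch, of the vanilla behavior-cloning objective the record above studies: regress demonstrated controls from camera frames. The toy network, the three-dimensional control vector and the L1 loss are illustrative assumptions, not the benchmark's agent.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

policy = nn.Sequential(                  # toy end-to-end visuomotor policy
    nn.Conv2d(3, 16, 5, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=4), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(3),      # -> steer, throttle, brake
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

images = torch.randn(8, 3, 88, 200)      # batch of camera frames
expert = torch.randn(8, 3)               # demonstrated control targets

loss = F.l1_loss(policy(images), expert) # imitate the expert's controls
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```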
 

 
Author Zhengying Liu; Isabelle Guyon; Julio C. S. Jacques Junior; Meysam Madadi; Sergio Escalera; Adrien Pavao; Hugo Jair Escalante; Wei-Wei Tu; Zhen Xu; Sebastien Treguer
Title AutoCV Challenge Design and Baseline Results Type Conference Article
Year 2019 Publication La Conference sur l’Apprentissage Automatique Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We present the design and beta tests of a new machine learning challenge called AutoCV (for Automated Computer Vision), which is the first event in a series of challenges we are planning on the theme of Automated Deep Learning. We target applications for which deep learning methods have had great success in the past few years, with the aim of pushing the state of the art in fully automated methods for designing the architecture of neural networks and training them without any human intervention. The tasks are restricted to multi-label image classification problems, from domains including medical, aerial, people, object, and handwriting imaging. Thus the type of images varies greatly in scale, texture, and structure. Raw data are provided (no features extracted), but all datasets are formatted in a uniform tensor manner (although images may have fixed or variable sizes within a dataset). The participants' code will be blind tested on a challenge platform in a controlled manner, with restrictions on training and test time and on memory. The challenge is part of the official selection of IJCNN 2019.
Address Toulouse; France; July 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ LGJ2019 Serial 3323
Permanent link to this record
 

 
Author Reza Azad; Maryam Asadi Aghbolaghi; Mahmood Fathy; Sergio Escalera
Title Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions Type Conference Article
Year 2019 Publication Visual Recognition for Medical Images workshop Abbreviated Journal
Volume Issue Pages 406-415
Keywords
Abstract In recent years, deep learning-based networks have achieved state-of-the-art performance in medical image segmentation. Among the existing networks, U-Net has been successfully applied to medical image segmentation. In this paper, we propose an extension of U-Net, the Bi-directional ConvLSTM U-Net with Densely connected convolutions (BCDU-Net), for medical image segmentation, in which we take full advantage of U-Net, bi-directional ConvLSTM (BConvLSTM) and the mechanism of dense convolutions. Instead of a simple concatenation in the skip connection of U-Net, we employ BConvLSTM to combine the feature maps extracted from the corresponding encoding path and the previous decoding up-convolutional layer in a non-linear way. To strengthen feature propagation and encourage feature reuse, we use densely connected convolutions in the last convolutional layer of the encoding path. Finally, we accelerate the convergence speed of the proposed network by employing batch normalization (BN). The proposed model is evaluated on three datasets: retinal blood vessel segmentation, skin lesion segmentation, and lung nodule segmentation, achieving state-of-the-art performance. (A BConvLSTM fusion sketch follows this record.)
Address Seoul; Korea; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICCVW
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ AAF2019 Serial 3324
Permanent link to this record
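A minimal sketch, assuming PyTorch, of the non-linear skip fusion described above: the encoder feature map and the decoder's up-convolved map are treated as a two-step sequence and combined by a bidirectional ConvLSTM. The cell below is a compact generic implementation, and the hidden width is an assumption.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single convolutional LSTM cell (all gates from one 3x3 conv)."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, h, c):
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

def bconvlstm_fuse(seq, fw, bw):
    """Fuse [encoder_skip, decoder_upconv] with a bidirectional ConvLSTM,
    mimicking BCDU-Net's replacement for plain skip concatenation."""
    b, _, hgt, wid = seq[0].shape
    h1 = seq[0].new_zeros(b, fw.hid_ch, hgt, wid); c1 = h1.clone()
    h2 = h1.clone(); c2 = h1.clone()
    for x in seq:                    # forward direction over the "time" axis
        h1, c1 = fw(x, h1, c1)
    for x in reversed(seq):          # backward direction
        h2, c2 = bw(x, h2, c2)
    return torch.cat([h1, h2], dim=1)

enc = torch.randn(1, 64, 32, 32)     # encoder skip-connection features
dec = torch.randn(1, 64, 32, 32)     # decoder up-convolution output
out = bconvlstm_fuse([enc, dec], ConvLSTMCell(64, 32), ConvLSTMCell(64, 32))
print(out.shape)                     # torch.Size([1, 64, 32, 32])
```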
 

 
Author Maria Ines Torres; Javier Mikel Olaso; Cesar Montenegro; Roberto Santana; A. Vazquez; Raquel Justo; J. A. Lozano; Stephan Schlögl; Gerard Chollet; Nazim Dugan; M. Irvine; N. Glackin; C. Pickard; Anna Esposito; Gennaro Cordasco; Alda Troncone; Dijana Petrovska Delacretaz; Aymen Mtibaa; Mohamed Amine Hmani; M. S. Korsnes; L. J. Martinussen; Sergio Escalera; C. Palmero Cantariño; Olivier Deroo; O. Gordeeva; Jofre Tenorio Laranga; E. Gonzalez Fraile; Begoña Fernandez Ruanova; A. Gonzalez Pinto
Title The EMPATHIC project: mid-term achievements Type Conference Article
Year 2019 Publication 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments Abbreviated Journal
Volume Issue Pages 629-638
Keywords
Abstract
Address Rhodes; Greece; June 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference PETRA
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ TOM2019 Serial 3325
Permanent link to this record
 

 
Author Daniel Sanchez; Meysam Madadi; Marc Oliu; Sergio Escalera
Title Multi-task human analysis in still images: 2D/3D pose, depth map, and multi-part segmentation Type Conference Article
Year 2019 Publication 14th IEEE International Conference on Automatic Face and Gesture Recognition Abbreviated Journal
Volume Issue Pages
Keywords
Abstract While many individual tasks in the domain of human analysis have recently received an accuracy boost from deep learning approaches, multi-task learning has mostly been ignored due to a lack of data. New synthetic datasets are being released, filling this gap with synthetically generated data. In this work, we analyze four related human analysis tasks in still images in a multi-task scenario by leveraging such datasets. Specifically, we study the correlation of 2D/3D pose estimation, body part segmentation and full-body depth estimation. These tasks are learned via the well-known Stacked Hourglass module such that each of the task-specific streams shares information with the others. The main goal is to analyze how training these four related tasks together can benefit each individual task for better generalization. Results on the newly released SURREAL dataset show that all four tasks benefit from the multi-task approach, but with different combinations of tasks: while combining all four tasks improves 2D pose estimation the most, 2D pose improves neither 3D pose nor full-body depth estimation. On the other hand, 2D part segmentation can benefit from 2D pose but not from 3D pose. In all cases, as expected, the maximum improvement is achieved on those human body parts that show more variability in terms of spatial distribution, appearance and shape, e.g. wrists and ankles. (A multi-task loss sketch follows this record.)
Address Lille; France; May 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference FG
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ SMO2019 Serial 3326
Permanent link to this record
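A minimal sketch, assuming PyTorch, of combining the four task losses named above under a shared backbone; the uniform weights and per-task loss choices are placeholder assumptions, since in the paper the coupling comes mainly from shared Stacked Hourglass streams rather than only from the loss.

```python
import torch.nn.functional as F

def multi_task_loss(preds, targets, weights=None):
    """Weighted sum of the four human-analysis task losses."""
    fns = {
        "pose2d": F.mse_loss,        # 2D heatmap regression
        "pose3d": F.mse_loss,        # 3D joint regression
        "parts":  F.cross_entropy,   # per-pixel body-part labels
        "depth":  F.l1_loss,         # full-body depth map
    }
    weights = weights or {name: 1.0 for name in fns}
    return sum(weights[n] * fns[n](preds[n], targets[n]) for n in fns)
```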
 

 
Author Sergio Escalera; Marti Soler; Stephane Ayache; Umut Guçlu; Jun Wan; Meysam Madadi; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon
Title ChaLearn Looking at People: Inpainting and Denoising Challenges Type Book Chapter
Year 2019 Publication The Springer Series on Challenges in Machine Learning Abbreviated Journal
Volume Issue Pages 23-44
Keywords
Abstract Dealing with incomplete information is a well-studied problem in the context of machine learning and computational intelligence. However, in the context of computer vision, the problem has only been studied in specific scenarios (e.g., certain types of occlusions in specific types of images), although it is common to have incomplete information in visual data. This chapter describes the design of an academic competition focusing on inpainting of images and video sequences that was part of the competition program of WCCI 2018 and had a satellite event co-located with ECCV 2018. The ChaLearn Looking at People Inpainting Challenge aimed at advancing the state of the art in visual inpainting by promoting the development of methods for recovering missing and occluded information from images and video. Three tracks were proposed in which visual inpainting might be helpful but still challenging: human body pose estimation, text overlay removal and fingerprint denoising. This chapter describes the design of the challenge, which includes the release of three novel datasets, and the description of evaluation metrics, baselines and evaluation protocol. The results of the challenge are analyzed and discussed in detail, and conclusions derived from this event are outlined.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ ESA2019 Serial 3327
Permanent link to this record
 

 
Author Sergio Escalera; Ralf Herbrich
Title The NeurIPS’18 Competition: From Machine Learning to Intelligent Conversations Type Book Whole
Year 2020 Publication The Springer Series on Challenges in Machine Learning Abbreviated Journal
Volume Issue Pages
Keywords
Abstract This volume presents the results of the Neural Information Processing Systems Competition track at the 2018 NeurIPS conference. The competition follows the same format as the 2017 NIPS competition track. Out of 21 submitted proposals, eight competition proposals were selected, spanning the areas of robotics, health, computer vision, natural language processing, systems and physics. Competitions have become an integral part of advancing the state of the art in artificial intelligence (AI). They exhibit one important difference from benchmarks: competitions test a system end-to-end rather than evaluating only a single component; they assess the practicability of an algorithmic solution in addition to assessing feasibility.
Address
Corporate Author Thesis
Publisher Place of Publication Editor Sergio Escalera; Ralf Herbrich
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2520-1328 ISBN 978-3-030-29134-1 Medium
Area Expedition Conference
Notes HuPBA; no menciona Approved no
Call Number Admin @ si @ HeE2020 Serial 3328
Permanent link to this record
 

 
Author Ajian Liu; Jun Wan; Sergio Escalera; Hugo Jair Escalante; Zichang Tan; Qi Yuan; Kai Wang; Chi Lin; Guodong Guo; Isabelle Guyon; Stan Z. Li
Title Multi-Modal Face Anti-Spoofing Attack Detection Challenge at CVPR2019 Type Conference Article
Year 2019 Publication IEEE International Conference on Computer Vision and Pattern Recognition-Workshop Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Anti-spoofing attack detection is critical to guarantee the security of face-based authentication and facial analysis systems. Recently, a multi-modal face anti-spoofing dataset, CASIA-SURF, has been released with the goal of boosting research on this important topic. CASIA-SURF is the largest public dataset for facial anti-spoofing attack detection in terms of both diversity and modalities: it comprises 1,000 subjects and 21,000 video samples. We organized a challenge around this novel resource to boost research in the subject. The ChaLearn LAP multi-modal face anti-spoofing attack detection challenge attracted more than 300 teams for the development phase, with a total of 13 teams qualifying for the final round. This paper presents an overview of the challenge, including its design, evaluation protocol and a summary of results. We analyze the top-ranked solutions and draw conclusions derived from the competition. In addition, we outline future work directions.
Address California; June 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ LWE2019 Serial 3329
Permanent link to this record
 

 
Author Shifeng Zhang; Xiaobo Wang; Ajian Liu; Chenxu Zhao; Jun Wan; Sergio Escalera; Hailin Shi; Zezheng Wang; Stan Z. Li
Title A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing Type Conference Article
Year 2019 Publication 32nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 919-928
Keywords
Abstract Face anti-spoofing is essential to prevent face recognition systems from security breaches. Much of the progress in recent years has been driven by the availability of face anti-spoofing benchmark datasets. However, existing face anti-spoofing benchmarks have a limited number of subjects (≤170) and modalities (≤2), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and visual modalities. Specifically, it consists of 1,000 subjects with 21,000 videos, and each sample has 3 modalities (i.e., RGB, Depth and IR). We also provide a measurement set, evaluation protocol and training/validation/testing subsets, developing a new benchmark for face anti-spoofing. Moreover, we present a new multi-modal fusion method as a baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at https://sites.google.com/qq.com/chalearnfacespoofingattackdete/. (A channel re-weighting sketch follows this record.)
Address California; June 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ ZWL2019 Serial 3331
Permanent link to this record
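A minimal sketch, assuming PyTorch, of squeeze-and-excitation-style channel re-weighting applied per modality before fusing the RGB, depth and IR streams, in the spirit of the baseline described above; the reduction ratio and fusion by concatenation are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    """Learn per-channel weights from global context and rescale features,
    emphasizing informative channels and suppressing less useful ones."""
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # global context
            nn.Linear(ch, ch // reduction), nn.ReLU(),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        w = self.fc(x).unsqueeze(-1).unsqueeze(-1)
        return x * w

streams = [torch.randn(2, 32, 28, 28) for _ in range(3)]  # RGB, depth, IR
reweight = nn.ModuleList(ChannelReweight(32) for _ in streams)
fused = torch.cat([m(s) for m, s in zip(reweight, streams)], dim=1)
print(fused.shape)                           # torch.Size([2, 96, 28, 28])
```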
 

 
Author Ciprian Corneanu; Meysam Madadi; Sergio Escalera; Aleix M. Martinez
Title What does it mean to learn in deep networks? And, how does one detect adversarial attacks? Type Conference Article
Year 2019 Publication 32nd IEEE Conference on Computer Vision and Pattern Recognition Abbreviated Journal
Volume Issue Pages 4752-4761
Keywords
Abstract The flexibility and high accuracy of Deep Neural Networks (DNNs) have transformed computer vision. But the fact that we do not know when a specific DNN will work and when it will fail has resulted in a lack of trust. A clear example is self-driving cars; people are uncomfortable sitting in a car driven by algorithms that may fail under some unknown, unpredictable conditions. Interpretability and explainability approaches attempt to address this by uncovering what a DNN models, i.e., what each node (cell) in the network represents and what images are most likely to activate it. This can be used to generate, for example, adversarial attacks. But these approaches do not generally allow us to determine where a DNN will succeed or fail and why, i.e., whether the learned representation generalizes to unseen samples. Here, we derive a novel approach to define what it means to learn in deep networks, and how to use this knowledge to detect adversarial attacks. We show how this defines the ability of a network to generalize to unseen testing samples and, most importantly, why this is the case.
Address California; June 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPR
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ CME2019 Serial 3332
Permanent link to this record