Records | |||||
---|---|---|---|---|---|
Author | Isabelle Guyon; Imad Chaabane; Hugo Jair Escalante; Sergio Escalera; Damir Jajetic; James Robert Lloyd; Nuria Macia; Bisakha Ray; Lukasz Romaszko; Michele Sebag; Alexander Statnikov; Sebastien Treguer; Evelyne Viegas | ||||
Title | A brief Review of the ChaLearn AutoML Challenge: Any-time Any-dataset Learning without Human Intervention | Type | Conference Article | ||
Year | 2016 | Publication | AutoML Workshop | Abbreviated Journal | |
Volume | Issue | 1 | Pages | 1-8 | |
Keywords | AutoML Challenge; machine learning; model selection; meta-learning; representation learning; active learning | ||||
Abstract | The ChaLearn AutoML Challenge team conducted a large-scale evaluation of fully automatic, black-box learning machines for feature-based classification and regression problems. The test bed was composed of 30 datasets from a wide variety of application domains, ranging across different types of complexity. Over six rounds, participants succeeded in delivering AutoML software capable of being trained and tested without human intervention. Although improvements can still be made to close the gap between human-tweaked and AutoML models, this competition contributes to the development of fully automated environments by challenging practitioners to solve problems under specific constraints and to share their approaches; the platform will remain available for post-challenge submissions at http://codalab.org/AutoML. | ||||
Address | New York; USA; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICML | ||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ GCE2016 | Serial | 2769 | ||
Permanent link to this record | |||||
Author | Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera | ||||
Title | Action Recognition by Pairwise Proximity Function Support Vector Machines with Dynamic Time Warping Kernels | Type | Conference Article | ||
Year | 2016 | Publication | 29th Canadian Conference on Artificial Intelligence | Abbreviated Journal | |
Volume | 9673 | Issue | Pages | 3-14 | |
Keywords | |||||
Abstract | In the context of human action recognition using skeleton data, the 3D trajectories of joint points may be considered as multi-dimensional time series. The traditional recognition technique in the literature is based on time-series (dis)similarity measures (such as Dynamic Time Warping). For these general (dis)similarity measures, k-nearest neighbor algorithms are a natural choice. However, k-NN classifiers are known to be sensitive to noise and outliers. In this paper, a new class of Support Vector Machine that is applicable to trajectory classification, such as action recognition, is developed by incorporating an efficient time-series distance measure into the kernel function. More specifically, the derivative of the Dynamic Time Warping (DTW) distance measure is employed as the SVM kernel. In addition, the pairwise proximity learning strategy is utilized in order to make use of non-positive semi-definite (PSD) kernels in the SVM formulation. The recognition results of the proposed technique on two action recognition datasets demonstrate that our methodology outperforms state-of-the-art methods. Remarkably, we obtained 89% accuracy on the well-known MSRAction3D dataset using only the 3D trajectories of body joints obtained by Kinect. | ||||
Address | Victoria; Canada; May 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AI | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ BGE2016b | Serial | 2770 | ||
Permanent link to this record | |||||
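The DTW-kernel SVM summarized in the record above can be illustrated with a short sketch. This is not the authors' code: the sequences, the Gaussian scaling of the DTW distance, and the `gamma` value are illustrative assumptions; the paper additionally uses the derivative of DTW and a pairwise-proximity formulation to cope with the kernel not being positive semi-definite.

```python
import numpy as np

def dtw(x, y):
    """Classical dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            # Best of the three warping moves: insertion, deletion, match.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def gaussian_dtw_kernel(seqs, gamma=0.1):
    """Pairwise kernel matrix K[i, j] = exp(-gamma * DTW(s_i, s_j)).
    Such a kernel is in general NOT positive semi-definite, which is why
    the paper resorts to a pairwise-proximity learning strategy."""
    n = len(seqs)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-gamma * dtw(seqs[i], seqs[j]))
    return K
```

A matrix built this way can be handed to any classifier that accepts precomputed kernels, e.g. scikit-learn's `SVC(kernel='precomputed')`.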
Author | Jun Wan; Yibing Zhao; Shuai Zhou; Isabelle Guyon; Sergio Escalera | ||||
Title | ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition | Type | Conference Article | ||
Year | 2016 | Publication | 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper, we present two large video multi-modal datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which has a total of more than 50,000 gestures for the “one-shot-learning” competition. To increase the potential of the old dataset, we designed new well-curated datasets composed of 249 gesture labels, including 47,933 gestures with manually labeled begin and end frames in sequences. Using these datasets we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for “user independent” gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second one is designed for gesture classification from segmented data. A baseline method based on the bag-of-visual-words model is also presented. | ||||
Address | Las Vegas; USA; July 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ WZZ2016 | Serial | 2771 | ||
Permanent link to this record | |||||
Author | Florin Popescu; Stephane Ayache; Sergio Escalera; Xavier Baro; Cecile Capponi; Patrick Panciatici; Isabelle Guyon | ||||
Title | From geospatial observations of ocean currents to causal predictors of spatio-economic activity using computer vision and machine learning | Type | Conference Article | ||
Year | 2016 | Publication | European Geosciences Union General Assembly | Abbreviated Journal | |
Volume | 18 | Issue | Pages | ||
Keywords | |||||
Abstract | The big data transformation currently revolutionizing science and industry forges novel possibilities in multimodal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost, a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean, and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case in part due to the larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size being clearly visible in radar and infrared satellite imagery, which makes it easier to detect using Computer Vision (CV). The power of CV techniques makes the basic analysis developed here scalable to other smaller and less known, but still influential, currents, which are not just curves on a map, but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as a feature-space representation in a machine learning context, in this case within the EC's H2020-sponsored ‘See.4C’ project, in which data scientists may find novel predictors of spatiotemporal energy flow. Although automated extractors of Gulf Stream position exist, they differ in methodology and result. We shall attempt to extract a more complex feature representation including branching points, eddies and parameterized changes in transport and velocity. Other related predictive features will be similarly developed, such as inference of deep water flux along the current path and wider spatial-scale features such as the Hough transform, surface turbulence indicators and temperature gradient indexes, along with multi-time-scale analysis of ocean height and temperature dynamics. The geospatial imaging and ML community may therefore benefit from a baseline of open-source techniques useful and expandable to other related prediction and/or scientific analysis tasks. | ||||
Address | Vienna; Austria; April 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | EGU | ||
Notes | HuPBA;MV; | Approved | no | ||
Call Number | Admin @ si @ PAE2016 | Serial | 2772 | ||
Permanent link to this record | |||||
Author | Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera | ||||
Title | Support Vector Machines with Time Series Distance Kernels for Action Classification | Type | Conference Article | ||
Year | 2016 | Publication | IEEE Winter Conference on Applications of Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 1-7 | ||
Keywords | |||||
Abstract | Despite the strong performance of Support Vector Machines (SVM) on many practical classification problems, the algorithm is not directly applicable to multi-dimensional trajectories of different lengths. In this paper, a new class of SVM that is applicable to trajectory classification, such as action recognition, is developed by incorporating two efficient time-series distance measures into the kernel function. Dynamic Time Warping and Longest Common Subsequence distance measures, along with their derivatives, are employed as the SVM kernel. In addition, the pairwise proximity learning strategy is utilized in order to make use of non-positive semi-definite kernels in the SVM formulation. The proposed method is employed for a challenging classification problem, action recognition by depth cameras using only skeleton data, and evaluated on three benchmark action datasets. Experimental results demonstrate that our methodology outperforms the state-of-the-art on the considered datasets. | ||||
Address | Lake Placid; NY (USA); March 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WACV | ||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @ BGE2016a | Serial | 2773 | ||
Permanent link to this record | |||||
Author | Gloria Fernandez Esparrach; Jorge Bernal; Cristina Rodriguez de Miguel; Debora Gil; Fernando Vilariño; Henry Cordova; Cristina Sanchez Montes; Isis Ara | ||||
Title | Utilidad de la visión por computador para la localización de pólipos pequeños y planos [Usefulness of computer vision for the localization of small and flat polyps] | Type | Conference Article | ||
Year | 2016 | Publication | XIX Reunión Nacional de la Asociación Española de Gastroenterología, Gastroenterology Hepatology | Abbreviated Journal | |
Volume | 39 | Issue | 2 | Pages | 94 |
Keywords | |||||
Abstract | |||||
Address | Madrid (Spain) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AEGASTRO | ||
Notes | MV; IAM; 600.097;SIAI | Approved | no | ||
Call Number | Admin @ si @FBR2016 | Serial | 2779 | ||
Permanent link to this record | |||||
Author | Maria Oliver; Gloria Haro; Mariella Dimiccoli; Baptiste Mazin; Coloma Ballester | ||||
Title | A computational model of amodal completion | Type | Conference Article | ||
Year | 2016 | Publication | SIAM Conference on Imaging Science | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents a computational model to recover the most likely interpretation of the 3D scene structure from a planar image, where some objects may occlude others. The estimated scene interpretation is obtained by integrating global and local cues and provides both the complete disoccluded objects that form the scene and their ordering according to depth. Our method first computes several distal scenes which are compatible with the proximal planar image. To compute these different hypothesized scenes, we propose a perceptually inspired object disocclusion method, which works by minimizing Euler's elastica as well as by incorporating the relatability of partially occluded contours and the convexity of the disoccluded objects. Then, to estimate the preferred scene, we rely on a Bayesian model and define probabilities taking into account the global complexity of the objects in the hypothesized scenes as well as the effort of bringing these objects into their relative position in the planar image, which is also measured by an Euler's elastica-based quantity. The model is illustrated with numerical experiments on both synthetic and real images, showing the ability of our model to reconstruct the occluded objects and the preferred perceptual order among them. We also present results on images of the Berkeley dataset with provided figure-ground ground-truth labeling. | ||||
Address | Albuquerque; New Mexico; USA; May 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IS | ||
Notes | MILAB; 601.235 | Approved | no | ||
Call Number | Admin @ si @OHD2016a | Serial | 2788 | ||
Permanent link to this record | |||||
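For reference, the Euler's elastica energy that the disocclusion method in the record above minimizes is usually written, for a completing curve $\gamma$ with curvature $\kappa$ and arc-length element $ds$, as (the weights $\alpha, \beta \ge 0$ are model parameters; their values in the paper are not reproduced here):

$$E(\gamma) = \int_{\gamma} \left( \alpha + \beta\,\kappa^{2} \right) ds$$

The $\alpha$ term penalizes the length of the completed contour and the $\beta\,\kappa^{2}$ term penalizes its bending, which together favor short, smooth completions of partially occluded boundaries.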
Author | G. de Oliveira; A. Cartas; Marc Bolaños; Mariella Dimiccoli; Xavier Giro; Petia Radeva | ||||
Title | LEMoRe: A Lifelog Engine for Moments Retrieval at the NTCIR-Lifelog LSAT Task | Type | Conference Article | ||
Year | 2016 | Publication | 12th NTCIR Conference on Evaluation of Information Access Technologies | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Semantic image retrieval from large amounts of egocentric visual data requires leveraging powerful techniques for filling in the semantic gap. This paper introduces LEMoRe, a Lifelog Engine for Moments Retrieval, developed in the context of the Lifelog Semantic Access Task (LSAT) of the NTCIR-12 challenge, and discusses its performance variation on different trials. LEMoRe integrates classical image descriptors with high-level semantic concepts extracted by Convolutional Neural Networks (CNN), powered by a graphical user interface that uses natural language processing. Although this is just a first attempt towards interactive image retrieval from large egocentric datasets and there is much room for improvement of the system components and the user interface, the structure of the system itself and the way the single components cooperate are very promising. | ||||
Address | Tokyo; Japan; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NTCIR | ||
Notes | MILAB; | Approved | no | ||
Call Number | Admin @ si @OCB2016 | Serial | 2789 | ||
Permanent link to this record | |||||
Author | G. de Oliveira; Mariella Dimiccoli; Petia Radeva | ||||
Title | Egocentric Image Retrieval With Deep Convolutional Neural Networks | Type | Conference Article | ||
Year | 2016 | Publication | 19th International Conference of the Catalan Association for Artificial Intelligence | Abbreviated Journal | |
Volume | Issue | Pages | 71-76 | ||
Keywords | |||||
Abstract | |||||
Address | Barcelona; Spain; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CCIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ODR2016 | Serial | 2790 | ||
Permanent link to this record | |||||
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | With whom do I interact with? Social interaction detection in egocentric photo-streams | Type | Conference Article | ||
Year | 2016 | Publication | 23rd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Given a user wearing a low-frame-rate wearable camera during a day, this work aims to automatically detect the moments when the user engages in a social interaction, solely by reviewing the photos automatically captured by the worn camera. The proposed method, inspired by the sociological concept of F-formation, exploits the distance and orientation of the individuals appearing in the scene, with respect to the user, from a bird's-eye perspective. As a result, the interaction pattern over the sequence can be understood as a two-dimensional time series that corresponds to the temporal evolution of the distance and orientation features over time. A Long Short-Term Memory-based Recurrent Neural Network is then trained to classify each time series. Experimental evaluation over a dataset of 30,000 images has shown promising results of the proposed method for social interaction detection in egocentric photo-streams. | ||||
Address | Cancun; Mexico; December 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ADR2016a | Serial | 2791 | ||
Permanent link to this record | |||||
Author | Mariella Dimiccoli; Petia Radeva | ||||
Title | Lifelogging in the era of outstanding digitization | Type | Conference Article | ||
Year | 2015 | Publication | International Conference on Digital Presentation and Preservation of Cultural and Scientific Heritage | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In this paper, we give an overview on the emerging trend of the digitized self, focusing on visual lifelogging through wearable cameras. This is about continuously recording our life from a first-person view by wearing a camera that passively captures images. On one hand, visual lifelogging has opened the door to a large number of applications, including health. On the other, it has also boosted new challenges in the field of data analysis as well as new ethical concerns. While currently increasing efforts are being devoted to exploit lifelogging data for the improvement of personal well-being, we believe there are still many interesting applications to explore, ranging from tourism to the digitization of human behavior. | ||||
Address | Veliko Tarnovo; Bulgaria; September 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DiPP | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @DiR2016 | Serial | 2792 | ||
Permanent link to this record | |||||
Author | Aniol Lidon; Xavier Giro; Marc Bolaños; Petia Radeva; Markus Seidl; Matthias Zeppelzauer | ||||
Title | UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images | Type | Conference Article | ||
Year | 2015 | Publication | 2015 MediaEval Retrieving Diverse Images Task | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach first generates a ranking of images based on a query-independent estimation of their relevance. Only the top results are kept and iteratively re-ranked based on their intra-similarity to introduce diversity. | ||||
Address | Wurzen; Germany; September 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MediaEval | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @LGB2016 | Serial | 2793 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Theo Gevers; Antonio Lopez | ||||
Title | Evaluating Color Representation for Online Road Detection | Type | Conference Article | ||
Year | 2013 | Publication | ICCV Workshop on Computer Vision in Vehicle Technology: From Earth to Mars | Abbreviated Journal | |
Volume | Issue | Pages | 594-595 | ||
Keywords | |||||
Abstract | Detecting traversable road areas ahead of a moving vehicle is a key process for modern autonomous driving systems. Most existing algorithms use color to classify pixels as road or background. These algorithms reduce the effect of lighting variations and weather conditions by exploiting the discriminant/invariant properties of different color representations. However, to date, no comparison between these representations has been conducted. Therefore, in this paper, we perform an evaluation of existing color representations for road detection. More specifically, we focus on color planes derived from RGB data and their most common combinations. The evaluation is done on a set of 7000 road images acquired using an on-board camera in different real-driving situations. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVVT:E2M | ||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ AGL2013 | Serial | 2794 | ||
Permanent link to this record | |||||
Author | Y. Patel; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas | ||||
Title | Dynamic Lexicon Generation for Natural Scene Images | Type | Conference Article | ||
Year | 2016 | Publication | 14th European Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | Issue | Pages | 395-410 | ||
Keywords | scene text; photo OCR; scene understanding; lexicon generation; topic modeling; CNN | ||||
Abstract | Many scene text understanding methods approach the end-to-end recognition problem from a word-spotting perspective and benefit greatly from using small per-image lexicons. Such customized lexicons are normally assumed as given, and their source is rarely discussed. In this paper we propose a method that generates contextualized lexicons for scene images using only visual information. For this, we exploit the correlation between visual and textual information in a dataset consisting of images and the textual content associated with them. Using the topic modeling framework to discover a set of latent topics in such a dataset allows us to re-rank a fixed dictionary in a way that prioritizes the words that are more likely to appear in a given image. Moreover, we train a CNN that is able to reproduce those word rankings but using only the raw image pixels as input. We demonstrate that the quality of the automatically obtained custom lexicons is superior to a generic frequency-based baseline. | ||||
Address | Amsterdam; The Netherlands; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | DAG; 600.084 | Approved | no | ||
Call Number | Admin @ si @ PGR2016 | Serial | 2825 | ||
Permanent link to this record | |||||
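The re-ranking idea in the record above, prioritizing dictionary words under a per-image topic mixture, can be sketched as follows. The dictionary, the topic-word matrix `phi`, and the topic mixture `theta` below are toy assumptions, not the paper's data; in the paper, `theta` for an unseen image is predicted by a CNN from raw pixels.

```python
import numpy as np

# Toy setup: 2 latent topics over a 5-word dictionary (illustrative values).
dictionary = ["beach", "sand", "hotel", "menu", "pizza"]
# phi[k, w] = P(word w | topic k), e.g. as learned by topic modeling
# on a corpus of images with associated text.
phi = np.array([[0.40, 0.35, 0.20, 0.03, 0.02],   # a "seaside" topic
                [0.05, 0.02, 0.13, 0.40, 0.40]])  # a "restaurant" topic
# theta[k] = P(topic k | image): this image is judged mostly "seaside".
theta = np.array([0.9, 0.1])

def rerank_lexicon(theta, phi, dictionary):
    """Score each dictionary word under the image's topic mixture and
    return the dictionary re-ranked from most to least likely."""
    scores = theta @ phi                 # P(word | image) under the mixture
    order = np.argsort(scores)[::-1]     # descending score
    return [dictionary[i] for i in order]

print(rerank_lexicon(theta, phi, dictionary))
```

Words tied to the image's dominant topic float to the top of the lexicon, which is exactly what makes small per-image lexicons useful for word spotting.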
Author | Dan Norton; Fernando Vilariño; Onur Ferhat | ||||
Title | Memory Field – Creative Engagement in Digital Collections | Type | Conference Article | ||
Year | 2015 | Publication | Internet Librarian International Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | “Memory Fields” is a trans-disciplinary project aiming at the (re)valorisation of digital collections. Its main deliverable is an interface for a dual-screen installation, used to access and mix the public library's digital collections. The collections being used in this case are a collection of digitised posters from the Spanish Civil War, belonging to the Arxiu General de Catalunya, and a collection of field recordings made by Dan Norton. The system generates visualisations, and the images and sounds are mixed together using the narrative primitives of video DJing. Users contribute to the digital collections by adding personal memories and observations. The comments and recollections appear as flowers growing in a “memory field”, and memories remain public in a Twitter feed (@Memoryfields). | ||||
Address | London; UK; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ILI | ||
Notes | MV;SIAI | Approved | no | ||
Call Number | Admin @ si @NVF2015 | Serial | 2796 | ||
Permanent link to this record |