Records | |||||
---|---|---|---|---|---|
Author | Oriol Pujol; Debora Gil; Petia Radeva | ||||
Title | Fundamentals of Stop and Go active models | Type | Journal Article | ||
Year | 2005 | Publication | Image and Vision Computing | Abbreviated Journal | |
Volume | 23 | Issue | 8 | Pages | 681-691 |
Keywords | Deformable models; Geodesic snakes; Region-based segmentation | ||||
Abstract |
An efficient snake formulation should conform to the idea of picking the smoothest curve among all the shapes approximating an object of interest. In current geodesic snakes, the regularizing curvature also affects the convergence stage, hindering the latter at concave regions. In the present work, we make use of characteristic functions to define a novel geodesic formulation that decouples regularity and convergence. This term decoupling endows the snake with higher adaptability to non-convex shapes. Convergence is ensured by splitting the definition of the external force into an attractive vector field and a repulsive one. In our paper, we propose to use likelihood maps as approximations of the characteristic functions of object appearance. The improved efficiency and accuracy of our decoupled scheme are illustrated in the particular case of feature space-based segmentation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Butterworth-Heinemann | Place of Publication | Newton, MA, USA | Editor | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0262-8856 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | IAM;MILAB;HuPBA | Approved | no | ||
Call Number | IAM @ iam @ PGR2005 | Serial | 1629 | ||
Author | Mikhail Mozerov; Ariel Amato; Xavier Roca; Jordi Gonzalez | ||||
Title | Solving the Multi Object Occlusion Problem in a Multiple Camera Tracking System | Type | Journal Article ||
Year | 2009 | Publication | Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 19 | Issue | 1 | Pages | 165-171 |
Keywords | |||||
Abstract |
An efficient method to overcome the adverse effects of occlusion on object tracking is presented. The method is based on matching paths of objects over time and solves a complex occlusion-caused problem: merging separate segments of the same path. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1054-6618 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | ISE @ ise @ MAR2009a | Serial | 1160 | ||
Author | Katerine Diaz; Francesc J. Ferri; W. Diaz | ||||
Title | Fast Approximated Discriminative Common Vectors using rank-one SVD updates | Type | Conference Article | ||
Year | 2013 | Publication | 20th International Conference On Neural Information Processing | Abbreviated Journal | |
Volume | 8228 | Issue | III | Pages | 368-375 |
Keywords | |||||
Abstract |
An efficient incremental approach to the discriminative common vector (DCV) method for dimensionality reduction and classification is presented. The proposal consists of a rank-one update along with an adaptive restriction on the rank of the null space, which leads to an approximate but convenient solution. The algorithm can be implemented very efficiently in terms of matrix operations and space complexity, which enables its use in large-scale dynamic application domains. Deep comparative experimentation using publicly available high-dimensional image datasets has been carried out in order to properly assess the proposed algorithm against several recent incremental formulations. | ||||
Address | Daegu; Korea; November 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-42050-4 | Medium | |
Area | Expedition | Conference | ICONIP | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ DFD2013 | Serial | 2439 | ||
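The adaptive rank-one update behind this incremental DCV approach can be illustrated with a small Brand-style thin-SVD update. The function below is our own sketch, not code from the paper: names are illustrative, and the truncation rule stands in for the adaptive rank restriction. It expands the basis with the residue direction of the new sample, re-diagonalizes a small core matrix, and truncates to a fixed rank (if the sample lies numerically inside the current span, the basis is kept unchanged, a simplification).

```python
import numpy as np

def svd_rank_one_update(U, s, x, rank):
    """Update a thin SVD basis U (d x r) with singular values s when a
    new sample x (d,) arrives, keeping at most `rank` components."""
    # Project x onto the current basis and compute the orthogonal residue.
    p = U.T @ x
    residue = x - U @ p
    r_norm = np.linalg.norm(residue)
    if r_norm > 1e-10:
        j = residue / r_norm
        # Small (r+1) x (r+1) core matrix: old spectrum plus the new sample.
        K = np.zeros((len(s) + 1, len(s) + 1))
        K[:len(s), :len(s)] = np.diag(s)
        K[:len(s), -1] = p
        K[-1, -1] = r_norm
        Uk, sk, _ = np.linalg.svd(K)   # cheap SVD of the small core
        U = np.hstack([U, j[:, None]]) @ Uk
        s = sk
    # Rank restriction: keep only the leading components.
    return U[:, :rank], s[:rank]
```

The expensive work happens only on the (r+1)-sized core, which is what makes the incremental scheme attractive for large, dynamic datasets.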
Author | L.Tarazon; D. Perez; N. Serrano; V. Alabau; Oriol Ramos Terrades; A. Sanchis; A. Juan | ||||
Title | Confidence Measures for Error Correction in Interactive Transcription of Handwritten Text | Type | Conference Article | ||
Year | 2009 | Publication | 15th International Conference on Image Analysis and Processing | Abbreviated Journal | |
Volume | 5716 | Issue | Pages | 567-574 | |
Keywords | |||||
Abstract |
An effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the human supervisor and, in turn, the supervisor is assisted by the system to complete the transcription task as efficiently as possible. In this paper, we focus on a particular system prototype called GIDOC, which can be seen as a first attempt to provide user-friendly, integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. More specifically, we focus on the handwriting recognition part of GIDOC, for which we propose the use of confidence measures to guide the human supervisor in locating possible system errors and deciding how to proceed. Empirical results are reported on two datasets, showing that a word error rate not larger than 10% can be achieved by checking only the 32% of words recognised with the least confidence. | ||||
Address | Vietri sul Mare, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-04145-7 | Medium | |
Area | Expedition | Conference | ICIAP | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ TPS2009 | Serial | 1871 | ||
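The supervision strategy reported in the abstract, checking only the roughly 32% least-confident recognised words, can be mimicked with a trivial ranking helper. This is a hypothetical illustration, not code from the GIDOC prototype:

```python
def words_to_verify(words, confidences, fraction=0.32):
    """Return the least-confident recognised words for human checking.
    `fraction` mirrors the ~32% figure from the abstract; the names are
    illustrative."""
    # Rank words by ascending confidence and keep the lowest fraction.
    ranked = sorted(zip(words, confidences), key=lambda wc: wc[1])
    k = max(1, round(len(ranked) * fraction))
    return [w for w, _ in ranked[:k]]
```

The point of the paper's confidence measures is precisely that this small, targeted subset captures most recognition errors, so the supervisor's effort drops without a proportional rise in word error rate.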
Author | David Aldavert; Ricardo Toledo; Arnau Ramisa; Ramon Lopez de Mantaras | ||||
Title | Visual Registration Method For A Low Cost Robot: Computer Vision Systems | Type | Conference Article | ||
Year | 2009 | Publication | 7th International Conference on Computer Vision Systems | Abbreviated Journal | |
Volume | 5815 | Issue | Pages | 204–214 | |
Keywords | |||||
Abstract |
An autonomous mobile robot must face the correspondence, or data association, problem in order to carry out tasks like place recognition or unknown environment mapping. To put two maps into correspondence, most methods estimate the transformation relating the maps from matches established between low-level features extracted from sensor data. However, finding explicit matches between features is a challenging and computationally expensive task. In this paper, we propose a new method to align obstacle maps without searching for explicit matches between features. The maps are obtained from a stereo pair. Then, we use a vocabulary tree approach to identify putative corresponding maps, followed by the Newton minimization algorithm to find the transformation that relates both maps. The proposed method is evaluated in a typical office environment, showing good performance. | ||||
Address | Belgium | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-04666-7 | Medium | |
Area | Expedition | Conference | ICVS | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ ATR2009b | Serial | 1247 | ||
Author | David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa | ||||
Title | Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes | Type | Conference Article | ||
Year | 2013 | Publication | CVPR Workshop on Ground Truth – What is a good dataset? | Abbreviated Journal | |
Volume | Issue | Pages | 706 - 711 | ||
Keywords | Pedestrian Detection; Domain Adaptation | ||||
Abstract |
Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing in real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle rejects the false detections efficiently (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs. | ||||
Address | Portland; Oregon; June 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | English | Summary Language | English | Original Title | |
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | ADAS; 600.054; 600.057; 601.217 | Approved | no | ||
Call Number | ADAS @ adas @ VXR2013a | Serial | 2219 | ||
Author | Debora Gil; Petia Radeva | ||||
Title | A Regularized Curvature Flow Designed for a Selective Shape Restoration | Type | Journal Article | ||
Year | 2004 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | |
Volume | 13 | Issue | Pages | 1444–1458 | |
Keywords | Geometric flows; nonlinear filtering; shape recovery | ||||
Abstract |
Among all filtering techniques, those based exclusively on image level sets (geometric flows) have proven to be the least sensitive to the nature of noise and the most contrast-preserving. A common feature of existing curvature flows is that they penalize high curvature, regardless of the curve regularity. This constitutes a major drawback, since curvature extreme values are standard descriptors of the contour geometry. We argue that an operator designed with shape recovery purposes should include a term penalizing irregularity in the curvature rather than its magnitude. To this purpose, we present a novel geometric flow that includes a function measuring the degree of local irregularity present in the curve. A main advantage is that it achieves non-trivial steady states representing a smooth model of level curves in a noisy image. Performance of our approach is compared to classical filtering techniques in terms of quality of the restored image/shape and asymptotic behavior. We empirically show that our approach achieves the best compromise between image quality and evolution stabilization. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM;MILAB | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ GiR2004b | Serial | 491 | ||
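A toy discretization can convey the idea of penalizing curvature *irregularity* rather than magnitude. The step below is our own construction, not the paper's operator: each point of a closed polygon moves only by the part of its discrete curvature that deviates from the local curvature average, so a curve of constant discrete curvature, such as a circle, is a non-trivial steady state, exactly the property the abstract highlights.

```python
import numpy as np

def irregularity_flow_step(curve, dt=0.1):
    """One explicit step of a toy flow on a closed polygon (n x 2) that
    penalizes deviation of curvature from its local mean, not curvature
    itself. Illustrative discretization, not the published operator."""
    prev = np.roll(curve, 1, axis=0)
    nxt = np.roll(curve, -1, axis=0)
    # Discrete curvature vector: second difference along the polygon.
    kvec = prev - 2 * curve + nxt
    kappa = np.linalg.norm(kvec, axis=1)
    # Local curvature average over immediate neighbours.
    kbar = (np.roll(kappa, 1) + kappa + np.roll(kappa, -1)) / 3.0
    # Move along the curvature direction only by the irregular part.
    direction = kvec / np.maximum(kappa[:, None], 1e-12)
    return curve + dt * (kappa - kbar)[:, None] * direction
```

Under a plain curvature flow a circle shrinks; here it stays put, which is the qualitative difference between penalizing curvature magnitude and penalizing its irregularity.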
Author | Alvaro Peris; Marc Bolaños; Petia Radeva; Francisco Casacuberta | ||||
Title | Video Description Using Bidirectional Recurrent Neural Networks | Type | Conference Article | ||
Year | 2016 | Publication | 25th International Conference on Artificial Neural Networks | Abbreviated Journal | |
Volume | 2 | Issue | Pages | 3-11 | |
Keywords | Video description; Neural Machine Translation; Bidirectional Recurrent Neural Networks; LSTM; Convolutional Neural Networks | ||||
Abstract |
Although traditionally used in the machine translation field, the encoder-decoder framework has recently been applied to the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has proven to outperform the previous state of the art, obtaining more accurate video descriptions. In this work we propose to push this model further by introducing two contributions into the encoding stage: first, producing richer image representations by combining object and location information from Convolutional Neural Networks, and second, introducing Bidirectional Recurrent Neural Networks to capture both forward and backward temporal relationships in the input frames. | ||||
Address | Barcelona; September 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICANN | ||
Notes | MILAB | Approved | no ||
Call Number | Admin @ si @ PBR2016 | Serial | 2833 | ||
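The bidirectional encoding stage described above can be sketched with plain tanh recurrences: a forward and a backward pass over the frame features whose hidden states are concatenated per time step. The paper uses LSTM units over CNN features, so the simple cell, the weight names and the dimensions below are simplifying assumptions of ours:

```python
import numpy as np

def birnn_encode(frames, Wf, Wb, Uf, Ub):
    """Minimal bidirectional vanilla-RNN encoder. `frames` is (T, d);
    Wf/Wb map inputs and Uf/Ub map hidden states for the forward and
    backward directions. Returns (T, 2*h) concatenated states."""
    T, _ = frames.shape
    h_dim = Wf.shape[0]
    hf = np.zeros((T, h_dim))
    hb = np.zeros((T, h_dim))
    h = np.zeros(h_dim)
    for t in range(T):                      # forward temporal pass
        h = np.tanh(Wf @ frames[t] + Uf @ h)
        hf[t] = h
    h = np.zeros(h_dim)
    for t in reversed(range(T)):            # backward temporal pass
        h = np.tanh(Wb @ frames[t] + Ub @ h)
        hb[t] = h
    return np.concatenate([hf, hb], axis=1)
```

Each time step thus sees a summary of both its past and its future frames, which is what the bidirectional encoder contributes over a single forward recurrence.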
Author | Michael Teutsch; Angel Sappa; Riad I. Hammoud | ||||
Title | Cross-Spectral Image Processing | Type | Book Chapter | ||
Year | 2022 | Publication | Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 23-34 | ||
Keywords | |||||
Abstract |
Although this book is on IR computer vision, and its main focus lies on IR image and video processing and analysis, special attention is dedicated to cross-spectral image processing due to the increasing number of publications and applications in this domain. In these cross-spectral frameworks, IR information is used together with information from other spectral bands to tackle specific problems by developing more robust solutions. Tasks considered for cross-spectral processing include, for instance, dehazing, segmentation, vegetation index estimation, or face recognition. This increasing number of applications is motivated by cross- and multi-spectral camera setups already available on the market, such as smartphones, remote sensing multispectral cameras, or multi-spectral cameras for automotive systems or drones. In this chapter, different cross-spectral image processing techniques are reviewed together with possible applications. Initially, image registration approaches for the cross-spectral case are reviewed: the registration stage is the first image processing task, needed to align images acquired by different sensors within the same reference coordinate system. Then, recent cross-spectral image colorization approaches, intended to colorize infrared images for different applications, are presented. Finally, the cross-spectral image enhancement problem is tackled, including guided super-resolution techniques, image dehazing approaches, cross-spectral filtering and edge detection. Figure 3.1 illustrates the cross-spectral image processing stages as well as their possible connections. Table 3.1 presents some of the available public cross-spectral datasets generally used as reference data to evaluate cross-spectral image registration, colorization, enhancement, or exploitation results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | SLCV | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-031-00698-2 | Medium | ||
Area | Expedition | Conference | |||
Notes | MSIAU; MACO | Approved | no | ||
Call Number | Admin @ si @ TSH2022b | Serial | 3805 | ||
Author | Jaume Gibert; Ernest Valveny | ||||
Title | Graph Embedding based on Nodes Attributes Representatives and a Graph of Words Representation | Type | Conference Article ||
Year | 2010 | Publication | 13th International Workshop on Structural and Syntactic Pattern Recognition and 8th International Workshop on Statistical Pattern Recognition | Abbreviated Journal |
Volume | 6218 | Issue | Pages | 223–232 | |
Keywords | |||||
Abstract |
Although graph embedding has recently been used to extend statistical pattern recognition techniques to the graph domain, existing embeddings are usually computationally expensive, as they rely on classical graph-based operations. In this paper we present a new way to embed graphs into vector spaces by first encapsulating the information stored in the original graph under another graph representation, obtained by clustering the attributes of the graphs to be processed. This new representation makes the association of graphs to vectors an easy step: both node attributes and the adjacency matrix are simply arranged in the form of vectors. To test our method, we use two different databases of graphs whose node attributes are of different nature. A comparison with a reference method shows that this new embedding is better in terms of classification rates, while being much faster. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | E.R. Hancock, R.C. Wilson, T. Windeatt, I. Ulusoy and F. Escolano |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-14979-5 | Medium | |
Area | Expedition | Conference | S+SSPR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ GiV2010 | Serial | 1416 | ||
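The embedding idea, assigning node attributes to clustered representatives ("words") and flattening occurrence and adjacency statistics into a fixed-length vector, might be sketched as follows. Representative selection, the distance measure, and the absence of normalization are our own simplifying assumptions; in the paper the representatives would come from clustering attributes over the whole dataset:

```python
import numpy as np

def embed_graph(node_attrs, adjacency, representatives):
    """Embed one graph as [word histogram | flattened word adjacency].
    node_attrs: (n, d) attributes, adjacency: (n, n) 0/1 matrix,
    representatives: (k, d) attribute 'words'. Returns length k + k*k."""
    k = len(representatives)
    # Assign each node to its nearest attribute representative.
    labels = np.array([np.argmin(np.linalg.norm(representatives - a, axis=1))
                       for a in node_attrs])
    # Word occurrence histogram.
    hist = np.bincount(labels, minlength=k).astype(float)
    # Count edges between word classes (the 'graph of words' adjacency).
    word_adj = np.zeros((k, k))
    rows, cols = np.nonzero(adjacency)
    for i, j in zip(rows, cols):
        word_adj[labels[i], labels[j]] += 1
    return np.concatenate([hist, word_adj.ravel()])
```

Because the output length depends only on the number of representatives, graphs of different sizes all map into the same vector space, which is what lets standard statistical classifiers operate on them.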
Author | Lei Kang; Pau Riba; Yaxing Wang; Marçal Rusiñol; Alicia Fornes; Mauricio Villegas | ||||
Title | GANwriting: Content-Conditioned Generation of Styled Handwritten Word Images | Type | Conference Article | ||
Year | 2020 | Publication | 16th European Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
Although current image generation methods have reached impressive quality levels, they are still unable to produce plausible yet diverse images of handwritten words. On the contrary, when writing by hand, great variability is observed across different writers, and even when analyzing words scribbled by the same individual, involuntary variations are conspicuous. In this work, we take a step closer to producing realistic and varied artificially rendered handwritten words. We propose a novel method that is able to produce credible handwritten word images by conditioning the generative process with both calligraphic style features and textual content. Our generator is guided by three complementary learning objectives: to produce realistic images, to imitate a certain handwriting style and to convey a specific textual content. Our model is not constrained to any predefined vocabulary and is able to render any input word. Given a sample writer, it is also able to mimic their calligraphic features in a few-shot setup. We significantly advance over prior art and demonstrate, with qualitative, quantitative and human-based evaluations, the realistic aspect of our synthetically produced images. | ||||
Address | Virtual; August 2020 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV | ||
Notes | DAG; 600.140; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ KPW2020 | Serial | 3426 | ||
Author | Thierry Brouard; Jordi Gonzalez; Caifeng Shan; Massimo Piccardi; Larry S. Davis | ||||
Title | Special issue on background modeling for foreground detection in real-world dynamic scenes | Type | Journal Article | ||
Year | 2014 | Publication | Machine Vision and Applications | Abbreviated Journal | MVAP |
Volume | 25 | Issue | 5 | Pages | 1101-1103 |
Keywords | |||||
Abstract |
Although background modeling and foreground detection are not mandatory steps for computer vision applications, they may prove useful as they separate the primal objects, usually called “foreground”, from the remaining part of the scene, called “background”, and permit different algorithmic treatment in the video processing field, such as video surveillance, optical motion capture, multimedia applications, teleconferencing and human–computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and the foreground detection is made using change detection. The last decade witnessed very significant publications on background modeling, but recently new applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, need new developments to robustly detect moving objects in challenging environments. Thus, effective methods for robustness to deal with both dynamic backgrounds, i | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0932-8092 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ISE; 600.078 | Approved | no | ||
Call Number | BGS2014a | Serial | 2411 | ||
Author | Mikhail Mozerov; Joost Van de Weijer | ||||
Title | Improved Recursive Geodesic Distance Computation for Edge Preserving Filter | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on Image Processing | Abbreviated Journal | TIP |
Volume | 26 | Issue | 8 | Pages | 3696 - 3706 |
Keywords | Geodesic distance filter; color image filtering; image enhancement | ||||
Abstract |
All known recursive filters based on the geodesic distance affinity are realized by two 1D recursions applied in two orthogonal directions of the image plane. The 2D extension of the filter is not valid and has theoretical drawbacks, which lead to known artifacts. In this paper, a maximum influence propagation method is proposed to approximate the 2D extension of the geodesic distance-based recursive filter. The method allows us to partially overcome the drawbacks of the 1D recursion approach. As an application, we consider a geodesic distance-based filter for image denoising. We show that our improved recursion better approximates the true geodesic distance filter, and that applying it to image denoising outperforms the existing recursive implementation of the geodesic distance. Experimental evaluation of our denoising method demonstrates comparable, and for several test images better, results than state-of-the-art approaches, while our algorithm is considerably faster, with computational complexity O(8P). | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; ISE; 600.120; 600.098; 600.119 | Approved | no | ||
Call Number | Admin @ si @ Moz2017 | Serial | 2921 | ||
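A single 1D recursion of a geodesic-distance-based filter, the building block the abstract refers to, can be sketched in a domain-transform style: the feedback weight between neighbouring samples decays with the geodesic step, so smoothing propagates freely over flat regions but stops at strong edges. The parameter names and weighting details below are illustrative assumptions of ours, not the paper's maximum influence propagation method:

```python
import numpy as np

def geodesic_recursive_1d(signal, sigma_s=3.0, sigma_r=0.1):
    """One left-to-right plus one right-to-left recursive pass over a 1D
    signal, with per-step weights driven by the geodesic distance
    between neighbours (spatial step + range step)."""
    out = signal.astype(float).copy()
    # Geodesic step between neighbours: 1 pixel plus scaled range jump.
    d = 1.0 + (sigma_s / sigma_r) * np.abs(np.diff(signal))
    a = np.exp(-np.sqrt(2.0) / sigma_s)   # base feedback coefficient
    w = a ** d                             # per-step propagation weight
    for t in range(1, len(out)):           # causal recursion
        out[t] = (1 - w[t - 1]) * out[t] + w[t - 1] * out[t - 1]
    for t in range(len(out) - 2, -1, -1):  # anti-causal recursion
        out[t] = (1 - w[t]) * out[t] + w[t] * out[t + 1]
    return out
```

A full 2D filter would run such recursions along rows and columns; the paper's contribution concerns exactly how to combine those 1D passes into a better approximation of the true 2D geodesic distance filter.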
Author | Dimosthenis Karatzas; V. Poulain d'Andecy; Marçal Rusiñol | ||||
Title | Human-Document Interaction – a new frontier for document image analysis | Type | Conference Article | ||
Year | 2016 | Publication | 12th IAPR Workshop on Document Analysis Systems | Abbreviated Journal | |
Volume | Issue | Pages | 369-374 | ||
Keywords | |||||
Abstract |
All indications show that paper documents will not cede in favour of their digital counterparts, but will instead be used increasingly in conjunction with digital information. An open challenge is how to seamlessly link the physical with the digital: how to continue taking advantage of the important affordances of paper, without missing out on digital functionality. This paper presents the authors' experience with developing systems for Human-Document Interaction based on augmented document interfaces, and examines new challenges and opportunities arising for the document image analysis field in this area. The system presented combines state-of-the-art camera-based document image analysis techniques with a range of complementary technologies to offer fluid Human-Document Interaction. Both fixed and nomadic setups are discussed that have gone through user testing in real-life environments, and use cases are presented that span the spectrum from business to educational applications. | ||||
Address | Santorini; Greece; April 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.084; 600.077 | Approved | no | ||
Call Number | KPR2016 | Serial | 2756 | ||
Author | Pierluigi Casale; Oriol Pujol; Petia Radeva | ||||
Title | Classifying Agitation in Sedated ICU Patients | Type | Conference Article ||
Year | 2010 | Publication | Medical Image Computing in Catalunya: Graduate Student Workshop | Abbreviated Journal | |
Volume | Issue | Pages | 19–20 | ||
Keywords | |||||
Abstract |
Agitation is a serious problem in sedated intensive care unit (ICU) patients. In this work, standard machine learning techniques working on wearable accelerometer data have been used to classify agitation levels, achieving very good classification performance. | ||||
Address | Girona | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAT | ||
Notes | MILAB;HUPBA | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ COR2010 | Serial | 1467 | ||