Records
Author Maria Vanrell; Jordi Vitria
Title Optimal 3x3 decomposable disks for morphological transformations Type Journal Article
Year 1997 Publication Image and Vision Computing Abbreviated Journal
Volume 15 Issue 11 Pages 845-854
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR;CIC;MV Approved no
Call Number BCNPCL @ bcnpcl @ VaV1997c Serial 543
Permanent link to this record
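This record carries no abstract, but the title concerns decomposing disk-shaped structuring elements into sequences of 3x3 elements so that a large morphological operation reduces to a chain of cheap 3x3 ones. A minimal Python/OpenCV sketch of that general idea (the iterated 3x3 kernel and the sizes are illustrative assumptions, not the paper's optimal decomposition):

```python
import cv2
import numpy as np

# Dilation distributes over structuring-element decomposition: k dilations
# with a 3x3 element equal one dilation with the k-fold Minkowski sum of
# that element, which only approximates a Euclidean disk; finding the best
# 3x3 chain is exactly the paper's subject.
img = np.zeros((64, 64), np.uint8)
img[32, 32] = 255                     # a single bright pixel

disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (21, 21))
big = cv2.dilate(img, disk)           # one dilation with a radius-10 disk

k3 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
chain = cv2.dilate(img, k3, iterations=10)   # ten iterated 3x3 dilations

print("pixels set, full disk:", int((big > 0).sum()))
print("pixels set, 3x3 chain:", int((chain > 0).sum()))
```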
 

 
Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
Title Combining Local and Global Learners in the Pairwise Multiclass Classification Type Journal Article
Year 2015 Publication Pattern Analysis and Applications Abbreviated Journal PAA
Volume 18 Issue 4 Pages 845-860
Keywords Multiclass classification; Pairwise approach; One-versus-one
Abstract Pairwise classification is a well-known class binarization technique that converts a multiclass problem into a number of two-class problems, one for each pair of classes. However, in the pairwise technique, nuisance votes of many irrelevant classifiers may result in a wrong class prediction. To overcome this problem, a simple but efficient method is proposed and evaluated in this paper. The proposed method, named Local Crossing Off (LCO), is based on excluding some classes and focusing on the most probable classes in the neighborhood space. This procedure is performed by employing a modified version of the standard K-nearest neighbor and large margin nearest neighbor algorithms. The LCO method takes advantage of the nearest neighbor algorithm because of its local learning behavior, as well as the global behavior of powerful binary classifiers that discriminate between two classes. Combining these two properties avoids the weaknesses of each method and increases the efficiency of the whole classification system. On several benchmark datasets of varying size and difficulty, we found that the LCO approach leads to significant improvements with different base learners. The experimental results show that the proposed technique not only achieves better classification accuracy than other standard approaches, but is also computationally more efficient for classification problems that have a relatively large number of target classes.
Address
Corporate Author Thesis
Publisher Springer London Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1433-7541 ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ BGE2014 Serial 2441
Permanent link to this record
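A minimal sketch of the Local Crossing Off idea summarized in the abstract above: a K-nearest-neighbor query first restricts the candidate classes, and only the one-versus-one classifiers among those candidates vote. The scikit-learn estimators and parameter values are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from itertools import combinations
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def train_lco(X, y, k=5):
    """KNN candidate selector plus one binary SVM per pair of classes."""
    knn = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    pairwise = {pair: SVC().fit(X[np.isin(y, pair)], y[np.isin(y, pair)])
                for pair in combinations(np.unique(y), 2)}
    return knn, pairwise

def predict_lco(x, knn, pairwise, y_train):
    """Vote only among the classes present in the local neighborhood."""
    _, idx = knn.kneighbors([x])
    candidates = np.unique(y_train[idx[0]])    # classes of the k neighbors
    if len(candidates) == 1:
        return candidates[0]
    votes = {c: 0 for c in candidates}
    for pair in combinations(candidates, 2):   # irrelevant pairs never vote
        votes[pairwise[pair].predict([x])[0]] += 1
    return max(votes, key=votes.get)
```

Because only the classifiers among locally plausible classes are consulted, the nuisance votes that plague plain pairwise voting are crossed off by construction.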
 

 
Author Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers
Title A Statistical Method for 2D Facial Landmarking Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 844-858
Keywords
Abstract IF = 3.32
Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to function correctly. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in a coarse-to-fine fashion and complemented with a shape prior. We assess the accuracy and the robustness of the proposed approach in extensive cross-database conditions conducted on four face data sets (Face Recognition Grand Challenge, Cohn-Kanade, Bosphorus, and BioID). Our method has 99.33% accuracy on the Bosphorus database and 97.62% accuracy on the BioID database on average, which improves the state of the art. We show that the method is not significantly affected by low-resolution images, small rotations, facial expressions, and natural occlusions such as beards and mustaches. We further test the goodness of the landmarks in a facial expression recognition application and report landmarking-induced improvement over the baseline on two separate databases for video-based expression recognition (Cohn-Kanade and BU-4DFE).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ DSG 2012 Serial 1853
Permanent link to this record
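The abstract's core ingredient, Gabor wavelet features computed coarse-to-fine around a candidate landmark, can be sketched as below; the frequencies, orientations, and the use of scikit-image are assumptions for illustration, not the paper's exact filter bank or mixture model:

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(patch, frequencies=(0.1, 0.2, 0.4), n_orient=4):
    """Stack mean Gabor magnitudes over coarse-to-fine scales/orientations."""
    feats = []
    for f in frequencies:                    # coarse to fine
        for i in range(n_orient):            # evenly spaced orientations
            real, imag = gabor(patch, frequency=f,
                               theta=i * np.pi / n_orient)
            feats.append(np.hypot(real, imag).mean())
    return np.asarray(feats)
```

In the paper these responses feed a parsimonious mixture model combined with a shape prior; the sketch covers only the feature side.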
 

 
Author Pejman Rasti; Salma Samiei; Mary Agoyi; Sergio Escalera; Gholamreza Anbarjafari
Title Robust non-blind color video watermarking using QR decomposition and entropy analysis Type Journal Article
Year 2016 Publication Journal of Visual Communication and Image Representation Abbreviated Journal JVCIR
Volume 38 Issue Pages 838-847
Keywords Video watermarking; QR decomposition; Discrete Wavelet Transformation; Chirp Z-transform; Singular value decomposition; Orthogonal–triangular decomposition
Abstract Issues such as content identification, document and image security, audience measurement, ownership and copyright, among others, can be settled by the use of digital watermarking. Many recent video watermarking methods show drops in the visual quality of the sequences. The present work addresses this issue by introducing a robust and imperceptible non-blind color video frame watermarking algorithm. The method divides frames into moving and non-moving parts. The non-moving part of each color channel is processed separately using a block-based watermarking scheme. Blocks with an entropy lower than the average entropy of all blocks undergo a further process for embedding the watermark image. Finally, the watermarked frame is generated by adding the moving parts back. In the experiments, several signal processing attacks are applied to each watermarked frame, and the results are compared with those of some recent algorithms. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB; Approved no
Call Number Admin @ si @ RSA2016 Serial 2766
Permanent link to this record
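A small sketch of the entropy-based block selection described in the abstract: non-moving blocks of a channel qualify for embedding only if their entropy falls below the mean block entropy. The 8x8 block size and histogram-based entropy estimate are assumptions:

```python
import numpy as np

def low_entropy_blocks(channel, block=8):
    """channel: uint8 2D array. Returns top-left corners of blocks whose
    entropy is below the average entropy of all blocks."""
    h, w = channel.shape
    corners, entropies = [], []
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            patch = channel[r:r + block, c:c + block]
            p = np.bincount(patch.ravel(), minlength=256) / patch.size
            p = p[p > 0]                       # drop empty histogram bins
            entropies.append(float(-(p * np.log2(p)).sum()))
            corners.append((r, c))
    mean_e = np.mean(entropies)
    return [xy for xy, e in zip(corners, entropies) if e < mean_e]
```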
 

 
Author Gloria Fernandez Esparrach; Jorge Bernal; Maria Lopez Ceron; Henry Cordova; Cristina Sanchez Montes; Cristina Rodriguez de Miguel; F. Javier Sanchez
Title Exploring the clinical potential of an automatic colonic polyp detection method based on the creation of energy maps Type Journal Article
Year 2016 Publication Endoscopy Abbreviated Journal END
Volume 48 Issue 9 Pages 837-842
Keywords
Abstract Background and aims: The polyp miss rate is a drawback of colonoscopy that increases significantly for small polyps. We explored the efficacy of an automatic computer vision method for polyp detection.
Methods: Our method relies on a model that defines polyp boundaries as valleys of image intensity. Valley information is integrated into energy maps, which represent the likelihood of polyp presence.
Results: In 24 videos containing polyps from routine colonoscopies, all polyps were detected in at least one frame. Mean values of the maximum of the energy map were higher in frames with polyps than in frames without (p < 0.001). Performance improved in high-quality frames (AUC = 0.79, 95% CI: 0.70-0.87 vs. 0.75, 95% CI: 0.66-0.83). Using 3.75 as the maximum threshold value, sensitivity and specificity for polyp detection were 70.4% (95% CI: 60.3-80.8) and 72.4% (95% CI: 61.6-84.6), respectively.
Conclusion: Energy maps showed good performance for colonic polyp detection, indicating potential applicability in clinical practice.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV; Approved no
Call Number Admin @ si @ FBL2016 Serial 2778
Permanent link to this record
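The decision rule in the results above reduces to thresholding the peak of the energy map; a one-function sketch with the 3.75 cutoff taken from the abstract (the energy-map computation itself is assumed to be done upstream):

```python
import numpy as np

THRESHOLD = 3.75  # maximum-of-energy-map cutoff reported in the abstract

def frame_has_polyp(energy_map: np.ndarray) -> bool:
    """Flag a frame as containing a polyp if its peak energy is high enough."""
    return float(energy_map.max()) > THRESHOLD
```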
 

 
Author Ivet Rafegas; Javier Vazquez; Robert Benavente; Maria Vanrell; Susana Alvarez
Title Enhancing spatio-chromatic representation with more-than-three color coding for image description Type Journal Article
Year 2017 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A
Volume 34 Issue 5 Pages 827-837
Keywords
Abstract Extraction of spatio-chromatic features from color images is usually performed independently on each color channel. Usual 3D color spaces, such as RGB, present a high inter-channel correlation for natural images. This correlation can be reduced using color-opponent representations, but the spatial structure of regions with small color differences is not fully captured in two generic Red-Green and Blue-Yellow channels. To overcome these problems, we propose a new color coding that is adapted to the specific content of each image. Our proposal is based on two steps: (a) setting the number of channels to the number of distinctive colors we find in each image (avoiding the problem of channel correlation), and (b) building a channel representation that maximizes contrast differences within each color channel (avoiding the problem of low local contrast). We call this approach more-than-three color coding (MTT) to emphasize that the number of channels is adapted to the image content: the higher the color complexity of an image, the more channels are used to represent it. Here we select the most predominant colors in the image as distinctive colors, which we call color pivots, and we build the new color coding using these color pivots as a basis. To evaluate the proposed approach we measure its efficiency in an image categorization task. We show how a generic descriptor improves its performance at the description level when applied on the MTT coding.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC; 600.087 Approved no
Call Number Admin @ si @ RVB2017 Serial 2892
Permanent link to this record
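A rough sketch of the more-than-three (MTT) coding: choose the image's predominant colors as pivots and build one channel per pivot measuring similarity to it. Using k-means for pivot selection and a Gaussian similarity are illustrative assumptions; the paper selects pivots by predominance and builds channels that maximize within-channel contrast:

```python
import numpy as np
from sklearn.cluster import KMeans

def mtt_channels(img, n_pivots=6, sigma=30.0):
    """img: HxWx3 float RGB array. Returns an HxWxn_pivots channel stack."""
    pixels = img.reshape(-1, 3)
    pivots = KMeans(n_clusters=n_pivots, n_init=10).fit(pixels).cluster_centers_
    dist = np.linalg.norm(pixels[:, None, :] - pivots[None, :, :], axis=2)
    channels = np.exp(-dist ** 2 / (2 * sigma ** 2))  # similarity per pivot
    return channels.reshape(img.shape[0], img.shape[1], n_pivots)
```

The channel count adapts to the image: a more colorful image simply gets a larger n_pivots.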
 

 
Author Frederic Sampedro; Anna Domenech; Sergio Escalera
Title Static and dynamic computational cancer spread quantification in whole body FDG-PET/CT scans Type Journal Article
Year 2014 Publication Journal of Medical Imaging and Health Informatics Abbreviated Journal JMIHI
Volume 4 Issue 6 Pages 825-831
Keywords CANCER SPREAD; COMPUTER AIDED DIAGNOSIS; MEDICAL IMAGING; TUMOR QUANTIFICATION
Abstract In this work we address computational cancer spread quantification in whole body FDG-PET/CT scans. At the static level, this setting can be modeled as a clustering problem on the set of 3D connected components of the whole body PET tumoral segmentation mask produced by nuclear medicine physicians. At the dynamic level, an ad hoc algorithm is proposed to quantify the time evolution of cancer spread which, when combined with other existing indicators, gives rise to the metabolic tumor volume-aggressiveness-spread time evolution chart, a novel tool that we claim would prove useful in nuclear medicine and oncological clinical or research scenarios. Good performance results of the proposed methodologies, at both the clinical and technological levels, are shown using a dataset of 48 segmented whole body FDG-PET/CT scans.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ SDE2014b Serial 2548
Permanent link to this record
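At the static level the abstract casts spread quantification as clustering the 3D connected components of the PET tumoral mask; a sketch of that component-extraction step (scipy and the returned statistics are assumptions):

```python
import numpy as np
from scipy import ndimage

def tumor_components(mask):
    """mask: binary 3D tumoral segmentation. Per-lesion centroids/volumes."""
    labels, n = ndimage.label(mask)            # 3D connected components
    idx = list(range(1, n + 1))
    centroids = np.asarray(ndimage.center_of_mass(mask, labels, idx))
    volumes = np.asarray(ndimage.sum(mask, labels, idx))
    return centroids, volumes
```

The centroids would then feed a clustering step to quantify spread, and be compared across time points for the dynamic analysis.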
 

 
Author Alejandro Gonzalez Alzate; Zhijie Fang; Yainuvis Socarras; Joan Serrat; David Vazquez; Jiaolong Xu; Antonio Lopez
Title Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison Type Journal Article
Year 2016 Publication Sensors Abbreviated Journal SENS
Volume 16 Issue 6 Pages 820
Keywords Pedestrian Detection; FIR
Abstract Despite all the significant advances in pedestrian detection brought by computer vision for driving assistance, it is still a challenging problem. One reason is the extremely varying lighting conditions under which such a detector should operate, namely day and night. Recent research has shown that the combination of visible and non-visible imaging modalities may increase detection accuracy, where the infrared spectrum plays a critical role. The goal of this paper is to assess the accuracy gain of different pedestrian models (holistic, part-based, patch-based) when training with images in the far infrared spectrum. Specifically, we want to compare detection accuracy on test images recorded at day and at night when trained (and tested) using (a) plain color images, (b) just infrared images, and (c) both of them. To obtain results for the last item we propose an early fusion approach to combine features from both modalities. We base the evaluation on a new dataset we have built for this purpose as well as on the publicly available KAIST multispectral dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1424-8220 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.085; 600.076; 600.082; 601.281 Approved no
Call Number ADAS @ adas @ GFS2016 Serial 2754
Permanent link to this record
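The early-fusion approach mentioned in the abstract amounts to concatenating per-modality descriptors before a single classifier; a sketch with HOG features and a linear SVM (both choices are assumptions for illustration, not the paper's exact pipeline):

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def fused_descriptor(visible_gray, fir):
    """Concatenate HOG features of aligned visible and FIR pedestrian crops."""
    params = dict(orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))
    return np.concatenate([hog(visible_gray, **params), hog(fir, **params)])

# Hypothetical usage, given aligned crop pairs and labels:
# X = np.stack([fused_descriptor(v, f) for v, f in crop_pairs])
# detector = LinearSVC().fit(X, y)
```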
 

 
Author Bogdan Raducanu; Fadi Dornaika
Title Texture-independent recognition of facial expressions in image snapshots and videos Type Journal Article
Year 2013 Publication Machine Vision and Applications Abbreviated Journal MVA
Volume 24 Issue 4 Pages 811-820
Keywords
Abstract This paper addresses the static and dynamic recognition of basic facial expressions. It has two main contributions. First, we introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. We represent the learned facial actions associated with different facial expressions by time series. Second, we compare this dynamic scheme with a static one based on analyzing individual snapshots and show that the former performs better than the latter. We provide evaluations of performance using three subspace learning techniques: linear discriminant analysis, non-parametric discriminant analysis and support vector machines.
Address
Corporate Author Thesis
Publisher Springer-Verlag Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0932-8092 ISBN Medium
Area Expedition Conference
Notes OR; 600.046; 605.203;MV Approved no
Call Number Admin @ si @ RaD2013 Serial 2230
Permanent link to this record
 

 
Author Razieh Rastgoo; Kourosh Kiani; Sergio Escalera
Title Multi-Modal Deep Hand Sign Language Recognition in Still Images Using Restricted Boltzmann Machine Type Journal Article
Year 2018 Publication Entropy Abbreviated Journal ENTROPY
Volume 20 Issue 11 Pages 809
Keywords hand sign language; deep learning; restricted Boltzmann machine (RBM); multi-modal; profoundly deaf; noisy image
Abstract In this paper, a deep learning approach, the Restricted Boltzmann Machine (RBM), is used to perform automatic hand sign language recognition from visual data. We evaluate how the RBM, as a deep generative model, is capable of generating the distribution of the input data for an enhanced recognition of unseen data. Two modalities, RGB and depth, are considered in the model input in three forms: original image, cropped image, and noisy cropped image. Five crops of the input image are used, and the hands in these cropped images are detected using a Convolutional Neural Network (CNN). After that, three types of detected hand images are generated for each modality and input to the RBMs. The outputs of the RBMs for the two modalities are fused in another RBM in order to recognize the output sign label of the input image. The proposed multi-modal model is trained on all and part of the American alphabet and digits of four publicly available datasets. We also evaluate the robustness of the proposal against noise. Experimental results show that the proposed multi-modal model, using crops and the RBM fusing methodology, achieves state-of-the-art results on the Massey University Gesture Dataset 2012, the American Sign Language (ASL) and Fingerspelling Dataset from the University of Surrey's Centre for Vision, Speech and Signal Processing, the NYU dataset, and the ASL Fingerspelling A dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ RKE2018 Serial 3198
Permanent link to this record
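A sketch of the two-stream RBM fusion the abstract describes: one RBM per modality, then a fusion RBM over the concatenated hidden codes. scikit-learn's BernoulliRBM is a stand-in for the paper's models, and inputs are assumed scaled to [0, 1]:

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

def fuse_modalities(X_rgb, X_depth, n_hidden=256):
    """Per-modality RBMs, then a fusion RBM on the joined hidden codes."""
    rbm_rgb = BernoulliRBM(n_components=n_hidden).fit(X_rgb)
    rbm_depth = BernoulliRBM(n_components=n_hidden).fit(X_depth)
    joint = np.hstack([rbm_rgb.transform(X_rgb),
                       rbm_depth.transform(X_depth)])
    rbm_fuse = BernoulliRBM(n_components=n_hidden).fit(joint)
    return rbm_fuse.transform(joint)  # fused features for a final classifier
```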
 

 
Author R. Valenti; Theo Gevers
Title Combining Head Pose and Eye Location Information for Gaze Estimation Type Journal Article
Year 2012 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 21 Issue 2 Pages 802-815
Keywords
Abstract Impact factor 2010: 2.92
Impact factor 2011/12?: 3.32
Head pose and eye location for gaze estimation have been separately studied in numerous works in the literature. Previous research shows that satisfactory accuracy in head pose and eye location estimation can be achieved in constrained settings. However, in the presence of nonfrontal faces, eye locators are not adequate to accurately locate the center of the eyes. On the other hand, head pose estimation techniques are able to deal with these conditions; hence, they may be suited to enhance the accuracy of eye localization. Therefore, in this paper, a hybrid scheme is proposed to combine head pose and eye location information to obtain enhanced gaze estimation. To this end, the transformation matrix obtained from the head pose is used to normalize the eye regions, and in turn, the transformation matrix generated by the found eye location is used to correct the pose estimation procedure. The scheme is designed to enhance the accuracy of eye location estimations, particularly in low-resolution videos, to extend the operative range of the eye locators, and to improve the accuracy of the head pose tracker. These enhanced estimations are then combined to obtain a novel visual gaze estimation system, which uses both eye location and head information to refine the gaze estimates. From the experimental results, it can be derived that the proposed unified scheme improves the accuracy of eye estimations by 16% to 23%. Furthermore, it considerably extends its operating range by more than 15° by overcoming the problems introduced by extreme head poses. Moreover, the accuracy of the head pose tracker is improved by 12% to 24%. Finally, the experimentation on the proposed combined gaze estimation system shows that it is accurate (with a mean error between 2° and 5°) and that it can be used in cases where classic approaches would fail without imposing restraints on the position of the head.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ VaG 2012b Serial 1851
Permanent link to this record
 

 
Author Albert Clapes; Miguel Reyes; Sergio Escalera
Title Multi-modal User Identification and Object Recognition Surveillance System Type Journal Article
Year 2013 Publication Pattern Recognition Letters Abbreviated Journal PRL
Volume 34 Issue 7 Pages 799-808
Keywords Multi-modal RGB-Depth data analysis; User identification; Object recognition; Intelligent surveillance; Visual features; Statistical learning
Abstract We propose an automatic surveillance system for user identification and object recognition based on multi-modal RGB-Depth data analysis. We model an RGB-D environment by learning a pixel-based background Gaussian distribution. Then, user and object candidate regions are detected and recognized using robust statistical approaches. The system robustly recognizes users and updates itself in an online way, identifying and detecting new actors in the scene. Moreover, segmented objects are described, matched, recognized, and updated online using view-point 3D descriptions, being robust to partial occlusions and local 3D viewpoint rotations. Finally, the system saves the history of user–object assignments, which is especially useful for surveillance scenarios. The system has been evaluated on a novel data set containing different indoor/outdoor scenarios, objects, and users, showing accurate recognition and better performance than standard state-of-the-art approaches.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; 600.046; 605.203;MILAB Approved no
Call Number Admin @ si @ CRE2013 Serial 2248
Permanent link to this record
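The pixel-based background Gaussian distribution in the abstract can be sketched as a running mean/variance per pixel with an outlier test for foreground; the learning rate, threshold, and initial variance are assumptions:

```python
import numpy as np

class GaussianBackground:
    """Per-pixel (per-channel) running Gaussian background model."""

    def __init__(self, first_frame, alpha=0.02, k=2.5):
        self.mu = first_frame.astype(np.float64)
        self.var = np.full_like(self.mu, 25.0)  # initial variance guess
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        frame = frame.astype(np.float64)
        fg = (frame - self.mu) ** 2 > self.k ** 2 * self.var  # outlier test
        bg = ~fg            # update the model only where it is background
        self.mu[bg] += self.alpha * (frame - self.mu)[bg]
        self.var[bg] += self.alpha * ((frame - self.mu) ** 2 - self.var)[bg]
        return fg
```

The same recursion applies unchanged to the depth channel of RGB-D frames.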
 

 
Author David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo
Title Virtual and Real World Adaptation for Pedestrian Detection Type Journal Article
Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI
Volume 36 Issue 4 Pages 797-809
Keywords Domain Adaptation; Pedestrian Detection
Abstract Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0162-8828 ISBN Medium
Area Expedition Conference
Notes ADAS; 600.057; 600.054; 600.076 Approved no
Call Number ADAS @ adas @ VML2014 Serial 2275
Permanent link to this record
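The heart of the adaptation step as the abstract presents it: pool many virtual-world samples with a few real-world ones and train one classifier. A minimal sketch where the imbalance is handled with per-sample weights (the weighting is an assumption; V-AYLA explores several combination techniques):

```python
import numpy as np
from sklearn.svm import LinearSVC

def adapt(X_src, y_src, X_tgt, y_tgt, tgt_weight=5.0):
    """Train on pooled source+target data, up-weighting scarce target samples."""
    X = np.vstack([X_src, X_tgt])
    y = np.concatenate([y_src, y_tgt])
    w = np.concatenate([np.ones(len(y_src)),
                        np.full(len(y_tgt), tgt_weight)])
    return LinearSVC().fit(X, y, sample_weight=w)
```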
 

 
Author Daniel Ponsa; Joan Serrat; Antonio Lopez
Title On-board image-based vehicle detection and tracking Type Journal Article
Year 2011 Publication Transactions of the Institute of Measurement and Control Abbreviated Journal TIM
Volume 33 Issue 7 Pages 783-805
Keywords vehicle detection
Abstract In this paper we present a computer vision system for daytime vehicle detection and localization, an essential step in the development of several types of advanced driver assistance systems. It has a reduced processing time and high accuracy thanks to the combination of vehicle detection with lane-markings estimation and temporal tracking of both vehicles and lane markings. Concerning vehicle detection, our main contribution is a frame scanning process that inspects images according to the geometry of image formation, with an Adaboost-based detector that is robust to the variability in vehicle types (car, van, truck) and lighting conditions. In addition, we propose a new method to estimate the most likely three-dimensional locations of vehicles on the road ahead. With regard to the lane-markings estimation component, we have two main contributions. First, we employ an image feature different from the commonly used edges: ridges, which are better suited to this problem. Second, we adapt RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane markings to the image features. We qualitatively assess our vehicle detection system on sequences captured on several road types and under very different lighting conditions. The processed videos are available on a web page associated with this paper. A quantitative evaluation of the system has shown quite accurate results (a low number of false positives and negatives) at a reasonable computation time.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ PSL2011 Serial 1413
Permanent link to this record
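The second lane-markings contribution adapts RANSAC to fit a parametric model of a pair of lane markings to ridge features; a stripped-down sketch fitting a single straight marking (the single-line, straight-road model is a simplifying assumption):

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=2.0, rng=None):
    """Robustly fit y = a*x + b to 2D ridge points; returns (a, b)."""
    rng = rng or np.random.default_rng()
    best = np.zeros(len(points), bool)
    for _ in range(n_iter):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        if p[0] == q[0]:
            continue                          # degenerate vertical sample
        a = (q[1] - p[1]) / (q[0] - p[0])
        b = p[1] - a * p[0]
        inliers = np.abs(points[:, 1] - (a * points[:, 0] + b)) < tol
        if inliers.sum() > best.sum():
            best = inliers
    # least-squares refit on the consensus set
    return np.polyfit(points[best, 0], points[best, 1], 1)
```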
 

 
Author Ferran Poveda; Debora Gil; Enric Marti; Albert Andaluz; Manel Ballester; Francesc Carreras Costa
Title Helical structure of the cardiac ventricular anatomy assessed by Diffusion Tensor Magnetic Resonance Imaging multi-resolution tractography Type Journal Article
Year 2013 Publication Revista Española de Cardiología Abbreviated Journal REC
Volume 66 Issue 10 Pages 782-790
Keywords Heart; Diffusion magnetic resonance imaging; Diffusion tractography; Helical heart; Myocardial ventricular band
Abstract A deep understanding of myocardial structure, linking the morphology and function of the heart, would unravel crucial knowledge for medical and surgical clinical procedures and studies. Several conceptual models of myocardial fiber organization have been proposed, but the lack of an automatic and objective methodology has prevented agreement. We sought to deepen this knowledge through advanced computer graphic representations of the myocardial fiber architecture by diffusion tensor magnetic resonance imaging (DT-MRI).
We performed automatic tractography reconstruction of unsegmented DT-MRI canine heart datasets from the public database of the Johns Hopkins University. Full-scale tractographies have been built with 200 seeds and are composed of streamlines computed on the vector field of primary eigenvectors of the diffusion tensor volumes. We also introduce a novel multi-scale visualization technique to obtain a simplified tractography. This methodology preserves the main geometric features of the fiber tracts, making it easier to decipher the main properties of the architectural organization of the heart.
In the analysis of the output of our tractographic representations we found exact correlation not only with low-level details of myocardial architecture, but also with the more abstract conceptualization of a continuous helical ventricular myocardial fiber array.
Objective analysis of myocardial architecture by an automated method, including the entire myocardium and using several 3D levels of complexity, reveals a continuous helical myocardial fiber arrangement of both right and left ventricles, supporting the anatomical model of the helical ventricular myocardial band described by Torrent-Guasp.
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.044; 600.060 Approved no
Call Number IAM @ iam @ PGM2013 Serial 2194
Permanent link to this record