Author Raul Gomez; Lluis Gomez; Jaume Gibert; Dimosthenis Karatzas
  Title Learning to Learn from Web Data through Deep Semantic Embeddings Type Conference Article
  Year 2018 Publication 15th European Conference on Computer Vision Workshops
  Volume 11134 Pages 514-529
  Abstract In this paper we propose to learn a multimodal image and text embedding from Web and social media data, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the pipeline can learn from images with associated text without supervision, and we perform a thorough analysis of five different text embeddings on three different benchmarks. We show that the embeddings learnt from Web and social media data achieve performance competitive with supervised methods on the text-based image retrieval task, and clearly outperform the state of the art on the MIRFlickr dataset when training on the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, which can be used for fair comparison of image-text embeddings.
  Address Munich; Germany; September 2018
  Abbreviated Series Title LNCS
  Conference ECCVW
  Notes DAG; 600.129; 601.338; 600.121 Approved no  
  Call Number Admin @ si @ GGG2018a Serial 3175  
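
The pipeline in this record's abstract maps images into a pretrained text-embedding space, so semantic retrieval reduces to nearest-neighbour search in that shared space. Below is a minimal PyTorch sketch of that idea; the CNN feature extractor, the 300-dimensional text target (e.g., word2vec of the associated text) and all dimensions are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch (not the authors' code): regress CNN image features onto a
# fixed text-embedding space so images and text become directly comparable.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageToTextEmbedding(nn.Module):
    def __init__(self, visual_dim=2048, text_dim=300):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(visual_dim, 1024), nn.ReLU(),
            nn.Linear(1024, text_dim),
        )

    def forward(self, visual_feats):
        return self.proj(visual_feats)

model = ImageToTextEmbedding()
visual_feats = torch.randn(32, 2048)   # CNN features of 32 web images (stand-in)
text_targets = torch.randn(32, 300)    # e.g. word2vec of the associated text
loss = F.mse_loss(model(visual_feats), text_targets)
loss.backward()
```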
 

 
Author Arka Ujjal Dey; Suman Ghosh; Ernest Valveny
  Title Don't only Feel Read: Using Scene text to understand advertisements Type Conference Article
  Year 2018 Publication IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
  Abstract We propose a framework for automated classification of advertisement images, using not just visual features but also textual cues extracted from embedded text. Our approach is inspired by the assumption that ad images contain meaningful textual content that can provide a discriminative semantic interpretation and thus aid in classification tasks. To this end, we develop a framework from off-the-shelf components and demonstrate the effectiveness of textual cues in semantic classification tasks.
  Address Salt Lake City; Utah; USA; June 2018  
  Conference CVPRW
  Notes DAG; 600.121; 600.129 Approved no  
  Call Number Admin @ si @ DGV2018 Serial 3551  
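
The abstract above combines visual features with textual cues from text spotted inside the ad image. A hedged sketch of a simple late-fusion classifier built from off-the-shelf parts follows; the feature extractors, dimensions and class count are illustrative assumptions, not the paper's exact architecture.

```python
# Late-fusion sketch: concatenate visual features with features of the
# embedded text, then classify. Dimensions and class count are stand-ins.
import torch
import torch.nn as nn

class AdClassifier(nn.Module):
    def __init__(self, visual_dim=2048, text_dim=300, num_classes=38):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(visual_dim + text_dim, 512), nn.ReLU(),
            nn.Linear(512, num_classes),   # illustrative ad-topic class count
        )

    def forward(self, visual_feats, text_feats):
        return self.fc(torch.cat([visual_feats, text_feats], dim=1))

logits = AdClassifier()(torch.randn(8, 2048), torch.randn(8, 300))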
 

 
Author I. Sorodoc; S. Pezzelle; A. Herbelot; Mariella Dimiccoli; R. Bernardi
  Title Learning quantification from images: A structured neural architecture Type Journal Article
  Year 2018 Publication Natural Language Engineering Abbreviated Journal NLE  
  Volume 24 Issue 3 Pages 363-392  
  Abstract Major advances have recently been made in merging language and vision representations. Most tasks considered so far have confined themselves to the processing of objects and lexicalised relations amongst objects (content words). We know, however, that humans (even pre-school children) can abstract over raw multimodal data to perform certain types of higher-level reasoning, expressed in natural language by function words. A case in point is given by their ability to learn quantifiers, i.e. expressions like 'few', 'some' and 'all'. From formal semantics and cognitive linguistics, we know that quantifiers are relations over sets which, as a simplification, we can see as proportions. For instance, in 'most fish are red', 'most' encodes the proportion of fish which are red fish. In this paper, we study how well current neural network strategies model such relations. We propose a task where, given an image and a query expressed by an object–property pair, the system must return a quantifier expressing which proportion of the queried objects have the queried property. Our contributions are twofold. First, we show that the best performance on this task involves coupling state-of-the-art attention mechanisms with a network architecture mirroring the logical structure assigned to quantifiers by classic linguistic formalisation. Second, we introduce a new balanced dataset of image scenarios associated with quantification queries, which we hope will foster further research in this area.
  Notes MILAB; not mentioned Approved no
  Call Number Admin @ si @ SPH2018 Serial 3021  
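
The task in this record maps a proportion (how many queried objects have the queried property) to a quantifier expression. The sketch below shows that output step only; the thresholds and the quantifier vocabulary are hypothetical stand-ins for illustration, not the paper's learned model.

```python
# Illustrative mapping from a proportion to a quantifier label.
def quantifier(proportion):
    # proportion = (# queried objects with the property) / (# queried objects)
    if proportion == 0.0:
        return "no"
    if proportion < 0.3:      # thresholds are hypothetical
        return "few"
    if proportion < 0.6:
        return "some"
    if proportion < 1.0:
        return "most"
    return "all"

assert quantifier(7 / 10) == "most"   # e.g. 7 of 10 fish are red
```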
 

 
Author Maedeh Aghaei; Mariella Dimiccoli; C. Canton-Ferrer; Petia Radeva
  Title Towards social pattern characterization from egocentric photo-streams Type Journal Article
  Year 2018 Publication Computer Vision and Image Understanding Abbreviated Journal CVIU  
  Volume 171 Pages 104-117
  Keywords Social pattern characterization; Social signal extraction; Lifelogging; Convolutional and recurrent neural networks  
  Abstract Following the increasingly popular trend of social interaction analysis in egocentric vision, this article presents a comprehensive pipeline for automatic social pattern characterization of a wearable photo-camera user. The proposed framework relies solely on the visual analysis of egocentric photo-streams and consists of three major steps. The first step detects the social interactions of the user, exploring the impact of several social signals on the task. The detected social events are inspected in the second step for categorization into different social meetings. These two steps act at event level, where each potential social event is modeled as a multi-dimensional time-series whose dimensions correspond to a set of relevant features for each task; an LSTM is then employed to classify the time-series. The last step of the framework characterizes the social patterns of the user, quantifying the duration, diversity and frequency of the user's social relations in various social situations. This is achieved by discovering recurrences of the same people across the whole set of social events related to the user. Experimental evaluation on EgoSocialStyle (the dataset proposed in this work) and EGO-GROUP demonstrates promising results on the task of social pattern characterization from egocentric photo-streams.
  Notes MILAB; no proj Approved no  
  Call Number Admin @ si @ ADC2018 Serial 3022  
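
At the event level, the abstract above models each potential social event as a multi-dimensional time-series classified by an LSTM. A minimal PyTorch sketch of such a classifier follows; the feature dimension, sequence length and class count are illustrative assumptions, not the paper's exact features.

```python
# Sketch: classify a social event (a sequence of per-frame feature vectors)
# from the LSTM's last hidden state. Dimensions are stand-ins.
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    def __init__(self, feat_dim=8, hidden=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (batch, time, feat_dim)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])      # classify from the last hidden state

logits = EventClassifier()(torch.randn(4, 30, 8))  # 4 events, 30 frames each
```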
 

 
Author Debora Gil; Rosa Maria Ortiz; Carles Sanchez; Antoni Rosell
  Title Objective endoscopic measurements of central airway stenosis. A pilot study Type Journal Article
  Year 2018 Publication Respiration Abbreviated Journal RES  
  Volume 95 Pages 63–69
  Keywords Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis  
  Abstract Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), an image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SIs obtained by the physicians and by SENSA were compared with a reference SI to assess their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A Web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed area provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed area provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4-103.7), which is significantly better than the precision of the SI by visual estimation (p < 0.001), an improvement of at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be computed from any bronchoscope using an affordable, scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI.
  Notes IAM; 600.075; 600.096; 600.145 Approved no  
  Call Number Admin @ si @ GOS2018 Serial 3043  
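
The stenosis index (SI) in this record is commonly defined as the relative area reduction of the lumen, SI = 1 - A_obstructed / A_normal, computed from the normal and obstructed contours. A hedged NumPy sketch under that assumption follows; this is not SENSA's code, and the square contours are placeholders.

```python
# Sketch: stenosis index from two closed contours via the shoelace formula.
import numpy as np

def polygon_area(xy):
    # Shoelace formula for a closed contour given as an (N, 2) array.
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def stenosis_index(normal_contour, obstructed_contour):
    # Relative area reduction: 0 = fully open, 1 = fully obstructed.
    return 1.0 - polygon_area(obstructed_contour) / polygon_area(normal_contour)

normal = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], float)      # area 16
obstructed = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)  # area 4
print(stenosis_index(normal, obstructed))  # 0.75 -> 75% stenosis
```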
 

 
Author Jose M. Armingol; Jorge Alfonso; Nourdine Aliane; Miguel Clavijo; Sergio Campos-Cordobes; Arturo de la Escalera; Javier del Ser; Javier Fernandez; Fernando Garcia; Felipe Jimenez; Antonio Lopez; Mario Mata
  Title Environmental Perception for Intelligent Vehicles Type Book Chapter
  Year 2018 Publication Intelligent Vehicles. Enabling Technologies and Future Developments
  Pages 23–101
  Keywords Computer vision; laser techniques; data fusion; advanced driver assistance systems; traffic monitoring systems; intelligent vehicles  
  Abstract Because of its complexity, environmental perception is a challenge for Intelligent Transport Systems: road environments present a great variety of situations and elements that these systems must handle. Accordingly, a variety of solutions exists with regard to sensors and methods, and the precision, complexity, cost and computational load reported by these works differ. In this chapter, some systems based on computer vision and laser techniques are presented. Fusion methods are also introduced in order to provide advanced and reliable perception systems.
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ AAA2018 Serial 3046
 

 
Author Antonio Lopez; David Vazquez; Gabriel Villalonga
  Title Data for Training Models, Domain Adaptation Type Book Chapter
  Year 2018 Publication Intelligent Vehicles. Enabling Technologies and Future Developments
  Pages 395–436
  Keywords Driving simulator; hardware; software; interface; traffic simulation; macroscopic simulation; microscopic simulation; virtual data; training data  
  Abstract Simulation can enable several developments in the field of intelligent vehicles. This chapter is divided into three main subsections. The first deals with driving simulators: the continuous improvement of hardware performance is allowing the development of more complex driving simulators, and immersion in the simulated scene is increased by high-fidelity feedback to the driver. In the second subsection, traffic simulation is explained, as well as how it can be used for intelligent transport systems. Finally, sensor-based perception and action must be based on data-driven algorithms, and simulation can provide data to train and test algorithms that are afterwards implemented in vehicles. These tools are explained in the third subsection.
  Notes ADAS; 600.118 Approved no  
  Call Number Admin @ si @ LVV2018 Serial 3047  
 

 
Author Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Marçal Rusiñol; Francesc J. Ferri
  Title Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction Type Journal Article
  Year 2018 Publication Journal of Mathematical Imaging and Vision Abbreviated Journal JMIV  
  Volume 60 Issue 4 Pages 512-524  
  Abstract This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods to model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for large-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT) and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
  Notes DAG; ADAS; 600.086; 600.130; 600.121; 600.118; 600.129 Approved no  
  Call Number Admin @ si @ DMH2018a Serial 3062  
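
Both KGDCV variants build on generic kernel machinery: an RBF Gram matrix, double-centred so the implicit feature map has zero mean. The sketch below shows that shared ingredient only, not the authors' full algorithm; the kernel bandwidth and data are illustrative.

```python
# Generic kernel-method starting point: build and centre an RBF Gram matrix.
import numpy as np

def rbf_gram(X, gamma=0.1):
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2) for rows of X.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def centre_gram(K):
    # Double-centre K so the implicit feature map has zero mean.
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one

K = centre_gram(rbf_gram(np.random.randn(20, 5)))  # 20 samples, 5 features
```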
 

 
Author Huamin Ren; Nattiya Kanhabua; Andreas Mogelmose; Weifeng Liu; Kaustubh Kulkarni; Sergio Escalera; Xavier Baro; Thomas B. Moeslund
  Title Back-dropout Transfer Learning for Action Recognition Type Journal Article
  Year 2018 Publication IET Computer Vision Abbreviated Journal IETCV  
  Volume 12 Issue 4 Pages 484-491  
  Keywords Learning (artificial intelligence); Pattern Recognition  
  Abstract Transfer learning aims at adapting a model learned from a source dataset to a target dataset. It is a beneficial approach especially when annotating the target dataset is expensive or infeasible, and it has demonstrated powerful learning capabilities in various vision tasks. Despite being promising, it remains an open question how to adapt the model learned from the source dataset to the target dataset. One big challenge is to prevent the impact of category bias on classification performance: dataset bias exists when two images from the same category, but from different datasets, are not classified as the same. To address this problem, a transfer learning algorithm called negative back-dropout transfer learning (NB-TL) has been proposed, which utilizes misclassified images and further performs a back-dropout strategy on them to penalize errors. Experimental results demonstrate the effectiveness of the proposed algorithm. In particular, the authors evaluate the performance of the proposed NB-TL algorithm on the UCF 101 action recognition dataset, achieving an 88.9% recognition rate.
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ RKM2018 Serial 3071  
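
The NB-TL idea of exploiting misclassified samples can be illustrated, in a deliberately simplified form, by up-weighting the loss of currently misclassified images during fine-tuning. The sketch below is that simplification only, not the exact back-dropout rule from the paper; the 101 classes follow the UCF 101 evaluation mentioned in the abstract.

```python
# Simplified stand-in for penalizing misclassified samples during transfer.
import torch
import torch.nn.functional as F

def reweighted_loss(logits, labels, penalty=2.0):
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    wrong = (logits.argmax(dim=1) != labels).float()
    weights = 1.0 + (penalty - 1.0) * wrong   # misclassified samples count more
    return (weights * per_sample).mean()

loss = reweighted_loss(torch.randn(16, 101), torch.randint(0, 101, (16,)))
```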
 

 
Author Mark Philip Philipsen; Jacob Velling Dueholm; Anders Jorgensen; Sergio Escalera; Thomas B. Moeslund
  Title Organ Segmentation in Poultry Viscera Using RGB-D Type Journal Article
  Year 2018 Publication Sensors Abbreviated Journal SENS  
  Volume 18 Issue 1 Pages 117  
  Keywords semantic segmentation; RGB-D; random forest; conditional random field; 2D; 3D; CNN  
  Abstract We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features.  
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ PVJ2018 Serial 3072  
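
The result in this record is reported as a mean Jaccard index (intersection over union) across organ classes. A small NumPy sketch of that metric, computed from predicted and ground-truth label maps, follows; the four-class setting matches the abstract, while the random inputs are placeholders.

```python
# Mean Jaccard index across classes from two integer label maps.
import numpy as np

def mean_jaccard(pred, gt, num_classes):
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 4, (480, 640))  # 4 organ classes, placeholder maps
gt = np.random.randint(0, 4, (480, 640))
print(mean_jaccard(pred, gt, num_classes=4))
```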
 

 
Author Sounak Dey; Anjan Dutta; Juan Ignacio Toledo; Suman Ghosh; Josep Llados; Umapada Pal
  Title SigNet: Convolutional Siamese Network for Writer Independent Offline Signature Verification Type Miscellaneous
  Year 2018 Publication Arxiv
  Abstract Offline signature verification is one of the most challenging tasks in biometrics and document forensics. Unlike other verification problems, it needs to model minute but critical details between genuine and forged signatures, because a skilled forgery may often closely resemble the real signature with only small deformations. This verification task is even harder in writer-independent scenarios, which are undeniably crucial for realistic cases. In this paper, we model an offline writer-independent signature verification task with a convolutional Siamese network. Siamese networks are twin networks with shared weights, which can be trained to learn a feature space where similar observations are placed in proximity. This is achieved by exposing the network to pairs of similar and dissimilar observations and minimizing the Euclidean distance between similar pairs while simultaneously maximizing it between dissimilar pairs. Experiments conducted on cross-domain datasets emphasize the capability of our network to model forgery in different languages (scripts) and handwriting styles. Moreover, our designed Siamese network, named SigNet, exceeds the state-of-the-art results on most of the benchmark signature datasets, which paves the way for further research in this direction.
  Notes DAG; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ DDT2018 Serial 3085  
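
The training objective described in the abstract is the classic contrastive loss: pull embeddings of similar signature pairs together and push dissimilar pairs at least a margin apart. A minimal PyTorch sketch of that loss follows; the embedding size and margin are illustrative, and SigNet's convolutional branches are omitted.

```python
# Contrastive loss over pairs of signature embeddings.
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, is_similar_pair, margin=1.0):
    # is_similar_pair: 1.0 for genuine/same-writer pairs, 0.0 otherwise.
    d = F.pairwise_distance(emb_a, emb_b)
    return (is_similar_pair * d.pow(2)
            + (1.0 - is_similar_pair) * F.relu(margin - d).pow(2)).mean()

loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128),
                        torch.randint(0, 2, (8,)).float())
```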
 

 
Author Dena Bazazian; Dimosthenis Karatzas; Andrew Bagdanov
  Title Soft-PHOC Descriptor for End-to-End Word Spotting in Egocentric Scene Images Type Conference Article
  Year 2018 Publication International Workshop on Egocentric Perception, Interaction and Computing at ECCV
  Abstract Word spotting in natural scene images has many applications in scene understanding and visual assistance. We propose Soft-PHOC, an intermediate representation of images based on character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We show how to use our descriptors for word spotting tasks in egocentric camera streams through an efficient text line proposal algorithm, based on the Hough Transform over character attribute maps followed by scoring with Dynamic Time Warping (DTW). We evaluate our results on the ICDAR 2015 Challenge 4 dataset of incidental scene text captured by an egocentric camera.
  Address Munich; Germany; September 2018
  Conference ECCVW
  Notes DAG; 600.129; 600.121 Approved no
  Call Number Admin @ si @ BKB2018b Serial 3174  
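
Candidate text lines in this record are scored with Dynamic Time Warping (DTW). Below is a plain NumPy sketch of the classic DTW distance between two descriptor sequences; the 26-dimensional per-step descriptors in the usage line are an assumption (e.g., per-character probability values), not the paper's exact features.

```python
# Classic O(len_a * len_b) dynamic-time-warping distance between two
# sequences of per-step descriptors (rows of seq_a and seq_b).
import numpy as np

def dtw(seq_a, seq_b):
    na, nb = len(seq_a), len(seq_b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]

d = dtw(np.random.randn(20, 26), np.random.randn(15, 26))  # placeholder inputs
```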
 

 
Author Lu Yu; Lichao Zhang; Joost Van de Weijer; Fahad Shahbaz Khan; Yongmei Cheng; C. Alejandro Parraga
  Title Beyond Eleven Color Names for Image Understanding Type Journal Article
  Year 2018 Publication Machine Vision and Applications Abbreviated Journal MVAP  
  Volume 29 Issue 2 Pages 361-373  
  Keywords Color name; Discriminative descriptors; Image classification; Re-identification; Tracking  
  Abstract Color description is one of the fundamental problems of image understanding. One popular way to represent colors is by means of color names. Most existing work on color names focuses on only the eleven basic color terms of the English language. This could be limiting the discriminative power of these representations, and representations based on more color names are expected to perform better. However, there exists no clear strategy for choosing additional color names. We collect a dataset of 28 additional color names. To ensure that the resulting color representation has high discriminative power, we propose a method to order the additional color names according to their complementary nature with the basic color names. This allows us to compute color name representations of arbitrary length with high discriminative power. In the experiments we show that these new color name descriptors outperform the existing color name descriptor on the tasks of visual tracking, person re-identification and image classification.
  Notes LAMP; NEUROBIT; 600.068; 600.109; 600.120 Approved no  
  Call Number Admin @ si @ YYW2018 Serial 3087  
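
A color-name descriptor of the kind discussed above can be computed by quantizing RGB values, looking up per-pixel color-name probabilities in a precomputed table, and pooling them over the image. The sketch below assumes a 32x32x32 lookup table over 39 names (11 basic + 28 additional, as in the abstract); the random table is a stand-in for a learned one.

```python
# Sketch: image-level color-name descriptor from a quantized-RGB lookup table.
import numpy as np

NUM_NAMES = 39                                       # 11 basic + 28 additional
lut = np.random.dirichlet(np.ones(NUM_NAMES), size=32 * 32 * 32)  # stand-in

def color_name_descriptor(image_rgb):
    q = (image_rgb // 8).astype(int)                 # 256 levels -> 32 bins
    flat = (q[..., 0] * 32 + q[..., 1]) * 32 + q[..., 2]
    probs = lut[flat.ravel()]                        # per-pixel name probabilities
    return probs.mean(axis=0)                        # pooled image descriptor

desc = color_name_descriptor(np.random.randint(0, 256, (64, 64, 3)))
```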
 

 
Author Xim Cerda-Company; C. Alejandro Parraga; Xavier Otazu
  Title Which tone-mapping operator is the best? A comparative study of perceptual quality Type Journal Article
  Year 2018 Publication Journal of the Optical Society of America A Abbreviated Journal JOSA A  
  Volume 35 Issue 4 Pages 626-638  
  Abstract Tone-mapping operators (TMOs) are designed to generate perceptually similar low-dynamic-range images from high-dynamic-range ones. We studied the performance of fifteen TMOs in two psychophysical experiments where observers compared the digitally generated tone-mapped images to their corresponding physical scenes. All experiments were performed in a controlled environment, and the setups were designed to emphasize different image properties: in the first experiment we evaluated the local relationships among intensity levels, and in the second one we evaluated global visual appearance between physical scenes and tone-mapped images, which were presented side by side. We ranked the TMOs according to how well they reproduced the results obtained in the physical scene. Our results show that ranking position clearly depends on the adopted evaluation criteria, which implies that, in general, these tone-mapping algorithms consider either local or global image attributes but rarely both. Regarding the question of which TMO is the best, KimKautz [1] and Krawczyk [2] obtained the best results across the different experiments. We conclude that more thorough and standardized evaluation criteria are needed to study all the characteristics of TMOs, as there is ample room for improvement in future developments.
  Notes NEUROBIT; 600.120; 600.128 Approved no  
  Call Number Admin @ si @ CPO2018 Serial 3088  
 

 
Author Jorge Bernal; Aymeric Histace; Marc Masana; Quentin Angermann; Cristina Sanchez Montes; Cristina Rodriguez de Miguel; Maroua Hammami; Ana Garcia Rodriguez; Henry Cordova; Olivier Romain; Gloria Fernandez Esparrach; Xavier Dray; F. Javier Sanchez
  Title Polyp Detection Benchmark in Colonoscopy Videos using GTCreator: A Novel Fully Configurable Tool for Easy and Fast Annotation of Image Databases Type Conference Article
  Year 2018 Publication 32nd International Congress and Exhibition on Computer Assisted Radiology & Surgery
  Conference CARS
  Notes ISE; MV; 600.119 Approved no  
  Call Number Admin @ si @ BHM2018 Serial 3089  