Author: Debora Gil; Rosa Maria Ortiz; Carles Sanchez; Antoni Rosell
Title: Objective endoscopic measurements of central airway stenosis. A pilot study
Type: Journal Article
Year: 2018
Publication: Respiration
Abbreviated Journal: RES
Volume: 95
Pages: 63–69
Keywords: Bronchoscopy; Tracheal stenosis; Airway stenosis; Computer-assisted analysis
Abstract: Endoscopic estimation of the degree of stenosis in central airway obstruction is subjective and highly variable. Objective: To determine the benefits of using SENSA (System for Endoscopic Stenosis Assessment), image-based computational software, for obtaining objective stenosis index (SI) measurements among a group of expert bronchoscopists and general pulmonologists. Methods: A total of 7 expert bronchoscopists and 7 general pulmonologists were enrolled to validate SENSA usage. The SI values obtained by the physicians and by SENSA were compared with a reference SI to establish their precision in SI computation. We used SENSA to efficiently obtain this reference SI in 11 selected cases of benign stenosis. A web platform with three user-friendly microtasks was designed to gather the data. The users had to visually estimate the SI from videos with and without contours of the normal and the obstructed areas provided by SENSA. The users were able to modify the SENSA contours to define the reference SI using morphometric bronchoscopy. Results: Visual SI estimation accuracy was associated with neither bronchoscopic experience (p = 0.71) nor the contours of the normal and the obstructed areas provided by the system (p = 0.13). The precision of the SI by SENSA was 97.7% (95% CI: 92.4–103.7), which is significantly better than the precision of the SI by visual estimation (p < 0.001), an improvement of at least 15%. Conclusion: SENSA provides objective SI measurements with a precision of up to 99.5%, which can be calculated from any bronchoscope using an affordable, scalable interface. Providing normal and obstructed contours on bronchoscopic videos does not improve physicians' visual estimation of the SI.
Notes: IAM; 600.075; 600.096; 600.145
Approved: no
Call Number: Admin @ si @ GOS2018
Serial: 3043
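
The stenosis index (SI) discussed above is, by definition, the fractional reduction of the obstructed lumen area relative to the healthy lumen area. A minimal sketch of that computation from two delineated contours (the square contours below are hypothetical; SENSA's actual contour extraction is not shown):

```python
import numpy as np

def polygon_area(points: np.ndarray) -> float:
    """Area of a simple polygon via the shoelace formula."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def stenosis_index(normal_contour: np.ndarray, obstructed_contour: np.ndarray) -> float:
    """SI = 1 - (obstructed area / normal area), expressed as a percentage."""
    return 100.0 * (1.0 - polygon_area(obstructed_contour) / polygon_area(normal_contour))

# Hypothetical contours in pixel coordinates (real contours would come
# from delineating the healthy and the obstructed lumen on a video frame).
normal = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
obstructed = np.array([[25, 25], [75, 25], [75, 75], [25, 75]], dtype=float)
print(f"SI = {stenosis_index(normal, obstructed):.1f}%")  # -> SI = 75.0%
```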
 

 
Author: David Aldavert; Marçal Rusiñol
Title: Manuscript text line detection and segmentation using second-order derivatives analysis
Type: Conference Article
Year: 2018
Publication: 13th IAPR International Workshop on Document Analysis Systems
Pages: 293–298
Keywords: text line detection; text line segmentation; text region detection; second-order derivatives
Abstract: In this paper, we explore the use of second-order derivatives to detect text lines in handwritten document images. Taking advantage of the fact that the second derivative gives a minimal response when a dark linear element over a bright background has the same orientation as the filter, we use this operator to create a map with the local orientation and strength of putative text lines in the document. Then, we detect line segments by selecting and merging the filter responses that have a similar orientation and scale. Finally, text lines are found by merging the segments that lie within the same text region. The proposed segmentation algorithm is learning-free while showing performance similar to state-of-the-art methods on publicly available datasets.
Address: Vienna; Austria; April 2018
Conference: DAS
Notes: DAG; 600.084; 600.129; 302.065; 600.121
Approved: no
Call Number: Admin @ si @ AlR2018a
Serial: 3104
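
The operator described in this abstract, an oriented second derivative that responds strongly across a dark line on a bright background, can be approximated with Gaussian derivative filters. A rough sketch, assuming a grayscale image in [0, 1]; the paper's exact filters, scales, and segment-merging stages are not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def directional_second_derivative(img, sigma, theta):
    """Second Gaussian derivative of the image along direction theta."""
    ixx = gaussian_filter(img, sigma, order=(0, 2))  # d2/dx2 (axis 1 = x)
    iyy = gaussian_filter(img, sigma, order=(2, 0))  # d2/dy2 (axis 0 = y)
    ixy = gaussian_filter(img, sigma, order=(1, 1))  # d2/dxdy
    c, s = np.cos(theta), np.sin(theta)
    return c * c * ixx + 2 * c * s * ixy + s * s * iyy

def line_orientation_map(img, sigma, n_orientations=8):
    """Per-pixel strength and orientation of putative dark lines.
    A dark line on a bright background is an intensity valley, so the
    second derivative taken ACROSS the line is strongly positive; the
    maximizing derivative direction is perpendicular to the line."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    stack = np.stack([directional_second_derivative(img, sigma, t) for t in thetas])
    strength = stack.max(axis=0)
    orientation = thetas[stack.argmax(axis=0)]  # direction across the line
    return strength, orientation

# Toy image: a dark horizontal stripe on a bright background.
img = np.ones((64, 64)); img[30:34, :] = 0.0
strength, orientation = line_orientation_map(img, sigma=2.0)
```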
 

 
Author: David Aldavert; Marçal Rusiñol
Title: Synthetically generated semantic codebook for Bag-of-Visual-Words based word spotting
Type: Conference Article
Year: 2018
Publication: 13th IAPR International Workshop on Document Analysis Systems
Pages: 223–228
Keywords: Word Spotting; Bag of Visual Words; Synthetic Codebook; Semantic Information
Abstract: Word-spotting methods based on the Bag-of-Visual-Words framework have demonstrated good retrieval performance even when used in a completely unsupervised manner. Although unsupervised approaches are suitable for large document collections due to the cost of acquiring labeled data, these methods also present some drawbacks. For instance, training a suitable “codebook” for a certain dataset has a high computational cost. Therefore, in this paper we present a database-agnostic codebook which is trained from synthetic data. The aim of the proposed approach is to generate a codebook where the only information required is the type of script used in the document. The use of synthetic data also allows semantic information to be easily incorporated into the codebook generation. Thus, the proposed method is able to determine which codewords have a semantic representation in the descriptor feature space. Experimental results show that the resulting codebook attains state-of-the-art performance while having a more compact representation.
Address: Vienna; Austria; April 2018
Conference: DAS
Notes: DAG; 600.084; 600.129; 600.121
Approved: no
Call Number: Admin @ si @ AlR2018b
Serial: 3105
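
The database-agnostic codebook idea, train a visual vocabulary offline on synthetic material and then encode any real word image against it, can be sketched as follows. The random vectors stand in for real local descriptors and are purely illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for descriptors extracted from synthetically rendered text in
# the target script (hypothetical random vectors, for illustration only).
synthetic_descriptors = rng.normal(size=(5000, 64))

# Train the codebook once, offline, on synthetic data only.
codebook = KMeans(n_clusters=128, n_init=4, random_state=0).fit(synthetic_descriptors)

def bow_encode(descriptors: np.ndarray, codebook: KMeans) -> np.ndarray:
    """L2-normalized Bag-of-Visual-Words histogram of one word image."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Any real word image is then encoded against the database-agnostic
# codebook and retrieved by, e.g., cosine similarity of BoW vectors.
query_descriptors = rng.normal(size=(200, 64))
query_bow = bow_encode(query_descriptors, codebook)
```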
 

 
Author: Cristina Palmero; Javier Selva; Mohammad Ali Bagheri; Sergio Escalera
Title: Recurrent CNN for 3D Gaze Estimation using Appearance and Shape Cues
Type: Conference Article
Year: 2018
Publication: 29th British Machine Vision Conference
Abstract: Gaze behavior is an important non-verbal cue in social signal processing and human-computer interaction. In this paper, we tackle the problem of person- and head pose-independent 3D gaze estimation from remote cameras, using a multi-modal recurrent convolutional neural network (CNN). We propose to combine face, eyes region, and face landmarks as individual streams in a CNN to estimate gaze in still images. Then, we exploit the dynamic nature of gaze by feeding the learned features of all the frames in a sequence to a many-to-one recurrent module that predicts the 3D gaze vector of the last frame. Our multi-modal static solution is evaluated on a wide range of head poses and gaze directions, achieving a significant improvement of 14.6% over the state of the art on the EYEDIAP dataset, further improved by 4% when the temporal modality is included.
Address: Newcastle; UK; September 2018
Conference: BMVC
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ PSB2018
Serial: 3208
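
A minimal PyTorch sketch of the fusion-plus-recurrence idea: per-frame features from the face, eye-region, and landmark streams are concatenated, and a many-to-one recurrent module predicts the gaze of the last frame. The linear "backbones", the choice of a GRU, and all dimensions are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class RecurrentGazeNet(nn.Module):
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        # Placeholder per-stream extractors (the paper uses CNN backbones).
        self.face_net = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.eyes_net = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.lmk_net = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.gru = nn.GRU(3 * feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)  # 3D gaze vector

    def forward(self, face, eyes, lmks):
        # face/eyes: (B, T, C, H, W); lmks: (B, T, L)
        b, t = face.shape[:2]
        f = self.face_net(face.flatten(0, 1)).view(b, t, -1)
        e = self.eyes_net(eyes.flatten(0, 1)).view(b, t, -1)
        l = self.lmk_net(lmks.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([f, e, l], dim=-1)  # fused per-frame features
        out, _ = self.gru(seq)
        return self.head(out[:, -1])        # many-to-one: last frame's gaze

net = RecurrentGazeNet()
gaze = net(torch.rand(2, 7, 3, 64, 64), torch.rand(2, 7, 3, 36, 120), torch.rand(2, 7, 136))
```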
 

 
Author: Cristhian A. Aguilera-Carrasco; C. Aguilera; Angel Sappa
Title: Melamine Faced Panels Defect Classification beyond the Visible Spectrum
Type: Journal Article
Year: 2018
Publication: Sensors
Abbreviated Journal: SENS
Volume: 18
Issue: 11
Pages: 1–10
Keywords: industrial application; infrared; machine learning
Abstract: In this work, we explore the use of images from different spectral bands to classify defects in melamine faced panels, which can appear during the production process. We experimentally evaluate the use of images from the visible (VS), near-infrared (NIR), and long-wavelength infrared (LWIR) bands to classify the defects, using a feature-descriptor learning approach together with a support vector machine classifier. Two descriptors were evaluated: Extended Local Binary Patterns (E-LBP) and SURF with a Bag of Words (BoW) representation. The evaluation was carried out on an image set obtained during this work, which contains five different defect categories that currently occur in the industry. Results show that using images from beyond the visible spectrum helps to improve classification performance compared with a visible-spectrum-only solution.
Notes: MSIAU; 600.122
Approved: no
Call Number: Admin @ si @ AAS2018
Serial: 3191
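
The classification stage can be sketched as per-band Bag-of-Words histograms, concatenated across the three spectral bands and fed to an SVM. Everything below is illustrative: random vectors stand in for SURF-like descriptors, and only the band and category counts mirror the setting described above:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_descriptors(n=300, d=64):
    """Hypothetical stand-in for local descriptors of one band image."""
    return rng.normal(size=(n, d))

# Visual vocabulary shared by all bands (illustrative choice).
codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(fake_descriptors(4000))

def band_histogram(desc):
    h = np.bincount(codebook.predict(desc), minlength=64).astype(float)
    return h / max(h.sum(), 1.0)

def panel_feature():
    """Concatenate per-band BoW histograms: VS + NIR + LWIR."""
    return np.concatenate([band_histogram(fake_descriptors()) for _ in range(3)])

# Five defect categories, as in the paper's setting; labels are random here.
X = np.stack([panel_feature() for _ in range(100)])
y = rng.integers(0, 5, size=100)
clf = SVC(kernel="rbf").fit(X, y)
```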
 

 
Author: Ciprian Corneanu; Meysam Madadi; Sergio Escalera
Title: Deep Structure Inference Network for Facial Action Unit Recognition
Type: Conference Article
Year: 2018
Publication: 15th European Conference on Computer Vision
Volume: 11216
Pages: 309–324
Keywords: Computer Vision; Machine Learning; Deep Learning; Facial Expression Analysis; Facial Action Units; Structure Inference
Abstract: Facial expressions are combinations of basic components called Action Units (AU). Recognizing AUs is key for general facial expression analysis. Recently, efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between AUs. We propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and replicating a message-passing algorithm between classes, similar to a graphical-model inference approach, in later stages. We show that by training the model end-to-end with increased supervision we improve the state of the art by 5.3% and 8.2% on the BP4D and DISFA datasets, respectively.
Address: Munich; September 2018
Abbreviated Series Title: LNCS
Conference: ECCV
Notes: HUPBA; no proj
Approved: no
Call Number: Admin @ si @ CME2018
Serial: 3205
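
The later, structure-inference stage can be caricatured as iterative refinement of per-AU logits, where each AU receives a learned combination of the other AUs' current beliefs. This is a loose sketch of the message-passing idea only, not the paper's architecture; the AU count, number of steps, and GRU-cell update rule are assumptions:

```python
import torch
import torch.nn as nn

class StructureInference(nn.Module):
    def __init__(self, n_aus=12, steps=3):
        super().__init__()
        self.steps = steps
        self.msg = nn.Linear(n_aus, n_aus, bias=False)  # learned AU-to-AU influence
        self.update = nn.GRUCell(n_aus, n_aus)          # recurrent belief update

    def forward(self, logits):
        # logits: (B, n_aus), coming from a feature backbone (not shown).
        state = logits
        for _ in range(self.steps):
            messages = self.msg(torch.sigmoid(state))  # pass current beliefs around
            state = self.update(messages, state)
        return state  # refined per-AU logits

refined = StructureInference()(torch.randn(8, 12))
```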
 

 
Author: Chenshen Wu; Luis Herranz; Xialei Liu; Joost Van de Weijer; Bogdan Raducanu
Title: Memory Replay GANs: Learning to Generate New Categories without Forgetting
Type: Conference Article
Year: 2018
Publication: 32nd Annual Conference on Neural Information Processing Systems
Pages: 5966–5976
Abstract: Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine-tuning renders the network unable to properly generate images from previous categories (i.e., forgetting). Addressing this problem, we propose Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator. We study two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment. Qualitative and quantitative experimental results on the MNIST, SVHN and LSUN datasets show that our memory replay approach can generate competitive images while significantly mitigating the forgetting of previous categories.
Address: Montreal; Canada; December 2018
Conference: NIPS
Notes: LAMP; 600.106; 600.109; 602.200; 600.120
Approved: no
Call Number: Admin @ si @ WHL2018
Serial: 3249
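
The replay mechanism, sampling "memories" of already-learned categories from a frozen snapshot of the generator and mixing them into training on the new category, can be sketched as below. This follows the joint-training-with-replay variant only; replay alignment is omitted, and the toy conditional generator is purely illustrative:

```python
import copy
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy conditional generator standing in for the GAN generator."""
    def __init__(self, z_dim=100, n_classes=10, out_dim=784):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, y):
        return self.net(z * self.embed(y))  # class-conditioned noise

def replay_batch(frozen_gen, past_categories, batch_size, z_dim=100):
    """Sample replayed images of previously learned categories from a
    frozen snapshot, to be mixed with real data of the new category."""
    z = torch.randn(batch_size, z_dim)
    y = past_categories[torch.randint(len(past_categories), (batch_size,))]
    with torch.no_grad():
        return frozen_gen(z, y), y

generator = CondGenerator()
# Before training category t, snapshot the generator; each step then mixes
# real data of category t with replays of categories < t.
snapshot = copy.deepcopy(generator).eval()
replayed_images, replayed_labels = replay_batch(snapshot, torch.arange(3), batch_size=32)
```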
 

 
Author: Cesar de Souza; Adrien Gaidon; Eleonora Vig; Antonio Lopez
Title: System and method for video classification using a hybrid unsupervised and supervised multi-layer architecture
Type: Patent
Year: 2018
Publication: US9946933B2
Keywords: US9946933B2
Abstract: A computer-implemented video classification method and system are disclosed. The method includes receiving an input video including a sequence of frames. At least one transformation of the input video is generated, each transformation including a sequence of frames. For the input video and each transformation, local descriptors are extracted from the respective sequence of frames. The local descriptors of the input video and each transformation are aggregated to form an aggregated feature vector with a first set of processing layers learned using unsupervised learning. An output classification value is generated for the input video, based on the aggregated feature vector with a second set of processing layers learned using supervised learning.
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ SGV2018
Serial: 3255
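
The claimed two-stage structure, an aggregation stage learned without labels followed by a classification stage learned with labels, can be caricatured with k-means Bag-of-Words aggregation and an MLP. This is a deliberately simplified stand-in (the patent also covers richer aggregations and input-video transformations), and the descriptor extractor is a hypothetical placeholder:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def local_descriptors(video):
    """Hypothetical placeholder for per-video local descriptors
    (e.g., spatiotemporal features over the frame sequence)."""
    return rng.normal(size=(50, 32))

# First set of processing layers, learned WITHOUT labels (here: k-means).
pool = np.vstack([local_descriptors(None) for _ in range(40)])
aggregator = KMeans(n_clusters=32, n_init=4, random_state=0).fit(pool)

def aggregate(video):
    """Aggregate local descriptors into one fixed-length feature vector."""
    words = aggregator.predict(local_descriptors(video))
    h = np.bincount(words, minlength=32).astype(float)
    return h / max(h.sum(), 1.0)

# Second set of processing layers, learned WITH labels (here: an MLP).
X = np.stack([aggregate(None) for _ in range(200)])
y = rng.integers(0, 4, size=200)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
```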
 

 
Author: Cesar de Souza
Title: Action Recognition in Videos: Data-efficient approaches for supervised learning of human action classification models for video
Type: Book Whole
Year: 2018
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: In this dissertation, we explore different ways to perform human action recognition in video clips. We focus on data efficiency, proposing new approaches that alleviate the need for laborious and time-consuming manual data annotation. In the first part of this dissertation, we start by analyzing previous state-of-the-art models, comparing their differences and similarities in order to pinpoint where their real strengths come from. Leveraging this information, we then proceed to boost the classification accuracy of shallow models to levels that rival deep neural networks. We introduce hybrid video classification architectures based on carefully designed unsupervised representations of handcrafted spatiotemporal features classified by supervised deep networks. We show in our experiments that our hybrid model combines the best of both worlds: it is data-efficient (trained on 150 to 10,000 short clips) and yet improves significantly on the state of the art, including deep models trained on millions of manually labeled images and videos. In the second part of this research, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for “Procedural Human Action Videos”. It contains a total of 39,982 videos, with more than 1,000 examples for each of 35 action categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We then introduce deep multi-task representation learning architectures to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF-101 and HMDB-51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, outperforming fine-tuning of state-of-the-art unsupervised generative models of videos.
Address: April 2018
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Antonio Lopez; Naila Murray
Notes: ADAS; 600.118
Approved: no
Call Number: Admin @ si @ Sou2018
Serial: 3127
 

 
Author: Carles Sanchez; Miguel Viñas; Coen Antens; Agnes Borras; Debora Gil
Title: Back to Front Architecture for Diagnosis as a Service
Type: Conference Article
Year: 2018
Publication: 20th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing
Pages: 343–346
Abstract: Software as a Service (SaaS) is a cloud computing model in which a provider hosts applications on a server that customers use via the internet. Since SaaS does not require installing applications on customers' own computers, it allows multiple users to use highly specialized software without extra expenses for hardware acquisition or licensing. A SaaS tailored for clinical needs would not only alleviate licensing costs, but also facilitate easy access to new methods for diagnosis assistance. This paper presents a SaaS client-server architecture for Diagnosis as a Service (DaaS). The server is based on Docker technology in order to allow the execution of software implemented in different languages with the highest portability and scalability. The client is a content management system allowing the design of websites with multimedia content and interactive, user-editable visualization of results. We describe a use case in which our DaaS serves as a crowdsourcing platform in a multicentric pilot study carried out to evaluate the clinical benefits of a software tool for the assessment of central airway obstruction.
Address: Timisoara; Romania; September 2018
Conference: SYNASC
Notes: IAM; 600.145
Approved: no
Call Number: Admin @ si @ SVA2018
Serial: 3360
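
The back-to-front split can be sketched as a thin HTTP service in front of containerized analysis software: clients upload data, the server dispatches it to the appropriate container and returns results. Flask is used purely for illustration; the endpoint name, fields, and dispatch helper are hypothetical, not the paper's implementation:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def run_diagnosis_container(frame):
    """Hypothetical dispatch: in the paper's architecture this would hand
    the data to analysis software running inside a Docker container and
    collect its output; here it only returns a placeholder result."""
    return {"stenosis_index": None, "status": "queued"}

@app.route("/analyze", methods=["POST"])
def analyze():
    frame = request.files["frame"]  # hypothetical uploaded bronchoscopy frame
    return jsonify(run_diagnosis_container(frame))

if __name__ == "__main__":
    app.run()
```

The point of the design is that the client side stays a plain website: adding a new diagnosis method means deploying a new container behind such an endpoint, not reinstalling anything on clinicians' machines.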
 

 
Author: Boris N. Oreshkin; Pau Rodriguez; Alexandre Lacoste
Title: TADAM: Task dependent adaptive metric for improved few-shot learning
Type: Conference Article
Year: 2018
Publication: 32nd Annual Conference on Neural Information Processing Systems
Abstract: Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements of up to 14% in accuracy for certain metrics on the mini-ImageNet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-ImageNet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100.
Address: Montreal; Canada; December 2018
Conference: NIPS
Notes: ISE; 600.098; 600.119
Approved: no
Call Number: Admin @ si @ ORL2018
Serial: 3140
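
The metric-scaling idea is small enough to show directly: class logits are scaled negative squared distances to class prototypes, with the scale learned jointly with the backbone. A sketch on a toy 5-way episode (task conditioning is omitted, and all dimensions are arbitrary):

```python
import torch

def scaled_prototype_logits(queries, prototypes, alpha):
    """Class logits as negative squared Euclidean distances to the class
    prototypes, multiplied by a learnable scale alpha (the metric-scaling
    idea studied in TADAM)."""
    d2 = torch.cdist(queries, prototypes).pow(2)  # (n_query, n_way)
    return -alpha * d2

# Toy 5-way episode: 5 prototypes and 10 query embeddings of dim 64.
protos = torch.randn(5, 64)
queries = torch.randn(10, 64)
alpha = torch.nn.Parameter(torch.tensor(1.0))  # learned jointly with the backbone

logits = scaled_prototype_logits(queries, protos, alpha)
loss = torch.nn.functional.cross_entropy(logits, torch.randint(0, 5, (10,)))
loss.backward()  # gradients also flow into alpha, changing update dynamics
```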
 

 
Author: Bojana Gajic; Ramon Baldrich
Title: Cross-domain fashion image retrieval
Type: Conference Article
Year: 2018
Publication: CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition)
Pages: 19500–19502
Abstract: Cross-domain image retrieval is a challenging task that implies matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by users to images of the same item taken in controlled conditions, usually by a professional photographer. In this problem, the products seen at training and test time differ, and we use a triplet loss to train the network. We stress the importance of properly training a simple architecture, as well as of adapting general models to the specific task.
Address: Salt Lake City, USA; 22 June 2018
Conference: CVPRW
Notes: CIC; 600.087
Approved: no
Call Number: Admin @ si @
Serial: 3709
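
The training objective named above is the standard triplet loss. A minimal sketch over toy embeddings, with the batch size, margin, and embedding dimension chosen arbitrarily:

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull the user photo (anchor) toward the professional photo of the
    same product (positive) and push it away from a different product
    (negative). Embeddings are L2-normalized first."""
    a, p, n = (F.normalize(x, dim=1) for x in (anchor, positive, negative))
    d_ap = (a - p).pow(2).sum(dim=1)
    d_an = (a - n).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()

# Toy embeddings, as if produced by any image backbone.
loss = triplet_loss(torch.randn(16, 128), torch.randn(16, 128), torch.randn(16, 128))
```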
 

 
Author: Aymen Azaza; Joost Van de Weijer; Ali Douik; Marc Masana
Title: Context Proposals for Saliency Detection
Type: Journal Article
Year: 2018
Publication: Computer Vision and Image Understanding
Abbreviated Journal: CVIU
Volume: 174
Pages: 1–11
Abstract: One of the fundamental properties of a salient object region is its contrast with the immediate context. The problem is that numerous object regions exist which could all potentially be salient. One way to prevent an exhaustive search over all object regions is to use object proposal algorithms, which return a limited set of regions that are most likely to contain an object. Several saliency estimation methods have used object proposals. However, they focus on the saliency of the proposal only, and the importance of its immediate context has not been evaluated. In this paper, we aim to improve salient object detection. Therefore, we extend object proposal methods with context proposals, which allow the immediate context to be incorporated in the saliency computation. We propose several saliency features which are computed from the context proposals. In the experiments, we evaluate five object proposal methods for the task of saliency segmentation, and find that Multiscale Combinatorial Grouping outperforms the others. Furthermore, experiments show that the proposed context features improve performance, and that our method matches results on the FT dataset and obtains competitive results on three other datasets (PASCAL-S, MSRA-B and ECSSD).
Notes: LAMP; 600.109; 600.120
Approved: no
Call Number: Admin @ si @ AWD2018
Serial: 3241
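
A context proposal can be sketched as the ring left when the proposal box is dilated and the proposal itself is masked out; contrast between the two regions then serves as a saliency cue. The feature below (a chi-squared distance between color histograms) is one illustrative choice, not the paper's exact feature set:

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Joint RGB histogram over an (N, 3) pixel array, L1-normalized."""
    h, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / max(h.sum(), 1.0)

def context_contrast(image, box, dilation=0.5):
    """Dilate the proposal box, take the surrounding ring as the context
    proposal, and score the proposal by the chi-squared distance between
    the two color histograms (illustrative saliency feature)."""
    x0, y0, x1, y1 = box
    w, h = x1 - x0, y1 - y0
    cx0, cy0 = max(0, int(x0 - dilation * w)), max(0, int(y0 - dilation * h))
    cx1 = min(image.shape[1], int(x1 + dilation * w))
    cy1 = min(image.shape[0], int(y1 + dilation * h))
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[cy0:cy1, cx0:cx1] = True   # dilated box...
    mask[y0:y1, x0:x1] = False      # ...minus the proposal = context ring
    h_prop = color_histogram(image[y0:y1, x0:x1].reshape(-1, 3))
    h_ctx = color_histogram(image[mask])
    return 0.5 * np.sum((h_prop - h_ctx) ** 2 / (h_prop + h_ctx + 1e-9))

img = np.random.randint(0, 256, size=(120, 160, 3))
print(context_contrast(img, box=(40, 30, 90, 80)))
```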
 

 
Author: Aymen Azaza
Title: Context, Motion and Semantic Information for Computational Saliency
Type: Book Whole
Year: 2018
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: The main objective of this thesis is to highlight the salient object in an image or in a video sequence. We address three important, but in our opinion insufficiently investigated, aspects of saliency detection. Firstly, we extend previous research on saliency which explicitly models the information provided by the context, and we show the importance of explicit context modelling for saliency estimation. Several important works in saliency are based on the usage of object proposals. However, these methods focus on the saliency of the object proposal itself and ignore the context. To introduce context into such saliency approaches, we couple every object proposal with its direct context. This allows us to evaluate the importance of the immediate surround (context) for its saliency. We propose several saliency features which are computed from the context proposals, including features based on omni-directional and horizontal context continuity. Secondly, we investigate the usage of top-down methods (high-level semantic information) for the task of saliency prediction, since most computational methods are bottom-up or include only a few semantic classes. We propose to consider a wider group of object classes. These objects represent important semantic information which we exploit in our saliency prediction approach. Thirdly, we develop a method to detect video saliency by computing saliency from supervoxels and optical flow. In addition, we apply the context features developed in this thesis to video saliency detection. The method combines shape and motion features with our proposed context features. To summarize, we prove that extending object proposals with their direct context improves the task of saliency detection in both image and video data. We also evaluate the importance of semantic information in saliency estimation. Finally, we propose a new motion feature to detect saliency in video data. The three proposed novelties are evaluated on standard saliency benchmark datasets and are shown to improve over the state of the art.
Address: October 2018
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Joost Van de Weijer; Ali Douik
ISBN: 978-84-945373-9-4
Notes: LAMP; 600.120
Approved: no
Call Number: Admin @ si @ Aza2018
Serial: 3218
 

 
Author: Arnau Baro; Pau Riba; Jorge Calvo-Zaragoza; Alicia Fornes
Title: Optical Music Recognition by Long Short-Term Memory Networks
Type: Book Chapter
Year: 2018
Publication: Graphics Recognition. Current Trends and Evolutions
Volume: 11009
Pages: 81–95
Keywords: Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
Abstract: Optical Music Recognition refers to the task of transcribing the image of a music score into a machine-readable format. Many music scores are written on a single staff and can therefore be treated as a sequence. Hence, this work explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level. The experimental results are promising, showing the benefits of our approach.
Publisher: Springer
Editor: A. Fornes; B. Lamiroy
Abbreviated Series Title: LNCS
ISBN: 978-3-030-02283-9
Conference: GREC
Notes: DAG; 600.097; 601.302; 601.330; 600.121
Approved: no
Call Number: Admin @ si @ BRC2018
Serial: 3227
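
Reading a single-staff score "as a sequence" can be sketched by treating image columns as time steps of a bidirectional LSTM that emits one primitive label per column. The input encoding, label set, and all dimensions below are assumptions, not the chapter's exact setup:

```python
import torch
import torch.nn as nn

class ScoreLSTM(nn.Module):
    """Each image column is one time step; the LSTM carries the context
    along the staff and every column is classified into a music primitive."""
    def __init__(self, img_height=64, hidden=128, n_primitives=50):
        super().__init__()
        self.lstm = nn.LSTM(img_height, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_primitives)

    def forward(self, staff):
        # staff: (B, H, W) grayscale staff image
        cols = staff.transpose(1, 2)   # (B, W, H): width becomes the time axis
        feats, _ = self.lstm(cols)
        return self.out(feats)         # per-column primitive logits: (B, W, n_primitives)

logits = ScoreLSTM()(torch.rand(2, 64, 400))
```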