Records
Author | Christophe Rigaud; Dimosthenis Karatzas; Jean-Christophe Burie; Jean-Marc Ogier | ||||
Title | Color descriptor for content-based drawing retrieval | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal |
Volume | Issue | Pages | 267 - 271 | ||
Keywords | |||||
Abstract | Human detection is an active area of research in computer vision. Extending it to human-like drawings, such as the main characters in comic book stories, is not trivial. Comics analysis is a very recent field of research at the intersection of graphics, text, object and people recognition. The detection of the main comic characters is an essential step towards fully automatic comic book understanding. This paper presents a color-based approach to comics character retrieval using content-based drawing retrieval and color palettes. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.056; 600.077 | Approved | no | ||
Call Number | Admin @ si @ RKB2014 | Serial | 2479 | ||
Permanent link to this record | |||||
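The color-palette retrieval idea summarized in the abstract above can be illustrated with a minimal sketch. This is not the authors' actual descriptor; it assumes simple per-channel quantization into a coarse palette and histogram-intersection matching:

```python
from collections import Counter

def color_palette(pixels, bins=4):
    """Quantize RGB pixels into a coarse palette histogram.

    Each 0-255 channel is reduced to `bins` levels, and the
    normalized frequency of each quantized color is returned.
    """
    step = 256 // bins
    counts = Counter((r // step, g // step, b // step) for r, g, b in pixels)
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def palette_similarity(p1, p2):
    """Histogram intersection between two palettes (1.0 = identical)."""
    return sum(min(p1.get(c, 0.0), p2.get(c, 0.0)) for c in set(p1) | set(p2))
```

Comparing the palettes of two drawings with `palette_similarity` yields 1.0 for identical color distributions and values near 0.0 for disjoint ones.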
Author | Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades | ||||
Title | Spotting Symbol Using Sparsity over Learned Dictionary of Local Descriptors | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal |
Volume | Issue | Pages | 156-160 | ||
Keywords | |||||
Abstract | This paper proposes a new approach to spotting symbols in graphical documents using sparse representations. More specifically, a dictionary is learned from a training database of local descriptors defined over the documents. Based on their sparse representations, interest points sharing similar properties are used to define interest regions. Using an original adaptation of information retrieval techniques, a vector model for interest regions and for a query symbol is built from their sparsity in a visual vocabulary whose visual words are the columns of the learned dictionary. The matching process is performed by comparing the similarity between vector models. Evaluation on the SESYD datasets demonstrates that our method is promising. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.077 | Approved | no | ||
Call Number | Admin @ si @ DTR2014 | Serial | 2543 | ||
Permanent link to this record | |||||
Author | Dimosthenis Karatzas; Sergi Robles; Lluis Gomez | ||||
Title | An on-line platform for ground truthing and performance evaluation of text extraction systems | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal |
Volume | Issue | Pages | 242 - 246 | ||
Keywords | |||||
Abstract | This paper presents a set of on-line software tools for creating ground truth and calculating performance evaluation metrics for text extraction tasks such as localization, segmentation and recognition. The platform supports the definition of comprehensive ground truth information at different text representation levels, while offering centralised management and quality control of the ground truthing effort. It implements a range of state-of-the-art performance evaluation algorithms and offers functionality for the definition of evaluation scenarios, on-line calculation of various performance metrics and visualisation of the results. The presented platform, which comprises the backbone of the ICDAR 2011 (challenge 1) and 2013 (challenges 1 and 2) Robust Reading competitions, is now made available for public use. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.056; 600.077 | Approved | no | ||
Call Number | Admin @ si @ KRG2014 | Serial | 2491 | ||
Permanent link to this record | |||||
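The localization metrics mentioned in the abstract above can be illustrated with a minimal sketch. This is not the platform's actual evaluation protocol (it implements several published algorithms); it is a basic IoU-threshold matching of detected text boxes against ground truth, with the box format `(x1, y1, x2, y2)` and the 0.5 threshold as assumptions:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detections(gt_boxes, det_boxes, thr=0.5):
    """Greedily match each ground-truth box to its best unused detection
    above the IoU threshold; return (recall, precision)."""
    matched, hits = set(), 0
    for g in gt_boxes:
        best, best_iou = None, thr
        for i, d in enumerate(det_boxes):
            if i in matched:
                continue
            v = iou(g, d)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            hits += 1
    recall = hits / len(gt_boxes) if gt_boxes else 1.0
    precision = hits / len(det_boxes) if det_boxes else 1.0
    return recall, precision
```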
Author | P. Wang; V. Eglin; C. Garcia; C. Largeron; Josep Llados; Alicia Fornes | ||||
Title | A Novel Learning-free Word Spotting Approach Based on Graph Representation | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal |
Volume | Issue | Pages | 207-211 | ||
Keywords | |||||
Abstract | Effective information retrieval on handwritten document images has always been a challenging task. In this paper, we propose a novel handwritten word spotting approach based on graph representation. The presented model comprises both topological and morphological signatures of the handwriting. Skeleton-based graphs with Shape Context labelled vertices are built for connected components. Each word image is represented as a sequence of graphs. To be robust to handwriting variations, an exhaustive merging process based on DTW alignment results is introduced into the similarity measure between word images. To keep the computational complexity manageable, an approximate graph edit distance based on bipartite matching is employed for graph matching. Experiments on the George Washington dataset and the marriage records from the Barcelona Cathedral dataset demonstrate that the proposed approach outperforms state-of-the-art structural methods. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.061; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ WEG2014b | Serial | 2517 | ||
Permanent link to this record | |||||
Author | David Fernandez; R.Manmatha; Josep Llados; Alicia Fornes | ||||
Title | Sequential Word Spotting in Historical Handwritten Documents | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal |
Volume | Issue | Pages | 101 - 105 | ||
Keywords | |||||
Abstract | In this work we present a handwritten word spotting approach that takes advantage of the a priori known order of appearance of the query words. Given an ordered sequence of query word instances, the proposed approach performs a sequence alignment with the words in the target collection. Although the alignment is quite sparse, i.e. the number of words in the database is much higher than in the query set, the improvement in overall performance is noticeably higher than with isolated word spotting. As application dataset, we use a collection of handwritten marriage licenses, taking advantage of the ordered index pages of family names. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.061; 600.056; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ FML2014 | Serial | 2462 | ||
Permanent link to this record | |||||
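Both word spotting papers above rely on DTW-style sequence alignment. A minimal generic DTW sketch follows; the cost function and the sequences are placeholders, not the papers' actual descriptors:

```python
def dtw(seq_a, seq_b, dist):
    """Dynamic time warping distance between two sequences.

    `dist` is a pairwise cost function; the returned value is the
    minimum cumulative cost over all monotonic alignments.
    """
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    # D[i][j] = best cost aligning the first i items of a with first j of b
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(seq_a[i - 1], seq_b[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the alignment may repeat elements, `dtw([1, 2, 3], [1, 2, 2, 3], ...)` with absolute difference still costs zero: the warping absorbs the duplicated `2`.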
Author | Marçal Rusiñol; J. Chazalon; Jean-Marc Ogier | ||||
Title | Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images | Type | Conference Article | ||
Year | 2014 | Publication | 11th IAPR International Workshop on Document Analysis Systems | Abbreviated Journal |
Volume | Issue | Pages | 181 - 185 | ||
Keywords | |||||
Abstract | Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such a digitization procedure is unreliable and introduces many distortions which must be detected as soon as possible, on the mobile device, to avoid paying data transmission fees and losing information when a document that was only temporarily available cannot be re-captured later. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures enables state-of-the-art performance regarding the correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given. | ||||
Address | Tours; France; April 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4799-3243-6 | Medium | ||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 601.223; 600.077 | Approved | no | ||
Call Number | Admin @ si @ RCO2014a | Serial | 2545 | ||
Permanent link to this record | |||||
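The paper above evaluates 24 focus measures; one classic example of such a measure is the variance of the Laplacian response, sketched here for a grayscale image stored as a plain list of rows. This is an illustration of the general idea, not one of the paper's specific 24 measures:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian of a grayscale image
    (list of rows of intensities). Sharper images have stronger
    edge responses, hence a higher variance."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)
```

A perfectly flat image scores 0, and a sharp step edge scores higher than its blurred counterpart, which is exactly the ordering an OCR-accuracy predictor needs.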
Author | Carlo Gatta; Adriana Romero; Joost Van de Weijer | ||||
Title | Unrolling loopy top-down semantic feedback in convolutional deep networks | Type | Conference Article | ||
Year | 2014 | Publication | Workshop on Deep Vision: Deep Learning for Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 498-505 | ||
Keywords | |||||
Abstract | In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods but were not present in previous convolutional approaches. The proposed method is characterised by efficient training and sufficiently fast testing. We use the well-known SIFTflow dataset to numerically show the advantages provided by our contributions, and to compare with state-of-the-art convolutional image parsing approaches. | ||||
Address | Columbus; Ohio; June 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | LAMP; MILAB; 601.160; 600.079 | Approved | no | ||
Call Number | Admin @ si @ GRW2014 | Serial | 2490 | ||
Permanent link to this record | |||||
Author | Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell; Dimitris Samaras | ||||
Title | The Photometry of Intrinsic Images | Type | Conference Article | ||
Year | 2014 | Publication | 27th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1494-1501 | ||
Keywords | |||||
Abstract | Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images, and presents a generic framework which incorporates insights from color constancy research into the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors significantly improves the estimated reflectance images, thus making our method applicable to a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images. | ||||
Address | Columbus; Ohio; USA; June 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | CIC; 600.052; 600.051; 600.074 | Approved | no | ||
Call Number | Admin @ si @ SPB2014 | Serial | 2506 | ||
Permanent link to this record | |||||
Author | M. Danelljan; Fahad Shahbaz Khan; Michael Felsberg; Joost Van de Weijer | ||||
Title | Adaptive color attributes for real-time visual tracking | Type | Conference Article | ||
Year | 2014 | Publication | 27th IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1090 - 1097 | ||
Keywords | |||||
Abstract | Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. In contrast, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second. | ||||
Address | Columbus; Ohio; June 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | CIC; LAMP; 600.074; 600.079 | Approved | no | ||
Call Number | Admin @ si @ DKF2014 | Serial | 2509 | ||
Permanent link to this record | |||||
Author | E. Bondi; L. Sidenari; Andrew Bagdanov; Alberto del Bimbo | ||||
Title | Real-time people counting from depth imagery of crowded environments | Type | Conference Article | ||
Year | 2014 | Publication | 11th IEEE International Conference on Advanced Video and Signal based Surveillance | Abbreviated Journal | |
Volume | Issue | Pages | 337 - 342 | ||
Keywords | |||||
Abstract | In this paper we describe a system for automatic people counting in crowded environments. The approach we propose is a counting-by-detection method based on depth imagery. It is designed to be deployed as an autonomous appliance for crowd analysis in video surveillance application scenarios. Our system performs foreground/background segmentation on depth image streams in order to coarsely segment persons; depth information is then used to localize head candidates, which are tracked over time on an automatically estimated ground plane. The system runs in real-time, at a frame rate of about 20 fps. We collected a dataset of RGB-D sequences representing three typical and challenging surveillance scenarios, including crowds, queuing and groups. An extensive comparative evaluation is given between our system and a more complex, Latent SVM-based head localization approach for person counting applications. | ||||
Address | Seoul; Korea; August 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | AVSS | ||
Notes | LAMP; 600.079 | Approved | no | ||
Call Number | Admin @ si @ BSB2014 | Serial | 2540 | ||
Permanent link to this record | |||||
Author | Mohammad Rouhani; E. Boyer; Angel Sappa | ||||
Title | Non-Rigid Registration meets Surface Reconstruction | Type | Conference Article | ||
Year | 2014 | Publication | International Conference on 3D Vision | Abbreviated Journal | |
Volume | Issue | Pages | 617-624 | ||
Keywords | |||||
Abstract | Non-rigid registration is an important task in computer vision with many applications in shape and motion modeling. A fundamental step of the registration is the data association between the source and the target sets. Such association proves difficult in practice, due to the discrete nature of the information and its corruption by various types of noise, e.g. outliers and missing data. In this paper we investigate the benefit of implicit representations for the non-rigid registration of 3D point clouds. First, the target points are described with small quadratic patches that are blended through partition-of-unity weighting. Then, the discrete association between the source and the target can be replaced by a continuous distance field induced by the interface. By combining this distance field with a proper deformation term, the registration energy can be expressed in a linear least squares form that is easy and fast to solve. This significantly eases the registration by avoiding direct association between points. Moreover, a hierarchical approach can easily be implemented by employing coarse-to-fine representations. Experimental results are provided for point clouds from multi-view datasets. The qualitative and quantitative comparisons demonstrate the superior performance and robustness of our framework. | ||||
Address | Tokyo; Japan; December 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | 3DV | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ RBS2014 | Serial | 2534 | ||
Permanent link to this record | |||||
Author | Frederic Sampedro; Sergio Escalera; Anna Puig | ||||
Title | Iterative Multiclass Multiscale Stacked Sequential Learning: definition and application to medical volume segmentation | Type | Journal Article | ||
Year | 2014 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 46 | Issue | Pages | 1-10 | |
Keywords | Machine learning; Sequential learning; Multi-class problems; Contextual learning; Medical volume segmentation | ||||
Abstract | In this work we present the iterative multi-class multi-scale stacked sequential learning framework (IMMSSL), a novel learning scheme that is particularly suited for medical volume segmentation applications. This model exploits the inherent voxel contextual information of the structures of interest in order to improve its segmentation performance. Without any prior assumption on the feature set or learning algorithm, the proposed scheme directly seeks to learn the contextual properties of a region from the predicted classifications of previous classifiers within an iterative scheme. Performance results regarding segmentation accuracy on three two-class and multi-class medical volume datasets show a significant improvement with respect to state-of-the-art alternatives. Due to its ease of implementation and its independence from the feature space and learning algorithm, the presented machine learning framework can be considered a first choice in complex volume segmentation scenarios. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ SEP2014 | Serial | 2550 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Sergio Escalera; Xavier Baro; Oriol Pujol; Cecilio Angulo | ||||
Title | Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D | Type | Journal Article | ||
Year | 2014 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 50 | Issue | 1 | Pages | 112-121 |
Keywords | RGB-D; Bag-of-Words; Dynamic Time Warping; Human Gesture Recognition | ||||
Abstract | We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late fusion form. The method is integrated in a Human Gesture Recognition pipeline, together with a novel probability-based Dynamic Time Warping (PDTW) algorithm which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian Mixture Model driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline on a public dataset show better performance in comparison to both the standard BoVW model and the DTW approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MV; 605.203 | Approved | no | ||
Call Number | Admin @ si @ HBP2014 | Serial | 2353 | ||
Permanent link to this record | |||||
Author | Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny | ||||
Title | Segmentation-free Word Spotting with Exemplar SVMs | Type | Journal Article | ||
Year | 2014 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 47 | Issue | 12 | Pages | 3967–3978 |
Keywords | Word spotting; Segmentation-free; Unsupervised learning; Reranking; Query expansion; Compression | ||||
Abstract | In this paper we propose an unsupervised segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding-window approach is used to locate the document regions that are most similar to the query. We use the Exemplar SVM framework to produce a better representation of the query in an unsupervised way. Then, we use a more discriminative representation based on Fisher Vectors to rerank the best regions retrieved, and the most promising ones are used to expand the Exemplar SVM training set and improve the query representation. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time. Second, the sliding window becomes significantly faster since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, both in accuracy and in speed and memory usage. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.045; 600.056; 600.061; 602.006; 600.077 | Approved | no | ||
Call Number | Admin @ si @ AGF2014b | Serial | 2485 | ||
Permanent link to this record | |||||
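The Product Quantization step described in the abstract above (compressing descriptors and precomputing query-to-centroid distances) can be sketched as follows. The codebook contents and sizes are placeholders; a real system learns the sub-codebooks with k-means over training descriptors:

```python
def pq_encode(vec, codebooks):
    """Encode a vector as one centroid index per subspace."""
    m = len(codebooks)
    d = len(vec) // m
    code = []
    for i, cb in enumerate(codebooks):
        sub = vec[i * d:(i + 1) * d]
        code.append(min(range(len(cb)),
                        key=lambda k: sum((a - b) ** 2
                                          for a, b in zip(sub, cb[k]))))
    return code

def pq_tables(query, codebooks):
    """Per-subspace lookup tables of squared distances to each centroid."""
    m = len(codebooks)
    d = len(query) // m
    return [[sum((a - b) ** 2 for a, b in zip(query[i * d:(i + 1) * d], c))
             for c in cb]
            for i, cb in enumerate(codebooks)]

def pq_distance(code, tables):
    """Approximate squared distance: one table lookup per subspace."""
    return sum(t[k] for t, k in zip(tables, code))
```

Since every encoded descriptor is just a few indices, the whole collection fits in memory and each sliding-window comparison reduces to table lookups, which is the speed/memory advantage the abstract refers to.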
Author | Francesco Ciompi; Oriol Pujol; Petia Radeva | ||||
Title | ECOC-DRF: Discriminative random fields based on error correcting output codes | Type | Journal Article | ||
Year | 2014 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 47 | Issue | 6 | Pages | 2193-2204 |
Keywords | Discriminative random fields; Error-correcting output codes; Multi-class classification; Graphical models | ||||
Abstract | We present ECOC-DRF, a framework where potential functions for Discriminative Random Fields are formulated as an ensemble of classifiers. We introduce the label trick, a technique to express transitions in the pairwise potential as meta-classes. This makes it possible to independently learn any possible transition between labels without assuming any pre-defined model. The Error Correcting Output Codes matrix is used as the ensemble framework for the combination of margin classifiers. We apply ECOC-DRF to a large set of classification problems, covering synthetic, natural and medical images for binary and multi-class cases, outperforming the state of the art in almost all experiments. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; HuPBA; MILAB; 605.203; 600.046; 601.043; 600.079 | Approved | no | ||
Call Number | Admin @ si @ CPR2014b | Serial | 2470 | ||
Permanent link to this record |
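The ECOC combination at the core of the abstract above follows the standard error-correcting output codes scheme. A minimal sketch of plain ECOC decoding (minimum-distance assignment over class codewords, not the paper's random-field potential formulation):

```python
def ecoc_decode(outputs, code_matrix):
    """Pick the class whose codeword is closest to the binary classifier
    outputs (entries in {-1, +1}); distance counts disagreeing positions."""
    def dist(codeword):
        return sum(1 for o, c in zip(outputs, codeword) if o != c)
    return min(range(len(code_matrix)), key=lambda i: dist(code_matrix[i]))
```

With a one-vs-all matrix such as `[[1, -1, -1], [-1, 1, -1], [-1, -1, 1]]`, the output vector `[1, -1, -1]` decodes to class 0; the redundancy of richer ECOC matrices lets decoding correct individual classifier errors.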