Author Fernando Vilariño; Dimosthenis Karatzas
  Title The Library Living Lab Type Conference Article
  Year 2015 Publication (up) Open Living Lab Days Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address Istanbul; Turkey; August 2015  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference OLLD  
  Notes MV; DAG;SIAI Approved no  
  Call Number Admin @ si @ViK2015 Serial 2797  
 

 
Author Eloi Puertas; Sergio Escalera; Oriol Pujol
  Title Generalized Multi-scale Stacked Sequential Learning for Multi-class Classification Type Journal Article
  Year 2015 Publication Pattern Analysis and Applications Abbreviated Journal PAA
  Volume 18 Issue 2 Pages 247-261  
  Keywords Stacked sequential learning; Multi-scale; Error-correct output codes (ECOC); Contextual classification  
  Abstract In many classification problems, neighboring data labels have inherent sequential relationships. Sequential learning algorithms take advantage of these relationships in order to improve generalization. In this paper, we revise the multi-scale stacked sequential learning approach (MSSL) to apply it to the multi-class case (MMSSL). We introduce the error-correcting output codes framework into the MSSL classifiers and propose a formulation for calculating confidence maps from the margins of the base classifiers. In addition, we propose an MMSSL compression approach which reduces the number of features in the extended data set without a loss in performance. The proposed methods are tested on several databases, showing significant performance improvements compared to classical approaches.
  Address  
  Corporate Author Thesis  
  Publisher Springer-Verlag Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1433-7541 ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ PEP2013 Serial 2251  
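As an illustration of the stacked sequential learning scheme summarized in the abstract of the record above, the sketch below shows a minimal two-stage pipeline: a base classifier produces per-class confidences over a toy 1-D sequence, neighboring confidences are appended to the original features, and a second classifier is trained on the extended set. This is a simplified sketch under assumptions of ours, not the paper's implementation: scikit-learn logistic regression stands in for the ECOC ensemble, and a plain fixed-radius neighborhood replaces the multi-scale sampling and compression steps.

```python
# Minimal sketch of (multi-class) stacked sequential learning on a 1-D sequence.
# Assumptions: logistic regression stands in for the paper's ECOC ensemble, and
# the "multi-scale" context is reduced to a fixed window of neighboring confidences.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy sequential data: 3 classes whose labels change slowly along the sequence.
n, n_classes = 300, 3
y = np.repeat(np.arange(n_classes), n // n_classes)
X = rng.normal(loc=y[:, None], scale=1.5, size=(n, 2))  # noisy 2-D features

def context_features(conf, radius):
    """Append neighboring per-class confidences (a crude stand-in for the
    multi-scale confidence sampling used in MMSSL)."""
    padded = np.pad(conf, ((radius, radius), (0, 0)), mode="edge")
    ctx = [padded[i : i + conf.shape[0]] for i in range(2 * radius + 1)]
    return np.hstack(ctx)

# Stage 1: base classifier -> per-class confidence map over the sequence.
base = LogisticRegression(max_iter=1000).fit(X, y)
conf = base.predict_proba(X)                     # (n, n_classes) confidence map

# Stage 2: extend the features with sequential context and retrain.
X_ext = np.hstack([X, context_features(conf, radius=3)])
stacked = LogisticRegression(max_iter=1000).fit(X_ext, y)

print("base accuracy   :", base.score(X, y))
print("stacked accuracy:", stacked.score(X_ext, y))
```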
 

 
Author Mohammad Ali Bagheri; Qigang Gao; Sergio Escalera
  Title Combining Local and Global Learners in the Pairwise Multiclass Classification Type Journal Article
  Year 2015 Publication Pattern Analysis and Applications Abbreviated Journal PAA
  Volume 18 Issue 4 Pages 845-860  
  Keywords Multiclass classification; Pairwise approach; One-versus-one  
  Abstract Pairwise classification is a well-known class binarization technique that converts a multiclass problem into a number of two-class problems, one for each pair of classes. However, in the pairwise technique, nuisance votes from the many irrelevant classifiers may result in a wrong class prediction. To overcome this problem, a simple but efficient method is proposed and evaluated in this paper. The proposed method, named Local Crossing Off (LCO), is based on excluding some classes and focusing on the most probable classes in the neighborhood space. This procedure is performed by employing a modified version of the standard K-nearest neighbor and large margin nearest neighbor algorithms. The LCO method takes advantage of the nearest neighbor classification algorithm, because of its local learning behavior, as well as of the global behavior of powerful binary classifiers that discriminate between two classes. Combining these two properties in the proposed LCO technique avoids the weaknesses of each method and increases the efficiency of the whole classification system. On several benchmark datasets of varying size and difficulty, we found that the LCO approach leads to significant improvements using different base learners. The experimental results show that the proposed technique not only achieves better classification accuracy in comparison to other standard approaches, but is also computationally more efficient for tackling classification problems that have a relatively large number of target classes.
  Address  
  Corporate Author Thesis  
  Publisher Springer London Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1433-7541 ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ BGE2014 Serial 2441  
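A rough sketch of the Local Crossing Off idea described in the record above: a local k-NN step shortlists the most probable classes, and only the pairwise classifiers involving those candidates are allowed to vote. Assumptions: a plain KNeighborsClassifier replaces the paper's (large margin) nearest neighbor component, linear SVMs are the pairwise learners, and Iris is used as a toy dataset.

```python
# Sketch of Local Crossing Off (LCO): local k-NN shortlist + pairwise votes.
import numpy as np
from itertools import combinations
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
classes = np.unique(ytr)

# Local learner: k-NN used only to shortlist candidate classes.
knn = KNeighborsClassifier(n_neighbors=7).fit(Xtr, ytr)

# Global learners: one binary SVM per class pair.
pair_clfs = {}
for a, b in combinations(classes, 2):
    mask = (ytr == a) | (ytr == b)
    pair_clfs[(a, b)] = SVC(kernel="linear").fit(Xtr[mask], ytr[mask])

def lco_predict(x, top=2):
    # 1) local step: keep only the `top` classes present in the neighborhood
    neigh = knn.kneighbors([x], return_distance=False)[0]
    counts = np.bincount(ytr[neigh], minlength=classes.max() + 1)
    candidates = [c for c in np.argsort(counts)[::-1][:top] if counts[c] > 0]
    if len(candidates) == 1:
        return candidates[0]
    # 2) global step: vote only over the pairwise classifiers of the candidates
    votes = {c: 0 for c in candidates}
    for a, b in combinations(sorted(candidates), 2):
        votes[pair_clfs[(a, b)].predict([x])[0]] += 1
    return max(votes, key=votes.get)

pred = np.array([lco_predict(x) for x in Xte])
print("LCO-style accuracy:", (pred == yte).mean())
```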
 

 
Author Marçal Rusiñol; David Aldavert; Ricardo Toledo; Josep Llados
  Title Efficient segmentation-free keyword spotting in historical document collections Type Journal Article
  Year 2015 Publication Pattern Recognition Abbreviated Journal PR
  Volume 48 Issue 2 Pages 545–555  
  Keywords Historical documents; Keyword spotting; Segmentation-free; Dense SIFT features; Latent semantic analysis; Product quantization  
  Abstract In this paper we present an efficient segmentation-free word spotting method, applied in the context of historical document collections, that follows the query-by-example paradigm. We use a patch-based framework where local patches are described by a bag-of-visual-words model powered by SIFT descriptors. By projecting the patch descriptors to a topic space with the latent semantic analysis technique and compressing the descriptors with the product quantization method, we are able to efficiently index the document information both in terms of memory and time. The proposed method is evaluated on four different collections of historical documents, achieving good performance in both handwritten and typewritten scenarios and outperforming recent state-of-the-art keyword spotting approaches.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG; ADAS; 600.076; 600.077; 600.061; 601.223; 602.006; 600.055 Approved no  
  Call Number Admin @ si @ RAT2015a Serial 2544  
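The record above names product quantization as the descriptor-compression step of the indexing pipeline. The sketch below illustrates plain product quantization on toy vectors; the dense SIFT extraction, bag-of-visual-words encoding and LSA projection of the paper are not reproduced, and the sub-quantizer sizes are arbitrary illustrative choices.

```python
# Sketch of product quantization (PQ): split each descriptor into sub-vectors,
# quantize each sub-vector with a small codebook, and keep only the code indices.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(2000, 64)).astype(np.float32)  # toy descriptors

n_sub, n_centroids = 4, 32          # split 64-D vectors into 4 sub-vectors
sub_dim = descriptors.shape[1] // n_sub

# Train one small codebook per sub-space.
codebooks = []
for s in range(n_sub):
    block = descriptors[:, s * sub_dim : (s + 1) * sub_dim]
    codebooks.append(KMeans(n_clusters=n_centroids, n_init=4,
                            random_state=0).fit(block))

def pq_encode(X):
    """Compress each descriptor into n_sub small integer codes."""
    codes = [cb.predict(X[:, s * sub_dim : (s + 1) * sub_dim])
             for s, cb in enumerate(codebooks)]
    return np.stack(codes, axis=1).astype(np.uint8)

def pq_decode(codes):
    """Approximate reconstruction from the codes (used for distance computations)."""
    parts = [codebooks[s].cluster_centers_[codes[:, s]] for s in range(n_sub)]
    return np.hstack(parts)

codes = pq_encode(descriptors)
approx = pq_decode(codes)
err = np.linalg.norm(descriptors - approx) / np.linalg.norm(descriptors)
print("compressed to", codes.shape[1], "bytes per descriptor;",
      "relative reconstruction error:", round(float(err), 3))
```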
 

 
Author Ivan Huerta; Marco Pedersoli; Jordi Gonzalez; Alberto Sanfeliu
  Title Combining where and what in change detection for unsupervised foreground learning in surveillance Type Journal Article
  Year 2015 Publication Pattern Recognition Abbreviated Journal PR
  Volume 48 Issue 3 Pages 709-719  
  Keywords Object detection; Unsupervised learning; Motion segmentation; Latent variables; Support vector machine; Multiple appearance models; Video surveillance  
  Abstract Change detection is the most important task for video surveillance analytics such as foreground and anomaly detection. Current foreground detectors learn models from annotated images since the goal is to generate a robust foreground model able to detect changes in all possible scenarios. Unfortunately, manual labelling is very expensive. Most advanced supervised learning techniques based on generic object detection datasets currently exhibit very poor performance when applied to surveillance datasets because of the unconstrained nature of such environments in terms of types and appearances of objects. In this paper, we take advantage of change detection for training multiple foreground detectors in an unsupervised manner. We use statistical learning techniques which exploit the use of latent parameters for selecting the best foreground model parameters for a given scenario. In essence, the main novelty of our proposed approach is to combine the where (motion segmentation) and the what (learning procedure) of change detection in an unsupervised way, improving the specificity and generalization power of foreground detectors at the same time. We propose a framework based on latent support vector machines that, given a noisy initialization based on motion cues, learns the correct position, aspect ratio, and appearance of all moving objects in a particular scene. Specificity is achieved by learning the particular change detections of a given scenario, and generalization is guaranteed since our method can be applied to any possible scene and foreground object, as demonstrated by the experimental results, which outperform the state of the art.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.063; 600.078 Approved no  
  Call Number Admin @ si @ HPG2015 Serial 2589  
 

 
Author Marco Pedersoli; Andrea Vedaldi; Jordi Gonzalez; Xavier Roca
  Title A coarse-to-fine approach for fast deformable object detection Type Journal Article
  Year 2015 Publication Pattern Recognition Abbreviated Journal PR
  Volume 48 Issue 5 Pages 1844-1853  
  Keywords  
  Abstract We present a method that can dramatically accelerate object detection with part-based models. The method is based on the observation that the cost of detection is likely to be dominated by the cost of matching each part to the image, and not by the cost of computing the optimal configuration of the parts as commonly assumed. Therefore, accelerating detection requires minimizing the number of part-to-image comparisons. To this end we propose a multiple-resolution hierarchical part-based model and a corresponding coarse-to-fine inference procedure that recursively eliminates unpromising part placements from the search space. The method yields a ten-fold speedup over the standard dynamic programming approach and is complementary to the cascade-of-parts approach of [9]. Compared to the latter, our method does not have parameters to be determined empirically, which simplifies its use during the training of the model. Most importantly, the two techniques can be combined to obtain a very significant speedup, of two orders of magnitude in some cases. We evaluate our method extensively on the PASCAL VOC and INRIA datasets, demonstrating a very high increase in detection speed with little degradation of accuracy.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.078; 602.005; 605.001; 302.012 Approved no  
  Call Number Admin @ si @ PVG2015 Serial 2628  
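A toy illustration of the coarse-to-fine pruning principle from the abstract above: score a cheap low-resolution model densely, keep only the most promising locations, and evaluate the expensive model there. Plain template correlation stands in for the HOG part filters and the hierarchical part model of the paper, and the 10% keep-rate is an arbitrary illustrative threshold.

```python
# Sketch of coarse-to-fine pruning for sliding-window detection.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(size=(240, 320))
fine_t = rng.normal(size=(32, 32))                        # "expensive" template
coarse_t = fine_t.reshape(8, 4, 8, 4).mean(axis=(1, 3))   # 8x8 coarse version

def score(img, tpl, locations):
    """Correlation score of the template at each (y, x) location."""
    h, w = tpl.shape
    return {loc: float((img[loc[0]:loc[0]+h, loc[1]:loc[1]+w] * tpl).sum())
            for loc in locations}

# Coarse pass: evaluate the cheap template densely on a subsampled image.
coarse_img = image.reshape(60, 4, 80, 4).mean(axis=(1, 3))
coarse_locs = [(y, x) for y in range(coarse_img.shape[0] - 8)
                      for x in range(coarse_img.shape[1] - 8)]
coarse_scores = score(coarse_img, coarse_t, coarse_locs)

# Keep only the top 10% of coarse locations, then refine at full resolution.
keep = sorted(coarse_scores, key=coarse_scores.get, reverse=True)
keep = keep[: max(1, len(keep) // 10)]
fine_locs = [(4 * y, 4 * x) for (y, x) in keep]
fine_scores = score(image, fine_t, fine_locs)

best = max(fine_scores, key=fine_scores.get)
print(f"evaluated {len(fine_locs)} / {len(coarse_locs)} fine windows; "
      f"best detection at {best}")
```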
 

 
Author E. Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva
  Title Regularized Clustering for Egocentric Video Segmentation Type Book Chapter
  Year 2015 Publication Pattern Recognition and Image Analysis Abbreviated Journal
  Volume Issue Pages 327-336  
  Keywords Temporal video segmentation; Egocentric videos; Clustering  
  Abstract In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous guarantees of performance. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that serves to disambiguate the decision of both techniques and to complete the segmentation taking into account the temporal continuity of video frame descriptors. We present experiments over egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods.
  Address  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-3-319-19390-8 Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @TDB2015a Serial 2781  
 

 
Author Alejandro Gonzalez Alzate; Sebastian Ramos; David Vazquez; Antonio Lopez; Jaume Amores
  Title Spatiotemporal Stacked Sequential Learning for Pedestrian Detection Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume Issue Pages 3-12  
  Keywords SSL; Pedestrian Detection  
  Abstract Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. An analogous reasoning applies to image sequences: if there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location is likely to receive high classification scores over several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose to use two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the response of such base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset that we acquired from a car to evaluate our proposal at different frame rates, and we also test on the well-known Caltech dataset. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e. when a pedestrian is close to the camera.  
  Address Santiago de Compostela; Spain; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IbPRIA  
  Notes ADAS; 600.057; 600.054; 600.076 Approved no  
  Call Number GRV2015; ADAS @ adas @ GRV2015 Serial 2454  
 

 
Author Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez
  Title 3D-Guided Multiscale Sliding Window for Pedestrian Detection Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume 9117 Issue Pages 560-568  
  Keywords Pedestrian Detection  
  Abstract The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former aims at presenting image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by a classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows while even improving the overall pedestrian detection accuracy.  
  Address Santiago de Compostela; Spain; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area ACDC Expedition Conference IbPRIA  
  Notes ADAS; 600.076; 600.057; 600.054 Approved no  
  Call Number ADAS @ adas @ GVR2015 Serial 2585  
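The key observation of the record above is that depth constrains the plausible size of a pedestrian window at each image position. A minimal sketch of that candidate-generation step is given below, assuming a pinhole camera model; the focal length, pedestrian height, aspect ratio, stride and the synthetic depth map are illustrative stand-ins, and the multimodal HOG/LBP ensemble of the paper is not reproduced.

```python
# Sketch of depth-guided candidate window generation: an object of real height
# H at depth Z projects to roughly f * H / Z pixels under a pinhole model, so
# each pixel with a valid depth suggests a single plausible window size.
import numpy as np

rng = np.random.default_rng(0)
depth = rng.uniform(5.0, 40.0, size=(480, 640))     # toy depth map in meters

FOCAL_PX = 700.0          # assumed focal length in pixels
PED_HEIGHT_M = 1.7        # assumed average pedestrian height
ASPECT = 0.41             # width / height of a pedestrian window
STRIDE = 16               # sample candidate positions on a coarse grid

def candidate_windows(depth_map):
    """Yield (x, y, w, h) windows whose size matches the local depth."""
    H, W = depth_map.shape
    for y in range(0, H, STRIDE):
        for x in range(0, W, STRIDE):
            z = depth_map[y, x]
            if not np.isfinite(z) or z <= 0:
                continue                      # no reliable 3D information here
            h = int(round(FOCAL_PX * PED_HEIGHT_M / z))
            w = int(round(ASPECT * h))
            top, left = y - h // 2, x - w // 2
            if top < 0 or left < 0 or top + h > H or left + w > W:
                continue
            yield left, top, w, h

windows = list(candidate_windows(depth))
dense = (480 // STRIDE) * (640 // STRIDE) * 10      # ~10 scales per position
print(f"{len(windows)} depth-guided windows vs ~{dense} in a dense pyramid")
```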
 

 
Author Marc Bolaños; Maite Garolera; Petia Radeva
  Title Object Discovery using CNN Features in Egocentric Videos Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume 9117 Issue Pages 67-74  
  Keywords Object discovery; Egocentric videos; Lifelogging; CNN  
  Abstract Lifelogging devices based on photo/video are spreading faster every day. This growth can bring great benefits if methods are developed to extract meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects that are relevant to the person wearing a first-person camera. Given the egocentric video sequence acquired by the camera, the method uses both the appearance extracted by means of a deep convolutional neural network and an object-refill methodology that allows discovering objects even when they appear only rarely in the collection of images. We validate our method on a sequence of 1000 egocentric daily images and obtain results with an F-measure of 0.5, which is 0.17 better than the state-of-the-art approach.  
  Address Santiago de Compostela; Spain; June 2015
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium  
  Area Expedition Conference IbPRIA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ BGR2015 Serial 2596  
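A loose sketch of the object-discovery pipeline of the record above: describe candidate image regions with CNN features and group recurring objects by clustering those features. Assumptions: an untrained torchvision ResNet-18 is used so the example runs offline (pretrained ImageNet weights would be used in practice), random tensors stand in for egocentric image crops, and plain k-means replaces the paper's semi-supervised object-refill strategy.

```python
# Sketch of grouping candidate regions by their CNN descriptors.
import torch
import torchvision.models as models
from sklearn.cluster import KMeans

torch.manual_seed(0)

# Feature extractor: ResNet-18 without its classification head.
backbone = models.resnet18(weights=None)   # load pretrained weights in practice
backbone.fc = torch.nn.Identity()
backbone.eval()

# Toy batch of 24 region crops (stand-ins for candidate object regions).
crops = torch.rand(24, 3, 160, 160)
with torch.no_grad():
    feats = backbone(crops).numpy()        # (24, 512) CNN descriptors

# Group the crops: regions of the same recurring object should share a cluster.
labels = KMeans(n_clusters=4, n_init=4, random_state=0).fit_predict(feats)
for k in range(4):
    print(f"cluster {k}: {int((labels == k).sum())} candidate regions")
```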
 

 
Author Estefania Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva
  Title R-clustering for egocentric video segmentation Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume 9117 Issue Pages 327-336  
  Keywords Temporal video segmentation; Egocentric videos; Clustering  
  Abstract In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous guarantees of performance. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that serves to disambiguate the decision of both techniques and to complete the segmentation taking into account the temporal continuity of video frame descriptors. We present experiments over egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods.  
  Address Santiago de Compostela; Spain; June 2015
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium  
  Area Expedition Conference IbPRIA  
  Notes MILAB Approved no  
  Call Number Admin @ si @ TDB2015 Serial 2597  
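The sketch below illustrates only the agglomerative-clustering half of the segmentation method described above, using a chain connectivity constraint so that every cluster is a temporally contiguous segment. The ADWIN change detector and the energy-minimization fusion of the paper are not reproduced, and the frame descriptors and number of segments are illustrative assumptions.

```python
# Sketch of temporally constrained agglomerative clustering of frame features.
import numpy as np
from scipy.sparse import diags
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)

# Toy "video": 3 events of 40 frames each, with different mean descriptors.
means = rng.normal(size=(3, 16))
frames = np.vstack([rng.normal(loc=m, scale=0.3, size=(40, 16)) for m in means])
n = frames.shape[0]

# Chain connectivity: frame i may only merge with frames i-1 and i+1, which
# forces every cluster to be contiguous in time.
connectivity = diags([np.ones(n - 1), np.ones(n - 1)], offsets=[-1, 1])

ac = AgglomerativeClustering(n_clusters=3, linkage="ward",
                             connectivity=connectivity)
labels = ac.fit_predict(frames)

# Report the detected event boundaries.
boundaries = [i for i in range(1, n) if labels[i] != labels[i - 1]]
print("segment boundaries at frames:", boundaries)
```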
 

 
Author Onur Ferhat; Arcadi Llanza; Fernando Vilariño
  Title A Feature-Based Gaze Estimation Algorithm for Natural Light Scenarios Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume 9117 Issue Pages 569-576  
  Keywords Eye tracking; Gaze estimation; Natural light; Webcam  
  Abstract We present an eye tracking system that works with regular webcams. We base our work on the open-source CVC Eye Tracker [7] and propose a number of improvements together with a novel gaze estimation method. The new method uses features extracted from iris segmentation and does not fall into the traditional categorization of appearance-based/model-based methods. Our experiments show that our approach reduces the gaze estimation error by 34% in the horizontal direction and by 12% in the vertical direction compared to the baseline system.
  Address Santiago de Compostela; June 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium  
  Area Expedition Conference IbPRIA  
  Notes MV;SIAI Approved no  
  Call Number Admin @ si @ FLV2015a Serial 2646  
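As a hedged illustration of the regression step in a feature-based gaze estimator such as the one above: features extracted from the segmented iris (here synthetic stand-ins) are mapped to screen coordinates by a regressor fitted on a short calibration sequence. The quadratic ridge regression and the synthetic feature/target relation are assumptions for the example only, not the paper's estimation method.

```python
# Sketch of the calibration/regression step of a feature-based gaze estimator.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Calibration data: 50 frames of 6-D iris features and the known gaze target
# on a 1920x1080 screen that the user was asked to look at.
feats = rng.normal(size=(50, 6))
true_map = rng.normal(size=(6, 2))                    # synthetic relation
screen_xy = (feats @ true_map) * 200 + np.array([960.0, 540.0])
screen_xy += rng.normal(scale=5.0, size=(50, 2))      # calibration noise

model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Ridge(alpha=1.0))
model.fit(feats, screen_xy)

# Estimate gaze for a new frame and report the calibration error.
pred = model.predict(feats)
err = np.linalg.norm(pred - screen_xy, axis=1).mean()
print("mean calibration error (pixels):", round(float(err), 1))
print("gaze for a new frame:", model.predict(rng.normal(size=(1, 6)))[0])
```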
 

 
Author Suman Ghosh; Ernest Valveny
  Title A Sliding Window Framework for Word Spotting Based on Word Attributes Type Conference Article
  Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
  Volume 9117 Issue Pages 652-661  
  Keywords Word spotting; Sliding window; Word attributes  
  Abstract In this paper we propose a segmentation-free approach to word spotting. Word images are first encoded into feature vectors using Fisher vectors. Then, these feature vectors are used together with pyramidal histogram of characters (PHOC) labels to learn SVM-based attribute models. Documents are represented by these PHOC-based word attributes. To efficiently compute the word attributes over a sliding window, we propose to use an integral image representation of the document based on a simplified version of the attribute model. Finally, we re-rank the top word candidates using the more discriminative full version of the word attributes. We show state-of-the-art results for segmentation-free query-by-example word spotting on single-writer and multi-writer standard datasets.
  Address Santiago de Compostela; June 2015  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium  
  Area Expedition Conference IbPRIA  
  Notes DAG; 600.077 Approved no  
  Call Number Admin @ si @ GhV2015b Serial 2716  
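The PHOC (pyramidal histogram of characters) representation named in the abstract above can be built directly from a transcription: at each pyramid level the word is split into equal parts and a binary indicator marks which characters fall in each part. The sketch below follows the common PHOC construction with levels 2-4 over a lowercase+digits alphabet (an assumption; the paper's exact configuration, bigram attributes, Fisher vectors and integral-image scoring are not reproduced).

```python
# Sketch of building a PHOC (pyramidal histogram of characters) vector.
import numpy as np
import string

ALPHABET = string.ascii_lowercase + string.digits
LEVELS = (2, 3, 4)

def phoc(word):
    word = word.lower()
    n = len(word)
    vec = []
    for level in LEVELS:
        for part in range(level):
            # Character i occupies the interval [i/n, (i+1)/n); it is assigned
            # to every part of this level that covers at least half of it.
            block = np.zeros(len(ALPHABET))
            lo, hi = part / level, (part + 1) / level
            for i, ch in enumerate(word):
                if ch not in ALPHABET:
                    continue
                c_lo, c_hi = i / n, (i + 1) / n
                overlap = min(hi, c_hi) - max(lo, c_lo)
                if overlap >= 0.5 * (c_hi - c_lo):
                    block[ALPHABET.index(ch)] = 1.0
            vec.append(block)
    return np.concatenate(vec)

v = phoc("pattern")
print("PHOC length:", v.size)                 # (2 + 3 + 4) * 36 = 324
print("active attributes:", int(v.sum()))
```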
 

 
Author Victor Ponce; Sergio Escalera; Marc Perez; Oriol Janes; Xavier Baro
  Title Non-Verbal Communication Analysis in Victim-Offender Mediations Type Journal Article
  Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
  Volume 67 Issue 1 Pages 19-27  
  Keywords Victim–Offender Mediation; Multi-modal human behavior analysis; Face and gesture recognition; Social signal processing; Computer vision; Machine learning  
  Abstract We present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. We propose the use of computer vision and social signal processing technologies in real scenarios of Victim–Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real Victim–Offender Mediation sessions in Catalonia. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state of the art binary classification approaches, our system achieves recognition accuracies of 86% when predicting satisfaction, and 79% when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1–5] for the computed social signals.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ PEP2015 Serial 2583  
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J. Laaksonen
  Title Compact color texture description for texture classification Type Journal Article
  Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL
  Volume 51 Issue Pages 16-22  
  Keywords  
  Abstract Describing textures is a challenging problem in computer vision and pattern recognition. The classification problem involves assigning a category label to the texture class it belongs to. Several factors such as variations in scale, illumination and viewpoint make the problem of texture description extremely challenging. A variety of histogram-based texture representations exist in the literature. However, combining multiple texture descriptors and assessing their complementarity is still an open research problem. In this paper, we first show that combining multiple local texture descriptors significantly improves the recognition performance compared to using a single best method alone. This gain in performance is achieved at the cost of a high-dimensional final image representation. To counter this problem, we propose to use an information-theoretic compression technique to obtain a compact texture description without any significant loss in accuracy. In addition, we perform a comprehensive evaluation of pure color descriptors, popular in object recognition, for the problem of texture classification. Experiments are performed on four challenging texture datasets, namely KTH-TIPS-2a, KTH-TIPS-2b, FMD and Texture-10. The experiments clearly demonstrate that our proposed compact multi-texture approach outperforms the single best texture method alone. In all cases, discriminative color names outperform other color features for texture classification. Finally, we show that combining discriminative color names with the compact texture representation outperforms state-of-the-art methods by 7.8%, 4.3% and 5.0% on the KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.068; 600.079;ADAS Approved no  
  Call Number Admin @ si @ KRW2015a Serial 2587  
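A minimal sketch of the "combine then compress" idea from the record above: several per-image texture/color histograms are concatenated into one high-dimensional descriptor and projected to a compact representation before classification. Random histograms stand in for the LBP and color-name features, and PCA is used here as a stand-in for the information-theoretic compression technique of the paper.

```python
# Sketch of combining several descriptors and compressing the joint representation.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_images, n_classes = 200, 4
y = rng.integers(0, n_classes, size=n_images)

def fake_histogram(labels, dim):
    """Random class-dependent histogram, a stand-in for a real descriptor."""
    centers = rng.random((n_classes, dim))
    h = centers[labels] + 0.1 * rng.random((len(labels), dim))
    return h / h.sum(axis=1, keepdims=True)

# Three "descriptors" per image (e.g. LBP, a second texture filter bank, and an
# 11-bin color-name histogram), simply concatenated into one long vector.
combined = np.hstack([fake_histogram(y, 256),
                      fake_histogram(y, 256),
                      fake_histogram(y, 11)])

model = make_pipeline(PCA(n_components=64), LinearSVC(max_iter=5000))
model.fit(combined, y)
print("combined dim:", combined.shape[1], "-> compact dim: 64")
print("training accuracy:", model.score(combined, y))
```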