Author David Roche; Debora Gil; Jesus Giraldo
  Title Mechanistic analysis of the function of agonists and allosteric modulators: Reconciling two-state and operational models Type Journal Article
  Year 2013 Publication British Journal of Pharmacology Abbreviated Journal BJP  
  Volume 169 Issue 6 Pages 1189-202  
  Keywords  
  Abstract Two-state and operational models of both agonism and allosterism are compared to identify and characterize common pharmacological parameters. To account for the receptor-dependent basal response, constitutive receptor activity is considered in the operational models. By arranging two-state models as the fraction of active receptors and operational models as the fractional response relative to the maximum effect of the system, a one-by-one correspondence between parameters is found. The comparative analysis allows a better understanding of complex allosteric interactions. In particular, the inclusion of constitutive receptor activity in the operational model of allosterism allows the characterization of modulators able to lower the basal response of the system; that is, allosteric modulators with negative intrinsic efficacy. Theoretical simulations and the overall goodness of fit of the models to simulated data suggest that it is feasible to apply the models to experimental data and that they constitute one step forward in receptor theory formalism.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes IAM; 600.044; 605.203 Approved no  
  Call Number IAM @ iam @ RGG2013b Serial 2195
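For reference, the operational model of agonism that the two-state comparison above builds on has a standard textbook form, reproduced below. This is the classical Black and Leff expression only; the constitutive-activity extension analysed in the article is not reproduced here.

\[
  E \;=\; \frac{E_{m}\,\tau\,[A]}{K_A + [A]\,(1+\tau)},
  \qquad \tau = \frac{[R_0]}{K_E}
\]

Here \(E_m\) is the maximum effect of the system, \(K_A\) the agonist dissociation constant and \(\tau\) the transducer ratio.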
 

 
Author Joan M. Nuñez; Debora Gil; Fernando Vilariño
  Title Finger joint characterization from X-ray images for rheumatoid arthritis assessment Type Conference Article
  Year 2013 Publication 6th International Conference on Biomedical Electronics and Devices Abbreviated Journal  
  Volume Issue Pages 288-292  
  Keywords Rheumatoid Arthritis; X-Ray; Hand Joint; Sclerosis; Sharp Van der Heijde  
  Abstract In this study we propose a modular system for automatic rheumatoid arthritis assessment which provides a joint space width measure. A hand joint model is proposed based on the accurate analysis of an X-ray finger joint image sample set. This model shows that the sclerosis and the lower bone are the main features necessary to perform a proper finger joint characterization. We propose sclerosis and lower bone detection methods as well as the experimental setup necessary for their performance assessment. Our characterization is used to propose and compute a joint space width score which is shown to be related to the different degrees of arthritis. This assertion is verified by comparing our proposed score with the Sharp Van der Heijde score, confirming that the lower our score is, the more advanced the patient's condition is.
  Address Barcelona; February 2013  
  Corporate Author Thesis  
  Publisher SciTePress Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area 800 Expedition Conference BIODEVICES  
  Notes IAM;MV; 600.057; 600.054;SIAI Approved no  
  Call Number IAM @ iam @ NGV2013 Serial 2196
 

 
Author Joan M. Nuñez; Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
  Title Blood Vessel Characterization in Colonoscopy Images to Improve Polyp Localization Type Conference Article
  Year 2013 Publication Proceedings of the International Conference on Computer Vision Theory and Applications Abbreviated Journal  
  Volume 1 Issue Pages 162-171  
  Keywords Colonoscopy; Blood vessel; Linear features; Valley detection  
  Abstract This paper presents an approach to mitigate the contribution of blood vessels to the energy image used in different tasks of automatic colonoscopy image analysis. This goal is achieved by introducing a characterization of endoluminal scene objects which allows us to differentiate between the trace of 2-dimensional visual objects, such as vessels, and shades from 3-dimensional visual objects, such as folds. The proposed characterization is based on the influence that the object shape has on the resulting visual feature, and it leads to the development of a blood vessel attenuation algorithm. A database consisting of manually labelled masks was built in order to test the performance of our method, which shows an encouraging success in blood vessel mitigation while keeping other structures intact. Moreover, by extending our method to the only available polyp localization algorithm tested on a public database, blood vessel mitigation proved to have a positive influence on the overall performance.
  Address Barcelona; February 2013  
  Corporate Author Thesis  
  Publisher SciTePress Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area 800 Expedition Conference VISIGRAPP  
  Notes MV; 600.054; 600.057;SIAI Approved no  
  Call Number IAM @ iam @ NBS2013 Serial 2198
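For illustration only: the valley-detection features referred to in the abstract above are commonly built from second-order image derivatives. The sketch below is a generic Hessian-based valley map, not the authors' energy image or attenuation algorithm; the function name, the choice of scale and the dark-vessel assumption are mine.

# Generic Hessian-based valley map, in the spirit of the valley-detection
# features mentioned in the record above. NOT the authors' method; a minimal
# sketch assuming a grayscale image where vessels appear darker than the
# surrounding tissue.
import numpy as np
from scipy.ndimage import gaussian_filter

def valley_map(img, sigma=2.0):
    """Largest Hessian eigenvalue per pixel; dark elongated structures
    (e.g. vessels) produce a large positive principal curvature."""
    img = img.astype(float)
    ixx = gaussian_filter(img, sigma, order=(0, 2))   # d2/dx2
    iyy = gaussian_filter(img, sigma, order=(2, 0))   # d2/dy2
    ixy = gaussian_filter(img, sigma, order=(1, 1))   # d2/dxdy
    trace = ixx + iyy
    delta = np.sqrt((ixx - iyy) ** 2 + 4.0 * ixy ** 2)
    return 0.5 * (trace + delta)                      # largest eigenvalue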
 

 
Author Angel Sappa; Jordi Vitria
  Title Multimodal Interaction in Image and Video Applications Type Book Whole
  Year 2013 Publication Multimodal Interaction in Image and Video Applications Abbreviated Journal  
  Volume 48 Issue Pages  
  Keywords  
  Abstract Book Series: Intelligent Systems Reference Library
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium  
  Area Expedition Conference  
  Notes ADAS; OR;MV Approved no  
  Call Number Admin @ si @ SaV2013 Serial 2199
 

 
Author Jiaolong Xu; David Vazquez; Antonio Lopez; Javier Marin; Daniel Ponsa
  Title Learning a Multiview Part-based Model in Virtual World for Pedestrian Detection Type Conference Article
  Year 2013 Publication IEEE Intelligent Vehicles Symposium Abbreviated Journal  
  Volume Issue Pages 467 - 472  
  Keywords Pedestrian Detection; Virtual World; Part based  
  Abstract State-of-the-art deformable part-based models based on latent SVM have shown excellent results on human detection. In this paper, we propose to train a multiview deformable part-based model with automatically generated part examples from virtual-world data. The method is efficient as: (i) the part detectors are trained with precisely extracted virtual examples, thus no latent learning is needed; (ii) the multiview pedestrian detector enhances the performance of the pedestrian root model; (iii) a top-down approach is used for part detection, which reduces the search space. We evaluate our model on the Daimler and Karlsruhe Pedestrian Benchmarks with the publicly available Caltech pedestrian detection evaluation framework, and the results outperform the state-of-the-art latent SVM V4.0 in both average miss rate and speed (our detector is ten times faster).
  Address Gold Coast; Australia; June 2013  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1931-0587 ISBN 978-1-4673-2754-1 Medium  
  Area Expedition Conference IV  
  Notes ADAS; 600.054; 600.057 Approved no  
  Call Number XVL2013; ADAS @ adas @ xvl2013a Serial 2214
 

 
Author Marina Alberti
  Title Detection and Alignment of Vascular Structures in Intravascular Ultrasound using Pattern Recognition Techniques Type Book Whole
  Year 2013 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract In this thesis, several methods for the automatic analysis of Intravascular Ultrasound (IVUS) sequences are presented, aimed at assisting physicians in the diagnosis, the assessment of the intervention and the monitoring of patients with coronary disease. The basis for the developed frameworks is machine learning, pattern recognition and image processing techniques.
First, a novel approach for the automatic detection of vascular bifurcations in IVUS is presented. The task is addressed as a binary classification problem (identifying bifurcation and non-bifurcation angular sectors in the sequence images). The multiscale stacked sequential learning algorithm is applied to take into account the spatial and temporal context in IVUS sequences, and the results are refined using a-priori information about branching dimensions and geometry. The achieved performance is comparable to intra- and inter-observer variability.
Then, we propose a novel method for the automatic non-rigid alignment of IVUS sequences of the same patient, acquired at different moments (before and after percutaneous coronary intervention, or at baseline and follow-up examinations). The method is based on the description of the morphological content of the vessel, obtained by extracting temporal morphological profiles from the IVUS acquisitions by means of methods for segmentation, characterization and detection in IVUS. A technique for non-rigid sequence alignment, the Dynamic Time Warping algorithm, is applied to the profiles and adapted to the specific clinical problem. Two different robust strategies are proposed to address the partial overlapping between frames of corresponding sequences, and a regularization term is introduced to compensate for possible errors in the profile extraction. The benefits of the proposed strategy are demonstrated by extensive validation on synthetic and in-vivo data. The results show the interest of the proposed non-linear alignment and the clinical value of the method.
Finally, a novel automatic approach for the extraction of the luminal border in IVUS images is presented. The method applies the multiscale stacked sequential learning algorithm and extends it to 2-D+T in a first classification phase (the identification of lumen and non-lumen regions of the images), while an active contour model is used in a second phase to identify the lumen contour. The method is extended to the longitudinal dimension of the sequences and it is validated on a challenging dataset.
  Address Barcelona  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Simone Balocco; Petia Radeva
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ Alb2013 Serial 2215
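For illustration only: the sketch below is plain dynamic time warping between two one-dimensional profiles, the core alignment tool named in the thesis abstract above. The partial-overlap strategies and the regularization term described there are not reproduced; the function name and the local distance choice are mine.

# Plain DTW between two 1-D morphological profiles (minimal sketch, not the
# thesis' adapted version).
import numpy as np

def dtw(profile_a, profile_b):
    """Return the accumulated cost matrix and the optimal warping path."""
    a = np.asarray(profile_a, dtype=float)
    b = np.asarray(profile_b, dtype=float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # Backtrack the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[1:, 1:], path[::-1]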
 

 
Author Patricia Marquez; Debora Gil; Aura Hernandez-Sabate; Daniel Kondermann
  Title When Is A Confidence Measure Good Enough? Type Conference Article
  Year 2013 Publication 9th International Conference on Computer Vision Systems Abbreviated Journal  
  Volume 7963 Issue Pages 344-353  
  Keywords Optical flow, confidence measure, performance evaluation  
  Abstract Confidence estimation has recently become a hot topic in image processing and computer vision. Yet, several definitions of the term “confidence” exist, which are sometimes used interchangeably. This is a position paper in which we aim to give an overview of existing definitions, thereby clarifying the meaning of the terms used in order to facilitate further research in this field. Based on these clarifications, we develop a theory to compare confidence measures with respect to their quality.
  Address St Petersburg; Russia; July 2013  
  Corporate Author Thesis  
  Publisher Springer Link Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-642-39401-0 Medium  
  Area Expedition Conference ICVS  
  Notes IAM;ADAS; 600.044; 600.057; 600.060; 601.145 Approved no  
  Call Number IAM @ iam @ MGH2013a Serial 2218
 

 
Author David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
  Title Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes Type Conference Article
  Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal  
  Volume Issue Pages 706 - 711  
  Keywords Pedestrian Detection; Domain Adaptation  
  Abstract Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing in real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle efficiently rejects the false detections (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs.
  Address Portland; Oregon; June 2013  
  Corporate Author Thesis  
  Publisher IEEE Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes ADAS; 600.054; 600.057; 601.217 Approved no  
  Call Number ADAS @ adas @ VXR2013a Serial 2219
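For illustration only: a schematic of the weak-annotation loop described in the abstract above. Every helper below (train_classifier, detect, human_oracle_accepts) is a hypothetical placeholder supplied by the caller, not an API from the paper or from any library.

# Schematic of the weakly supervised retraining loop (sketch under the
# assumptions stated above; all helpers are hypothetical callables).
def weakly_supervised_retraining(virtual_samples, real_images,
                                 train_classifier, detect, human_oracle_accepts):
    # 1. Train an initial classifier on virtual-world samples (no manual BBs).
    classifier = train_classifier(virtual_samples)

    # 2. Collect candidate pedestrian windows in real-world images by detection.
    accepted = []
    for image in real_images:
        for bbox in detect(classifier, image):
            # 3. A human oracle only rejects false detections (weak annotation).
            if human_oracle_accepts(image, bbox):
                accepted.append((image, bbox))

    # 4. Retrain with the accepted real-world detections to reduce dataset shift.
    return train_classifier(accepted)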
 

 
Author Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa
  Title Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers Type Conference Article
  Year 2013 Publication CVPR Workshop on Ground Truth – What is a good dataset? Abbreviated Journal  
  Volume Issue Pages 688 - 693  
  Keywords Pedestrian Detection; Domain Adaptation  
  Abstract Training vision-based pedestrian detectors using synthetic datasets (virtual world) is a useful technique to collect the training examples automatically, together with their pixel-wise ground truth. However, as is often the case, these detectors must operate in real-world images, experiencing a significant drop in their performance. In fact, this effect also occurs among different real-world datasets, i.e. detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, it is required to adapt the detector trained with synthetic data to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from the virtual to the real world, avoiding drops in average precision of over 15%.
  Address Portland; Oregon; June 2013
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language English Summary Language English Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference CVPRW  
  Notes ADAS; 600.054; 600.057; 601.217 Approved yes  
  Call Number XVR2013; ADAS @ adas @ xvr2013a Serial 2220
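For illustration only: an LDA exemplar classifier admits a closed-form weight vector once background statistics are fixed. The sketch below is the generic exemplar-LDA recipe (shared background mean and covariance, one positive exemplar); the boosting of virtual- and real-world exemplars described in the abstract is not reproduced, and the bias convention is an assumption.

# Generic exemplar-LDA weights: w = Sigma^{-1} (x_exemplar - mu_background).
import numpy as np

def exemplar_lda_weights(exemplar_feat, background_feats, reg=1e-3):
    """exemplar_feat: (d,) one positive example; background_feats: (n, d)."""
    mu_bg = background_feats.mean(axis=0)
    cov_bg = np.cov(background_feats, rowvar=False)
    cov_bg += reg * np.eye(cov_bg.shape[0])        # ridge term for stability
    w = np.linalg.solve(cov_bg, exemplar_feat - mu_bg)
    b = -0.5 * w @ (exemplar_feat + mu_bg)         # midpoint bias (one convention)
    return w, b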
 

 
Author Francisco Javier Orozco; Ognjen Rudovic; Jordi Gonzalez; Maja Pantic
  Title Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises Type Journal Article
  Year 2013 Publication Image and Vision Computing Abbreviated Journal IMAVIS  
  Volume 31 Issue 4 Pages 322-340  
  Keywords On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking  
  Abstract In this paper, we propose an On-line Appearance-Based Tracker (OABT) for simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed tracking approaches, which deal with face and gaze tracking separately, our OABT can also be used for eyelid and iris tracking, as well as 3D head pose, lips and eyebrows facial actions tracking. Furthermore, our approach applies an on-line learning of changes in the appearance of the tracked target. Hence, the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, which are optimized using a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the proposed method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method achieves head and facial actions tracking in real-time.  
  Address  
  Corporate Author Thesis  
  Publisher Elsevier Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 605.203; 302.012; 302.018; 600.049 Approved no  
  Call Number ORG2013 Serial 2221
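For illustration only: the trackers in the abstract above are fitted with a Levenberg–Marquardt optimizer enhanced with line-search procedures. The snippet below is a generic damped Gauss-Newton step with backtracking, assuming user-supplied residual and Jacobian callables; it is not the paper's appearance model or its exact optimizer.

# One damped Gauss-Newton (Levenberg-Marquardt style) step with backtracking
# line search. residual_fn and jacobian_fn are assumed user-supplied callables.
import numpy as np

def lm_step(params, residual_fn, jacobian_fn, damping=1e-3,
            shrink=0.5, max_backtracks=10):
    r = residual_fn(params)                       # (m,) residual vector
    J = jacobian_fn(params)                       # (m, n) Jacobian
    A = J.T @ J + damping * np.eye(J.shape[1])    # damped normal equations
    delta = np.linalg.solve(A, -J.T @ r)          # search direction

    base = r @ r                                  # current squared residual norm
    step = 1.0
    for _ in range(max_backtracks):               # backtracking line search
        candidate = params + step * delta
        rc = residual_fn(candidate)
        if rc @ rc < base:
            return candidate
        step *= shrink
    return params                                 # no improving step found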
 

 
Author Marc Castello; Jordi Gonzalez; Ariel Amato; Pau Baiget; Carles Fernandez; Josep M. Gonfaus; Ramon Mollineda; Marco Pedersoli; Nicolas Perez de la Blanca; Xavier Roca
  Title Exploiting Multimodal Interaction Techniques for Video-Surveillance Type Book Chapter
  Year 2013 Publication Multimodal Interaction in Image and Video Applications Intelligent Systems Reference Library Abbreviated Journal  
  Volume 48 Issue 8 Pages 135-151  
  Keywords  
  Abstract In this chapter we present an example of a video surveillance application that exploits Multimodal Interactive (MI) technologies. The main objective of the so-called VID-Hum prototype was to develop a cognitive artificial system for both the detection and description of a particular set of human behaviours arising from real-world events. The main procedure of the prototype described in this chapter entails: (i) adaptation, since the system adapts itself to the most common behaviours (qualitative data) inferred from tracking (quantitative data), thus being able to recognize abnormal behaviours; (ii) feedback, since an advanced interface based on Natural Language understanding allows end-users to communicate with the prototype by means of conceptual sentences; and (iii) multimodality, since a virtual avatar has been designed to describe what is happening in the scene, based on those textual interpretations generated by the prototype. Thus, the MI methodology has provided an adequate framework for all these cooperating processes.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1868-4394 ISBN 978-3-642-35931-6 Medium  
  Area Expedition Conference  
  Notes ISE; 605.203; 600.049 Approved no  
  Call Number CGA2013 Serial 2222
 

 
Author David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados
  Title Integrating Visual and Textual Cues for Query-by-String Word Spotting Type Conference Article
  Year 2013 Publication 12th International Conference on Document Analysis and Recognition Abbreviated Journal  
  Volume Issue Pages 511 - 515  
  Keywords  
  Abstract In this paper, we present a word spotting framework that follows the query-by-string paradigm where word images are represented both by textual and visual representations. The textual representation is formulated in terms of character n-grams, while the visual one is based on the bag-of-visual-words scheme. These two representations are merged together and projected to a sub-vector space. This transform makes it possible, given a textual query, to retrieve word instances that were only represented by the visual modality. Moreover, this statistical representation can be used together with state-of-the-art indexation structures in order to deal with large-scale scenarios. The proposed method is evaluated using a collection of historical documents, outperforming state-of-the-art approaches.
  Address Washington; USA; August 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1520-5363 ISBN Medium  
  Area Expedition Conference ICDAR  
  Notes DAG; ADAS; 600.045; 600.055; 600.061 Approved no  
  Call Number Admin @ si @ ART2013 Serial 2224
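For illustration only: a minimal sketch of the textual side of the representation (character n-gram histograms) and a naive early fusion with a visual bag-of-visual-words histogram. The subspace projection and the indexation structures described in the abstract are omitted; all names and the normalization choice are mine.

# Character n-gram histogram plus naive concatenation with a visual BoVW
# histogram (sketch only; the paper projects the merged vector to a common
# subspace, which is not reproduced here).
from collections import Counter
import numpy as np

def char_ngrams(word, n_values=(2, 3)):
    """Counter of character n-grams, e.g. 'cat' -> {'ca', 'at', 'cat'}."""
    grams = Counter()
    for n in n_values:
        for i in range(len(word) - n + 1):
            grams[word[i:i + n]] += 1
    return grams

def textual_vector(word, ngram_vocabulary):
    """Fixed-length n-gram histogram over a known n-gram vocabulary."""
    grams = char_ngrams(word)
    return np.array([grams.get(g, 0) for g in ngram_vocabulary], dtype=float)

def fused_representation(text_vec, visual_bow):
    """Naive early fusion: L2-normalize each modality and concatenate."""
    t = text_vec / (np.linalg.norm(text_vec) + 1e-12)
    v = visual_bow / (np.linalg.norm(visual_bow) + 1e-12)
    return np.concatenate([t, v])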
 

 
Author German Ros; J. Guerrero; Angel Sappa; Antonio Lopez
  Title VSLAM pose initialization via Lie groups and Lie algebras optimization Type Conference Article
  Year 2013 Publication Proceedings of IEEE International Conference on Robotics and Automation Abbreviated Journal  
  Volume Issue Pages 5740 - 5747  
  Keywords SLAM  
  Abstract We present a novel technique for estimating initial 3D poses in the context of localization and Visual SLAM problems. The presented approach can deal with noise, outliers and a large amount of input data and still runs in real time on a standard CPU. Our method produces solutions with an accuracy comparable to those produced by RANSAC but can be much faster when the percentage of outliers is high or for large amounts of input data. In the current work we propose to formulate the pose estimation as an optimization problem on Lie groups, considering their manifold structure as well as their associated Lie algebras. This allows us to perform a fast and simple optimization while preserving all the constraints imposed by the Lie group SE(3). Additionally, we present several key design concepts related to the cost function and its Jacobian, aspects that are critical for the good performance of the algorithm.
  Address Karlsruhe; Germany; May 2013  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1050-4729 ISBN 978-1-4673-5641-1 Medium  
  Area Expedition Conference ICRA  
  Notes ADAS; 600.054; 600.055; 600.057 Approved no  
  Call Number Admin @ si @ RGS2013a; ADAS @ adas @ Serial 2225
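For illustration only: optimizing on the Lie algebra se(3), as the abstract above describes, relies on the exponential map back to SE(3). The sketch below is the standard closed-form exponential of a twist (omega, v); the paper's cost function and Jacobians are not reproduced.

# Closed-form exponential map from a twist (omega, v) in se(3) to a 4x4 rigid
# transformation in SE(3). Standard textbook formula (Rodrigues-style).
import numpy as np

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(omega, v):
    omega = np.asarray(omega, float)
    v = np.asarray(v, float)
    theta = np.linalg.norm(omega)
    T = np.eye(4)
    if theta < 1e-10:                         # small-angle limit
        T[:3, :3] = np.eye(3) + skew(omega)
        T[:3, 3] = v
        return T
    K = skew(omega / theta)                   # unit-axis skew matrix
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    V = (np.eye(3) + (1.0 - np.cos(theta)) / theta * K
         + (theta - np.sin(theta)) / theta * (K @ K))
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T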
 

 
Author Ferran Diego; Joan Serrat; Antonio Lopez
  Title Joint spatio-temporal alignment of sequences Type Journal Article
  Year 2013 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM  
  Volume 15 Issue 6 Pages 1377-1387  
  Keywords video alignment  
  Abstract Video alignment is important in different areas of computer vision such as wide baseline matching, action recognition, change detection, video copy detection and frame dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras or simultaneous acquisition. Therefore, in this paper we propose a joint video alignment method for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to formulate video alignment so that the spatial and temporal alignment are folded into a single alignment framework. This simultaneously satisfies frame-correspondence and frame-alignment similarity, exploiting the knowledge among neighbouring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independent moving cameras that follow a similar trajectory, and also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to the state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1520-9210 ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number Admin @ si @ DSL2013; ADAS @ adas @ Serial 2228
 

 
Author Bogdan Raducanu; Fadi Dornaika
  Title Texture-independent recognition of facial expressions in image snapshots and videos Type Journal Article
  Year 2013 Publication Machine Vision and Applications Abbreviated Journal MVA  
  Volume 24 Issue 4 Pages 811-820  
  Keywords  
  Abstract This paper addresses the static and dynamic recognition of basic facial expressions. It has two main contributions. First, we introduce a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker. We represent the learned facial actions associated with different facial expressions by time series. Second, we compare this dynamic scheme with a static one based on analyzing individual snapshots and show that the former performs better than the latter. We provide evaluations of performance using three subspace learning techniques: linear discriminant analysis, non-parametric discriminant analysis and support vector machines.  
  Address  
  Corporate Author Thesis  
  Publisher Springer-Verlag Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0932-8092 ISBN Medium  
  Area Expedition Conference  
  Notes OR; 600.046; 605.203;MV Approved no  
  Call Number Admin @ si @ RaD2013 Serial 2230