Author Sergio Escalera; Alicia Fornes; O. Pujol; Petia Radeva; Gemma Sanchez; Josep Llados
Title Blurred Shape Model for Binary and Grey-level Symbol Recognition
Type Journal Article
Year 2009   Publication Pattern Recognition Letters   Abbreviated Journal PRL
Volume 30   Issue 15   Pages 1424–1433
Abstract Many symbol recognition problems require the use of robust descriptors in order to obtain rich information from the data. However, the search for a good descriptor is still an open issue due to the high variability of symbol appearance: rotation, partial occlusions, elastic deformations, intra-class and inter-class variations, and high variability among symbols due to different writing styles are just a few of the problems. In this paper, we introduce a symbol shape description to deal with the changes in appearance that these types of symbols suffer. The shape of the symbol is aligned based on principal components to make the recognition invariant to rotation and reflection. Then, we present the Blurred Shape Model (BSM) descriptor, whose features encode the probability of appearance of each pixel that outlines the symbol's shape. Moreover, we include the new descriptor in a system to deal with multi-class symbol categorization problems. AdaBoost is used to train the binary classifiers, learning the BSM features that best split the symbol classes. The binary problems are then embedded in an Error-Correcting Output Codes (ECOC) framework to deal with the multi-class case. The methodology is evaluated on several synthetic and real data sets. State-of-the-art descriptors and classifiers are compared, showing the robustness and better performance of the presented scheme when classifying symbols with high variability of appearance.
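To make the descriptor idea concrete, the following is a minimal Python sketch of a BSM-style grid descriptor. It is an illustrative reconstruction, not the authors' implementation: contour points vote for a few nearby grid-cell centres with inverse-distance weights, and the normalised vote histogram plays the role of the per-region probability of appearance. The grid size, the number of voting centres and the toy contour are all assumptions.

```python
import numpy as np

def blurred_shape_model(contour_points, image_size, grid=8):
    """BSM-style descriptor sketch: each contour point votes for the nearest
    grid-cell centres, weighted by inverse distance; the histogram is then
    normalised into a probability-like shape descriptor."""
    h, w = image_size
    ys = (np.arange(grid) + 0.5) * (h / grid)            # cell-centre rows
    xs = (np.arange(grid) + 0.5) * (w / grid)            # cell-centre columns
    cy, cx = np.meshgrid(ys, xs, indexing="ij")
    centres = np.stack([cy.ravel(), cx.ravel()], axis=1)  # (grid*grid, 2)

    hist = np.zeros(grid * grid)
    for p in np.asarray(contour_points, dtype=float):
        d = np.linalg.norm(centres - p, axis=1)
        nearest = np.argsort(d)[:4]                      # vote for 4 nearest centres
        votes = 1.0 / (d[nearest] + 1e-6)
        hist[nearest] += votes / votes.sum()
    return hist / hist.sum()

# toy usage: a diagonal stroke in a 64x64 image
points = [(i, i) for i in range(10, 50)]
descriptor = blurred_shape_model(points, (64, 64), grid=8)
print(descriptor.shape, round(float(descriptor.sum()), 3))   # (64,) 1.0
```

In the full system described by the abstract, such descriptors feed AdaBoost binary classifiers that are then arranged in an ECOC coding matrix for the multi-class decision; the sketch stops at the descriptor itself.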
Notes HuPBA; DAG; MILAB   Approved no
Call Number BCNPCL @ bcnpcl @ EFP2009a   Serial 1180
 

 
Author Javier Vazquez; C. Alejandro Parraga; Maria Vanrell
Title Ordinal pairwise method for natural images comparison
Type Journal Article
Year 2009   Publication Perception   Abbreviated Journal PER
Volume 38   Pages 180
Abstract (ECVP Abstract Supplement, Perception 38 Suppl.) We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the more natural of the two. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing a button on a 6-button IR box, which allowed observers not only to choose the more natural image but also to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (eg much more, definitely more, slightly more), which gave us valuable extra information on the selection process. The results were analysed both by treating the selection as a binary choice (using Thurstone's law of comparative judgement) and by using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used for colour constancy algorithm comparisons, its uses are much wider, eg to compare algorithms for image compression, rendering, recolouring, etc.
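Because the analysis in this abstract relies on the Bradley-Terry model for pairwise preferences, here is a minimal, generic Bradley-Terry fit in Python (standard MM updates). It is illustrative only and does not reproduce the ordinal rating extension described above; the win-count matrix in the example is invented.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths with the classic MM update.
    wins[i, j] = number of times item i was preferred over item j."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            if denom > 0:
                p[i] = total_wins / denom
        p /= p.sum()                      # strengths sum to 1
    return p

# toy usage: three rendering algorithms judged pairwise for "naturalness"
W = np.array([[0, 8, 9],
              [2, 0, 6],
              [1, 4, 0]], dtype=float)
print(bradley_terry(W).round(3))
```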
 
Notes CIC   Approved no
Call Number CAT @ cat @ VPV2009b   Serial 1191
 

 
Author Robert Benavente; C. Alejandro Parraga; Maria Vanrell
Title Colour categories boundaries are better defined in contextual conditions
Type Journal Article
Year 2009   Publication Perception   Abbreviated Journal PER
Volume 38   Pages 36
  Abstract In a previous experiment [Parraga et al, 2009 Journal of Imaging Science and Technology 53(3)] the boundaries between basic colour categories were measured by asking subjects to categorize colour samples presented in isolation (ie on a dark background) using a YES/NO paradigm. Results showed that some boundaries (eg green – blue) were very diffuse and the subjects' answers presented bimodal distributions, which were attributed to the emergence of non-basic categories in those regions (eg turquoise). To confirm these results we performed a new experiment focussed on the boundaries where bimodal distributions were more evident. In this new experiment rectangular colour samples were presented surrounded by random colour patches to simulate contextual conditions on a calibrated CRT monitor. The names of two neighbouring colours were shown at the bottom of the screen and subjects selected the boundary between these colours by controlling the chromaticity of the central patch, sliding it across these categories' frontier. Results show that in this new experimental paradigm, the formerly uncertain inter-colour category boundaries are better defined and the dispersions (ie the bimodal distributions) that occurred in the previous experiment disappear. These results may provide further support to Berlin and Kay's basic colour terms theory.  
Notes CIC   Approved no
Call Number CAT @ cat @ BPV2009   Serial 1192
 

 
Author C. Alejandro Parraga; Javier Vazquez; Maria Vanrell
Title A new cone activation-based natural images dataset
Type Journal Article
Year 2009   Publication Perception   Abbreviated Journal PER
Volume 36   Pages 180
Abstract We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which allows the illuminant's colour and intensity to be computed (and removed, if necessary). The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to consider other, non-human visual systems. The dataset is available to download from our website.
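The camera-to-LMS conversion mentioned here is essentially a regression problem, so a small sketch may help. The snippet below fits a hypothetical second-order polynomial mapping from camera RGB to LMS by least squares; the polynomial form, the synthetic training data standing in for the Munsell chips, and all parameter choices are assumptions, not the paper's actual calibration.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of camera RGB triplets."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r*g, r*b, g*b, r*r, g*g, b*b,
                     np.ones_like(r)], axis=1)

def fit_rgb_to_lms(rgb_train, lms_train):
    """Least-squares fit of the polynomial camera-to-LMS mapping."""
    A = poly_features(rgb_train)
    coeffs, *_ = np.linalg.lstsq(A, lms_train, rcond=None)
    return coeffs                                   # shape (10, 3)

def rgb_to_lms(rgb, coeffs):
    return poly_features(rgb) @ coeffs

# toy usage: random data standing in for the 1269 Munsell reflectances
rng = np.random.default_rng(0)
rgb = rng.random((1269, 3))
lms = rgb @ rng.random((3, 3)) + 0.01 * rng.random((1269, 3))
C = fit_rgb_to_lms(rgb, lms)
print(float(np.abs(rgb_to_lms(rgb, C) - lms).mean()))   # small residual
```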
Notes CIC   Approved no
Call Number CAT @ cat @ PVV2009   Serial 1193
 

 
Author Joost Van de Weijer; Cordelia Schmid; Jakob Verbeek; Diane Larlus
Title Learning Color Names for Real-World Applications
Type Journal Article
Year 2009   Publication IEEE Transactions on Image Processing   Abbreviated Journal TIP
Volume 18   Issue 7   Pages 1512–1524
Abstract Color names are required in real-world applications such as image retrieval and image annotation. Traditionally, they are learned from a collection of labelled color chips. These color chips are labelled with color names within a well-defined experimental setup by human test subjects. However, naming colors in real-world images differs significantly from this experimental setting. In this paper, we investigate how color names learned from color chips compare to color names learned from real-world images. To avoid hand-labelling real-world images with color names, we use Google Image to collect a data set. Due to limitations of Google Image, this data set contains a substantial quantity of wrongly labelled data. We propose several variants of the PLSA model to learn color names from this noisy data. Experimental results show that color names learned from real-world images significantly outperform color names learned from labelled color chips for both image retrieval and image annotation.
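As a pointer to the model family the abstract names, the snippet below is a minimal PLSA trained with EM on a document-by-word count matrix. It illustrates plain PLSA only; the paper's variants, which are adapted to noisy Google Image labels, are not reproduced, and the toy counts are invented (images as documents, colour bins as words).

```python
import numpy as np

def plsa(counts, n_topics, iters=50, seed=0):
    """Plain PLSA via EM: P(d, w) = sum_z P(z) P(d|z) P(w|z)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z = np.full(n_topics, 1.0 / n_topics)
    p_dz = rng.random((n_docs, n_topics)); p_dz /= p_dz.sum(0)
    p_wz = rng.random((n_words, n_topics)); p_wz /= p_wz.sum(0)
    for _ in range(iters):
        # E-step: responsibilities P(z|d,w)
        joint = p_z[None, None, :] * p_dz[:, None, :] * p_wz[None, :, :]
        resp = joint / (joint.sum(-1, keepdims=True) + 1e-12)
        # M-step: re-estimate the factors from expected counts
        nz = counts[:, :, None] * resp
        p_dz = nz.sum(1); p_dz /= p_dz.sum(0, keepdims=True) + 1e-12
        p_wz = nz.sum(0); p_wz /= p_wz.sum(0, keepdims=True) + 1e-12
        p_z = nz.sum((0, 1)); p_z /= p_z.sum()
    return p_z, p_dz, p_wz

# toy usage: 6 "images" over 4 colour-bin "words", 2 latent colour names
X = np.array([[5, 1, 0, 0], [4, 2, 0, 1], [6, 0, 1, 0],
              [0, 1, 5, 4], [1, 0, 4, 6], [0, 0, 5, 5]], dtype=float)
pz, pdz, pwz = plsa(X, n_topics=2)
print(pwz.round(2))                      # word distributions per colour name
```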
ISSN 1057-7149
Approved no
Call Number CAT @ cat @ WSV2009   Serial 1195
 

 
Author Pau Baiget
Title Modeling Human Behavior for Image Sequence Understanding and Generation
Type Book Whole
Year 2009   Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract The comprehension of animal behavior, especially human behavior, is one of the most ancient and most studied problems since the beginning of civilization. The large set of factors that interact to determine a person's actions requires the collaboration of different disciplines, such as psychology, biology, or sociology. In recent years the analysis of human behavior has also received great attention from the computer vision community, given the latest advances in the acquisition of human motion data from image sequences.

Despite the increasing availability of such data, there still exists a gap towards obtaining a conceptual representation of the obtained observations. Human behavior analysis is based on a qualitative interpretation of the results, and therefore the assignment of concepts to quantitative data is subject to a certain ambiguity.

This Thesis tackles the problem of obtaining a proper representation of human behavior in the contexts of computer vision and animation. On the one hand, a good behavior model should permit the recognition and explanation of the activity observed in image sequences. On the other hand, such a model must allow the generation of new synthetic instances, which model the behavior of virtual agents.

First, we propose methods to automatically learn the models from observations. Given a set of quantitative results output by a vision system, a normal behavior model is learnt. This model provides a tool to determine the normality or abnormality of future observations. However, machine learning methods are unable to provide a richer description of the observations. We confront this problem by means of a new method that incorporates prior knowledge about the environment and about the expected behaviors. This framework, formed by the reasoning engine FMTL and the modeling tool SGT, allows the generation of conceptual descriptions of activity in new image sequences. Finally, we demonstrate the suitability of the proposed framework to simulate the behavior of virtual agents, which are introduced into real image sequences and interact with observed real agents, thereby easing the generation of augmented reality sequences.

The set of approaches presented in this Thesis has a growing set of potential applications. The analysis and description of behavior in image sequences has its principal application in the domain of smart video surveillance, in order to detect suspicious or dangerous behaviors. Other applications include automatic sport commentaries, elderly monitoring, road traffic analysis, and the development of semantic video search engines. Alternatively, behavioral virtual agents make it possible to simulate realistic situations, such as fires or crowds. Moreover, the inclusion of virtual agents in real image sequences has been widely deployed in the games and cinema industries.
 
Address Bellaterra (Spain)
Thesis Ph.D. thesis
Publisher Ediciones Graficas Rey   Editor Jordi Gonzalez; Xavier Roca
Approved no
Call Number Admin @ si @ Bai2009   Serial 1210
 

 
Author Jordi Gonzalez; Dani Rowe; Javier Varona; Xavier Roca
Title Understanding Dynamic Scenes based on Human Sequence Evaluation
Type Journal Article
Year 2009   Publication Image and Vision Computing   Abbreviated Journal IMAVIS
Volume 27   Issue 10   Pages 1433–1444
Keywords Image Sequence Evaluation; High-level processing of monitored scenes; Segmentation and tracking in complex scenes; Event recognition in dynamic scenes; Human motion understanding; Human behaviour interpretation; Natural-language text generation; Realistic demonstrators
  Abstract In this paper, a Cognitive Vision System (CVS) is presented, which explains the human behaviour of monitored scenes using natural-language texts. This cognitive analysis of human movements recorded in image sequences is here referred to as Human Sequence Evaluation (HSE) which defines a set of transformation modules involved in the automatic generation of semantic descriptions from pixel values. In essence, the trajectories of human agents are obtained to generate textual interpretations of their motion, and also to infer the conceptual relationships of each agent w.r.t. its environment. For this purpose, a human behaviour model based on Situation Graph Trees (SGTs) is considered, which permits both bottom-up (hypothesis generation) and top-down (hypothesis refinement) analysis of dynamic scenes. The resulting system prototype interprets different kinds of behaviour and reports textual descriptions in multiple languages.  
Notes ISE   Approved no
Call Number ISE @ ise @ GRV2009   Serial 1211
 

 
Author Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez
Title Exploiting Natural Language Generation in Scene Interpretation
Type Book Chapter
Year 2009   Publication Human–Centric Interfaces for Ambient Intelligence
Volume 4   Pages 71–93
Publisher Elsevier Science and Tech
Notes ISE   Approved no
Call Number ISE @ ise @ FBR2009   Serial 1212
 

 
Author Fadi Dornaika; Bogdan Raducanu
Title Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application
Type Journal Article
Year 2009   Publication IEEE Transactions on Systems, Man and Cybernetics, Part B   Abbreviated Journal TSMCB
Volume 39   Issue 4   Pages 935–944
Abstract Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods, the initialization and the tracking, to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
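The initialization step relies on an eigenface system, so a small generic sketch may be useful: PCA on vectorised face crops, with the reconstruction error ("distance from face space") as a simple face-likeness score. This is a textbook eigenface construction on invented data, not the detector or the pose-initialization scheme of the paper.

```python
import numpy as np

def fit_eigenfaces(face_patches, n_components=8):
    """PCA 'eigenface' basis from vectorised face patches."""
    X = face_patches.reshape(len(face_patches), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def distance_from_face_space(patch, mean, eigenfaces):
    """Reconstruction error of a patch in the eigenface subspace;
    a small value suggests a face-like patch."""
    x = patch.ravel().astype(float) - mean
    recon = eigenfaces.T @ (eigenfaces @ x)
    return float(np.linalg.norm(x - recon))

# toy usage with random stand-ins for 20x20 face crops
rng = np.random.default_rng(0)
faces = rng.random((50, 20, 20))
mean, eig = fit_eigenfaces(faces)
print(round(distance_from_face_space(faces[0], mean, eig), 3))
```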
Notes OR;MV   Approved no
Call Number BCNPCL @ bcnpcl @ DoR2009a   Serial 1218
 

 
Author Oriol Ramos Terrades; Ernest Valveny; Salvatore Tabbone
Title Optimal Classifier Fusion in a Non-Bayesian Probabilistic Framework
Type Journal Article
Year 2009   Publication IEEE Transactions on Pattern Analysis and Machine Intelligence   Abbreviated Journal TPAMI
Volume 31   Issue 9   Pages 1630–1644
  Abstract The combination of the output of classifiers has been one of the strategies used to improve classification rates in general purpose classification systems. Some of the most common approaches can be explained using the Bayes' formula. In this paper, we tackle the problem of the combination of classifiers using a non-Bayesian probabilistic framework. This approach permits us to derive two linear combination rules that minimize misclassification rates under some constraints on the distribution of classifiers. In order to show the validity of this approach we have compared it with other popular combination rules from a theoretical viewpoint using a synthetic data set, and experimentally using two standard databases: the MNIST handwritten digit database and the GREC symbol database. Results on the synthetic data set show the validity of the theoretical approach. Indeed, results on real data show that the proposed methods outperform other common combination schemes.  
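To fix ideas about what a linear combination rule does, the snippet below fuses per-class supports from several classifiers with a generic weighted sum and picks the arg-max class. The weights here are arbitrary placeholders; the paper derives weights that are optimal under constraints on the classifiers' error distributions, which this sketch does not attempt.

```python
import numpy as np

def linear_fusion(scores, weights):
    """Weighted linear combination of classifier outputs.
    scores: (n_classifiers, n_samples, n_classes) class supports.
    weights: (n_classifiers,). Returns the fused class predictions."""
    fused = np.tensordot(weights, scores, axes=1)   # (n_samples, n_classes)
    return fused.argmax(axis=1)

# toy usage: 3 classifiers, 4 samples, 2 classes
rng = np.random.default_rng(1)
supports = rng.random((3, 4, 2))
print(linear_fusion(supports, np.array([0.5, 0.3, 0.2])))
```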
ISSN 0162-8828
Notes DAG   Approved no
Call Number DAG @ dag @ RVT2009   Serial 1220
 

 
Author Angel Sappa; Niki Aifanti; Sotiris Malassiotis; Michael G. Strintzis
Title Prior Knowledge Based Motion Model Representation
Type Book Chapter
Year 2009   Publication Progress in Computer Vision and Image Analysis
Volume 16
Editor Horst Bunke; Juan Jose Villanueva; Gemma Sanchez
Notes ADAS   Approved no
Call Number ADAS @ adas @ SAM2009   Serial 1235
 

 
Author Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez
Title Predicting Missing Ratings in Recommender Systems: Adapted Factorization Approach
Type Journal Article
Year 2009   Publication International Journal of Electronic Commerce
Volume 14   Issue 1   Pages 89–108
  Abstract The paper presents a factorization-based approach to make predictions in recommender systems. These systems are widely used in electronic commerce to help customers find products according to their preferences. Taking into account the customer's ratings of some products available in the system, the recommender system tries to predict the ratings the customer would give to other products in the system. The proposed factorization-based approach uses all the information provided to compute the predicted ratings, in the same way as approaches based on Singular Value Decomposition (SVD). The main advantage of this technique versus SVD-based approaches is that it can deal with missing data. It also has a smaller computational cost. Experimental results with public data sets are provided to show that the proposed adapted factorization approach gives better predicted ratings than a widely used SVD-based approach.  
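Since the key claim is that the factorization copes with missing data directly, here is a minimal alternating-least-squares factorization that fits only the observed ratings and then predicts the empty cells. It is a generic sketch of the idea, not the authors' adapted factorization; the rank, regularisation and toy rating matrix are assumptions.

```python
import numpy as np

def factorize_missing(R, mask, rank=2, iters=30, reg=0.1, seed=0):
    """Alternating least squares over observed entries only; the product
    U @ V.T then predicts the missing ratings."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = rng.random((n_users, rank))
    V = rng.random((n_items, rank))
    I = reg * np.eye(rank)
    for _ in range(iters):
        for u in range(n_users):
            obs = mask[u]
            U[u] = np.linalg.solve(V[obs].T @ V[obs] + I, V[obs].T @ R[u, obs])
        for i in range(n_items):
            obs = mask[:, i]
            V[i] = np.linalg.solve(U[obs].T @ U[obs] + I, U[obs].T @ R[obs, i])
    return U @ V.T

# toy usage: 4 users x 5 items, 0 marks a missing rating
R = np.array([[5, 4, 0, 1, 0],
              [4, 0, 0, 1, 2],
              [0, 1, 5, 4, 0],
              [1, 0, 4, 0, 5]], dtype=float)
print(factorize_missing(R, R > 0).round(1))
```

A plain SVD, by contrast, needs every cell filled in before decomposition, which is the limitation the abstract contrasts the proposed approach against.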
ISSN 1086-4415
Notes ADAS   Approved no
Call Number ADAS @ adas @ JSL2009b   Serial 1237
 

 
Author Arnau Ramisa; Adriana Tapus; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras
Title Robust Vision-Based Localization using Combinations of Local Feature Region Detectors
Type Journal Article
Year 2009   Publication Autonomous Robots   Abbreviated Journal AR
Volume 27   Issue 4   Pages 373–385
Abstract This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature. This signature consists of a constellation of descriptors computed over different types of local affine covariant regions extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits a reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and, also, to find out whether the combination of complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that, if false matches are effectively rejected, the combination of different affine covariant region detectors notably increases the performance of the approach by combining the strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure and using a standard perspective field of view of 45°.
In order to systematically test topological localization methods, another contribution of this work is a novel method to measure the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. For this to be effective, it must be done in several varied environments that cover all the possible situations in which the robot may have to perform localization.
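The abstract stresses that false matches must be rejected before combining detectors; one common (and here merely illustrative) way to do that is a nearest/second-nearest distance ratio test on the local descriptors, sketched below. The descriptors are random stand-ins and the 0.8 threshold is an assumption, not a value taken from the paper.

```python
import numpy as np

def ratio_test_matches(query_desc, map_desc, ratio=0.8):
    """Keep a match only when the best map descriptor is clearly closer
    than the second best (Lowe-style ratio test)."""
    matches = []
    for i, d in enumerate(query_desc):
        dists = np.linalg.norm(map_desc - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# toy usage: descriptors from the current view vs a stored map node
rng = np.random.default_rng(0)
map_desc = rng.random((40, 128))
query_desc = map_desc[:10] + 0.01 * rng.random((10, 128))   # noisy copies
print(len(ratio_test_matches(query_desc, map_desc)))        # confident matches
```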
 
ISSN 0929-5593
Notes ADAS   Approved no
Call Number Admin @ si @ RTA2009   Serial 1245
 

 
Author Carlo Gatta; Oriol Pujol; Oriol Rodriguez-Leor; J. M. Ferre; Petia Radeva
Title Fast Rigid Registration of Vascular Structures in IVUS Sequences
Type Journal Article
Year 2009   Publication IEEE Transactions on Information Technology in Biomedicine
Volume 13   Issue 6   Pages 1006–1011
Abstract Intravascular ultrasound (IVUS) technology permits visualization of high-resolution images of internal vascular structures. IVUS is a unique image-guiding tool to display a longitudinal view of the vessels and to estimate the length and size of vascular structures with the goal of accurate diagnosis. Unfortunately, due to the pulsatile contraction and expansion of the heart, the captured images are affected by different motion artifacts that make visual inspection difficult. In this paper, we propose an efficient algorithm that aligns vascular structures and strongly reduces the saw-shaped oscillation, simplifying the inspection of longitudinal cuts; it reduces the motion artifacts caused by the displacement of the catheter in the short-axis plane and by the catheter rotation due to vessel tortuosity. The algorithm prototype aligns 3.16 frames per second and clearly outperforms state-of-the-art methods with similar computational cost. The speed of the algorithm is crucial since it allows the corrected sequence to be inspected during patient intervention. Moreover, we improved an indirect methodology for the evaluation of IVUS rigid registration algorithms.
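As a rough illustration of rigid frame alignment (and explicitly not the authors' algorithm), the snippet below estimates the in-plane translation between two frames with FFT phase correlation; undoing that shift is the kind of correction that compensates for catheter displacement between frames. The synthetic frames and the purely translational model are assumptions.

```python
import numpy as np

def rigid_translation(frame_a, frame_b):
    """Phase correlation: returns the (dy, dx) shift such that
    frame_b ~= np.roll(frame_a, shift); negate it to realign frame_b."""
    F = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    peak = np.array(np.unravel_index(corr.argmax(), corr.shape))
    for k, size in enumerate(frame_a.shape):        # wrap to signed shifts
        if peak[k] > size // 2:
            peak[k] -= size
    return peak

# toy usage: the second frame is the first shifted by (3, -5) pixels
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.roll(a, shift=(3, -5), axis=(0, 1))
print(rigid_translation(a, b))                      # expected: [ 3 -5]
```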
ISSN 1089-7771
Notes MILAB;HuPBA   Approved no
Call Number BCNPCL @ bcnpcl @ GPL2009   Serial 1250
 

 
Author Fosca De Iorio; Carolina Malagelada; Fernando Azpiroz; M. Maluenda; C. Violanti; Laura Igual; Jordi Vitria; Juan R. Malagelada
Title Intestinal motor activity, endoluminal motion and transit
Type Journal Article
Year 2009   Publication Neurogastroenterology & Motility   Abbreviated Journal NEUMOT
Volume 21   Issue 12   Pages 1264–e119
Abstract A programme for the evaluation of intestinal motility has recently been developed based on endoluminal image analysis using computer vision methodology and machine learning techniques. Our aim was to determine the effect of intestinal muscle inhibition on wall motion, dynamics of luminal content and transit in the small bowel. Fourteen healthy subjects ingested the endoscopic capsule (Pillcam, Given Imaging) in fasting conditions. Seven of them received glucagon (4.8 microg kg(-1) bolus followed by a 9.6 microg kg(-1) h(-1) infusion during 1 h) and in the other seven, fasting activity was recorded, as controls. This dose of glucagon has previously been shown to inhibit both tonic and phasic intestinal motor activity. Endoluminal image and displacement were analyzed by means of a computer vision programme specifically developed for the evaluation of muscular activity (contractile and non-contractile patterns), intestinal contents, endoluminal motion and transit. Thirty-minute periods before, during and after glucagon infusion were analyzed and compared with equivalent periods in controls. No differences were found in the parameters measured during the baseline (pretest) periods when comparing glucagon and control experiments. During glucagon infusion, there was a significant reduction in contractile activity (0.2 +/- 0.1 vs 4.2 +/- 0.9 luminal closures per min, P < 0.05; 0.4 +/- 0.1 vs 3.4 +/- 1.2% of images with radial wrinkles, P < 0.05) and a significant reduction of endoluminal motion (82 +/- 9 vs 21 +/- 10% of static images, P < 0.05). Endoluminal image analysis, by means of computer vision and machine learning techniques, can reliably detect reduced intestinal muscle activity and motion.
Notes OR;MILAB;MV   Approved no
Call Number BCNPCL @ bcnpcl @ DMA2009   Serial 1251