Author A. Pujol; Juan J. Villanueva
  Title Desarrollo de una interface basada en la utilización de redes neuronales aplicadas a la clasificación de las respuestas electroencefalográficas a estímulos visuales Type Journal Article
  Year 1996 Publication XIV Congreso Anual de la Sociedad Española de Ingeniería Biomédica Abbreviated Journal
  Volume Issue Pages  
  Keywords  
  Abstract  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes Approved no  
  Call Number ISE @ ise @ PuV1996 Serial 215  
 

 
Author Mariella Dimiccoli
  Title Figure-ground segregation: A fully nonlocal approach Type Journal Article
  Year 2016 Publication Vision Research Abbreviated Journal VR
  Volume 126 Issue Pages 308-317  
  Keywords Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion  
  Abstract We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; Approved no  
  Call Number Admin @ si @ Dim2016b Serial 2623  
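The record above describes enforcing coherence of figural-status estimates via nonlinear diffusion. As a rough illustration of that second stage only (not the authors' implementation, and with illustrative parameters), a Perona-Malik-style diffusion smooths a per-pixel map within homogeneous regions while blocking flow across strong gradients:

```python
import numpy as np

def nonlinear_diffusion(u, iters=50, kappa=0.1, dt=0.2):
    """Perona-Malik-style diffusion: smooths inside regions, preserves edges."""
    u = u.astype(float).copy()
    for _ in range(iters):
        # finite differences toward the four neighbours (periodic boundary)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # conductance shrinks where the local gradient is large
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

On a noisy two-region map, the noise inside each region is smoothed away while the boundary between regions survives.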
 

 
Author Ivet Rafegas; Maria Vanrell
  Title Color encoding in biologically-inspired convolutional neural networks Type Journal Article
  Year 2018 Publication Vision Research Abbreviated Journal VR
  Volume 151 Issue Pages 7-17  
  Keywords Color coding; Computer vision; Deep learning; Convolutional neural networks  
  Abstract Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do so by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We have determined that all five convolutional layers of the network contain a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but these are reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing parallels with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC; 600.051; 600.087 Approved no  
  Call Number Admin @ si @ RaV2018 Serial 3114
 

 
Author David Berga; Xose R. Fernandez-Vidal; Xavier Otazu; V. Leboran; Xose M. Pardo
  Title Psychophysical evaluation of individual low-level feature influences on visual attention Type Journal Article
  Year 2019 Publication Vision Research Abbreviated Journal VR
  Volume 154 Issue Pages 60-79  
  Keywords Visual attention; Psychophysics; Saliency; Task; Context; Contrast; Center bias; Low-level; Synthetic; Dataset  
  Abstract In this study we analyze the eye movement behavior elicited by low-level feature distinctiveness, using a dataset of synthetically generated image patterns. The design of the visual stimuli was inspired by those used in previous psychophysical experiments, namely in free-viewing and visual search tasks, providing a total of 15 types of stimuli divided according to the task and the feature to be analyzed. Our interest is to analyze the influence of low-level feature contrast between a salient region and the rest of the distractors, characterizing fixation localization and the reaction time of landing inside the salient region. Eye-tracking data were collected from 34 participants during the viewing of a dataset of 230 images. Results show that saliency is predominantly and distinctively influenced by: 1. feature type, 2. feature contrast, 3. temporality of fixations, 4. task difficulty and 5. center bias. This experimentation proposes a new psychophysical basis for saliency model evaluation using synthetic images.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes NEUROBIT; 600.128; 600.120 Approved no  
  Call Number Admin @ si @ BFO2019a Serial 3274  
 

 
Author Simone Balocco; Carlo Gatta; Oriol Pujol; J. Mauri; Petia Radeva
  Title SRBF: Speckle Reducing Bilateral Filtering Type Journal Article
  Year 2010 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB
  Volume 36 Issue 8 Pages 1353-1363  
  Keywords  
  Abstract Speckle noise negatively affects medical ultrasound image shape interpretation and boundary detection. Speckle removal filters are widely used to selectively remove speckle noise without destroying important image features and to enhance object boundaries. In this article, a fully automatic bilateral filter tailored to ultrasound images is proposed. The edge preservation property is obtained by embedding noise statistics in the filter framework. Consequently, the filter is able to tackle the multiplicative behavior of speckle by modulating the smoothing strength with respect to local statistics. In silico experiments clearly showed that the speckle reducing bilateral filter (SRBF) outperforms most state-of-the-art filtering methods. The filter was also tested on 50 in vivo US images and its influence on a segmentation task was quantified. The results using SRBF-filtered data sets show superior performance compared with oriented anisotropic diffusion filtered images. This improvement is due to the adaptive support of SRBF and the embedded noise statistics, yielding a more homogeneous smoothing. SRBF is a fully automatic, fast and flexible algorithm, potentially suitable for a wide range of speckle noise sizes and for different medical applications (IVUS, B-mode, 3-D matrix array US).
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB;HUPBA Approved no  
  Call Number BCNPCL @ bcnpcl @ BGP2010 Serial 1314  
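The SRBF abstract above builds on the classic bilateral filter. A minimal sketch of a plain bilateral filter follows; it is not the paper's SRBF, whose key addition, adapting the range weighting to local speckle statistics, is only indicated in a comment. Parameters are illustrative.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Plain bilateral filter: spatial Gaussian weight x range Gaussian weight.

    SRBF would additionally adapt sigma_r from local speckle statistics;
    here sigma_r is a fixed, hand-picked constant.
    """
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            weight = spatial * rng_w
            out[i, j] = (weight * patch).sum() / weight.sum()
    return out
```

A constant image passes through unchanged, while small-amplitude noise is smoothed because its intensity differences stay well inside the range kernel.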
 

 
Author Marina Alberti; Simone Balocco; Xavier Carrillo; J. Mauri; Petia Radeva
  Title Automatic non-rigid temporal alignment of IVUS sequences: method and quantitative validation Type Journal Article
  Year 2013 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB
  Volume 39 Issue 9 Pages 1698-1712
  Keywords Intravascular ultrasound; Dynamic time warping; Non-rigid alignment; Sequence matching; Partial overlapping strategy  
  Abstract Clinical studies on atherosclerosis regression/progression performed by intravascular ultrasound analysis would benefit from accurate alignment of sequences of the same patient before and after clinical interventions and at follow-up. In this article, a methodology for automatic alignment of intravascular ultrasound sequences based on the dynamic time warping technique is proposed. The non-rigid alignment is adapted to the specific task by applying it to multidimensional signals describing the morphologic content of the vessel. Moreover, dynamic time warping is embedded into a framework comprising a strategy to address partial overlapping between acquisitions and a term that regularizes non-physiologic temporal compression/expansion of the sequences. Extensive validation is performed on both synthetic and in vivo data. The proposed method reaches alignment errors of approximately 0.43 mm for pairs of sequences acquired during the same intervention phase and 0.77 mm for pairs of sequences acquired at successive intervention stages.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ABC2013 Serial 2313  
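The alignment method above is built on dynamic time warping (DTW). A minimal 1-D sketch of the core DTW recurrence (not the paper's multidimensional, partial-overlap, regularized variant) is:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Classic O(n*m) dynamic program; each cell holds the cheapest
    cumulative cost of aligning prefixes a[:i] and b[:j].
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # match, insertion, or deletion step
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

The non-rigid property shows in the second test below: a sequence and a temporally stretched copy still align at zero cost.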
 

 
Author G. Zahnd; Simone Balocco; A. Serusclat; P. Moulin; M. Orkisz; D. Vray
  Title Progressive attenuation of the longitudinal kinetics in the common carotid artery: preliminary in vivo assessment Type Journal Article
  Year 2015 Publication Ultrasound in Medicine and Biology Abbreviated Journal UMB
  Volume 41 Issue 1 Pages 339-345  
  Keywords Arterial stiffness; Atherosclerosis; Common carotid artery; Longitudinal kinetics; Motion tracking; Ultrasound imaging  
  Abstract Longitudinal kinetics (LOKI) of the arterial wall consists of the shearing motion of the intima-media complex over the adventitia layer in the direction parallel to the blood flow during the cardiac cycle. The aim of this study was to investigate the local variability of LOKI amplitude along the length of the vessel. By use of a previously validated motion-estimation framework, 35 in vivo longitudinal B-mode ultrasound cine loops of healthy common carotid arteries were analyzed. Results indicated that LOKI amplitude is progressively attenuated along the length of the artery, as it is larger in regions located on the proximal side of the image (i.e., toward the heart) and smaller in regions located on the distal side of the image (i.e., toward the head), with an average attenuation coefficient of -2.5 ± 2.0%/mm. Reported for the first time in this study, this phenomenon is likely to be of great importance in improving understanding of atherosclerosis mechanisms, and has the potential to be a novel index of arterial stiffness.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB Approved no  
  Call Number Admin @ si @ ZBS2014 Serial 2556  
 

 
Author Daniel Ponsa; Joan Serrat; Antonio Lopez
  Title On-board image-based vehicle detection and tracking Type Journal Article
  Year 2011 Publication Transactions of the Institute of Measurement and Control Abbreviated Journal TIM
  Volume 33 Issue 7 Pages 783-805  
  Keywords vehicle detection  
  Abstract In this paper we present a computer vision system for daytime vehicle detection and localization, an essential step in the development of several types of advanced driver assistance systems. It has a reduced processing time and high accuracy thanks to the combination of vehicle detection with lane-markings estimation and temporal tracking of both vehicles and lane markings. Concerning vehicle detection, our main contribution is a frame scanning process that inspects images according to the geometry of image formation, together with an Adaboost-based detector that is robust to the variability of vehicle types (car, van, truck) and lighting conditions. In addition, we propose a new method to estimate the most likely three-dimensional locations of vehicles on the road ahead. With regard to the lane-markings estimation component, we have two main contributions. First, we employ an image feature different from the commonly used edges: ridges, which are better suited to this problem. Second, we adapt RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane markings to the image features. We qualitatively assess our vehicle detection system on sequences captured on several road types and under very different lighting conditions. The processed videos are available on a web page associated with this paper. A quantitative evaluation of the system has shown quite accurate results (a low number of false positives and negatives) at a reasonable computation time.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS Approved no  
  Call Number ADAS @ adas @ PSL2011 Serial 1413  
 

 
Author Ana Garcia Rodriguez; Jorge Bernal; F. Javier Sanchez; Henry Cordova; Rodrigo Garces Duran; Cristina Rodriguez de Miguel; Gloria Fernandez Esparrach
  Title Polyp fingerprint: automatic recognition of colorectal polyps’ unique features Type Journal Article
  Year 2020 Publication Surgical Endoscopy and other Interventional Techniques Abbreviated Journal SEND
  Volume 34 Issue 4 Pages 1887-1889  
  Keywords  
  Abstract BACKGROUND:
Content-based image retrieval (CBIR) is an application of machine learning used to retrieve images by similarity on the basis of features. Our objective was to develop a CBIR system that could identify images containing the same polyp ('polyp fingerprint').

METHODS:
A machine learning technique called Bag of Words was used to describe each endoscopic image containing a polyp in a unique way. The system was tested with 243 white light images belonging to 99 different polyps (for each polyp there were at least two images representing it in two different temporal moments). Images were acquired in routine colonoscopies at Hospital Clínic using high-definition Olympus endoscopes. The method provided for each image the closest match within the dataset.

RESULTS:
The system matched another image of the same polyp in 221/243 cases (91%). No differences were observed in the number of correct matches according to Paris classification (protruded: 90.7% vs. non-protruded: 91.3%) and size (< 10 mm: 91.6% vs. > 10 mm: 90%).

CONCLUSIONS:
A CBIR system can match accurately two images containing the same polyp, which could be a helpful aid for polyp image recognition.

KEYWORDS:
Artificial intelligence; Colorectal polyps; Content-based image retrieval
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MV; no menciona Approved no  
  Call Number Admin @ si @ Serial 3403  
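The study above relies on the Bag of Words technique for content-based image retrieval. A generic sketch of that pipeline, using a tiny k-means codebook and L1 histogram matching over synthetic descriptors rather than real endoscopic features, looks like this:

```python
import numpy as np

def build_codebook(descriptors, k=8, iters=20, seed=0):
    """Tiny k-means: cluster local descriptors into k visual words."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = descriptors[labels == c].mean(axis=0)
    return centers

def bow_histogram(descriptors, centers):
    """Quantize each descriptor to its nearest word; return a normalized histogram."""
    labels = np.argmin(((descriptors[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

def closest_match(query_hist, database_hists):
    """Index of the database image whose word histogram is most similar (L1)."""
    dists = [np.abs(query_hist - h).sum() for h in database_hists]
    return int(np.argmin(dists))
```

Two images of the same object produce similar word histograms, so the query retrieves the matching image rather than an unrelated one.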
 

 
Author Enric Marti; Jordi Regincos; Jaime Lopez-Krahe; Juan J. Villanueva
  Title Hand line drawing interpretation as three-dimensional objects Type Journal Article
  Year 1993 Publication Signal Processing – Intelligent systems for signal and image understanding Abbreviated Journal
  Volume 32 Issue 1-2 Pages 91-110  
  Keywords Line drawing interpretation; line labelling; scene analysis; man-machine interaction; CAD input; line extraction  
  Abstract In this paper we present a technique to interpret hand line drawings as objects in a three-dimensional space. The object domain considered is based on planar surfaces with straight edges; concretely, on an extension of the Origami world to hidden lines. The line drawing represents the object under orthographic projection and is sensed using a scanner. Our method is structured in two modules: feature extraction and feature interpretation. In the first one, image processing techniques are applied under certain tolerance margins to detect lines and junctions in the hand line drawing. The feature interpretation module is founded on line labelling techniques using a labelled junction dictionary. A labelling algorithm is proposed here. It uses relaxation techniques to reduce the number of labels incompatible with the junction dictionary, so that the convergence of solutions can be accelerated. We formulate labelling hypotheses tending to eliminate elements of two sets of labelled interpretations: those which are compatible with the dictionary but do not correspond to three-dimensional objects, and those which represent objects unlikely to be specified by means of a line drawing. New entities arise in the line drawing as a result of the extension of the Origami world. These are defined in order to state the assumptions of our method as well as to clarify the proposed algorithms. This technique is framed in a project aimed at implementing a system to create 3D objects, improving man-machine interaction in CAD systems.
  Address  
  Corporate Author Thesis  
  Publisher Elsevier North-Holland, Inc. Place of Publication Amsterdam, The Netherlands, The Netherlands Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0165-1684 ISBN Medium  
  Area Expedition Conference  
  Notes IAM;ISE; Approved no  
  Call Number IAM @ iam @ MRL1993 Serial 1611  
 

 
Author Thanh Ha Do; Salvatore Tabbone; Oriol Ramos Terrades
  Title Sparse representation over learned dictionary for symbol recognition Type Journal Article
  Year 2016 Publication Signal Processing Abbreviated Journal SP
  Volume 125 Issue Pages 36-47  
  Keywords Symbol Recognition; Sparse Representation; Learned Dictionary; Shape Context; Interest Points  
  Abstract In this paper we propose an original sparse vector model for the symbol retrieval task. More specifically, we apply the K-SVD algorithm to learn a visual dictionary based on symbol descriptors locally computed around interest points. Results on benchmark datasets show that the obtained sparse representation is competitive with state-of-the-art methods. Moreover, our sparse representation is invariant to rotation and scale transforms, and also robust to degraded images and distorted symbols. Thereby, the learned visual dictionary is able to represent instances of unseen classes of symbols.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG; 600.061; 600.077 Approved no  
  Call Number Admin @ si @ DTR2016 Serial 2946  
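The paper above uses K-SVD to learn the dictionary; the sparse-coding step that usually accompanies it is Orthogonal Matching Pursuit (OMP). A minimal OMP sketch over a fixed dictionary follows; it is illustrative only, not the authors' full pipeline, and the dictionary in the usage example is a trivial one.

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Orthogonal Matching Pursuit: sparse code of y over dictionary D.

    D has unit-norm atoms as columns. At each step, pick the atom most
    correlated with the residual, then re-fit all selected atoms jointly.
    """
    residual = y.astype(float).copy()
    idx = []
    coef = np.zeros(0)
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[idx] = -np.inf  # never reselect an atom
        idx.append(int(np.argmax(corr)))
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

With an orthonormal dictionary and a signal that is exactly 2-sparse, OMP recovers the signal perfectly in two steps.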
 

 
Author Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
  Title Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models Type Journal Article
  Year 2023 Publication Sensors – Special Issue on “Machine Learning for Autonomous Driving Perception and Prediction” Abbreviated Journal SENS
  Volume 23 Issue 2 Pages 621  
  Keywords Domain adaptation; semi-supervised learning; Semantic segmentation; Autonomous driving  
  Abstract Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training suffers from the curse of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., neither loss-function modification nor explicit feature alignment is required. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ∼13 to ∼26 mIoU points over baselines, thus establishing new state-of-the-art results.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ADAS; no proj Approved no  
  Call Number Admin @ si @ GVL2023 Serial 3705  
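The co-training idea summarized above, two models teaching each other through confident pseudo-labels on target-domain data, can be illustrated with a deliberately tiny sketch: nearest-centroid classifiers over two feature views stand in for the paper's deep segmentation models, and all data, names and parameters are synthetic.

```python
import numpy as np

def fit_centroids(X, y, classes):
    """One centroid per class: a stand-in for a real classifier."""
    return np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, centroids):
    """Return (class index, confidence); confidence = negative distance."""
    d = ((X[:, None] - centroids[None]) ** 2).sum(-1)
    return np.argmin(d, axis=1), -d.min(axis=1)

def co_train(Xs, ys, Xt, rounds=3, frac=0.3):
    """Two views of the features; each model labels target samples for the other."""
    classes = np.unique(ys)
    half = Xs.shape[1] // 2
    views = [slice(0, half), slice(half, None)]
    X_lab = [Xs[:, v] for v in views]
    y_lab = [ys.copy(), ys.copy()]
    for _ in range(rounds):
        cents = [fit_centroids(X_lab[m], y_lab[m], classes) for m in (0, 1)]
        for m in (0, 1):
            other = 1 - m
            lab, conf = predict(Xt[:, views[other]], cents[other])
            top = np.argsort(conf)[-max(1, int(frac * len(Xt))):]
            # model `other` teaches model `m` its most confident target samples
            X_lab[m] = np.vstack([X_lab[m], Xt[top][:, views[m]]])
            y_lab[m] = np.concatenate([y_lab[m], classes[lab[top]]])
    cents = [fit_centroids(X_lab[m], y_lab[m], classes) for m in (0, 1)]
    # final prediction: combine the distances of both views
    d = sum(((Xt[:, views[m]][:, None] - cents[m][None]) ** 2).sum(-1) for m in (0, 1))
    return classes[np.argmin(d, axis=1)]
```

On a shifted-target toy problem the exchanged pseudo-labels let both models adapt to the target distribution without any target ground truth.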
 

 
Author O. Fors; J. Nuñez; Xavier Otazu; A. Prades; Robert D. Cardinal
  Title Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques Type Journal Article
  Year 2010 Publication Sensors Abbreviated Journal SENS
  Volume 10 Issue 3 Pages 1743–1752  
  Keywords image processing; image deconvolution; faint stars; space debris; wavelet transform  
  Abstract In this paper we show how image deconvolution techniques can increase the ability of image sensors, such as CCD imagers, to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor used, or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and monitor dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ FNO2010 Serial 1285  
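The abstract above argues that deconvolution increases faint-source detectability. The paper's wavelet-based method is not reproduced here; instead, classic Richardson-Lucy deconvolution on a 1-D signal with a hypothetical PSF illustrates the general idea of concentrating a blurred source back toward a point.

```python
import numpy as np

def richardson_lucy(observed, psf, iters=30):
    """Richardson-Lucy deconvolution for a 1-D non-negative signal.

    Iteratively rescales the estimate by the back-projected ratio of the
    observed signal to the current reblurred estimate.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iters):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate
```

Starting from a flat guess, the iterations concentrate the flux of a blurred point source back toward its true location, so the reblurred estimate matches the observation far better than the initial guess does.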
 

 
Author Albert Ali Salah; E. Pauwels; R. Tavenard; Theo Gevers
  Title T-Patterns Revisited: Mining for Temporal Patterns in Sensor Data Type Journal Article
  Year 2010 Publication Sensors Abbreviated Journal SENS
  Volume 10 Issue 8 Pages 7496-7513  
  Keywords sensor networks; temporal pattern extraction; T-patterns; Lempel-Ziv; Gaussian mixture model; MERL motion data  
  Abstract The trend to use large amounts of simple sensors as opposed to a few complex sensors to monitor places and systems creates a need for temporal pattern mining algorithms to work on such data. The methods that try to discover re-usable and interpretable patterns in temporal event data have several shortcomings. We contrast several recent approaches to the problem, and extend the T-Pattern algorithm, which was previously applied for detection of sequential patterns in behavioural sciences. The temporal complexity of the T-pattern approach is prohibitive in the scenarios we consider. We remedy this with a statistical model to obtain a fast and robust algorithm to find patterns in temporal data. We test our algorithm on a recent database collected with passive infrared sensors with millions of events.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ SPT2010 Serial 1845  
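The T-pattern idea above asks whether event B follows event A within a critical interval more often than independence would predict. A simplified sketch of that statistical test follows; it uses a Poisson approximation and illustrative event times, and is not the paper's algorithm.

```python
import numpy as np
from math import exp, factorial

def follows_within(a_times, b_times, window):
    """Count A events followed by at least one B within `window` time units."""
    b = np.sort(np.asarray(b_times, dtype=float))
    count = 0
    for t in a_times:
        i = np.searchsorted(b, t, side="right")
        if i < len(b) and b[i] - t <= window:
            count += 1
    return count

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam)."""
    return 1.0 - sum(exp(-lam) * lam ** i / factorial(i) for i in range(k))

def t_pattern_score(a_times, b_times, window, total_time):
    """Small score => B follows A within `window` more often than chance."""
    k = follows_within(a_times, b_times, window)
    rate_b = len(b_times) / total_time
    p_single = 1.0 - np.exp(-rate_b * window)  # chance one window holds some B
    lam = len(a_times) * p_single              # expected count under independence
    return poisson_sf(k, lam)
```

A genuinely linked A-B pair scores near zero, while an unrelated pair scores near one, which is the decision the pattern-mining loop would make per candidate pair.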
 

 
Author Sergio Escalera; Xavier Baro; Jordi Vitria; Petia Radeva; Bogdan Raducanu
  Title Social Network Extraction and Analysis Based on Multimodal Dyadic Interaction Type Journal Article
  Year 2012 Publication Sensors Abbreviated Journal SENS
  Volume 12 Issue 2 Pages 1702-1719  
  Keywords  
  Abstract IF = 1.77 (2010). Social interactions are a very important component in people's lives. Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The social network is represented as an oriented graph, whose directed links are determined by the Influence Model. The links' weights are a measure of the "influence" a person has over the other. The states of the Influence Model encode automatically extracted audio/visual features from our videos, using state-of-the-art algorithms. Our results are reported in terms of the accuracy of audio/visual data fusion for speaker segmentation, and of the centrality measures used to characterize the extracted social network.
  Address  
  Corporate Author Thesis  
  Publisher Molecular Diversity Preservation International Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB; OR;HuPBA;MV Approved no  
  Call Number Admin @ si @ EBV2012 Serial 1885  