Records |
Author |
Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat |
Title |
Photometric Stereo through an Adapted Alternation Approach |
Type |
Conference Article |
Year |
2008 |
Publication |
IEEE International Conference on Image Processing |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1500-1503 |
Keywords |
|
Abstract |
|
Address |
San Diego; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS |
Approved |
no |
Call Number |
ADAS @ adas @ JSL2008d |
Serial |
1016 |
Permanent link to this record |
|
|
|
Author |
Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez |
Title |
Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios |
Type |
Conference Article |
Year |
2009 |
Publication |
12th International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1499-1506 |
Keywords |
|
Abstract |
Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows. Consequently, umbra shadows are usually detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach in different shadowed materials and illumination conditions. |
Address |
Kyoto, Japan |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1550-5499 |
ISBN |
978-1-4244-4420-5 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
|
Approved |
no |
Call Number |
ISE @ ise @ HHM2009 |
Serial |
1213 |
Permanent link to this record |
|
|
|
Author |
Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers |
Title |
Like Father, Like Son: Facial Expression Dynamics for Kinship Verification |
Type |
Conference Article |
Year |
2013 |
Publication |
14th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1497-1504 |
Keywords |
|
Abstract |
Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics in this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles. |
Address |
Sydney |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ALTRES; ISE |
Approved |
no |
Call Number |
Admin @ si @ DSG2013 |
Serial |
2366 |
Permanent link to this record |
|
|
|
Author |
Josep M. Gonfaus; Theo Gevers; Arjan Gijsenij; Xavier Roca; Jordi Gonzalez |
Title |
Edge Classification using Photo-Geometric Features |
Type |
Conference Article |
Year |
2012 |
Publication |
21st International Conference on Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1497-1500 |
Keywords |
|
Abstract |
Edges are caused by several imaging cues such as shadow, material and illumination transitions. Classification methods have been proposed which are solely based on photometric information, ignoring geometry to classify the physical nature of edges in images. In this paper, the aim is to present a novel strategy to handle both photometric and geometric information for edge classification. Photometric information is obtained through the use of quasi-invariants while geometric information is derived from the orientation and contrast of edges. Different combination frameworks are compared with a new principled approach that captures both information into the same descriptor. From large scale experiments on different datasets, it is shown that, in addition to photometric information, the geometry of edges is an important visual cue to distinguish between different edge types. It is concluded that by combining both cues the performance improves by more than 7% for shadows and highlights. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1051-4651 |
ISBN |
978-1-4673-2216-4 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICPR |
Notes |
ISE |
Approved |
no |
Call Number |
Admin @ si @ GGG2012b |
Serial |
2142 |
Permanent link to this record |
|
|
|
Author |
Marc Serra; Olivier Penacchio; Robert Benavente; Maria Vanrell; Dimitris Samaras |
Title |
The Photometry of Intrinsic Images |
Type |
Conference Article |
Year |
2014 |
Publication |
27th IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1494-1501 |
Keywords |
|
Abstract |
Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors improves significantly the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images. |
Address |
Columbus; Ohio; USA; June 2014 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
CIC; 600.052; 600.051; 600.074 |
Approved |
no |
Call Number |
Admin @ si @ SPB2014 |
Serial |
2506 |
Permanent link to this record |
|
|
|
Author |
Jordi Vitria; J. Llacer |
Title |
Reconstructing 3D light microscopic images using the EM algorithm |
Type |
Journal Article |
Year |
1996 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
|
Volume |
17 |
Issue |
14 |
Pages |
1491-1498 |
Keywords |
|
Abstract |
|
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR; MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ ViL1996 |
Serial |
74 |
Permanent link to this record |
|
|
|
Author |
Sergio Escalera; Jordi Gonzalez; Xavier Baro; Jamie Shotton |
Title |
Guest Editor Introduction to the Special Issue on Multimodal Human Pose Recovery and Behavior Analysis |
Type |
Journal Article |
Year |
2016 |
Publication |
IEEE Transactions on Pattern Analysis and Machine Intelligence |
Abbreviated Journal |
TPAMI |
Volume |
38 |
Issue |
|
Pages |
1489-1491 |
Keywords |
|
Abstract |
The sixteen papers in this special section focus on human pose recovery and behavior analysis (HuPBA). This is one of the most challenging topics in computer vision, pattern analysis, and machine learning. It is of critical importance for application areas that include gaming, computer interaction, human robot interaction, security, commerce, assistive technologies and rehabilitation, sports, sign language recognition, and driver assistance technology, to mention just a few. In essence, HuPBA requires dealing with the articulated nature of the human body, changes in appearance due to clothing, and the inherent problems of cluttered scenes, such as background artifacts, occlusions, and illumination changes. These papers represent the most recent research in this field, including new methods considering still images, image sequences, depth data, stereo vision, 3D vision, audio, and IMUs, among others. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
HuPBA; ISE; MV |
Approved |
no |
Call Number |
Admin @ si @ |
Serial |
2851 |
Permanent link to this record |
|
|
|
Author |
Dimosthenis Karatzas; Sergi Robles; Joan Mas; Farshad Nourbakhsh; Partha Pratim Roy |
Title |
ICDAR 2011 Robust Reading Competition – Challenge 1: Reading Text in Born-Digital Images (Web and Email) |
Type |
Conference Article |
Year |
2011 |
Publication |
11th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1485-1490 |
Keywords |
|
Abstract |
This paper presents the results of the first Challenge of ICDAR 2011 Robust Reading Competition. Challenge 1 is focused on the extraction of text from born-digital images, specifically from images found in Web pages and emails. The challenge was organized in terms of three tasks that look at different stages of the process: text localization, text segmentation and word recognition. In this paper we present the results of the challenge for all three tasks, and make an open call for continuous participation outside the context of ICDAR 2011. |
Address |
Beijing, China |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1520-5363 |
ISBN |
978-1-4577-1350-7 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ KRM2011 |
Serial |
1793 |
Permanent link to this record |
|
|
|
Author |
Dimosthenis Karatzas; Faisal Shafait; Seiichi Uchida; Masakazu Iwamura; Lluis Gomez; Sergi Robles; Joan Mas; David Fernandez; Jon Almazan; Lluis Pere de las Heras |
Title |
ICDAR 2013 Robust Reading Competition |
Type |
Conference Article |
Year |
2013 |
Publication |
12th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1484-1493 |
Keywords |
|
Abstract |
This report presents the final results of the ICDAR 2013 Robust Reading Competition. The competition is structured in three Challenges addressing text extraction in different application domains, namely born-digital images, real scene images and real-scene videos. The Challenges are organised around specific tasks covering text localisation, text segmentation and word recognition. The competition took place in the first quarter of 2013, and received a total of 42 submissions over the different tasks offered. This report describes the datasets and ground truth specification, details the performance evaluation protocols used and presents the final results along with a brief summary of the participating methods. |
Address |
Washington; USA; August 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1520-5363 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG; 600.056 |
Approved |
no |
Call Number |
Admin @ si @ KSU2013 |
Serial |
2318 |
Permanent link to this record |
|
|
|
Author |
N. Nayef; F. Yin; I. Bizid; H. Choi; Y. Feng; Dimosthenis Karatzas; Z. Luo; Umapada Pal; Christophe Rigaud; J. Chazalon; W. Khlif; Muhammad Muzzamil Luqman; Jean-Christophe Burie; C.L. Liu; Jean-Marc Ogier |
Title |
ICDAR2017 Robust Reading Challenge on Multi-Lingual Scene Text Detection and Script Identification – RRC-MLT |
Type |
Conference Article |
Year |
2017 |
Publication |
14th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1454-1459 |
Keywords |
|
Abstract |
Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from the Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC) which has been held since 2003 both in ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, the wide variety of scenes. The dataset is comprised of 18,000 images which contain text belonging to 9 languages. The challenge is comprised of three tasks related to text detection and script classification. We have received a total of 16 participations from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge. |
Address |
Kyoto; Japan; November 2017 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-1-5386-3586-5 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG; 600.121 |
Approved |
no |
Call Number |
Admin @ si @ NYB2017 |
Serial |
3097 |
Permanent link to this record |
|
|
|
Author |
Debora Gil; Petia Radeva |
Title |
A Regularized Curvature Flow Designed for a Selective Shape Restoration |
Type |
Journal Article |
Year |
2004 |
Publication |
IEEE Transactions on Image Processing |
Abbreviated Journal |
|
Volume |
13 |
Issue |
|
Pages |
1444-1458 |
Keywords |
Geometric flows; nonlinear filtering; shape recovery |
Abstract |
Among all filtering techniques, those based exclusively on image level sets (geometric flows) have proven to be the least sensitive to the nature of noise and the most contrast preserving. A common feature of existing curvature flows is that they penalize high curvature, regardless of the curve regularity. This constitutes a major drawback since curvature extreme values are standard descriptors of the contour geometry. We argue that an operator designed with shape recovery purposes should include a term penalizing irregularity in the curvature rather than its magnitude. To this purpose, we present a novel geometric flow that includes a function that measures the degree of local irregularity present in the curve. A main advantage is that it achieves non-trivial steady states representing a smooth model of level curves in a noisy image. Performance of our approach is compared to classical filtering techniques in terms of quality in the restored image/shape and asymptotic behavior. We empirically prove that our approach is the technique that achieves the best compromise between image quality and evolution stabilization. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
IAM; MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ GiR2004b |
Serial |
491 |
Permanent link to this record |
|
|
|
Author |
Chun Yang; Xu-Cheng Yin; Hong Yu; Dimosthenis Karatzas; Yu Cao |
Title |
ICDAR2017 Robust Reading Challenge on Text Extraction from Biomedical Literature Figures (DeTEXT) |
Type |
Conference Article |
Year |
2017 |
Publication |
14th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1444-1447 |
Keywords |
|
Abstract |
Hundreds of millions of figures are available in the biomedical literature, representing important biomedical experimental evidence. Since text is a rich source of information in figures, automatically extracting such text may assist in the task of mining figure information and understanding biomedical documents. Unlike images in the open domain, biomedical figures present a variety of unique challenges. For example, biomedical figures typically have complex layouts, small font sizes, short text, specific text, complex symbols and irregular text arrangements. This paper presents the final results of the ICDAR 2017 Competition on Text Extraction from Biomedical Literature Figures (ICDAR2017 DeTEXT Competition), which aims at extracting (detecting and recognizing) text from biomedical literature figures. Similar to text extraction from scene images and web pictures, ICDAR2017 DeTEXT Competition includes three major tasks, i.e., text detection, cropped word recognition and end-to-end text recognition. Here, we describe in detail the data set, tasks, evaluation protocols and participants of this competition, and report the performance of the participating methods. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-1-5386-3586-5 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG; 600.121 |
Approved |
no |
Call Number |
Admin @ si @ YCY2017 |
Serial |
3098 |
Permanent link to this record |
|
|
|
Author |
M. Visani; V.C. Kieu; Alicia Fornes; N. Journet |
Title |
The ICDAR 2013 Music Scores Competition: Staff Removal |
Type |
Conference Article |
Year |
2013 |
Publication |
12th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1439-1443 |
Keywords |
|
Abstract |
The first competition on music scores that was organized at ICDAR in 2011 awoke the interest of researchers, who participated in both staff removal and writer identification tasks. In this second edition, we focus on the staff removal task and simulate a real case scenario: old music scores. For this purpose, we have generated a new set of images using two kinds of degradations: local noise and 3D distortions. This paper describes the dataset, distortion methods, evaluation metrics, the participants' methods and the obtained results. |
Address |
Washington; USA; August 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1520-5363 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG; 600.045; 600.061 |
Approved |
no |
Call Number |
Admin @ si @ VKF2013 |
Serial |
2338 |
Permanent link to this record |
|
|
|
Author |
Josep Llados; Horst Bunke; Enric Marti |
Title |
Finding rotational symmetries by cyclic string matching |
Type |
Journal Article |
Year |
1997 |
Publication |
Pattern Recognition Letters |
Abbreviated Journal |
PRL |
Volume |
18 |
Issue |
14 |
Pages |
1435-1442 |
Keywords |
Rotational symmetry; Reflectional symmetry; String matching |
Abstract |
Symmetry is an important shape feature. In this paper, a simple and fast method to detect perfect and distorted rotational symmetries of 2D objects is described. The boundary of a shape is polygonally approximated and represented as a string. Rotational symmetries are found by cyclic string matching between two identical copies of the shape string. The set of minimum cost edit sequences that transform the shape string to a cyclically shifted version of itself define the rotational symmetry and its order. Finally, a modification of the algorithm is proposed to detect reflectional symmetries. Some experimental results are presented to show the reliability of the proposed algorithm. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
DAG; IAM |
Approved |
no |
Call Number |
IAM @ iam @ LBM1997a |
Serial |
1562 |
Permanent link to this record |
|
|
|
Author |
Jordi Gonzalez; Dani Rowe; J. Varona; Xavier Roca |
Title |
Understanding Dynamic Scenes based on Human Sequence Evaluation |
Type |
Journal Article |
Year |
2009 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
Volume |
27 |
Issue |
10 |
Pages |
1433-1444 |
Keywords |
Image Sequence Evaluation; High-level processing of monitored scenes; Segmentation and tracking in complex scenes; Event recognition in dynamic scenes; Human motion understanding; Human behaviour interpretation; Natural-language text generation; Realistic demonstrators |
Abstract |
In this paper, a Cognitive Vision System (CVS) is presented, which explains the human behaviour of monitored scenes using natural-language texts. This cognitive analysis of human movements recorded in image sequences is here referred to as Human Sequence Evaluation (HSE) which defines a set of transformation modules involved in the automatic generation of semantic descriptions from pixel values. In essence, the trajectories of human agents are obtained to generate textual interpretations of their motion, and also to infer the conceptual relationships of each agent w.r.t. its environment. For this purpose, a human behaviour model based on Situation Graph Trees (SGTs) is considered, which permits both bottom-up (hypothesis generation) and top-down (hypothesis refinement) analysis of dynamic scenes. The resulting system prototype interprets different kinds of behaviour and reports textual descriptions in multiple languages. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ISE |
Approved |
no |
Call Number |
ISE @ ise @ GRV2009 |
Serial |
1211 |
Permanent link to this record |