|
Records |
Links |
|
Author |
Hamdi Dibeklioglu; M.O. Hortas; I. Kosunen; P. Zuzánek; Albert Ali Salah; Theo Gevers |
|
|
Title |
Design and implementation of an affect-responsive interactive photo frame |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Journal on Multimodal User Interfaces |
Abbreviated Journal |
JMUI |
|
|
Volume |
4 |
Issue |
2 |
Pages |
81-95 |
|
|
Keywords |
|
|
|
Abstract |
This paper describes an affect-responsive interactive photo-frame application that offers its user a different experience with every use. It relies on visual analysis of the activity levels and facial expressions of its users to select responses from a database of short video segments. This ever-growing database is automatically prepared by an offline analysis of user-uploaded videos. The resulting system matches its user’s affect along the dimensions of valence and arousal, and gradually adapts its response to each specific user. In an extended mode, two such systems are coupled and feed each other with visual content. The strengths and weaknesses of the system are assessed through a usability study, where a Wizard-of-Oz response logic is contrasted with the fully automatic system that uses affective and activity-based features, either alone or in tandem. |
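As an illustrative sketch only (not the authors' implementation), the response-selection step the abstract describes can be reduced to a nearest-neighbour lookup in valence-arousal space; the segment names and affect scores below are invented:

```python
import math

# Hypothetical database of video segments, each annotated offline with
# (valence, arousal) scores in [-1, 1]; names and values are illustrative.
segments = {
    "beach.mp4": (0.8, 0.3),
    "party.mp4": (0.7, 0.9),
    "rain.mp4":  (-0.4, 0.2),
}

def pick_response(user_valence, user_arousal):
    """Return the segment whose affect is closest (Euclidean) to the user's."""
    return min(
        segments,
        key=lambda name: math.dist(segments[name], (user_valence, user_arousal)),
    )

print(pick_response(0.75, 0.8))  # party.mp4 lies nearest to (0.75, 0.8)
```

Gradual adaptation to a specific user would then amount to re-weighting this distance per user, a detail the abstract leaves open.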
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer-Verlag |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1783-7677 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ DHK2011 |
Serial |
1842 |
|
Permanent link to this record |
|
|
|
|
Author |
A. Toet; M. Henselmans; M.P. Lucassen; Theo Gevers |
|
|
Title |
Emotional effects of dynamic textures |
Type |
Journal Article |
|
Year |
2011 |
Publication |
i-Perception |
Abbreviated Journal |
iPER |
|
|
Volume |
2 |
Issue |
9 |
Pages |
969-991 |
|
|
Keywords |
|
|
|
Abstract |
This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (e.g., music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or a variety of other media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of the dynamics had arousing, dominant, and unpleasant effects. The amplitude of the dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures’ area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
2041-6695 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ THL2011 |
Serial |
1843 |
|
Permanent link to this record |
|
|
|
|
Author |
Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij |
|
|
Title |
Texture Affects Color Emotion |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Color Research & Application |
Abbreviated Journal |
CRA |
|
|
Volume |
36 |
Issue |
6 |
Pages |
426-436 |
|
|
Keywords |
color;texture;color emotion;observer variability;ranking |
|
|
Abstract |
Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm–cool, masculine–feminine, hard–soft, and heavy–light. Three sample types of increasing visual complexity are used: UC, grayscale textures, and color textures (CTs). To assess intraobserver variability, the experiment is repeated after 1 week. Our results show that texture fully determines the responses on the hard–soft scale, and plays a progressively smaller role on the masculine–feminine, heavy–light, and warm–cool scales. Using some 25,000 observer responses, we derive color emotion functions that predict the group-averaged scale responses from the samples' color and texture parameters. For UC samples, the accuracy of our functions is significantly higher (average R2 = 0.88) than that of previously reported functions applied to our data. The functions derived for CT samples have an accuracy of R2 = 0.80. We conclude that when textured samples are used in color emotion studies, the psychological responses may be strongly affected by texture. |
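As a hedged illustration of the kind of model the abstract describes, a linear "color emotion function" can be fit by least squares and scored with R2. The three sample parameters and all data below are invented stand-ins for the paper's color/texture parameters, not its actual variables:

```python
import numpy as np

# Invented data: 40 samples described by 3 parameters, with a noisy linear
# "scale response" (e.g. a warm-cool rating) to recover.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(40, 3))
true_w = np.array([0.2, -0.7, 0.5])
y = X @ true_w + rng.normal(0, 0.01, size=40)

# Least-squares fit with an intercept term, then R^2 -- the accuracy
# measure quoted in the abstract (R2 = 0.88 for UC, 0.80 for CT samples).
A = np.c_[X, np.ones(40)]
w, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(round(r2, 2))
```

With the small noise used here the fit is nearly exact; the paper's lower R2 values reflect real observer variability.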
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ALTRES;ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ LGG2011 |
Serial |
1844 |
|
Permanent link to this record |
|
|
|
|
Author |
Sergio Escalera; Ana Puig; Oscar Amoros; Maria Salamo |
|
|
Title |
Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Computer Graphics Forum |
Abbreviated Journal |
CGF |
|
|
Volume |
30 |
Issue |
7 |
Pages |
2107-2115 |
|
|
Keywords |
|
|
|
Abstract |
JCR Impact Factor 2010: 1.455 (25/99)
In volume visualization, defining the regions of interest is inherently an iterative, trial-and-error process of finding the best parameters with which to classify and render the final image. Generally, the user requires considerable expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label multiple regions of interest on demand. These methods form a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the machine learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of AdaBoost binary classifiers on a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples, and the outputs are combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism, we implemented the testing stage in GPU-OpenCL. Empirical results on different data sets for several volume structures show high computational performance and classification accuracy. |
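The ECOC decoding step the abstract refers to can be sketched as follows. This is a minimal illustration with a hand-built coding matrix and precomputed binary outputs, not the paper's AdaBoost/OpenCL implementation:

```python
import numpy as np

# ECOC coding matrix for 4 classes: rows are class codewords, columns are
# dichotomizers (AdaBoost classifiers in the paper). This hand-built code
# has minimum Hamming distance 3, so it corrects any single flipped output.
M = np.array([
    [ 1,  1,  1,  1,  1],
    [-1, -1, -1,  1,  1],
    [-1,  1,  1, -1, -1],
    [ 1, -1, -1, -1, -1],
])

def ecoc_decode(outputs, M):
    """Label = class whose codeword is nearest (Hamming) to the outputs."""
    return int(np.argmin(np.sum(outputs != M, axis=1)))

# Class 2's codeword with its first dichotomizer's vote flipped:
# decoding still recovers class 2.
print(ecoc_decode(np.array([1, 1, 1, -1, -1]), M))  # -> 2
```

The paper's parallelization insight is that each column (dichotomizer) is evaluated independently per sample, which maps directly onto GPU work-items.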
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ EPA2011 |
Serial |
1881 |
|
Permanent link to this record |
|
|
|
|
Author |
Laura Igual; Joan Carles Soliva; Antonio Hernandez; Sergio Escalera; Xavier Jimenez ; Oscar Vilarroya; Petia Radeva |
|
|
Title |
A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder |
Type |
Journal Article |
|
Year |
2011 |
Publication |
BioMedical Engineering Online |
Abbreviated Journal |
BEO |
|
|
Volume |
10 |
Issue |
105 |
Pages |
1-23 |
|
|
Keywords |
Brain caudate nucleus; segmentation; MRI; atlas-based strategy; Graph Cut framework |
|
|
Abstract |
Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations.
Method
We present CaudateCut: a new fully-automatic method for segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy-function data and boundary potentials. In particular, we exploit intensity and geometry information, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure.
Results
We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and that are reliable for analyzing the neuroanatomical abnormalities that differentiate pediatric ADHD patients from healthy controls. |
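The voxel-by-voxel overlap score quoted in the results can be illustrated with the Dice coefficient on toy binary masks. Note the assumption: the abstract does not say which overlap measure yields the 80.75% figure, so Dice is used here only as a representative choice:

```python
import numpy as np

def dice(seg, gold):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary volumes."""
    seg, gold = seg.astype(bool), gold.astype(bool)
    return 2 * np.logical_and(seg, gold).sum() / (seg.sum() + gold.sum())

# Toy 4x4x4 volume: an 8-voxel "caudate" as gold standard, and an
# automatic segmentation that misses one of those voxels.
gold = np.zeros((4, 4, 4), dtype=bool)
gold[1:3, 1:3, 1:3] = True
seg = gold.copy()
seg[1, 1, 1] = False

print(round(dice(seg, gold), 3))  # -> 0.933
```

In practice such scores are averaged over subjects, which is how a single "mean overlap" summary like 80.75% arises.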
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1475-925X |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB;HuPBA |
Approved |
no |
|
|
Call Number |
Admin @ si @ ISH2011 |
Serial |
1882 |
|
Permanent link to this record |
|
|
|
|
Author |
Mario Rojas; David Masip; A. Todorov; Jordi Vitria |
|
|
Title |
Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models |
Type |
Journal Article |
|
Year |
2011 |
Publication |
PLoS ONE |
Abbreviated Journal |
Plos |
|
|
Volume |
6 |
Issue |
8 |
Pages |
e23323 |
|
|
Keywords |
|
|
|
Abstract |
JCR Impact Factor 2010: 4.411
Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State of the art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain type of holistic descriptions of the face appearance; and c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Public Library of Science |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ RMT2011 |
Serial |
1883 |
|
Permanent link to this record |
|
|
|
|
Author |
Miguel Angel Bautista; Sergio Escalera; Xavier Baro; Oriol Pujol; Jordi Vitria; Petia Radeva |
|
|
Title |
On the Design of Low Redundancy Error-Correcting Output Codes |
Type |
Book Chapter |
|
Year |
2011 |
Publication |
Ensembles in Machine Learning Applications |
Abbreviated Journal |
|
|
|
Volume |
373 |
Issue |
2 |
Pages |
21-38 |
|
|
Keywords |
|
|
|
Abstract |
The classification of a large number of object categories is a challenging trend in the pattern recognition field. In the literature, this is often addressed using an ensemble of classifiers. In this scope, the Error-Correcting Output Codes (ECOC) framework has proven to be a powerful tool for combining classifiers. However, most state-of-the-art ECOC approaches use a linear or exponential number of classifiers, making the discrimination of a large number of classes infeasible. In this paper, we explore and propose a compact design of ECOC in terms of the number of classifiers. Evolutionary computation is used to tune the parameters of the classifiers and to search for the best compact ECOC code configuration. Results over several public UCI data sets and different multi-class computer vision problems show that the proposed methodology obtains comparable (even better) results than state-of-the-art ECOC methodologies with far fewer dichotomizers. |
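A minimal sketch of the "compact" end of the design space the chapter explores: N classes can in principle be coded with only ceil(log2 N) dichotomizers by using each class index's binary representation as its codeword. The chapter's evolutionary tuning of such codes is omitted here; this only illustrates why a compact code needs far fewer columns than one-vs-all:

```python
import math

def compact_code(n_classes):
    """Return an n_classes x ceil(log2 n_classes) coding matrix in {-1, +1},
    where row c is the binary representation of class index c."""
    n_bits = math.ceil(math.log2(n_classes))
    return [[1 if (c >> b) & 1 else -1 for b in range(n_bits)]
            for c in range(n_classes)]

for row in compact_code(5):
    print(row)  # 5 classes -> 3 dichotomizers, vs. 5 for one-vs-all
```

The trade-off, which motivates the evolutionary search, is that such minimal codes have no error-correcting margin, so which bits are assigned to which class boundaries matters a great deal.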
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1860-949X |
ISBN |
978-3-642-22909-1 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR;HuPBA;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ BEB2011b |
Serial |
1886 |
|
Permanent link to this record |
|
|
|
|
Author |
Michal Drozdzal; Santiago Segui; Petia Radeva; Jordi Vitria; Laura Igual |
|
|
Title |
System and Method for Displaying Motility Events in an in Vivo Image Stream |
Type |
Patent |
|
Year |
2011 |
Publication |
US 61/592,786 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Given Imaging |
|
|
Corporate Author |
US Patent Office |
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; OR;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ DSR2011 |
Serial |
1897 |
|
Permanent link to this record |
|
|
|
|
Author |
Alejandro Gonzalez Alzate |
|
|
Title |
Evaluation of spatiotemporal descriptors for pedestrian detection in video sequences |
Type |
Report |
|
Year |
2011 |
Publication |
CVC Technical Report |
Abbreviated Journal |
|
|
|
Volume |
166 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Bellaterra (Spain) |
|
|
Corporate Author |
Computer Vision Center |
Thesis |
Master's thesis |
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ Gon2011 |
Serial |
1932 |
|
Permanent link to this record |
|
|
|
|
Author |
Yainuvis Socarras |
|
|
Title |
Image segmentation for improving pedestrian detection |
Type |
Report |
|
Year |
2011 |
Publication |
CVC Technical Report |
Abbreviated Journal |
|
|
|
Volume |
167 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Bellaterra (Spain) |
|
|
Corporate Author |
Computer Vision Center |
Thesis |
Master's thesis |
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ Soc2011 |
Serial |
1933 |
|
Permanent link to this record |
|
|
|
|
Author |
Maria del Camp Davesa |
|
|
Title |
Human action categorization in image sequences |
Type |
Report |
|
Year |
2011 |
Publication |
CVC Technical Report |
Abbreviated Journal |
|
|
|
Volume |
169 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Bellaterra (Spain) |
|
|
Corporate Author |
Computer Vision Center |
Thesis |
Master's thesis |
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ Dav2011 |
Serial |
1934 |
|
Permanent link to this record |
|
|
|
|
Author |
Mirko Arnold; Stephan Ameling; Anarta Ghosh; Gerard Lacey |
|
|
Title |
Quality Improvement of Endoscopy Videos |
Type |
Conference Article |
|
Year |
2011 |
Publication |
Proceedings of the 8th IASTED International Conference on Biomedical Engineering |
Abbreviated Journal |
|
|
|
Volume |
723 |
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
800 |
Expedition |
|
Conference |
|
|
|
Notes |
MV |
Approved |
no |
|
|
Call Number |
fernando @ fernando @ |
Serial |
2426 |
|
Permanent link to this record |
|
|
|
|
Author |
Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera |
|
|
Title |
Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial |
Type |
Journal Article |
|
Year |
2011 |
Publication |
ReVisión |
Abbreviated Journal |
|
|
|
Volume |
4 |
Issue |
1 |
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Oral communication and expression is a particularly relevant competence in the EHEA (European Higher Education Area). Nevertheless, in many higher-education programmes the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it using machine learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results show the viability of the system for suggesting factors that contribute both to successful communication and to the evaluation criteria. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1989-1199 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
HuPBA; MILAB;MV |
Approved |
no |
|
|
Call Number |
Admin @ si @ PGB2011d |
Serial |
2514 |
|
Permanent link to this record |