Records
Author Fahad Shahbaz Khan
Title Coloring bag-of-words based image representations Type Book Whole
Year 2011 Publication PhD Thesis, Universitat Autonoma de Barcelona-CVC Abbreviated Journal
Volume Issue Pages
Keywords
Abstract Put succinctly, the bag-of-words based image representation is the most successful approach for object and scene recognition. Within the bag-of-words framework, the optimal fusion of multiple cues, such as shape, texture and color, remains an active research domain. There exist two main approaches to combining color and shape information within the bag-of-words framework. The first approach, called early fusion, fuses color and shape at the feature level, producing a joint color-shape vocabulary. The second approach, called late fusion, concatenates the histogram representations of color and shape, obtained independently. In the first part of this thesis, we analyze the theoretical implications of both early and late feature fusion. We demonstrate that both approaches are suboptimal for a subset of object categories. Consequently, we propose a novel method for recognizing object categories from multiple cues that processes the shape and color cues separately and combines them by modulating the shape features with category-specific color attention. Color is used to compute bottom-up and top-down attention maps. Subsequently, the color attention maps are used to modulate the weights of the shape features: shape features are given more weight in regions with higher attention and vice versa (a code sketch of this weighting follows this record). The approach is tested on several benchmark object recognition data sets and the results clearly demonstrate the effectiveness of our proposed method. In the second part of the thesis, we investigate the problem of obtaining compact spatial pyramid representations for object and scene recognition. Spatial pyramids have been successfully applied to incorporate spatial information into bag-of-words based image representations. However, a major drawback of spatial pyramids is that they lead to high-dimensional image representations. We present a novel framework for obtaining a compact pyramid representation. The approach reduces the size of a high-dimensional pyramid representation by up to an order of magnitude without any significant reduction in accuracy. Moreover, we also investigate the optimal combination of multiple features, such as color and shape, within the context of our compact pyramid representation. Finally, we describe a novel technique to build discriminative visual words from multiple cues learned independently from training images. To this end, we use an information-theoretic vocabulary compression technique to find discriminative combinations of visual cues; the resulting visual vocabulary is compact, has the cue binding property, and supports individual weighting of cues in the final image representation. The approach is tested on standard object recognition data sets. The results obtained clearly demonstrate the effectiveness of our approach.
Address
Corporate Author Thesis Ph.D. thesis
Publisher Place of Publication Editor Joost Van de Weijer; Maria Vanrell
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes CIC Approved no
Call Number Admin @ si @ Kha2011 Serial 1838
Permanent link to this record
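A minimal sketch of the color-attention weighting described in the record above: each local feature's shape-word vote is scaled by a category-specific color attention weight before histogramming. All names, shapes, and the attention distribution are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def color_attention_histogram(shape_words, color_words, p_color_given_class,
                              n_shape_words):
    """Class-specific bag-of-words histogram where each local feature's
    shape word is weighted by top-down color attention.

    shape_words        : (N,) int array, shape-vocabulary index per feature
    color_words        : (N,) int array, color-vocabulary index per feature
    p_color_given_class: (C,) array, p(color word | class) used as attention
    n_shape_words      : size of the shape vocabulary
    """
    attention = p_color_given_class[color_words]   # attention weight per feature
    hist = np.zeros(n_shape_words)
    np.add.at(hist, shape_words, attention)        # weighted votes
    return hist / max(hist.sum(), 1e-12)           # normalise

# Toy usage: 5 local features, 4 shape words, 3 color words.
rng = np.random.default_rng(0)
sw = rng.integers(0, 4, size=5)
cw = rng.integers(0, 3, size=5)
p_c = np.array([0.7, 0.2, 0.1])                    # assumed attention values
print(color_attention_histogram(sw, cw, p_c, 4))
```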
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Andrew Bagdanov; Maria Vanrell
Title Portmanteau Vocabularies for Multi-Cue Image Representation Type Conference Article
Year 2011 Publication 25th Annual Conference on Neural Information Processing Systems Abbreviated Journal
Volume Issue Pages
Keywords
Abstract We describe a novel technique for feature combination in the bag-of-words model of image classification. Our approach builds discriminative compound words from primitive cues learned independently from training images. Our main observation is that modeling joint-cue distributions independently is more statistically robust for typical classification problems than attempting to empirically estimate the dependent joint-cue distribution directly. We use information-theoretic vocabulary compression to find discriminative combinations of cues, and the resulting vocabulary of portmanteau words is compact, has the cue binding property, and supports individual weighting of cues in the final image representation (a toy sketch of this compression follows this record). State-of-the-art results on both the Oxford Flower-102 and Caltech-UCSD Bird-200 datasets demonstrate the effectiveness of our technique compared to other, significantly more complex approaches to multi-cue image representation.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference NIPS
Notes CIC Approved no
Call Number Admin @ si @ KWB2011 Serial 1865
Permanent link to this record
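A toy sketch of the portmanteau idea in the record above: compound (shape, color) words are formed under a per-class independence assumption and then compressed by greedily merging the pair of words whose merge loses the least mutual information with the class label. This is a simplified stand-in for the information-theoretic compression the paper uses; all vocabulary sizes and distributions are invented.

```python
import itertools
import numpy as np

def mutual_information(p_wy):
    """I(W; Y) for a joint distribution over words (rows) and classes (cols)."""
    p_w = p_wy.sum(axis=1, keepdims=True)
    p_y = p_wy.sum(axis=0, keepdims=True)
    nz = p_wy > 0
    return (p_wy[nz] * np.log(p_wy[nz] / (p_w @ p_y)[nz])).sum()

def compress(p_wy, n_target):
    """Greedily merge rows (compound words) until n_target words remain."""
    rows = [p_wy[i] for i in range(p_wy.shape[0])]
    while len(rows) > n_target:
        i, j = max(itertools.combinations(range(len(rows)), 2),
                   key=lambda ij: mutual_information(
                       np.array([r for k, r in enumerate(rows) if k not in ij]
                                + [rows[ij[0]] + rows[ij[1]]])))
        merged = rows[i] + rows[j]
        rows = [r for k, r in enumerate(rows) if k not in (i, j)] + [merged]
    return np.array(rows)

# Toy per-class cue distributions, assumed independent per class,
# giving 3 shape words x 2 color words = 6 compound words.
p_s = np.array([[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]])   # p(shape word | class)
p_c = np.array([[0.8, 0.2], [0.3, 0.7]])             # p(color word | class)
p_y = np.array([0.5, 0.5])
p_wy = np.stack([np.outer(p_s[y], p_c[y]).ravel() * p_y[y]
                 for y in range(2)], axis=1)          # (6 words, 2 classes)
print(compress(p_wy, n_target=3).round(3))
```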
 

 
Author Naila Murray; Sandra Skaff; Luca Marchesotti; Florent Perronnin
Title Towards Automatic Concept Transfer Type Conference Article
Year 2011 Publication Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering Abbreviated Journal
Volume Issue Pages 167-176
Keywords chromatic modeling, color concepts, color transfer, concept transfer
Abstract This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept (a code sketch of this mapping step follows this record). Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
Address
Corporate Author Thesis
Publisher ACM Press Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-4503-0907-3 Medium
Area Expedition Conference NPAR
Notes CIC Approved no
Call Number Admin @ si @ MSM2011 Serial 1866
Permanent link to this record
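A minimal sketch of the mapping step described in the record above: two small color models (cluster means in Lab with weights) are matched with an Earth Mover's Distance transport plan, and each image cluster is shifted toward the concept clusters it maps to, scaled by a user-controlled strength. The convex clustering and pruning of the paper are not reproduced, and all cluster values are invented.

```python
import numpy as np
from scipy.optimize import linprog

def emd_flow(w_src, w_dst, cost):
    """Solve the transportation problem; returns the optimal flow matrix."""
    n, m = cost.shape
    A_eq, b_eq = [], []
    for i in range(n):                         # row sums = source weights
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1
        A_eq.append(row); b_eq.append(w_src[i])
    for j in range(m):                         # column sums = target weights
        col = np.zeros(n * m); col[j::m] = 1
        A_eq.append(col); b_eq.append(w_dst[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return res.x.reshape(n, m)

# Toy Lab cluster means and weights for an image and a "concept" palette.
img_mu = np.array([[60., 10., 20.], [40., -5., 5.]]); img_w = np.array([.6, .4])
cpt_mu = np.array([[55., 40., 30.], [35., 20., 10.]]); cpt_w = np.array([.5, .5])
cost = np.linalg.norm(img_mu[:, None] - cpt_mu[None], axis=2)
flow = emd_flow(img_w, cpt_w, cost)
strength = 0.8                                 # single intensity parameter
target = (flow @ cpt_mu) / flow.sum(axis=1, keepdims=True)
new_mu = (1 - strength) * img_mu + strength * target
print(new_mu.round(1))
```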
 

 
Author Jordi Roca; C. Alejandro Parraga; Maria Vanrell
Title Categorical Focal Colours are Structurally Invariant Under Illuminant Changes Type Conference Article
Year 2011 Publication European Conference on Visual Perception Abbreviated Journal
Volume Issue Pages 196
Keywords
Abstract The visual system perceives the colour of surfaces as approximately constant under changes of illumination. In this work, we investigate how stable the perception of categorical “focal” colours and their interrelations are under varying illuminants and simple chromatic backgrounds. It has been proposed that the best examples of colour categories across languages cluster in small regions of colour space and are restricted to a set of 11 basic terms (Kay and Regier, 2003 Proceedings of the National Academy of Sciences of the USA 100 9085–9089). Following this, we developed a psychophysical paradigm that exploits the ability of subjects to reliably reproduce the most representative examples of each category, adjusting multiple test patches embedded in a coloured Mondrian. The experiment was run on a CRT monitor (inside a dark room) under various simulated illuminants. We modelled the recorded data for each subject and adaptation state as a 3D interconnected structure (graph) in Lab space, whose nodes were the subject's focal colours at each adaptation state. The model allowed us to obtain a better distance measure between focal structures under different illuminants (a code sketch of such a structural distance follows this record). We found that perceptual focal structures tend to be preserved better than the structures of the physical “ideal” colours under illuminant changes.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Perception 40 Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ECVP
Notes CIC Approved no
Call Number Admin @ si @ RPV2011 Serial 1867
Permanent link to this record
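A small sketch of the structural comparison described in the record above: focal colours under two adaptation states are treated as node sets in Lab space, and a "structure" distance compares their pairwise inter-node distances rather than raw node positions, so a near-rigid shift of the whole set scores as invariant. This is an illustrative reduction of the graph model, with made-up values.

```python
import numpy as np

def pairwise(points):
    """All pairwise Euclidean distances between rows of points."""
    d = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(d, axis=2)

def structure_distance(focal_a, focal_b):
    """Mean absolute change of inter-focal distances between two states."""
    da, db = pairwise(focal_a), pairwise(focal_b)
    iu = np.triu_indices(len(focal_a), k=1)
    return np.abs(da[iu] - db[iu]).mean()

# Three toy focal colours in Lab under two illuminants.
state1 = np.array([[55., 60., 40.], [60., -50., 45.], [45., 10., -45.]])
state2 = state1 + np.array([3., -4., 5.])   # rigid shift: structure preserved
print(structure_distance(state1, state2))   # ~0 -> structurally invariant
```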
 

 
Author Jaime Moreno; Xavier Otazu
Title Image compression algorithm based on Hilbert scanning of embedded quadTrees: an introduction of the Hi-SET coder Type Conference Article
Year 2011 Publication IEEE International Conference on Multimedia and Expo Abbreviated Journal
Volume Issue Pages 1-6
Keywords
Abstract In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; as Salomon notes [1, p. 614], another, perhaps unique, feature of embedding is that the decoder achieves the best possible quality for the number of bits read at any point during decoding. Hi-SET also possesses this latter feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transform structures such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm is composed of three general steps and uses just a list of significant pixels (a sketch of the Hilbert scanning component follows this record). The implementation of the proposed coder is developed for gray-scale and color image compression. Hi-SET compressed images are, on average, 6.20 dB better than those obtained by other compression techniques based on Hilbert scanning. Moreover, Hi-SET improves image quality by 1.39 dB and 1.00 dB in gray-scale and color compression, respectively, when compared with the JPEG2000 coder.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1945-7871 ISBN 978-1-61284-348-3 Medium
Area Expedition Conference ICME
Notes CIC Approved no
Call Number Admin @ si @ MoO2011a Serial 2176
Permanent link to this record
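A sketch of the Hilbert-scanning component named in the record above: converting a distance along the Hilbert curve to (x, y) coordinates lets a coder linearise a 2^k x 2^k block of transform coefficients along the fractal while preserving spatial locality. The quadtree and significance machinery of Hi-SET is not reproduced; this is only the standard curve-index conversion.

```python
import numpy as np

def d2xy(n, d):
    """Hilbert-curve index d -> (x, y) on an n x n grid (n a power of two)."""
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                        # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

def hilbert_scan(block):
    """Return the elements of a square block in Hilbert-curve order."""
    n = block.shape[0]
    return np.array([block[d2xy(n, d)] for d in range(n * n)])

coeffs = np.arange(16).reshape(4, 4)       # stand-in for wavelet coefficients
print(hilbert_scan(coeffs))
```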
 

 
Author Jaime Moreno; Xavier Otazu
Title Image coder based on Hilbert scanning of embedded quadTrees Type Conference Article
Year 2011 Publication Data Compression Conference Abbreviated Journal
Volume Issue Pages 470-470
Keywords
Abstract In this work we present an effective and computationally simple algorithm for image compression based on Hilbert Scanning of Embedded quadTrees (Hi-SET). It represents an image as an embedded bitstream along a fractal function. Embedding is an important feature of modern image compression algorithms; as Salomon notes [1, p. 614], another, perhaps unique, feature of embedding is that the decoder achieves the best possible quality for the number of bits read at any point during decoding. Hi-SET also possesses this latter feature. Furthermore, the coder is based on a quadtree partition strategy which, applied to image transform structures such as the discrete cosine or wavelet transform, yields energy clustering in both frequency and space. The coding algorithm is composed of three general steps and uses just a list of significant pixels (a sketch of the quadtree significance pass follows this record).
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DCC
Notes CIC Approved no
Call Number Admin @ si @ MoO2011b Serial 2177
Permanent link to this record
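A sketch of the quadtree partition strategy named in the record above: a block is tested for significance against a bitplane threshold and, if significant, split into four quadrants, recursing down to single coefficients. The emitted symbols are illustrative, not the Hi-SET bitstream.

```python
import numpy as np

def significance_pass(block, threshold, x0=0, y0=0, out=None):
    """Emit (x, y, size, significant) decisions of one quadtree pass."""
    if out is None:
        out = []
    n = block.shape[0]
    if np.abs(block).max() < threshold:
        out.append((x0, y0, n, False))     # whole block insignificant
        return out
    if n == 1:
        out.append((x0, y0, 1, True))      # significant coefficient
        return out
    h = n // 2
    for dx, dy in ((0, 0), (0, h), (h, 0), (h, h)):
        significance_pass(block[dx:dx + h, dy:dy + h], threshold,
                          x0 + dx, y0 + dy, out)
    return out

coeffs = np.array([[0, 0, 9, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 5]])
for sym in significance_pass(coeffs, threshold=4):
    print(sym)
```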
 

 
Author Hamdi Dibeklioglu; M.O. Hortas; I. Kosunen; P. Zuzánek; Albert Ali Salah; Theo Gevers
Title Design and implementation of an affect-responsive interactive photo frame Type Journal
Year 2011 Publication Journal on Multimodal User Interfaces Abbreviated Journal JMUI
Volume 4 Issue 2 Pages 81-95
Keywords
Abstract This paper describes an affect-responsive interactive photo-frame application that offers its user a different experience with every use. It relies on visual analysis of activity levels and facial expressions of its users to select responses from a database of short video segments. This ever-growing database is automatically prepared by an offline analysis of user-uploaded videos. The resulting system matches its user's affect along dimensions of valence and arousal, and gradually adapts its response to each specific user (a sketch of the affect-based segment selection follows this record). In an extended mode, two such systems are coupled and feed each other with visual content. The strengths and weaknesses of the system are assessed through a usability study, where a Wizard-of-Oz response logic is contrasted with the fully automatic system that uses affective and activity-based features, either alone or in tandem.
Address
Corporate Author Thesis
Publisher Springer-Verlag Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1783-7677 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ DHK2011 Serial 1842
Permanent link to this record
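A minimal sketch of the response-selection logic described in the record above: the user's estimated (valence, arousal) point is matched to the closest pre-analysed video segment in the database. The visual analysis that produces the affect estimates is outside this sketch, and all values and file names are invented.

```python
import numpy as np

def select_segment(user_affect, segment_affects, segment_ids):
    """Nearest neighbour in valence-arousal space."""
    d = np.linalg.norm(segment_affects - user_affect, axis=1)
    return segment_ids[int(np.argmin(d))]

# Toy database: per-segment (valence, arousal) from offline analysis.
segments = np.array([[0.8, 0.6], [-0.5, 0.2], [0.1, -0.7]])
ids = ["joyful_clip.mp4", "calm_clip.mp4", "sleepy_clip.mp4"]
print(select_segment(np.array([0.7, 0.5]), segments, ids))  # joyful_clip.mp4
```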
 

 
Author A. Toet; M. Henselmans; M.P. Lucassen; Theo Gevers
Title Emotional effects of dynamic textures Type Journal
Year 2011 Publication i-Perception Abbreviated Journal iPER
Volume 2 Issue 9 Pages 969-991
Keywords
Abstract This study explores the effects of various spatiotemporal dynamic texture characteristics on human emotions. The emotional experience of auditory (e.g., music) and haptic repetitive patterns has been studied extensively. In contrast, the emotional experience of visual dynamic textures is still largely unknown, despite their natural ubiquity and increasing use in digital media. Participants watched a set of dynamic textures, representing either water or various other media, and self-reported their emotional experience. Motion complexity was found to have mildly relaxing and nondominant effects. In contrast, motion change complexity was found to be arousing and dominant. The speed of dynamics had arousing, dominant, and unpleasant effects. The amplitude of dynamics was also regarded as unpleasant. The regularity of the dynamics over the textures' area was found to be uninteresting, nondominant, mildly relaxing, and mildly pleasant. The spatial scale of the dynamics had an unpleasant, arousing, and dominant effect, which was larger for textures with diverse content than for water textures. For water textures, the effects of spatial contrast were arousing, dominant, interesting, and mildly unpleasant. None of these effects were observed for textures of diverse content. The current findings are relevant for the design and synthesis of affective multimedia content and for affective scene indexing and retrieval.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 2041-6695 ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @THL2011 Serial 1843
Permanent link to this record
 

 
Author Marcel P. Lucassen; Theo Gevers; Arjan Gijsenij
Title Texture Affects Color Emotion Type Journal Article
Year 2011 Publication Color Research & Applications Abbreviated Journal CRA
Volume 36 Issue 6 Pages 426–436
Keywords color;texture;color emotion;observer variability;ranking
Abstract Several studies have recorded color emotions in subjects viewing uniform color (UC) samples. We conduct an experiment to measure and model how these color emotions change when texture is added to the color samples. Using a computer monitor, our subjects arrange samples along four scales: warm–cool, masculine–feminine, hard–soft, and heavy–light. Three sample types of increasing visual complexity are used: UC, grayscale textures, and color textures (CTs). To assess the intraobserver variability, the experiment is repeated after 1 week. Our results show that texture fully determines the responses on the hard–soft scale, and plays a role of decreasing weight for the masculine–feminine, heavy–light, and warm–cool scales. Using some 25,000 observer responses, we derive color emotion functions that predict the group-averaged scale responses from the samples' color and texture parameters (a toy sketch of fitting such a function follows this record). For UC samples, the accuracy of our functions is significantly higher (average R2 = 0.88) than that of previously reported functions applied to our data. The functions derived for CT samples have an accuracy of R2 = 0.80. We conclude that when textured samples are used in color emotion studies, the psychological responses may be strongly affected by texture.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ALTRES;ISE Approved no
Call Number Admin @ si @ LGG2011 Serial 1844
Permanent link to this record
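A toy sketch of what a "color emotion function" of the kind fitted in the record above looks like: a least-squares model predicting a group-averaged scale value (say warm–cool) from CIELAB coordinates plus a texture parameter. The paper's actual functional form and coefficients are not reproduced; the data here are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.uniform(20, 90, 50)
a = rng.uniform(-60, 60, 50)
b = rng.uniform(-60, 60, 50)
texture = rng.uniform(0, 1, 50)               # e.g., a local energy measure
# Synthetic group-averaged warm-cool responses (invented ground truth).
warm_cool = 0.02 * a + 0.01 * b - 0.3 * texture + rng.normal(0, 0.05, 50)

X = np.column_stack([L, a, b, texture, np.ones(50)])
coef, *_ = np.linalg.lstsq(X, warm_cool, rcond=None)
pred = X @ coef
r2 = 1 - ((warm_cool - pred) ** 2).sum() / ((warm_cool - warm_cool.mean()) ** 2).sum()
print(coef.round(3), round(float(r2), 3))     # fitted weights and fit quality
```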
 

 
Author Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez
Title Video Alignment for Change Detection Type Journal Article
Year 2011 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 20 Issue 7 Pages 1858-1869
Keywords video alignment
Abstract In this work, we address the problem of aligning two video sequences. Such alignment refers to synchronization, i.e., the establishment of temporal correspondence between frames of the first and second video, followed by spatial registration of all the temporally corresponding frames. Video synchronization and alignment have been attempted before, but most often in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, restrictive assumptions have been applied, including linear time correspondence or the knowledge of the complete trajectories of corresponding scene points; to some extent, these assumptions limit the practical applicability of any solutions developed. We intend to solve the more general problem of aligning video sequences recorded by independently moving cameras that follow similar trajectories, based only on the fusion of image intensity and GPS information. The novelty of our approach is to pose the synchronization as a MAP inference problem on a Bayesian network including the observations from these two sensor types, which have been proved complementary (a DP-based sketch of the synchronization step follows this record). Alignment results are presented in the context of videos recorded from vehicles driving along the same track at different times, for different road types. In addition, we explore two applications of the proposed video alignment method, both based on change detection between aligned videos. One is the detection of vehicles, which could be of use in ADAS. The other is online difference spotting in videos of surveillance rounds.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; IF Approved no
Call Number DPS 2011; ADAS @ adas @ dps2011 Serial 1705
Permanent link to this record
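A sketch of the synchronization step described in the record above, reduced to a dynamic-programming (DTW-style) alignment over a cost that fuses an appearance distance with a GPS distance. The paper poses this as MAP inference in a Bayesian network; this monotonic-DP reduction is only an illustrative stand-in with invented frame descriptors.

```python
import numpy as np

def synchronize(feat_a, feat_b, gps_a, gps_b, w_gps=0.5):
    """Return a monotonic frame correspondence minimising the fused cost."""
    cost = (np.linalg.norm(feat_a[:, None] - feat_b[None], axis=2)
            + w_gps * np.linalg.norm(gps_a[:, None] - gps_b[None], axis=2))
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):                  # accumulate alignment cost
        for j in range(1, m + 1):
            D[i, j] = cost[i-1, j-1] + min(D[i-1, j-1], D[i-1, j], D[i, j-1])
    path, (i, j) = [], (n, m)                  # backtrack the optimal path
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        i, j = min([(i-1, j-1), (i-1, j), (i, j-1)], key=lambda p: D[p])
    return path[::-1]

rng = np.random.default_rng(2)
fa, fb = rng.normal(size=(5, 8)), rng.normal(size=(6, 8))   # frame descriptors
ga, gb = rng.normal(size=(5, 2)), rng.normal(size=(6, 2))   # GPS positions
print(synchronize(fa, fb, ga, gb))
```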
 

 
Author Yainuvis Socarras
Title Image segmentation for improving pedestrian detection Type Report
Year 2011 Publication CVC Technical Report Abbreviated Journal
Volume 167 Issue Pages
Keywords
Abstract
Address Bellaterra (Spain)
Corporate Author Computer Vision Center Thesis Master's thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS; Approved no
Call Number Admin @ si @ Soc2011 Serial 1933
Permanent link to this record
 

 
Author Muhammad Anwer Rao; David Vazquez; Antonio Lopez
Title Opponent Colors for Human Detection Type Conference Article
Year 2011 Publication 5th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 6669 Issue Pages 363-370
Keywords Pedestrian Detection; Color; Part Based Models
Abstract Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a subject of intensive research. Finding good features with which to build human models for further detection is probably one of the most important issues to face. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the use of the RGB color space has become the default choice. The focus has been put on developing first- and second-order features on top of RGB space (e.g., HOG and co-occurrence matrices, respectively). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection (a sketch of the conversion follows this record). In particular, by feeding OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, OPP space has not previously been used for human detection. This suggests that in the future it may be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of OPP space, i.e., as we have done with HOG in this paper.
Address Las Palmas de Gran Canaria. Spain
Corporate Author Thesis
Publisher Springer Place of Publication Berlin Heidelberg Editor J. Vitria; J.M. Sanches; M. Hernandez
Language English Summary Language English Original Title Opponent Colors for Human Detection
Series Editor Series Title Lecture Notes on Computer Science Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-21256-7 Medium
Area Expedition Conference IbPRIA
Notes ADAS Approved no
Call Number ADAS @ adas @ RVL2011a Serial 1666
Permanent link to this record
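A minimal sketch of the channel swap evaluated in the record above: RGB is converted to the opponent colour space (O1, O2, O3) using the standard transform, and gradients are then computed per opponent channel, which is where a HOG descriptor would read its orientations. The full HOG/linear-SVM pipeline of Dalal et al. is assumed, not reproduced.

```python
import numpy as np

def rgb_to_opponent(img):
    """img: (H, W, 3) float RGB -> (H, W, 3) opponent channels O1, O2, O3."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)                # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)      # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3.0)            # intensity
    return np.stack([o1, o2, o3], axis=-1)

def channel_gradients(channel):
    """Per-pixel gradient magnitude and orientation of one channel."""
    gy, gx = np.gradient(channel)
    return np.hypot(gx, gy), np.arctan2(gy, gx)

rng = np.random.default_rng(3)
opp = rgb_to_opponent(rng.random((8, 8, 3)))
mag, ang = channel_gradients(opp[..., 0])      # O1 (red-green) channel
print(mag.shape, ang.shape)
```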
 

 
Author Daniel Ponsa; Joan Serrat; Antonio Lopez
Title On-board image-based vehicle detection and tracking Type Journal Article
Year 2011 Publication Transactions of the Institute of Measurement and Control Abbreviated Journal TIM
Volume 33 Issue 7 Pages 783-805
Keywords vehicle detection
Abstract In this paper we present a computer vision system for daytime vehicle detection and localization, an essential step in the development of several types of advanced driver assistance systems. It has a reduced processing time and high accuracy thanks to the combination of vehicle detection with lane-markings estimation and temporal tracking of both vehicles and lane markings. Concerning vehicle detection, our main contribution is a frame scanning process that inspects images according to the geometry of image formation (a sketch of this geometry follows this record), together with an Adaboost-based detector that is robust to the variability in vehicle types (car, van, truck) and lighting conditions. In addition, we propose a new method to estimate the most likely three-dimensional locations of vehicles on the road ahead. With regard to the lane-markings estimation component, we have two main contributions. First, instead of the commonly used edges, we employ a different image feature: ridges, which are better suited to this problem. Second, we adapt RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane markings to the image features. We qualitatively assess our vehicle detection system on sequences captured on several road types and under very different lighting conditions. The processed videos are available on a web page associated with this paper. A quantitative evaluation of the system has shown quite accurate results (a low number of false positives and negatives) at a reasonable computation time.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ PSL2011 Serial 1413
Permanent link to this record
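A sketch of the geometry-driven frame scanning mentioned in the record above: under a pinhole model with a flat road, the image row where a vehicle touches the ground determines its distance, and therefore the expected window size to scan with at that row. The focal length, camera height and vehicle width below are assumed values, not the paper's calibration.

```python
def window_width_for_row(v, f=800.0, v0=240.0, cam_height=1.2, veh_width=1.8):
    """Expected vehicle width in pixels for a ground-contact image row v.

    f          : focal length in pixels (assumed)
    v0         : horizon row in the image (assumed)
    cam_height : camera height above the road in metres (assumed)
    veh_width  : typical vehicle width in metres (assumed)
    """
    if v <= v0:
        return None                       # at or above the horizon row
    Z = f * cam_height / (v - v0)         # flat-road distance for this row
    return f * veh_width / Z              # projected width in pixels

for v in (260, 300, 400, 470):
    w = window_width_for_row(v)
    print(f"row {v}: scan window ~{w:.0f} px wide")
```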
 

 
Author Jose Manuel Alvarez; Antonio Lopez
Title Road Detection Based on Illuminant Invariance Type Journal Article
Year 2011 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
Volume 12 Issue 1 Pages 184-193
Keywords road detection
Abstract By using an onboard camera, it is possible to detect the free road surface ahead of the ego-vehicle. Road detection is of high relevance for autonomous driving, road departure warning, and supporting driver-assistance systems such as vehicle and pedestrian detection. The key for vision-based road detection is the ability to classify image pixels as belonging or not to the road surface. Identifying road pixels is a major challenge due to the intraclass variability caused by lighting conditions. A particularly difficult scenario appears when the road surface has both shadowed and nonshadowed areas. Accordingly, we propose a novel approach to vision-based road detection that is robust to shadows. The novelty of our approach relies on using a shadow-invariant feature space combined with a model-based classifier (a sketch of such an invariant space follows this record). The model is built online to improve the adaptability of the algorithm to the current lighting and the presence of other vehicles in the scene. The proposed algorithm works in still images and does not depend on either road shape or temporal restrictions. Quantitative and qualitative experiments on real-world road sequences with heavy traffic and shadows show that the method is robust to shadows and lighting variations. Moreover, the proposed method provides the highest performance when compared with hue-saturation-intensity (HSI)-based algorithms.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number ADAS @ adas @ AlL2011 Serial 1456
Permanent link to this record
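A sketch of the kind of shadow-invariant feature space the approach in the record above builds on: projecting log-chromaticities onto an invariant direction collapses illumination changes of a Planckian light, including shadows, into a single greyscale value. The angle theta is camera-dependent; 37 degrees here is an arbitrary placeholder, and this is not the paper's full model-based classifier.

```python
import numpy as np

def illuminant_invariant(img, theta_deg=37.0):
    """img: (H, W, 3) positive float RGB -> (H, W) invariant image."""
    eps = 1e-6                                      # avoid log(0)
    log_rg = np.log(img[..., 0] + eps) - np.log(img[..., 1] + eps)
    log_bg = np.log(img[..., 2] + eps) - np.log(img[..., 1] + eps)
    theta = np.deg2rad(theta_deg)                   # camera-dependent angle
    return np.cos(theta) * log_rg + np.sin(theta) * log_bg

rng = np.random.default_rng(4)
frame = rng.random((4, 4, 3)) + 0.1                 # stand-in road frame
print(illuminant_invariant(frame).round(2))
```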
 

 
Author Muhammad Anwer Rao; David Vazquez; Antonio Lopez
Title Color Contribution to Part-Based Person Detection in Different Types of Scenarios Type Conference Article
Year 2011 Publication 14th International Conference on Computer Analysis of Images and Patterns Abbreviated Journal
Volume 6855 Issue II Pages 463-470
Keywords Pedestrian Detection; Color
Abstract Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons appear, as well as their intra-class variability (pose, clothing, occlusion). In fact, the class person is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constitutive parts, as well as their relative positions. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now. In particular, we refer to the fact that HOG is usually computed from the RGB color space, but other possibilities exist and deserve the corresponding investigation. In this paper we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute HOG on top of OPP (a sketch of how HOG consumes a multi-channel image follows this record), then train and test the part-based human classifier by Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly come from indoor and countryside scenarios, those in which the human visual system was shaped by evolution.
Address Seville, Spain
Corporate Author Thesis
Publisher Springer Place of Publication Berlin Heidelberg Editor P. Real; D. Diaz; H. Molina; A. Berciano; W. Kropatsch
Language English Summary Language english Original Title Color Contribution to Part-Based Person Detection in Different Types of Scenarios
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-23677-8 Medium
Area Expedition Conference CAIP
Notes ADAS Approved no
Call Number ADAS @ adas @ RVL2011b Serial 1665
Permanent link to this record
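A small sketch of how a HOG-style descriptor typically consumes a multi-channel image, relevant to the RGB-versus-OPP comparison in the record above: at each pixel, the gradient is taken from the channel with the largest magnitude, so swapping RGB for OPP changes which structures dominate the descriptor. This mirrors common HOG practice and is not the exact code of the paper.

```python
import numpy as np

def dominant_gradient(img):
    """img: (H, W, C) -> per-pixel (magnitude, orientation) of the strongest channel."""
    mags, angs = [], []
    for c in range(img.shape[-1]):
        gy, gx = np.gradient(img[..., c])
        mags.append(np.hypot(gx, gy))
        angs.append(np.arctan2(gy, gx))
    mags, angs = np.stack(mags, -1), np.stack(angs, -1)
    pick = mags.argmax(axis=-1)                   # strongest channel per pixel
    idx = np.indices(pick.shape)
    return mags[idx[0], idx[1], pick], angs[idx[0], idx[1], pick]

rng = np.random.default_rng(5)
mag, ang = dominant_gradient(rng.random((6, 6, 3)))   # e.g., an OPP image
print(mag.shape, ang.shape)
```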