Records | |||||
---|---|---|---|---|---|
Author | Xavier Baro; Jordi Gonzalez; Junior Fabian; Miguel Angel Bautista; Marc Oliu; Hugo Jair Escalante; Isabelle Guyon; Sergio Escalera | ||||
Title | ChaLearn Looking at People 2015 challenges: action spotting and cultural event recognition | Type | Conference Article | ||
Year | 2015 | Publication | 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) | Abbreviated Journal |
Volume | Issue | Pages | 1-9 | ||
Keywords | |||||
Abstract | Following previous series on Looking at People (LAP) challenges [6, 5, 4], ChaLearn ran two competitions to be presented at CVPR 2015: action/interaction spotting and cultural event recognition in RGB data. We ran a second round on human activity recognition on RGB data sequences. For cultural event recognition, tens of categories have to be recognized, which involves both scene understanding and human analysis. This paper summarizes the two challenges and the obtained results. Details of the ChaLearn LAP competitions can be found at http://gesture.chalearn.org/. | ||||
Address | Boston; USA; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Serial | 2652 | |||
Permanent link to this record | |||||
Author | Victor Campmany; Sergio Silva; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez | ||||
Title | GPU-based pedestrian detection for autonomous driving | Type | Abstract | ||
Year | 2015 | Publication | Programming and Tuning Massive Parallel Systems | Abbreviated Journal | PUMPS |
Volume | Issue | Pages | |||
Keywords | Autonomous Driving; ADAS; CUDA; Pedestrian Detection | ||||
Abstract | Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides being one of the hardest tasks in computer vision, it involves huge computational costs. The real-time constraints in the field are tight, and regular processors cannot handle the workload while obtaining an acceptable rate of frames per second (fps). Moreover, multiple cameras are required to obtain accurate results, so the need to speed up the process is even greater. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system. Further, we introduce significant algorithmic adjustments and optimizations to adapt the problem to the GPU architecture. The aim is to provide a system capable of running in real-time while obtaining reliable results. | ||||
Address | Barcelona; Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | PUMPS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | PUMPS | ||
Notes | ADAS; 600.076; 600.082; 600.085 | Approved | no | ||
Call Number | ADAS @ adas @ CSM2015 | Serial | 2644 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; German Ros; David Vazquez; Antonio Lopez | ||||
Title | 3D-Guided Multiscale Sliding Window for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 | Abbreviated Journal |
Volume | 9117 | Issue | Pages | 560-568 | |
Keywords | Pedestrian Detection | ||||
Abstract | The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former presents image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on a (multiscale) sliding-window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy. | ||||
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IbPRIA | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVR2015 | Serial | 2585 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez | ||||
Title | Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | IEEE Intelligent Vehicles Symposium, IV 2015 | Abbreviated Journal |
Volume | Issue | Pages | 356-361 | ||
Keywords | Pedestrian Detection | ||||
Abstract | Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that has only recently started to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help to improve performance, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve the accuracy. | ||||
Address | Seoul; Korea; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IV | |
Notes | ADAS; 600.076; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ GVX2015 | Serial | 2625 | ||
Permanent link to this record | |||||
Author | German Ros; Sebastian Ramos; Manuel Granados; Amir Bakhtiary; David Vazquez; Antonio Lopez | ||||
Title | Vision-based Offline-Online Perception Paradigm for Autonomous Driving | Type | Conference Article | ||
Year | 2015 | Publication | IEEE Winter Conference on Applications of Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 231 - 238 | ||
Keywords | Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation | ||||
Abstract | Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real-time. In this paper, we challenge state-of-the-art computer vision algorithms to build a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows retrieving the pre-computed accurate semantics and 3D geometry in real-time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We quantitatively evaluate our proposal on the KITTI dataset and discuss the related open challenges for the computer vision community. | ||||
Address | Hawaii; January 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | WACV | |
Notes | ADAS; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ RRG2015 | Serial | 2499 | ||
Permanent link to this record | |||||
Author | Sergio Silva; Victor Campmany; Laura Sellart; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez | ||||
Title | Autonomous GPU-based Driving | Type | Abstract | ||
Year | 2015 | Publication | Programming and Tuning Massive Parallel Systems | Abbreviated Journal | PUMPS |
Volume | Issue | Pages | |||
Keywords | Autonomous Driving; ADAS; CUDA | ||||
Abstract | Human factors cause most driving accidents; this is why it is now common to hear about autonomous driving as an alternative. Autonomous driving will not only increase safety but will also enable a system of cooperative self-driving cars that reduces pollution and congestion. Furthermore, it will provide more freedom to people with disabilities, the elderly and children. Autonomous driving requires perceiving and understanding the vehicle environment (e.g., road, traffic signs, pedestrians, vehicles) using sensors (e.g., cameras, lidars, sonars and radars), self-localization (requiring GPS, inertial sensors and visual localization in precise maps), controlling the vehicle and planning the routes. These algorithms require high computation capability, and thanks to NVIDIA GPU acceleration this starts to become feasible. NVIDIA® is developing a new platform for boosting Autonomous Driving capabilities that is able to manage the vehicle via CAN-Bus: the Drive™ PX. It has 8 ARM cores with dual accelerated Tegra® X1 chips. It has 12 synchronized camera inputs for 360º vehicle perception, 4G and Wi-Fi capabilities allowing vehicle communications, and GPS and inertial sensor inputs for self-localization. Our research group has been selected for testing Drive™ PX. Accordingly, we are developing a Drive™ PX based autonomous car. Currently, we are porting our previous CPU-based algorithms (e.g., Lane Departure Warning, Collision Warning, Automatic Cruise Control, Pedestrian Protection, or Semantic Segmentation) to run on the GPU. | ||||
Address | Barcelona; Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | PUMPS | ||
Notes | ADAS; 600.076; 600.082; 600.085 | Approved | no | ||
Call Number | ADAS @ adas @ SCS2015 | Serial | 2645 | ||
Permanent link to this record | |||||
Author | Pau Riba; Alicia Fornes; Josep Llados | ||||
Title | Towards the Alignment of Handwritten Music Scores | Type | Conference Article | ||
Year | 2015 | Publication | 11th IAPR International Workshop on Graphics Recognition | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | It is very common to find different versions of the same music work in the archives of opera theaters. These differences correspond to modifications and annotations made by the musicians. From the musicologist's point of view, these variations are very interesting and deserve study. This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulties in the recognition of handwritten music scores, our goal is to align the music scores and, at the same time, avoid the recognition of music elements as much as possible. After removing the staff lines, braces and ties, the bar lines are detected. Then, each bar unit is described as a whole using the Blurred Shape Model. The bar units are aligned using Dynamic Time Warping, and the analysis of the alignment path is used to detect the variations between the music scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results. | ||||
Address | Nancy; France; August 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer International Publishing | Place of Publication | Editor | Bart Lamiroy; Rafael Dueire Lins | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-319-52158-9 | Medium | ||
Area | Expedition | Conference | GREC | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ | Serial | 2874 | ||
Permanent link to this record | |||||
Author | Marçal Rusiñol; Dimosthenis Karatzas; Josep Llados | ||||
Title | Automatic Verification of Properly Signed Multi-page Document Images | Type | Conference Article | ||
Year | 2015 | Publication | Proceedings of the Eleventh International Symposium on Visual Computing | Abbreviated Journal | |
Volume | 9475 | Issue | Pages | 327-336 | |
Keywords | Document Image; Manual Inspection; Signature Verification; Rejection Criterion; Document Flow | ||||
Abstract | In this paper we present an industrial application for the automatic screening of incoming multi-page documents in a banking workflow, aimed at determining whether these documents are properly signed or not. The proposed method is divided into three main steps. First, individual pages are classified in order to identify the pages that should contain a signature. In a second step, we segment within those key pages the locations where the signatures should appear. The last step checks whether the signatures are present or not. Our method is tested in a real large-scale environment, and we report the results obtained when checking two different types of real multi-page contracts, totaling more than 14,500 pages. | ||||
Address | Las Vegas, Nevada, USA; December 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | 9475 | Series Issue | Edition | ||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ISVC | ||
Notes | DAG; 600.077 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3189 | ||
Permanent link to this record | |||||
Author | Francisco Alvaro; Francisco Cruz; Joan Andreu Sanchez; Oriol Ramos Terrades; Jose Miguel Benedi | ||||
Title | Structure Detection and Segmentation of Documents Using 2D Stochastic Context-Free Grammars | Type | Journal Article | ||
Year | 2015 | Publication | Neurocomputing | Abbreviated Journal | NEUCOM |
Volume | 150 | Issue | A | Pages | 147-154 |
Keywords | document image analysis; stochastic context-free grammars; text classification features | ||||
Abstract | In this paper we define a bidimensional extension of Stochastic Context-Free Grammars for structure detection and segmentation of document images. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the document segmentation is obtained as the most likely hypothesis according to a stochastic grammar. We used a dataset of historical marriage license books to validate this approach. We also tested several inference algorithms for Probabilistic Graphical Models, and the results showed that the proposed grammatical model outperformed the other methods. Furthermore, grammars also provide the document structure along with its segmentation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 601.158; 600.077; 600.061 | Approved | no | ||
Call Number | Admin @ si @ ACS2015 | Serial | 2531 | ||
Permanent link to this record | |||||
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | Towards social interaction detection in egocentric photo-streams | Type | Conference Article | ||
Year | 2015 | Publication | Proceedings of SPIE, 8th International Conference on Machine Vision, ICMV 2015 | Abbreviated Journal |
Volume | 9875 | Issue | Pages | ||
Keywords | |||||
Abstract | Detecting social interaction in videos relying solely on visual cues is a valuable task that has received increasing attention in recent years. In this work, we address this problem in the challenging domain of egocentric photo-streams captured by a low temporal resolution wearable camera (2 fpm). The major difficulties to be handled in this context are the sparsity of observations as well as the unpredictability of camera motion and attention orientation, due to the fact that the camera is worn as part of clothing. Our method consists of four steps: multi-face localization and tracking, 3D localization, pose estimation and analysis of f-formations. By estimating pair-to-pair interaction probabilities over the sequence, our method states the presence or absence of interaction with the camera wearer and specifies which people are more involved in the interaction. We tested our method on a dataset of 18,000 images and show its reliability for the intended purpose. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICMV | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ ADR2015a | Serial | 2702 | ||
Permanent link to this record | |||||
Author | Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva | ||||
Title | Multi-Face Tracking by Extended Bag-of-Tracklets in Egocentric Videos | Type | Miscellaneous | ||
Year | 2015 | Publication | Arxiv | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Egocentric images offer a hands-free way to record daily experiences and special events, where social interactions are of special interest. A natural question that arises is how to extract and track the appearance of multiple persons in a social event captured by a wearable camera. In this paper, we propose a novel method to find correspondences of multiple faces in low temporal resolution egocentric sequences acquired with a wearable camera. This kind of sequence imposes additional challenges on the multi-tracking problem with respect to conventional videos. Due to the free motion of the camera and its low temporal resolution (2 fpm), abrupt changes in the field of view, in illumination conditions and in the target location are very frequent. To overcome this difficulty, we propose to generate, for each detected face, a set of correspondences along the whole sequence, which we call a tracklet, and to take advantage of their redundancy to deal with both false positive face detections and unreliable tracklets. Similar tracklets are grouped into so-called extended bags-of-tracklets (eBoTs), each of which is meant to correspond to a specific person. Finally, a prototype tracklet is extracted for each eBoT. We validated our method on a dataset of 18,000 images from 38 egocentric sequences with 52 trackable persons and compared it to state-of-the-art methods, demonstrating its effectiveness and robustness. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ ADR2015b | Serial | 2713 | ||
Permanent link to this record | |||||
Author | Tadashi Araki; Nobutaka Ikeda; Nilanjan Dey; Sayan Chakraborty; Luca Saba; Dinesh Kumar; Elisa Cuadrado Godia; Xiaoyi Jiang; Ajay Gupta; Petia Radeva; John R. Laird; Andrew Nicolaides; Jasjit S. Suri | ||||
Title | A comparative approach of four different image registration techniques for quantitative assessment of coronary artery calcium lesions using intravascular ultrasound | Type | Journal Article | ||
Year | 2015 | Publication | Computer Methods and Programs in Biomedicine | Abbreviated Journal | CMPB |
Volume | 118 | Issue | 2 | Pages | 158-172 |
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ AID2015 | Serial | 2640 | ||
Permanent link to this record | |||||
Author | Arash Akbarinia; C. Alejandro Parraga | ||||
Title | Biologically Plausible Colour Naming Model | Type | Conference Article | ||
Year | 2015 | Publication | European Conference on Visual Perception ECVP2015 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Poster | ||||
Address | Liverpool; UK; August 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECVP | ||
Notes | NEUROBIT; 600.068 | Approved | no | ||
Call Number | Admin @ si @ AkP2015 | Serial | 2660 | ||
Permanent link to this record | |||||
Author | Jaume Amores | ||||
Title | MILDE: multiple instance learning by discriminative embedding | Type | Journal Article | ||
Year | 2015 | Publication | Knowledge and Information Systems | Abbreviated Journal | KAIS |
Volume | 42 | Issue | 2 | Pages | 381-407 |
Keywords | Multi-instance learning; Codebook; Bag of words | ||||
Abstract | While the objective of the standard supervised learning problem is to classify feature vectors, in the multiple instance learning problem the objective is to classify bags, where each bag contains multiple feature vectors. This represents a generalization of the standard problem, and this generalization becomes necessary in many real applications such as drug activity prediction, content-based image retrieval, and others. While the existing paradigms are based on learning the discriminant information either at the instance level or at the bag level, we propose to incorporate both levels of information. This is done by defining a discriminative embedding of the original space based on the responses of cluster-adapted instance classifiers. Results clearly show the advantage of the proposed method over the state of the art; we tested performance on a variety of well-known databases from real problems, and also included an analysis of performance on synthetically generated data. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer London | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0219-1377 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 601.042; 600.057; 600.076 | Approved | no | ||
Call Number | Admin @ si @ Amo2015 | Serial | 2383 | ||
Permanent link to this record | |||||
Author | David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados | ||||
Title | A Study of Bag-of-Visual-Words Representations for Handwritten Keyword Spotting | Type | Journal Article | ||
Year | 2015 | Publication | International Journal on Document Analysis and Recognition | Abbreviated Journal | IJDAR |
Volume | 18 | Issue | 3 | Pages | 223-234 |
Keywords | Bag-of-Visual-Words; Keyword spotting; Handwritten documents; Performance evaluation | ||||
Abstract | The Bag-of-Visual-Words (BoVW) framework has gained popularity among the document image analysis community, specifically as a representation of handwritten words for recognition or spotting purposes. Although in the computer vision field the BoVW method has been greatly improved, most of the approaches in the document image analysis domain still rely on the basic implementation of the BoVW method disregarding such latest refinements. In this paper, we present a review of those improvements and its application to the keyword spotting task. We thoroughly evaluate their impact against a baseline system in the well-known George Washington dataset and compare the obtained results against nine state-of-the-art keyword spotting methods. In addition, we also compare both the baseline and improved systems with the methods presented at the Handwritten Keyword Spotting Competition 2014. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1433-2833 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; ADAS; 600.055; 600.061; 601.223; 600.077; 600.097 | Approved | no | ||
Call Number | Admin @ si @ ART2015 | Serial | 2679 | ||
Permanent link to this record |