Records |
Author |
Xavier Roca; Jordi Vitria |
Title |
Multiscale Structure Extraction using Morphological Tools. Applications to Edge Detection. |
Type |
Miscellaneous |
Year |
1993 |
Publication |
SPIE International Symposium on Optical Instrumentation and Applied Science (Conference on Image Algebra and Morphological Image Processing IV). |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
San Diego; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;ISE;MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ RoV1993 |
Serial |
176 |
Permanent link to this record |
|
|
|
Author |
Maria Vanrell; Jordi Vitria |
Title |
Mathematical Morphology, Granulometries and Texture Perception. |
Type |
Miscellaneous |
Year |
1993 |
Publication |
SPIE International Symposium on Optical Instrumentation and Applied Science (Conference on Image Algebra and Morphological Image Processing IV). |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
San Diego; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;CIC;MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ VaV1993 |
Serial |
178 |
Permanent link to this record |
|
|
|
Author |
Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat |
Title |
Photometric Stereo through an Adapted Alternation Approach |
Type |
Conference Article |
Year |
2008 |
Publication |
IEEE International Conference on Image Processing |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1500–1503 |
Keywords |
|
Abstract |
|
Address |
San Diego; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
ADAS |
Approved |
no |
Call Number |
ADAS @ adas @ JSL2008d |
Serial |
1016 |
Permanent link to this record |
|
|
|
Author |
Jose Manuel Alvarez; Felipe Lumbreras; Theo Gevers; Antonio Lopez |
Title |
Geographic Information for vision-based Road Detection |
Type |
Conference Article |
Year |
2010 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
621–626 |
Keywords |
road detection |
Abstract |
Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving, road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to foresee a perfect clustering algorithm, since roads are imaged in outdoor scenarios from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. Then, this segmentation is refined at low level using color information to provide the final result. The results presented show the validity of our approach. |
Address |
San Diego; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
IV |
Notes |
ADAS;ISE |
Approved |
no |
Call Number |
ADAS @ adas @ ALG2010 |
Serial |
1428 |
Permanent link to this record |
|
|
|
Author |
Agata Lapedriza; David Masip; Jordi Vitria |
Title |
Are external face features useful for automatic face classification? |
Type |
Miscellaneous |
Year |
2005 |
Publication |
IEEE Workshop on Face Recognition Grand Challenge Experiments, 151–ff |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
San Diego; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
OR;MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ LMV2005b |
Serial |
547 |
Permanent link to this record |
|
|
|
Author |
Daniel Hernandez; Alejandro Chacon; Antonio Espinosa; David Vazquez; Juan Carlos Moure; Antonio Lopez |
Title |
Embedded real-time stereo estimation via Semi-Global Matching on the GPU |
Type |
Conference Article |
Year |
2016 |
Publication |
16th International Conference on Computational Science |
Abbreviated Journal |
|
Volume |
80 |
Issue |
|
Pages |
143-153 |
Keywords |
Autonomous Driving; Stereo; CUDA; 3d reconstruction |
Abstract |
Dense, robust and real-time computation of depth information from stereo-camera systems is a computationally demanding requirement for robotics, advanced driver assistance systems (ADAS) and autonomous vehicles. Semi-Global Matching (SGM) is a widely used algorithm that propagates consistency constraints along several paths across the image. This work presents a real-time system producing reliable disparity estimates on new embedded, energy-efficient GPU devices. Our design runs on a Tegra X1 at 41 frames per second for an image size of 640x480, 128 disparity levels, and 4 path directions for the SGM method. |
Address |
San Diego; CA; USA; June 2016 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCS |
Notes |
ADAS; 600.085; 600.082; 600.076 |
Approved |
no |
Call Number |
ADAS @ adas @ HCE2016a |
Serial |
2740 |
Permanent link to this record |
|
|
|
Author |
Victor Campmany; Sergio Silva; Antonio Espinosa; Juan Carlos Moure; David Vazquez; Antonio Lopez |
Title |
GPU-based pedestrian detection for autonomous driving |
Type |
Conference Article |
Year |
2016 |
Publication |
16th International Conference on Computational Science |
Abbreviated Journal |
|
Volume |
80 |
Issue |
|
Pages |
2377-2381 |
Keywords |
Pedestrian detection; Autonomous Driving; CUDA |
Abstract |
We propose a real-time pedestrian detection system for the embedded Nvidia Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histogram of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; a Pyramidal Sliding Window technique for foreground segmentation; and a Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance/watt ratio than the desktop CUDA platforms under study. |
Address |
San Diego; CA; USA; June 2016 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCS |
Notes |
ADAS; 600.085; 600.082; 600.076 |
Approved |
no |
Call Number |
ADAS @ adas @ CSE2016 |
Serial |
2741 |
Permanent link to this record |
|
|
|
Author |
Ivet Rafegas; Maria Vanrell |
Title |
Color spaces emerging from deep convolutional networks |
Type |
Conference Article |
Year |
2016 |
Publication |
24th Color and Imaging Conference |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
225-230 |
Keywords |
|
Abstract |
Award for the best interactive session.
Defining color spaces that provide a good encoding of the spatio-chromatic properties of color surfaces is an open problem in color science [8, 22]. Related to this, in computer vision the fusion of color with local image features has been studied and evaluated [16]. In human vision research, the cells along the visual pathway that are selective to specific color hues are also a focus of attention [7, 14]. In line with these research aims, in this paper we study how color is encoded in a deep Convolutional Neural Network (CNN) that has been trained on more than one million natural images for object recognition. These convolutional nets achieve impressive performance in computer vision and rival the representations in the human brain. We explore how color is represented in a CNN architecture, which can give some intuition about efficient spatio-chromatic representations. In convolutional layers the activation of a neuron is related to a spatial filter that combines spatio-chromatic representations; we use an inverted version of it to explore its properties. Using a series of unsupervised methods we classify different types of neurons depending on the color axes they define, and we propose an index of the color-selectivity of a neuron. We estimate the main color axes that emerge from this trained net and we prove that the color-selectivity of neurons decreases from early to deeper layers. |
Address |
San Diego; USA; November 2016 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CIC |
Notes |
CIC |
Approved |
no |
Call Number |
Admin @ si @ RaV2016a |
Serial |
2894 |
Permanent link to this record |
|
|
|
Author |
Petia Radeva; A. Amini; J. Huang; Enric Marti |
Title |
Deformable B-Solids and Implicit Snakes for Localization and Tracking of SPAMM MRI-Data |
Type |
Conference Article |
Year |
1996 |
Publication |
Workshop on Mathematical Methods in Biomedical Image Analysis |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
192-201 |
Keywords |
|
Abstract |
To date, MRI-SPAMM data from different image slices have been analyzed independently. In this paper, we propose an approach for 3D tag localization and tracking of SPAMM data by a novel deformable B-solid. The solid is defined in terms of a 3D tensor product B-spline. The isoparametric curves of the B-spline solid have special importance. These are termed implicit snakes as they deform under image forces from tag lines in different image slices. The localization and tracking of tag lines is performed under constraints of continuity and smoothness of the B-solid. The framework unifies the problems of localization, and displacement fitting and interpolation into the same procedure utilizing B-spline bases for interpolation. To track motion from boundaries and restrict image forces to the myocardium, a volumetric model is employed as a pair of coupled endocardial and epicardial B-spline surfaces. To recover deformations in the LV an energy-minimization problem is posed where both tag and ... |
Address |
San Francisco; CA |
Corporate Author |
|
Thesis |
|
Publisher |
IEEE Computer Society |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
0-8186-7368-0 |
Medium |
|
Area |
|
Expedition |
|
Conference |
MMBIA ’96 |
Notes |
MILAB;IAM |
Approved |
no |
Call Number |
IAM @ iam @ RAH1996 |
Serial |
1630 |
Permanent link to this record |
|
|
|
Author |
Mario Rojas; David Masip; A. Todorov; Jordi Vitria |
Title |
Automatic Point-based Facial Trait Judgments Evaluation |
Type |
Conference Article |
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
2715–2720 |
Keywords |
|
Abstract |
Humans constantly evaluate the personalities of other people using their faces. Facial trait judgments have been studied in the psychological field and have been determined to influence important social outcomes of our lives, such as election outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Further, behavioral studies suggest that two orthogonal dimensions, valence and dominance, can describe the basis of human judgments from faces. In this paper, we used a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers and later used in automatic evaluations of new facial images. The experiments performed using local point-based descriptors show promising results in the evaluation of the main traits. |
Address |
San Francisco; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
OR;MV |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ RMT2010 |
Serial |
1282 |
Permanent link to this record |
|
|
|
Author |
Josep M. Gonfaus; Xavier Boix; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez |
Title |
Harmony Potentials for Joint Classification and Segmentation |
Type |
Conference Article |
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
3280–3287 |
Keywords |
|
Abstract |
Hierarchical conditional random fields have been successfully applied to object segmentation. One reason is their ability to incorporate contextual information at different scales. However, these models do not allow multiple labels to be assigned to a single node. At higher scales in the image, this yields an oversimplified model, since multiple classes can reasonably be expected to appear within one region. This simplified model especially limits the impact that observations at larger scales may have on the CRF model. Neglecting the information at larger scales is undesirable, since class-label estimates based on these scales are more reliable than those at smaller, noisier scales. To address this problem, we propose a new potential, called the harmony potential, which can encode any possible combination of class labels. We propose an effective sampling strategy that renders the underlying optimization problem tractable. Our approach obtains state-of-the-art results on two challenging datasets: PASCAL VOC 2009 and MSRC-21. |
Address |
San Francisco; CA; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
ADAS;CIC;ISE |
Approved |
no |
Call Number |
ADAS @ adas @ GBW2010 |
Serial |
1296 |
Permanent link to this record |
|
|
|
Author |
Xinhang Song; Luis Herranz; Shuqiang Jiang |
Title |
Depth CNNs for RGB-D Scene Recognition: Learning from Scratch Better than Transferring from RGB-CNNs |
Type |
Conference Article |
Year |
2017 |
Publication |
31st AAAI Conference on Artificial Intelligence |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
RGB-D scene recognition; weakly supervised; fine tune; CNN |
Abstract |
Scene recognition with RGB images has been extensively studied and has reached remarkable recognition levels, thanks to convolutional neural networks (CNN) and large scene datasets. In contrast, current RGB-D scene data is much more limited, so approaches often leverage large RGB datasets by transferring pretrained RGB CNN models and fine-tuning with the target RGB-D dataset. However, we show that this approach has the limitation of hardly reaching the bottom layers, which are key to learning modality-specific features. In contrast, we focus on the bottom layers and propose an alternative strategy to learn depth features, combining local weakly supervised training from patches followed by global fine-tuning with images. This strategy is capable of learning very discriminative depth-specific features with limited depth images, without resorting to Places-CNN. In addition, we propose a modified CNN architecture to further match the complexity of the model to the amount of data available. For RGB-D scene recognition, depth and RGB features are combined by projecting them into a common space and further learning a multilayer classifier, which is jointly optimized in an end-to-end network. Our framework achieves state-of-the-art accuracy on NYU2 and SUN RGB-D in both depth-only and combined RGB-D data. |
Address |
San Francisco; CA; February 2017 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
AAAI |
Notes |
LAMP; 600.120 |
Approved |
no |
Call Number |
Admin @ si @ SHJ2017 |
Serial |
2967 |
Permanent link to this record |
|
|
|
Author |
Sandra Jimenez; Xavier Otazu; Valero Laparra; Jesus Malo |
Title |
Chromatic induction and contrast masking: similar models, different goals? |
Type |
Conference Article |
Year |
2013 |
Publication |
Human Vision and Electronic Imaging XVIII |
Abbreviated Journal |
|
Volume |
8651 |
Issue |
|
Pages |
|
Keywords |
|
Abstract |
Normalization of signals coming from linear sensors is a ubiquitous mechanism of neural adaptation [1]. Local interaction between sensors tuned to a particular feature at a certain spatial position and neighboring sensors explains a wide range of psychophysical facts, including (1) masking of spatial patterns, (2) non-linearities of motion sensors, (3) adaptation of color perception, (4) brightness and chromatic induction, and (5) image quality assessment. Although the above models have formal and qualitative similarities, it does not necessarily mean that the mechanisms involved are pursuing the same statistical goal. For instance, in the case of chromatic mechanisms (disregarding spatial information), different parameters in the normalization give rise to optimal discrimination or adaptation, and different non-linearities may give rise to error minimization or component independence. In the case of spatial sensors (disregarding color information), a number of studies have pointed out the benefits of masking in statistical independence terms. However, such statistical analysis has not been performed for spatio-chromatic induction models, where chromatic perception depends on spatial configuration. In this work we investigate whether successful spatio-chromatic induction models [6] increase component independence similarly to what has previously been reported for masking models. Mutual information analysis suggests that seeking an efficient chromatic representation may explain the prevalence of induction effects in spatially simple images. |
Address |
San Francisco; CA; USA; February 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
HVEI |
Notes |
CIC |
Approved |
no |
Call Number |
Admin @ si @ JOL2013 |
Serial |
2240 |
Permanent link to this record |
|
|
|
Author |
Petia Radeva; M. Scoccianti |
Title |
3D Reconstruction of Abdominal Aortic Aneurysm |
Type |
Miscellaneous |
Year |
2000 |
Publication |
Elsevier Science B.V., Ed. H.U. Lemke, M.W. Vannier, K. Inamura, A.G. Farman and K. Doi, CARS 2000, pp. 1014, ISBN: 0-444-50536-9 |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
|
Keywords |
|
Abstract |
|
Address |
San Francisco; USA |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
|
Notes |
MILAB |
Approved |
no |
Call Number |
BCNPCL @ bcnpcl @ RaS2000 |
Serial |
439 |
Permanent link to this record |
|
|
|
Author |
Jose Manuel Alvarez; Theo Gevers; Antonio Lopez |
Title |
3D Scene Priors for Road Detection |
Type |
Conference Article |
Year |
2010 |
Publication |
23rd IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
57–64 |
Keywords |
road detection |
Abstract |
Vision-based road detection is important in different areas of computer vision, such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms all individual cues. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods. |
Address |
San Francisco; CA; USA; June 2010 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
978-1-4244-6984-0 |
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
ADAS;ISE |
Approved |
no |
Call Number |
ADAS @ adas @ AGL2010a |
Serial |
1302 |
Permanent link to this record |