Records | |||||
---|---|---|---|---|---|
Author | Yaxing Wang; Joost Van de Weijer; Luis Herranz | ||||
Title | Mix and match networks: encoder-decoder alignment for zero-pair image translation | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 5467 - 5476 | ||
Keywords | |||||
Abstract | We address the problem of image translation between domains or modalities for which no direct paired data is available (i.e. zero-pair translation). We propose mix and match networks, based on multiple encoders and decoders aligned in such a way that other encoder-decoder pairs can be composed at test time to perform unseen image translation tasks between domains or modalities for which explicit paired samples were not seen during training. We study the impact of autoencoders, side information and losses in improving the alignment and transferability of trained pairwise translation models to unseen translations. We show our approach is scalable and can perform colorization and style transfer between unseen combinations of domains. We evaluate our system in a challenging cross-modal setting where semantic segmentation is estimated from depth images, without explicit access to any depth-semantic segmentation training pairs. Our model outperforms baselines based on pix2pix and CycleGAN models. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ WWH2018b | Serial | 3131 | ||
Permanent link to this record | |||||
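The abstract above hinges on encoder-decoder alignment: one encoder and one decoder are trained per domain or modality, and at test time an unseen translation is obtained by composing an encoder and a decoder that never saw paired data together. A minimal sketch of that composition, assuming toy PyTorch modules, shapes and modality names (this is an illustration, not the authors' implementation):

```python
# Minimal sketch of zero-pair translation by encoder-decoder composition
# (illustrative only; modality names and architectures are assumptions).
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, in_ch, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class ConvDecoder(nn.Module):
    def __init__(self, out_ch, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, out_ch, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

# One encoder and one decoder per modality, trained only on the seen pairs;
# because they share an aligned latent space, unseen pairs compose at test time.
encoders = {"rgb": ConvEncoder(3), "depth": ConvEncoder(1)}
decoders = {"rgb": ConvDecoder(3), "segmentation": ConvDecoder(20)}

def translate(x, src, dst):
    """Zero-pair translation, e.g. depth -> segmentation without paired data."""
    return decoders[dst](encoders[src](x))

out = translate(torch.randn(1, 1, 64, 64), "depth", "segmentation")
print(out.shape)  # torch.Size([1, 20, 64, 64])
```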
Author | Adrian Galdran; Aitor Alvarez-Gila; Alessandro Bria; Javier Vazquez; Marcelo Bertalmio | ||||
Title | On the Duality Between Retinex and Image Dehazing | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 8212–8221 | ||
Keywords | Image color analysis; Task analysis; Atmospheric modeling; Computer vision; Computational modeling; Lighting | ||||
Abstract | Image dehazing deals with the removal of undesired loss of visibility in outdoor images due to the presence of fog. Retinex is a color vision model mimicking the ability of the Human Visual System to robustly discount varying illuminations when observing a scene under different spectral lighting conditions. Retinex has been widely explored in the computer vision literature for image enhancement and other related tasks. While these two problems are apparently unrelated, the goal of this work is to show that they can be connected by a simple linear relationship. Specifically, most Retinex-based algorithms have the characteristic feature of always increasing image brightness, which turns them into ideal candidates for effective image dehazing by directly applying Retinex to a hazy image whose intensities have been inverted. In this paper, we give theoretical proof that Retinex on inverted intensities is a solution to the image dehazing problem. Comprehensive qualitative and quantitative results indicate that several classical and modern implementations of Retinex can be transformed into competing image dehazing algorithms performing on par with more complex fog removal methods, and can overcome some of the main challenges associated with this problem. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | Admin @ si @ GAB2018 | Serial | 3146 | ||
Permanent link to this record | |||||
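The "simple linear relationship" claimed in the abstract above can be written in one line. A hedged sketch of the duality, assuming intensities normalized to [0, 1] (consult the paper for the precise Retinex operator and the proof):

```latex
% Hazy image I, illumination-discounting Retinex operator R, dehazed output J.
% Applying Retinex to the inverted image and inverting back yields dehazing:
\[
  J \;=\; 1 - R\!\left(1 - I\right),
  \qquad \text{equivalently} \qquad
  \mathrm{Dehaze}(I) \;=\; 1 - \mathrm{Retinex}(1 - I).
\]
```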
Author | Xialei Liu; Joost Van de Weijer; Andrew Bagdanov | ||||
Title | Leveraging Unlabeled Data for Crowd Counting by Learning to Rank | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 7661 - 7669 | ||
Keywords | Task analysis; Training; Computer vision; Visualization; Estimation; Head; Context modeling | ||||
Abstract | We propose a novel crowd counting approach that leverages abundantly available unlabeled crowd imagery in a learning-to-rank framework. To induce a ranking of cropped images, we use the observation that any sub-image of a crowded scene image is guaranteed to contain the same number or fewer persons than the super-image. This allows us to address the problem of limited size of existing datasets for crowd counting. We collect two crowd scene datasets from Google using keyword searches and query-by-example image retrieval, respectively. We demonstrate how to efficiently learn from these unlabeled datasets by incorporating learning-to-rank in a multi-task network which simultaneously ranks images and estimates crowd density maps. Experiments on two of the most challenging crowd counting datasets show that our approach obtains state-of-the-art results. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.106; 600.120 | Approved | no | ||
Call Number | Admin @ si @ LWB2018 | Serial | 3159 | ||
Permanent link to this record | |||||
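The self-supervised signal in the abstract above is an ordering constraint: a crop of a crowd image cannot contain more people than the image it was cropped from. A minimal sketch of a pairwise hinge loss enforcing that ordering on predicted counts (the margin and the counting network are assumptions, not the paper's exact formulation):

```python
# Sketch of the ranking constraint: predicted counts for crops must not
# exceed the predicted count of their super-image.
import torch

def ranking_loss(count_sub, count_super, margin=0.0):
    """Hinge loss penalizing crops predicted to hold MORE people than their super-image."""
    return torch.clamp(count_sub - count_super + margin, min=0).mean()

# Toy usage with counts that some density-map network would predict:
pred_super = torch.tensor([120.0, 45.0])   # counts for full images
pred_sub   = torch.tensor([130.0, 30.0])   # counts for crops of those images
loss = ranking_loss(pred_sub, pred_super)
print(loss.item())  # only the first pair violates the ordering -> positive loss
```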
Author | Abel Gonzalez-Garcia; Davide Modolo; Vittorio Ferrari | ||||
Title | Objects as context for detecting their semantic parts | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 6907 - 6916 | ||
Keywords | Proposals; Semantics; Wheels; Automobiles; Context modeling; Task analysis; Object detection | ||||
Abstract | We present a semantic part detection approach that effectively leverages object information. We use the object appearance and its class as indicators of what parts to expect. We also model the expected relative location of parts inside the objects based on their appearance. We achieve this with a new network module, called OffsetNet, that efficiently predicts a variable number of part locations within a given object. Our model incorporates all these cues to detect parts in the context of their objects. This leads to considerably higher performance for the challenging task of part detection compared to using part appearance alone (+5 mAP on the PASCAL-Part dataset). We also compare to other part detection methods on both PASCAL-Part and CUB200-2011 datasets. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ GMF2018 | Serial | 3229 | ||
Permanent link to this record | |||||
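A rough sketch of the mechanism the abstract above attributes to OffsetNet: from an object's pooled appearance features, predict a set of part locations (as offsets relative to the object box) together with a per-part presence score. Layer sizes and names are assumptions, and the real module handles a variable number of locations; this is only meant to make the idea concrete:

```python
# Hedged sketch: per-object features -> expected part offsets + presence scores.
import torch
import torch.nn as nn

class OffsetSketch(nn.Module):
    def __init__(self, feat_dim=256, num_parts=10):
        super().__init__()
        self.num_parts = num_parts
        self.trunk = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.offsets = nn.Linear(128, num_parts * 2)   # (dx, dy) per part class
        self.presence = nn.Linear(128, num_parts)      # is this part expected at all?

    def forward(self, obj_feat):
        h = self.trunk(obj_feat)
        off = self.offsets(h).view(-1, self.num_parts, 2)
        prob = torch.sigmoid(self.presence(h))
        return off, prob

net = OffsetSketch()
off, prob = net(torch.randn(4, 256))   # pooled features of 4 detected objects
print(off.shape, prob.shape)           # torch.Size([4, 10, 2]) torch.Size([4, 10])
```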
Author | Patricia Suarez; Angel Sappa; Boris X. Vintimilla; Riad I. Hammoud | ||||
Title | Deep Learning based Single Image Dehazing | Type | Conference Article | ||
Year | 2018 | Publication | 31st IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal |
Volume | Issue | Pages | 1250 - 12507 | ||
Keywords | Gallium nitride; Atmospheric modeling; Generators; Generative adversarial networks; Convergence; Image color analysis | ||||
Abstract | This paper proposes a novel approach to remove haze degradations in RGB images using a stacked conditional Generative Adversarial Network (GAN). It employs a triplet of GANs to remove the haze on each color channel independently. A multiple loss functions scheme, applied over a conditional probabilistic model, is proposed. The proposed GAN architecture learns to remove the haze using as conditional input the hazy images from which the clear images will be obtained. Such formulation ensures a fast model training convergence and a homogeneous model generalization. Experiments showed that the proposed method generates high-quality clear images. | ||||
Address | Salt Lake City; USA; June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MSIAU; 600.086; 600.130; 600.122 | Approved | no | ||
Call Number | Admin @ si @ SSV2018d | Serial | 3197 | ||
Permanent link to this record | |||||
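The "triplet of GANs" in the abstract above treats each RGB channel independently: one conditional generator per channel, each fed its hazy channel, with the three outputs re-stacked into the clear image. A minimal sketch of that wiring (the discriminators and the multiple-loss scheme are omitted, and the generator architecture is an assumption):

```python
# Per-channel dehazing sketch: one generator per RGB channel, outputs re-stacked.
import torch
import torch.nn as nn

def make_generator():
    return nn.Sequential(
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
    )

generators = nn.ModuleList([make_generator() for _ in range(3)])  # R, G, B

def dehaze(hazy_rgb):
    """Apply one generator per channel and re-stack the outputs."""
    channels = [g(hazy_rgb[:, c:c + 1]) for c, g in enumerate(generators)]
    return torch.cat(channels, dim=1)

clear = dehaze(torch.rand(1, 3, 64, 64))
print(clear.shape)  # torch.Size([1, 3, 64, 64])
```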
Author | David Vazquez; Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Antonio Lopez; Adriana Romero; Michal Drozdzal; Aaron Courville | ||||
Title | A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images | Type | Conference Article | ||
Year | 2017 | Publication | 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Deep Learning; Medical Imaging | ||||
Abstract | Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss-rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing Decision Support Systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy images, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. We provide new baselines on this dataset by training standard fully convolutional networks (FCN) for semantic segmentation and significantly outperforming, without any further post-processing, prior results in endoluminal scene segmentation. | ||||
Address | Barcelona; Spain; June 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CARS | ||
Notes | ADAS; MV; 600.075; 600.085; 600.076; 601.281; 600.118 | Approved | no | ||
Call Number | ADAS @ adas @ VBS2017a | Serial | 2880 | ||
Permanent link to this record | |||||
Author | Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Maroua Hammami; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace | ||||
Title | Real-Time Polyp Detection in Colonoscopy Videos: A Preliminary Study For Adapting Still Frame-based Methodology To Video Sequences Analysis | Type | Conference Article | ||
Year | 2017 | Publication | 31st International Congress and Exhibition on Computer Assisted Radiology and Surgery | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Barcelona; Spain; June 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CARS | ||
Notes | MV; no menciona | Approved | no | ||
Call Number | Admin @ si @ ABS2017 | Serial | 2947 | ||
Permanent link to this record | |||||
Author | Boris N. Oreshkin; Pau Rodriguez; Alexandre Lacoste | ||||
Title | TADAM: Task dependent adaptive metric for improved few-shot learning | Type | Conference Article | ||
Year | 2018 | Publication | 32nd Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100. | ||||
Address | Montreal; Canada; December 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | ISE; 600.098; 600.119 | Approved | no | ||
Call Number | Admin @ si @ ORL2018 | Serial | 3140 | ||
Permanent link to this record | |||||
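Metric scaling, the first ingredient highlighted in the abstract above, amounts to placing a learnable temperature on the distance inside a prototypical-style softmax; task conditioning then makes the embedding depend on the support set of the current task. A hedged sketch of the scaled classifier (the notation is assumed, not copied from the paper):

```latex
% Scaled softmax over distances to class prototypes c_k, with learnable
% temperature \alpha; f_\phi is the (task-conditioned) embedding network and
% S_k is the support set of class k.
\[
  p_\alpha(y = k \mid x) \;=\;
  \frac{\exp\bigl(-\alpha\, d\bigl(f_\phi(x),\, c_k\bigr)\bigr)}
       {\sum_{k'} \exp\bigl(-\alpha\, d\bigl(f_\phi(x),\, c_{k'}\bigr)\bigr)},
  \qquad
  c_k \;=\; \frac{1}{|S_k|} \sum_{x_i \in S_k} f_\phi(x_i).
\]
```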
Author | Abel Gonzalez-Garcia; Joost Van de Weijer; Yoshua Bengio | ||||
Title | Image-to-image translation for cross-domain disentanglement | Type | Conference Article | ||
Year | 2018 | Publication | 32nd Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | Montreal; Canada; December 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | Admin @ si @ GWB2018 | Serial | 3155 | ||
Permanent link to this record | |||||
Author | Chenshen Wu; Luis Herranz; Xialei Liu; Joost Van de Weijer; Bogdan Raducanu | ||||
Title | Memory Replay GANs: Learning to Generate New Categories without Forgetting | Type | Conference Article | ||
Year | 2018 | Publication | 32nd Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | 5966-5976 | ||
Keywords | |||||
Abstract | Previous works on sequential learning address the problem of forgetting in discriminative models. In this paper we consider the case of generative models. In particular, we investigate generative adversarial networks (GANs) in the task of learning new categories in a sequential fashion. We first show that sequential fine-tuning renders the network unable to properly generate images from previous categories (i.e., forgetting). Addressing this problem, we propose Memory Replay GANs (MeRGANs), a conditional GAN framework that integrates a memory replay generator. We study two methods to prevent forgetting by leveraging these replays, namely joint training with replay and replay alignment. Qualitative and quantitative experimental results on the MNIST, SVHN and LSUN datasets show that our memory replay approach can generate competitive images while significantly mitigating the forgetting of previous categories. | ||||
Address | Montreal; Canada; December 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | LAMP; 600.106; 600.109; 602.200; 600.120 | Approved | no | ||
Call Number | Admin @ si @ WHL2018 | Serial | 3249 | ||
Permanent link to this record | |||||
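The "joint training with replay" strategy in the abstract above keeps a frozen snapshot of the conditional generator before a new category is learned and uses it to replay samples of the old categories, which are then mixed with the real data of the new one. A self-contained toy sketch of that replay step (the generator, dimensions and the omitted GAN losses are assumptions, not the authors' code):

```python
# Toy sketch of memory replay: sample old categories from a frozen generator
# snapshot and mix them with real data of the new category.
import copy
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy label-conditional generator (stand-in for the MeRGAN generator)."""
    def __init__(self, z_dim=16, n_classes=10, out_dim=32):
        super().__init__()
        self.embed = nn.Embedding(n_classes, z_dim)
        self.net = nn.Sequential(nn.Linear(z_dim * 2, 64), nn.ReLU(), nn.Linear(64, out_dim))
    def forward(self, z, y):
        return self.net(torch.cat([z, self.embed(y)], dim=1))

generator = CondGenerator()
task_id = 3                                   # categories 0..2 were learned before
replay_gen = copy.deepcopy(generator).eval()  # frozen "memory replay" snapshot

real_x = torch.randn(8, 32)                   # real samples of the new category
real_y = torch.full((8,), task_id)
with torch.no_grad():                          # replay the previous categories
    prev_y = torch.randint(0, task_id, (8,))
    prev_x = replay_gen(torch.randn(8, 16), prev_y)

batch_x = torch.cat([real_x, prev_x])          # the GAN update (omitted here) is
batch_y = torch.cat([real_y, prev_y])          # then run on this mixed batch
print(batch_x.shape, batch_y.shape)            # torch.Size([16, 32]) torch.Size([16])
```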
Author | Jaume Garcia; Albert Andaluz; Debora Gil; Francesc Carreras | ||||
Title | Decoupled External Forces in a Predictor-Corrector Segmentation Scheme for LV Contours in Tagged MR Images | Type | Conference Article | ||
Year | 2010 | Publication | 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society | Abbreviated Journal | |
Volume | Issue | Pages | 4805-4808 | ||
Keywords | |||||
Abstract | Computation of functional regional scores requires proper identification of LV contours. On one hand, manual segmentation is robust, but it is time consuming and requires high expertise. On the other hand, the tag pattern in TMR sequences is a problem for automatic segmentation of LV boundaries. We propose a segmentation method based on a predictor-corrector (Active Contours – Shape Models) scheme. Special stress is put on the definition of the AC external forces. First, we introduce a semantic description of the LV that discriminates myocardial tissue by using texture and motion descriptors. Second, in order to ensure convergence regardless of the initial contour, the external energy is decoupled according to the orientation of the edges in the image potential. We have validated the model in terms of error in segmented contours and accuracy of regional clinical scores. | ||||
Address | Buenos Aires (Argentina) | ||||
Corporate Author | IEEE EMB | Thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1557-170X | ISBN | 978-1-4244-4123-5 | Medium | |
Area | Expedition | Conference | EMBC | ||
Notes | IAM | Approved | no | ||
Call Number | IAM @ iam @ GAG2010 | Serial | 1514 | ||
Permanent link to this record | |||||
Author | Anjan Dutta; Zeynep Akata | ||||
Title | Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-based Image Retrieval | Type | Conference Article | ||
Year | 2019 | Publication | 32nd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 5089-5098 | ||
Keywords | |||||
Abstract | Zero-shot sketch-based image retrieval (SBIR) is an emerging task in computer vision, allowing the retrieval of natural images relevant to sketch queries that might not have been seen in the training phase. Existing works either require aligned sketch-image pairs or an inefficient memory fusion layer for mapping the visual information to a semantic space. In this work, we propose a semantically aligned paired cycle-consistent generative (SEM-PCYC) model for zero-shot SBIR, where each branch maps the visual information to a common semantic space via an adversarial training. Each of these branches maintains a cycle consistency that only requires supervision at category levels, and avoids the need for costly aligned sketch-image pairs. A classification criterion on the generators' outputs ensures that the visual-to-semantic space mapping is discriminating. Furthermore, we propose to combine textual and hierarchical side information via a feature selection auto-encoder that selects discriminating side information within the same end-to-end model. Our results demonstrate a significant boost in zero-shot SBIR performance over the state-of-the-art on the challenging Sketchy and TU-Berlin datasets. | ||||
Address | Long Beach; California; USA; June 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | DAG; 600.141; 600.121 | Approved | no | ||
Call Number | Admin @ si @ DuA2019 | Serial | 3268 | ||
Permanent link to this record | |||||
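The abstract above combines three ingredients per branch: adversarial alignment with the semantic space, a cycle back to the original feature space (so only category-level supervision is needed), and a classification term on the generator output. A hedged sketch of such an objective, with assumed weights and notation (see the paper for the actual formulation):

```latex
% G maps visual features x to the semantic space, F maps back; the \lambda
% weights are assumptions used only to illustrate the decomposition.
\[
  \mathcal{L} \;=\;
  \mathcal{L}_{\mathrm{adv}}
  \;+\; \lambda_{\mathrm{cyc}}\,\mathcal{L}_{\mathrm{cyc}}
  \;+\; \lambda_{\mathrm{cls}}\,\mathcal{L}_{\mathrm{cls}},
  \qquad
  \mathcal{L}_{\mathrm{cyc}}
  \;=\; \mathbb{E}\,\bigl\lVert F\bigl(G(x)\bigr) - x \bigr\rVert_1 .
\]
```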
Author | Lu Yu; Vacit Oguz Yazici; Xialei Liu; Joost Van de Weijer; Yongmei Cheng; Arnau Ramisa | ||||
Title | Learning Metrics from Teachers: Compact Networks for Image Embedding | Type | Conference Article | ||
Year | 2019 | Publication | 32nd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 2907-2916 | ||
Keywords | |||||
Abstract | Metric learning networks are used to compute image embeddings, which are widely used in many applications such as image retrieval and face recognition. In this paper, we propose to use network distillation to efficiently compute image embeddings with small networks. Network distillation has been successfully applied to improve image classification, but has hardly been explored for metric learning. To do so, we propose two new loss functions that model the communication of a deep teacher network to a small student network. We evaluate our system on several datasets, including CUB-200-2011, Cars-196 and Stanford Online Products, and show that embeddings computed using small student networks perform significantly better than those computed using standard networks of similar size. Results on a very compact network (MobileNet-0.25), which can be used on mobile devices, show that the proposed method can greatly improve Recall@1 results from 27.5% to 44.6%. Furthermore, we investigate various aspects of distillation for embeddings, including hint and attention layers, semi-supervised learning and cross-quality distillation. | ||||
Address | Long Beach; California; June 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | LAMP; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ YYL2019 | Serial | 3281 | ||
Permanent link to this record | |||||
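The abstract above distills a teacher's embedding space into a compact student using two kinds of loss. A minimal sketch of two generic candidates, an absolute loss that matches embeddings point-wise and a relative loss that matches pairwise distances within a batch (illustrative formulations in the spirit of the record, not necessarily the paper's exact losses):

```python
# Sketch of teacher-to-student embedding distillation losses.
import torch
import torch.nn.functional as F

def absolute_distillation(student_emb, teacher_emb):
    """Match the student embedding to the teacher embedding point-wise."""
    return F.mse_loss(student_emb, teacher_emb)

def relative_distillation(student_emb, teacher_emb):
    """Match the pairwise-distance structure of student and teacher batches."""
    d_s = torch.cdist(student_emb, student_emb)
    d_t = torch.cdist(teacher_emb, teacher_emb)
    return F.mse_loss(d_s, d_t)

s = torch.randn(32, 64, requires_grad=True)   # compact student embeddings
t = torch.randn(32, 64)                        # frozen teacher embeddings
loss = absolute_distillation(s, t) + relative_distillation(s, t)
loss.backward()
print(float(loss))
```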
Author | Ali Furkan Biten; Lluis Gomez; Marçal Rusiñol; Dimosthenis Karatzas | ||||
Title | Good News, Everyone! Context driven entity-aware captioning for news images | Type | Conference Article | ||
Year | 2019 | Publication | 32nd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 12458-12467 | ||
Keywords | |||||
Abstract | Current image captioning systems perform at a merely descriptive level, essentially enumerating the objects in the scene and their relations. Humans, on the contrary, interpret images by integrating several sources of prior knowledge of the world. In this work, we aim to take a step closer to producing captions that offer a plausible interpretation of the scene, by integrating such contextual information into the captioning pipeline. For this we focus on the captioning of images used to illustrate news articles. We propose a novel captioning method that is able to leverage contextual information provided by the text of news articles associated with an image. Our model is able to selectively draw information from the article guided by visual cues, and to dynamically extend the output dictionary to out-of-vocabulary named entities that appear in the context source. Furthermore, we introduce "GoodNews", the largest news image captioning dataset in the literature, and demonstrate state-of-the-art results. | ||||
Address | Long Beach; California; USA; June 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | DAG; 600.129; 600.135; 601.338; 600.121 | Approved | no | ||
Call Number | Admin @ si @ BGR2019 | Serial | 3289 | ||
Permanent link to this record | |||||
Author | Shifeng Zhang; Xiaobo Wang; Ajian Liu; Chenxu Zhao; Jun Wan; Sergio Escalera; Hailin Shi; Zezheng Wang; Stan Z. Li | ||||
Title | A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing | Type | Conference Article | ||
Year | 2019 | Publication | 32nd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 919-928 | ||
Keywords | |||||
Abstract | Face anti-spoofing is essential to prevent face recognition systems from a security breach. Much of the progress has been made by the availability of face anti-spoofing benchmark datasets in recent years. However, existing face anti-spoofing benchmarks have a limited number of subjects (≤170) and modalities (≤2), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and visual modalities. Specifically, it consists of 1,000 subjects with 21,000 videos and each sample has 3 modalities (i.e., RGB, Depth and IR). We also provide a measurement set, evaluation protocol and training/validation/testing subsets, developing a new benchmark for face anti-spoofing. Moreover, we present a new multi-modal fusion method as baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at https://sites.google.com/qq.com/chalearnfacespoofingattackdete/. | ||||
Address | Long Beach; California; USA; June 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPR | ||
Notes | HuPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ ZWL2019 | Serial | 3331 | ||
Permanent link to this record |
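The fusion baseline described in the abstract above re-weights the channels of each modality branch before combining them, so that informative channels are emphasized and weak ones suppressed. A hedged, squeeze-and-excitation-style sketch of such per-modality re-weighting followed by concatenation (an illustrative stand-in under assumed shapes, not the published architecture):

```python
# Sketch: per-modality channel re-weighting before multi-modal fusion.
import torch
import torch.nn as nn

class ChannelReweight(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
    def forward(self, x):                       # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze -> per-channel weights
        return x * w.unsqueeze(-1).unsqueeze(-1)

reweight = nn.ModuleDict({m: ChannelReweight(32) for m in ["rgb", "depth", "ir"]})

def fuse(features):
    """features: dict of per-modality feature maps with matching spatial size."""
    return torch.cat([reweight[m](f) for m, f in features.items()], dim=1)

feats = {m: torch.randn(2, 32, 16, 16) for m in ["rgb", "depth", "ir"]}
print(fuse(feats).shape)  # torch.Size([2, 96, 16, 16])
```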