Author: M. Oliver; G. Haro; Mariella Dimiccoli; B. Mazin; C. Ballester
Title: A Computational Model for Amodal Completion
Type: Journal Article
Year: 2016
Publication: Journal of Mathematical Imaging and Vision   Abbreviated Journal: JMIV
Volume: 56   Issue: 3   Pages: 511–534
Keywords: Perception; visual completion; disocclusion; Bayesian model; relatability; Euler elastica
Abstract: This paper presents a computational model to recover the most likely interpretation of the 3D scene structure from a planar image in which some objects may occlude others. The estimated scene interpretation is obtained by integrating global and local cues and provides both the complete, disoccluded objects that form the scene and their ordering according to depth. Our method first computes several distal scenes that are compatible with the proximal planar image. To compute these hypothesized scenes, we propose a perceptually inspired object disocclusion method that minimizes Euler's elastica while incorporating the relatability of partially occluded contours and the convexity of the disoccluded objects. To estimate the preferred scene, we then rely on a Bayesian model and define probabilities that take into account the global complexity of the objects in each hypothesized scene as well as the effort of bringing these objects into their relative positions in the planar image, which is also measured by a quantity based on Euler's elastica. The model is illustrated with numerical experiments on both synthetic and real images, showing the ability of our model to reconstruct the occluded objects and the preferred perceptual order among them. We also present results on images of the Berkeley dataset with the provided figure-ground ground-truth labeling.
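For reference, the elastica energy minimized by such disocclusion approaches is commonly written over a completion curve Γ as follows (a standard formulation; the exact weights and boundary constraints used in the paper are the authors' own):

\[
E(\Gamma) = \int_{\Gamma} \left( \alpha + \beta\,\kappa(s)^{2} \right) ds, \qquad \alpha, \beta > 0,
\]

where \(\kappa(s)\) is the curvature of \(\Gamma\) at arc length \(s\); the \(\alpha\) term penalizes completion length and the \(\beta\) term penalizes bending.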
 
Notes: MILAB; 601.235   Approved: no
Call Number: Admin @ si @ OHD2016b   Serial: 2745
 

 
Author: Maria Oliver; Gloria Haro; Mariella Dimiccoli; Baptiste Mazin; Coloma Ballester
Title: A computational model of amodal completion
Type: Conference Article
Year: 2016
Publication: SIAM Conference on Imaging Science
Abstract: Identical to the abstract of the journal version, "A Computational Model for Amodal Completion" (JMIV 2016), listed above.
Address: Albuquerque; New Mexico; USA; May 2016
Conference: IS
Notes: MILAB; 601.235   Approved: no
Call Number: Admin @ si @ OHD2016a   Serial: 2788
 

 
Author: Antonio Clavelli
Title: A computational model of eye guidance, searching for text in real scene images
Type: Book Whole
Year: 2014
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Searching for text objects in real scene images is an open problem and a very active computer vision research area. A large number of methods have been proposed that tackle text search either as an extension of document analysis techniques or by drawing on general-purpose object detection methods. However, the general problem of object search in real scene images remains extremely challenging due to the huge variability in object appearance. This thesis builds on the most recent findings in the visual attention literature and presents a novel computational model of eye guidance that aims to better describe text object search in real scene images.
First, we review the relevant state-of-the-art results from the visual attention literature regarding eye movements and visual search. Relevant models of attention are discussed and integrated with recent observations on the role of top-down constraints and the emerging need for a layered model of attention in which saliency is not the only factor guiding attention. Visual attention is then explained by the interaction of several modulating factors, such as objects, value, plans and saliency. We then introduce our probabilistic formulation of attention deployment in real scenes. The model is based on the rationale that oculomotor control depends on two interacting but distinct processes: an attentional process that assigns value to the sources of information, and a motor process that flexibly links information with action.
In this framework, the choice of where to look next is task-dependent and oriented towards classes of objects embedded within pictures of complex scenes. The dependence on the task is taken into account by exploiting the value and the reward of gazing at certain image patches, or proto-objects, that provide a sparse representation of the scene objects.
In the experimental section, the model is tested in laboratory conditions by comparing model simulations with data from eye-tracking experiments. The comparison is qualitative, in terms of observable scan paths, and quantitative, in terms of the statistical similarity of gaze-shift amplitudes. Experiments use eye-tracking data both from a publicly available dataset of faces and text and from newly performed eye-tracking experiments on a dataset of street-view pictures containing text. The last part of this thesis studies the extent to which the proposed model can account for human eye movements in a loosely constrained setting. We used a mobile eye-tracking device and an ad hoc methodology to compare simulated eye data with human data from mobile eye-tracking recordings. This setting allows the model to be tested under incomplete visual information, reproducing a close-to-real-life search task.
 
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey   Editor: Dimosthenis Karatzas; Giuseppe Boccignone; Josep Llados
ISBN: 978-84-940902-6-4
Notes: DAG; 600.077   Approved: no
Call Number: Admin @ si @ Cla2014   Serial: 2571
 

 
Author: Mohamed Ali Souibgui; Y. Kessentini; Alicia Fornes
Title: A conditional GAN based approach for distorted camera captured documents recovery
Type: Conference Article
Year: 2020
Publication: 4th Mediterranean Conference on Pattern Recognition and Artificial Intelligence
Address: Virtual; December 2020
Conference: MedPRAI
Notes: DAG; 600.121   Approved: no
Call Number: Admin @ si @ SKF2020   Serial: 3450
 

 
Author: R. Bertrand; Oriol Ramos Terrades; P. Gomez-Kramer; P. Franco; Jean-Marc Ogier
Title: A Conditional Random Field model for font forgery detection
Type: Conference Article
Year: 2015
Publication: 13th International Conference on Document Analysis and Recognition (ICDAR 2015)
Pages: 576–580
Abstract: Nowadays, document forgery is becoming a real issue. A large number of documents that contain critical information, such as payment slips, invoices or contracts, are constantly subject to fraudster manipulation because of the lack of security around this kind of document. Previously, a system to detect fraudulent documents based on their intrinsic features has been presented. It was especially designed to detect copy-move forgery and imperfections due to fraudster manipulation. However, when a set of characters is not present in the original document, copy-move forgery is not feasible. In that case, the fraudster will use a text toolbox to add or modify information in the document while imitating the font, or will cut and paste characters from another document with similar font properties. This often results in font type errors. Thus, a clue to detecting document forgery consists of finding characters, words or sentences in a document whose font properties differ from their surroundings. To this end, we present in this paper an automatic forgery detection method based on document font features. Using a Conditional Random Field, the probability that a character belongs to a specific font is estimated by comparing the character's font features to a knowledge database. The character is then classified as genuine or fake by comparing its probability of belonging to a certain font type with those of the neighboring characters.
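As a rough illustration of the neighborhood comparison described above (this is not the authors' CRF; the `font_probs` array, window size and threshold are hypothetical), a character can be flagged when its probability of belonging to the locally dominant font falls well below that of its neighbors:

```python
import numpy as np

def flag_suspicious(font_probs, window=3, threshold=0.4):
    """font_probs: (n_chars, n_fonts) array; entry [i, f] is the probability that
    character i belongs to font f, e.g. from matching features to a font database."""
    dominant = int(np.argmax(font_probs.sum(axis=0)))   # most likely document-wide font
    scores = font_probs[:, dominant]
    flagged = []
    for i, s in enumerate(scores):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        neighbours = np.delete(scores[lo:hi], i - lo)    # exclude the character itself
        if neighbours.size and neighbours.mean() - s > threshold:
            flagged.append(i)                            # far less plausible than its context
    return flagged
```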
Address: Nancy; France; August 2015
Conference: ICDAR
Notes: DAG; 600.077   Approved: no
Call Number: Admin @ si @ BRG2015   Serial: 2725
 

 
Author: Patricia Marquez
Title: A Confidence Framework for the Assessment of Optical Flow Performance
Type: Book Whole
Year: 2015
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Optical Flow (OF) is the input to a wide range of decision support systems such as car driver assistance, UAV guidance or medical diagnosis. In these real situations, the absence of ground truth forces us to assess OF quality using quantities computed from either the sequences or the computed optical flow itself. These quantities are generally known as Confidence Measures (CM). Even with a proper confidence measure, we still need a way to evaluate its ability to discard pixels whose OF is prone to large errors. Current approaches only provide a descriptive evaluation of CM performance and are not capable of fairly comparing different confidence measures and optical flow algorithms. It is therefore of prime importance to define a framework and a general road map for the evaluation of optical flow performance.

This thesis provides a framework able to decide which pairs "optical flow – confidence measure" (OF-CM) are best suited for bounding the optical flow error, given a confidence level determined by a decision support system. To design this framework we cover the following points:

Descriptive scores. As a first step, we summarize and analyze the sources of inaccuracy in the output of optical flow algorithms. Second, we present several descriptive plots that visually assess the capability of a CM to bound the OF error. In addition to the descriptive plots, given a plot representing the capability of an OF-CM pair to bound the error, we provide a numeric score that categorizes the plot according to its decreasing profile, that is, a score assessing CM performance.

Statistical framework. We provide a comparison framework that assesses the best-suited OF-CM pair for error bounding using a two-stage cascade process. First, we assess the predictive value of the confidence measures by means of a descriptive plot. Then, for a sample of descriptive plots computed over training frames, we obtain a generic curve that will be used for sequences with no ground truth. As a second step, we evaluate the obtained generic curve and its ability to really reflect the predictive value of a confidence measure, using the variability across training frames by means of ANOVA (see the sketch below).

The presented framework has shown its potential in application to clinical decision support systems. In particular, we have analyzed the impact of different image artifacts, such as noise and decay, on the output of optical flow in a cardiac diagnosis system, and we have improved navigation inside the bronchial tree in bronchoscopy.
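As a minimal illustration of the last evaluation step (hypothetical data; the thesis' actual scores and experimental design are its own), a one-way ANOVA across training frames can test whether a confidence-measure score is stable enough to support a generic curve:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# per_frame_scores[k]: score samples of one confidence measure on training frame k
per_frame_scores = [rng.normal(loc=0.8, scale=0.05, size=200) for _ in range(5)]

F, p = f_oneway(*per_frame_scores)
print(f"F = {F:.2f}, p = {p:.3f}")  # a large p-value suggests low variability across frames
```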
 
Address: July 2015
Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey   Editor: Debora Gil; Aura Hernandez
ISBN: 978-84-943427-2-1
Notes: IAM; 600.075   Approved: no
Call Number: Admin @ si @ Mar2015   Serial: 2687
 

 
Author: Patricia Marquez; Debora Gil; Aura Hernandez-Sabate
Title: A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth
Type: Conference Article
Year: 2011
Publication: IEEE International Conference on Computer Vision – Workshops
Pages: 2042–2049
Keywords: IEEE International Conference on Computer Vision – Workshops
Abstract: Optical flow is a valuable tool for motion analysis in autonomous navigation systems. A reliable application requires determining the accuracy of the computed optical flow, which is a main challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
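A minimal sketch of the underlying idea (an assumed formulation based on the condition number of the Lucas-Kanade structure tensor; the paper's exact definition, weighting and normalization may differ):

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def lk_condition_number(image, window=5, eps=1e-12):
    """Per-pixel condition number of the 2x2 Lucas-Kanade structure tensor."""
    I = image.astype(float)
    Ix, Iy = sobel(I, axis=1), sobel(I, axis=0)
    Sxx = uniform_filter(Ix * Ix, window)
    Sxy = uniform_filter(Ix * Iy, window)
    Syy = uniform_filter(Iy * Iy, window)
    tr, det = Sxx + Syy, Sxx * Syy - Sxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam_max, lam_min = tr / 2.0 + disc, tr / 2.0 - disc
    return lam_max / (lam_min + eps)   # large values flag numerically unstable (unreliable) flow
```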
Publisher: IEEE   Place of Publication: Barcelona (Spain)
Language: English   Summary Language: English
Conference: ICCVW
Notes: IAM; ADAS   Approved: no
Call Number: IAM @ iam @ MGH2011   Serial: 1682
 

 
Author: Muhammad Muzzamil Luqman; Thierry Brouard; Jean-Yves Ramel; Josep Llados
Title: A Content Spotting System For Line Drawing Graphic Document Images
Type: Conference Article
Year: 2010
Publication: 20th International Conference on Pattern Recognition
Volume: 20   Pages: 3420–3423
Abstract: We present a content spotting system for line drawing graphic document images. The proposed system is sufficiently domain independent and takes keyword-based information retrieval for graphic documents one step forward, to Query By Example (QBE) and focused retrieval. In the offline learning mode, we vectorize the documents in the repository, represent them by attributed relational graphs, extract regions of interest (ROIs) from them, convert each ROI to a fuzzy structural signature, cluster similar signatures to form ROI classes, and build an index for the repository. In the online querying mode, a Bayesian network classifier recognizes the ROIs in the query image and the corresponding documents are fetched by looking up the repository index. Experimental results are presented for synthetic images of architectural and electronic documents.
ISSN: 1051-4651   ISBN: 978-1-4244-7542-1
Conference: ICPR
Notes: DAG   Approved: no
Call Number: DAG @ dag @ LBR2010b   Serial: 1460
 

 
Author: X. Binefa; Xavier Roca; Jordi Vitria
Title: A Contrast Approach to Depth from Focus
Type: Miscellaneous
Year: 1997
Publication: VII Simposium Nacional de Reconocimiento de Formas y Analisis de Imagenes
Address: Barcelona
Notes: OR; ISE; MV   Approved: no
Call Number: BCNPCL @ bcnpcl @ BRV1997   Serial: 63
 

 
Author: X. Binefa; Jordi Vitria
Title: A contrast based focusing criterium
Type: Miscellaneous
Year: 1996
Publication: IEEE International Conference on Pattern Recognition, Vol. A, pp. 805–809
Notes: OR; MV   Approved: no
Call Number: BCNPCL @ bcnpcl @ BiV1996   Serial: 79
 

 
Author: Karim Lekadir; Alfiia Galimzianova; Angels Betriu; Maria del Mar Vila; Laura Igual; Daniel L. Rubin; Elvira Fernandez-Giraldez; Petia Radeva; Sandy Napel
Title: A Convolutional Neural Network for Automatic Characterization of Plaque Composition in Carotid Ultrasound
Type: Journal Article
Year: 2017
Publication: IEEE Journal of Biomedical and Health Informatics   Abbreviated Journal: J-BHI
Volume: 21   Issue: 1   Pages: 48–55
Abstract: Characterization of carotid plaque composition, more specifically the amount of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low costs and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents. In this paper, we propose to address this challenging problem by exploiting the unique capabilities of the emerging deep learning framework. More specifically, and unlike existing works which require a priori definition of specific imaging features or thresholding values, we propose to build a convolutional neural network (CNN) that will automatically extract from the images the information that is optimal for the identification of the different plaque constituents. We used approximately 90 000 patches extracted from a database of images and corresponding expert plaque characterizations to train and to validate the proposed CNN. The results of cross-validation experiments show a correlation of about 0.90 with the clinical assessment for the estimation of lipid core, fibrous cap, and calcified tissue areas, indicating the potential of deep learning for the challenging task of automatic characterization of plaque composition in carotid ultrasound.
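For concreteness, a minimal patch-classification CNN of the kind the abstract describes might look as follows (this is not the paper's architecture; patch size, channel counts and class count are assumptions):

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Classifies 32x32 grayscale ultrasound patches into lipid/fibrous/calcified."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # 32x32 input -> 8x8 feature maps

    def forward(self, x):                       # x: (batch, 1, 32, 32)
        return self.classifier(self.features(x).flatten(1))

logits = PatchCNN()(torch.randn(4, 1, 32, 32))  # -> (4, 3) per-class scores
```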
Notes: MILAB; not mentioned   Approved: no
Call Number: Admin @ si @ LGB2017   Serial: 2931
 

 
Author: Marta Ligero; Alonso Garcia Ruiz; Cristina Viaplana; Guillermo Villacampa; Maria V. Raciti; Jaid Landa; Ignacio Matos; Juan Martin Liberal; Maria Ochoa de Olza; Cinta Hierro; Joaquin Mateo; Macarena Gonzalez; Rafael Morales Barrera; Cristina Suarez; Jordi Rodon; Elena Elez; Irene Braña; Eva Muñoz-Couselo; Ana Oaknin; Roberta Fasani; Paolo Nuciforo; Debora Gil; Carlota Rubio Perez; Joan Seoane; Enriqueta Felip; Manuel Escobar; Josep Tabernero; Joan Carles; Rodrigo Dienstmann; Elena Garralda; Raquel Perez Lopez
Title: A CT-based radiomics signature is associated with response to immune checkpoint inhibitors in advanced solid tumors
Type: Journal Article
Year: 2021
Publication: Radiology
Volume: 299   Issue: 1   Pages: 109–119
Abstract: Background: Reliable predictive imaging markers of response to immune checkpoint inhibitors are needed. Purpose: To develop and validate a pretreatment CT-based radiomics signature to predict response to immune checkpoint inhibitors in advanced solid tumors. Materials and Methods: In this retrospective study, a radiomics signature was developed in patients with advanced solid tumors (including breast, cervix, gastrointestinal) treated with anti-programmed cell death-1 or programmed cell death ligand-1 monotherapy from August 2012 to May 2018 (cohort 1). This was tested in patients with bladder and lung cancer (cohorts 2 and 3). Radiomics variables were extracted from all metastases delineated at pretreatment CT and selected by using an elastic-net model. A regression model combined radiomics and clinical variables with response as the end point. Biologic validation of the radiomics score with RNA profiling of cytotoxic cells (cohort 4) was assessed with Mann-Whitney analysis. Results: The radiomics signature was developed in 85 patients (cohort 1: mean age, 58 years ± 13 [standard deviation]; 43 men) and tested on 46 patients (cohort 2: mean age, 70 years ± 12; 37 men) and 47 patients (cohort 3: mean age, 64 years ± 11; 40 men). Biologic validation was performed in a further cohort of 20 patients (cohort 4: mean age, 60 years ± 13; 14 men). The radiomics signature was associated with clinical response to immune checkpoint inhibitors (area under the curve [AUC], 0.70; 95% CI: 0.64, 0.77; P < .001). In cohorts 2 and 3, the AUC was 0.67 (95% CI: 0.58, 0.76) and 0.67 (95% CI: 0.56, 0.77; P < .001), respectively. A radiomics-clinical signature (including baseline albumin level and lymphocyte count) improved on radiomics-only performance (AUC, 0.74 [95% CI: 0.63, 0.84; P < .001]; Akaike information criterion, 107.00 and 109.90, respectively). Conclusion: A pretreatment CT-based radiomics signature is associated with response to immune checkpoint inhibitors, likely reflecting the tumor immunophenotype. © RSNA, 2021. Online supplemental material is available for this article. See also the editorial by Summers in this issue.
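A minimal sketch of the kind of model the abstract describes (not the study's code; the feature matrix X, response vector y and hyper-parameters are placeholders): an elastic-net-penalized logistic model on radiomics features, evaluated by cross-validated ROC AUC.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(85, 120))           # radiomics features per patient (placeholder)
y = rng.integers(0, 2, size=85)          # 1 = clinical response (placeholder)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="elasticnet", solver="saga", l1_ratio=0.5, C=0.1, max_iter=5000),
)
proba = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", round(roc_auc_score(y, proba), 3))
```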
Notes: IAM; 600.145   Approved: no
Call Number: Admin @ si @ LGV2021   Serial: 3593
 

 
Author: Albert Gordo
Title: A Cyclic Page Layout Descriptor for Document Classification & Retrieval
Type: Report
Year: 2009
Publication: CVC Technical Report
Volume: 128
Corporate Author: Computer Vision Center   Thesis: Master's thesis
Place of Publication: Bellaterra, Barcelona
Notes: CIC; DAG   Approved: no
Call Number: Admin @ si @ Gor2009   Serial: 2387
 

 
Author: Robert Benavente; Maria Vanrell; Ramon Baldrich
Title: A data set for fuzzy colour naming
Type: Journal
Year: 2006
Publication: Color Research & Application   Volume: 31   Issue: 1   Pages: 48–56
Notes: CIC   Approved: no
Call Number: CAT @ cat @ BVB2006   Serial: 590
 

 
Author: Shifeng Zhang; Xiaobo Wang; Ajian Liu; Chenxu Zhao; Jun Wan; Sergio Escalera; Hailin Shi; Zezheng Wang; Stan Z. Li
Title: A Dataset and Benchmark for Large-scale Multi-modal Face Anti-spoofing
Type: Conference Article
Year: 2019
Publication: 32nd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 919–928
Abstract: Face anti-spoofing is essential to prevent face recognition systems from security breaches. Much of the recent progress has been driven by the availability of face anti-spoofing benchmark datasets. However, existing face anti-spoofing benchmarks have a limited number of subjects (≤170) and modalities (≤2), which hinders the further development of the academic community. To facilitate face anti-spoofing research, we introduce a large-scale multi-modal dataset, namely CASIA-SURF, which is the largest publicly available dataset for face anti-spoofing in terms of both subjects and visual modalities. Specifically, it consists of 1,000 subjects with 21,000 videos, and each sample has 3 modalities (i.e., RGB, Depth and IR). We also provide a measurement set, an evaluation protocol and training/validation/testing subsets, establishing a new benchmark for face anti-spoofing. Moreover, we present a new multi-modal fusion method as a baseline, which performs feature re-weighting to select the more informative channel features while suppressing the less useful ones for each modality. Extensive experiments have been conducted on the proposed dataset to verify its significance and generalization capability. The dataset is available at https://sites.google.com/qq.com/chalearnfacespoofingattackdete/.
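As a simplified illustration of the fusion baseline described above (not the exact CASIA-SURF implementation; channel counts and reduction ratio are assumptions), modality feature maps can be concatenated and re-weighted with a squeeze-and-excitation style gate:

```python
import torch
import torch.nn as nn

class ReweightFusion(nn.Module):
    def __init__(self, channels_per_modality=64, n_modalities=3, reduction=16):
        super().__init__()
        c = channels_per_modality * n_modalities
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(c, c // reduction), nn.ReLU(),
            nn.Linear(c // reduction, c), nn.Sigmoid(),
        )

    def forward(self, rgb, depth, ir):          # each: (batch, C, H, W) feature maps
        x = torch.cat([rgb, depth, ir], dim=1)  # concatenate along channels
        w = self.gate(x).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # informative channels boosted, weak ones suppressed

f = ReweightFusion()
fused = f(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```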
Address: California; June 2019
Conference: CVPR
Notes: HuPBA; no proj   Approved: no
Call Number: Admin @ si @ ZWL2019   Serial: 3331