Records | Links | |||||
---|---|---|---|---|---|---|
Author | Hassan Ahmed Sial |
Title | Estimating Light Effects from a Single Image: Deep Architectures and Ground-Truth Generation | Type | Book Whole | |||
Year | 2021 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | In this thesis, we explore how to estimate the effects of light interacting with scene objects from a single image. To achieve this goal, we focus on recovering intrinsic components such as reflectance and shading, or light properties such as color and position, using deep architectures. The success of these approaches relies on training on large and diversified image datasets. Therefore, we present several contributions toward this goal: (a) a data-augmentation technique; (b) a ground-truth for an existing multi-illuminant dataset; (c) a family of synthetic datasets, SID (Surreal Intrinsic Datasets), with diversified backgrounds and coherent light conditions; and (d) a practical pipeline to create hybrid ground-truths that overcomes the complexity of acquiring realistic light conditions at scale. In parallel with the creation of these datasets, we trained flexible encoder-decoder deep architectures incorporating physical constraints from image formation models.
In the last part of the thesis, we apply all the previous experience to two different problems. Firstly, we create a large hybrid Doc3DShade dataset with real shading and synthetic reflectance under complex illumination conditions, which is used to train a two-stage architecture that improves character recognition on unwrapped documents under complex lighting. Secondly, we tackle single-image scene relighting by extending both the SID dataset, to present stronger shading and shadow effects, and the deep architectures, to use intrinsic components when estimating relit images. |
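The physical constraint the thesis architectures build on is, in the Lambertian setting, that an image factors into reflectance times shading. A minimal numpy sketch of that composition and a typical reconstruction penalty (illustrative only; the thesis networks themselves are not reproduced here):

```python
import numpy as np

def compose(reflectance, shading):
    """Lambertian image formation: image = reflectance * shading."""
    return reflectance * shading

def reconstruction_error(image, reflectance, shading):
    """Mean squared gap between the observed image and the image
    re-composed from a predicted intrinsic decomposition, a common
    physics-based training penalty."""
    return float(np.mean((image - compose(reflectance, shading)) ** 2))

# A tiny example: uniform reflectance under uniform shading.
r = np.full((2, 2, 3), 0.5)          # per-channel reflectance
s = np.full((2, 2, 1), 0.8)          # scalar shading, broadcast over channels
img = compose(r, s)
```

A perfect decomposition re-composes the image exactly, so its penalty is zero; any mismatch in shading or reflectance raises it.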
Address | September 2021 | |||||
Corporate Author | Thesis | Ph.D. thesis | ||||
Publisher | IMPRIMA | Place of Publication | Editor | Maria Vanrell; Ramon Baldrich | |
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | 978-84-122714-8-5 | Medium | |||
Area | Expedition | Conference | ||||
Notes | CIC; | Approved | no | |||
Call Number | Admin @ si @ Sia2021 | Serial | 3607 | |||
Permanent link to this record | ||||||
Author | Yasuko Sugito; Trevor Canham; Javier Vazquez; Marcelo Bertalmio |
Title | A Study of Objective Quality Metrics for HLG-Based HDR/WCG Image Coding | Type | Journal Article | |||
Year | 2021 | Publication | SMPTE Motion Imaging Journal | Abbreviated Journal | SMPTE | |
Volume | 130 | Issue | 4 | Pages | 53 - 65 | |
Keywords | ||||||
Abstract | In this work, we study the suitability of high dynamic range, wide color gamut (HDR/WCG) objective quality metrics to assess the perceived deterioration of compressed images encoded using the hybrid log-gamma (HLG) method, which is the standard for HDR television. Several image quality metrics have been developed to deal specifically with HDR content, although in previous work we showed that the best results (i.e., better matches to the opinion of human expert observers) are obtained by an HDR metric that consists simply of applying a given standard dynamic range metric, called visual information fidelity (VIF), directly to HLG-encoded images. However, all these HDR metrics ignore the chroma components in their calculations, that is, they consider only the luminance channel. For this reason, in the current work, we conduct subjective evaluation experiments in a professional setting using compressed HDR/WCG images encoded with HLG and analyze the ability of the best HDR metric to detect perceivable distortions in the chroma components, as well as the suitability of popular color metrics (including ΔEITP, which supports parameters for HLG) to correlate with the opinion scores. Our first contribution is to show that there is a need to consider the chroma components in HDR metrics, as there are color distortions that subjects perceive but that the best HDR metric fails to detect. Our second contribution is the surprising result that VIF, which utilizes only the luminance channel, correlates much better with the subjective evaluation scores than the metrics investigated that do consider the color components. | |||||
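For reference, the HLG encoding these metrics operate on is defined by the opto-electrical transfer function of Rec. ITU-R BT.2100. A numpy sketch of that encoding step only (constants taken from the recommendation; none of the quality metrics discussed above are reproduced here):

```python
import numpy as np

# HLG OETF constants from Rec. ITU-R BT.2100.
A = 0.17883277
B = 1.0 - 4.0 * A                # 0.28466892
C = 0.5 - A * np.log(4.0 * A)    # 0.55991073

def hlg_oetf(e):
    """Map linear scene light E in [0, 1] to an HLG signal E' in [0, 1].

    Square-root segment below 1/12, logarithmic segment above it;
    the two pieces meet continuously at E' = 0.5.
    """
    e = np.asarray(e, dtype=np.float64)
    return np.where(e <= 1.0 / 12.0,
                    np.sqrt(3.0 * e),
                    # Guard the log argument: np.where evaluates both branches.
                    A * np.log(np.maximum(12.0 * e - B, 1e-12)) + C)
```

The breakpoint maps to exactly 0.5 and full-scale linear light maps to (approximately) 1.0, which is a quick sanity check on the constants.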
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | CIC | Approved | no | |||
Call Number | SCV2021 | Serial | 3671 | |||
Permanent link to this record | ||||||
Author | Bojana Gajic; Ariel Amato; Ramon Baldrich; Joost Van de Weijer; Carlo Gatta |
Title | Area Under the ROC Curve Maximization for Metric Learning | Type | Conference Article | |||
Year | 2022 | Publication | CVPR 2022 Workshop on Efficient Deep Learning for Computer Vision (ECV 2022, 5th Edition) | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | Training; Computer vision; Conferences; Area measurement; Benchmark testing; Pattern recognition | |||||
Abstract | Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is a typical performance measure of recognition systems) can induce an implicit ranking suitable for retrieval problems. This hypothesis is supported by previous work that proved that a curve dominates in ROC space if and only if it dominates in Precision-Recall space. To test this hypothesis, we design and maximize an approximate, differentiable relaxation of the area under the ROC curve. The proposed AUC loss achieves state-of-the-art results on two large-scale retrieval benchmark datasets (Stanford Online Products and DeepFashion In-Shop). Moreover, the AUC loss achieves comparable performance to more complex, domain-specific, state-of-the-art methods for vehicle re-identification. | |||||
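The idea of maximizing AUC directly can be illustrated with a generic sigmoid relaxation of the pairwise ranking indicator. This is a sketch, not the paper's exact loss; the `temperature` smoothing parameter is an assumption for illustration:

```python
import numpy as np

def soft_auc_loss(pos_scores, neg_scores, temperature=1.0):
    """Differentiable surrogate for 1 - AUC.

    AUC is the fraction of (positive, negative) score pairs ranked
    correctly; the 0/1 indicator is relaxed with a sigmoid so the
    loss can be minimized by gradient descent.
    """
    # All pairwise margins pos_i - neg_j.
    diffs = pos_scores[:, None] - neg_scores[None, :]
    # sigmoid(-diff / T): near 0 for well-ranked pairs, near 1 otherwise.
    pairwise = 1.0 / (1.0 + np.exp(diffs / temperature))
    return float(pairwise.mean())
```

When positives score well above negatives the loss approaches 0; with the ranking inverted it approaches 1, mirroring 1 - AUC.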
Address | New Orleans, USA; 20 June 2022 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CVPRW | |||
Notes | CIC; LAMP; | Approved | no | |||
Call Number | Admin @ si @ GAB2022 | Serial | 3700 | |||
Permanent link to this record | ||||||
Author | Bojana Gajic; Ramon Baldrich |
Title | Cross-domain fashion image retrieval | Type | Conference Article | |||
Year | 2018 | Publication | CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) | Abbreviated Journal | ||
Volume | Issue | Pages | 19500-19502 | |||
Keywords | ||||||
Abstract | Cross-domain image retrieval is a challenging task that involves matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval: matching an image of a fashion item taken by users to images of the same item taken under controlled conditions, usually by a professional photographer. Since the products seen at train and test time differ, we use a triplet loss to train the network. We stress the importance of properly training a simple architecture, as well as adapting general models to the specific task. |
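The triplet loss mentioned above can be sketched on plain embedding vectors (a minimal single-triplet version; batch mining and the embedding network itself are out of scope, and the margin value is an assumption):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on embedding vectors.

    Pulls the user photo (anchor) toward the catalog photo of the same
    product (positive) and pushes it away from a different product
    (negative) until their distance gap exceeds `margin`.
    """
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)
```

Because unseen products appear at test time, the loss shapes the embedding space by relative distances rather than by fixed class labels.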
Address | Salt Lake City, USA; 22 June 2018 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CVPRW | |||
Notes | CIC; 600.087 | Approved | no | |||
Call Number | Admin @ si @ | Serial | 3709 | |||
Permanent link to this record | ||||||
Author | Bojana Gajic; Eduard Vazquez; Ramon Baldrich |
Title | Evaluation of Deep Image Descriptors for Texture Retrieval | Type | Conference Article | |||
Year | 2017 | Publication | Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) | Abbreviated Journal | ||
Volume | Issue | Pages | 251-257 | |||
Keywords | Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation | |||||
Abstract | The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification, and the topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best to describe a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers perform best, combining basic filters, as in the primary visual cortex, with a degree of higher-level information to describe more complex textures. |
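Once a layer's descriptors are extracted, texture retrieval reduces to nearest-neighbour ranking. A minimal cosine-similarity sketch (the pooled per-layer descriptors are assumed precomputed; no VGG inference is shown):

```python
import numpy as np

def retrieve(query_desc, gallery_descs, k=5):
    """Rank gallery textures by cosine similarity to a query descriptor
    (e.g. pooled activations from one convolutional layer), returning
    the indices of the top-k matches."""
    q = query_desc / np.linalg.norm(query_desc)
    g = gallery_descs / np.linalg.norm(gallery_descs, axis=1, keepdims=True)
    sims = g @ q                       # cosine similarity to every gallery item
    return np.argsort(-sims)[:k]       # best matches first
```

Evaluating a layer then amounts to checking how often the top-ranked items share the query's texture class.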
Address | Porto, Portugal; 27 February – 1 March 2017 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | VISIGRAPP | |||
Notes | CIC; 600.087 | Approved | no | |||
Call Number | Admin @ si @ | Serial | 3710 | |||
Permanent link to this record | ||||||
Author | Marcos V Conde; Javier Vazquez; Michael S Brown; Radu Timofte |
Title | NILUT: Conditional Neural Implicit 3D Lookup Tables for Image Enhancement | Type | Conference Article | |||
Year | 2024 | Publication | 38th AAAI Conference on Artificial Intelligence | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | 3D lookup tables (3D LUTs) are a key component for image enhancement. Modern image signal processors (ISPs) have dedicated support for these as part of the camera rendering pipeline. Cameras typically provide multiple options for picture styles, where each style is usually obtained by applying a unique handcrafted 3D LUT. Current approaches for learning and applying 3D LUTs are notably fast, yet not so memory-efficient, as storing multiple 3D LUTs is required. For this reason and other implementation limitations, their use on mobile devices is less popular. In this work, we propose a Neural Implicit LUT (NILUT), an implicitly defined continuous 3D color transformation parameterized by a neural network. We show that NILUTs are capable of accurately emulating real 3D LUTs. Moreover, a NILUT can be extended to incorporate multiple styles into a single network with the ability to blend styles implicitly. Our novel approach is memory-efficient, controllable and can complement previous methods, including learned ISPs. | |||||
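The core idea, a small network standing in for a 3D LUT's color transform, can be sketched as a plain numpy forward pass. The widths, tanh activations, and random initialization below are placeholder assumptions; the actual NILUT architecture and its training to match target LUTs are not reproduced:

```python
import numpy as np

def init_mlp(widths, rng):
    """Randomly initialized MLP parameters as a list of (W, b) pairs."""
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(widths[:-1], widths[1:])]

def nilut_forward(rgb, params):
    """Map input RGB coordinates through the implicit 'LUT' network:
    a continuous function R^3 -> R^3 replacing a sampled 3D table."""
    x = rgb
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)    # smooth hidden activations
    W, b = params[-1]
    return x @ W + b              # unconstrained output colors

# 3 -> 32 -> 32 -> 3: one color in, one transformed color out.
params = init_mlp([3, 32, 32, 3], np.random.default_rng(0))
out = nilut_forward(np.array([[0.2, 0.5, 0.8]]), params)
```

Because the transform is a single network rather than one stored table per style, several styles can share weights, which is where the memory saving comes from.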
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | AAAI | |||
Notes | CIC; MACO | Approved | no | |||
Call Number | Admin @ si @ CVB2024 | Serial | 3872 | |||
Permanent link to this record | ||||||
Author | Danna Xue; Javier Vazquez; Luis Herranz; Yang Zhang; Michael S Brown |
Title | Integrating High-Level Features for Consistent Palette-based Multi-image Recoloring | Type | Journal Article | |||
Year | 2023 | Publication | Computer Graphics Forum | Abbreviated Journal | CGF | |
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Achieving visually consistent colors across multiple images is important when images are used in photo albums, websites, and brochures. Unfortunately, only a handful of methods address multi-image color consistency compared to one-to-one color transfer techniques. Furthermore, existing methods do not incorporate high-level features that can assist graphic designers in their work. To address these limitations, we introduce a framework that builds upon a previous palette-based color consistency method and incorporates three high-level features: white balance, saliency, and color naming. We show how these features overcome the limitations of the prior multi-consistency workflow and showcase the user-friendly nature of our framework. | |||||
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | CIC; MACO | Approved | no | |||
Call Number | Admin @ si @ XVH2023 | Serial | 3883 | |||
Permanent link to this record | ||||||
Author | Jaykishan Patel; Alban Flachot; Javier Vazquez; David H. Brainard; Thomas S. A. Wallis; Marcus A. Brubaker; Richard F. Murray |
Title | A deep convolutional neural network trained to infer surface reflectance is deceived by mid-level lightness illusions | Type | Journal Article | |||
Year | 2023 | Publication | Journal of Vision | Abbreviated Journal | JV | |
Volume | 23 | Issue | 9 | Pages | 4817-4817 | |
Keywords | ||||||
Abstract | A long-standing view is that lightness illusions are by-products of strategies employed by the visual system to stabilize its perceptual representation of surface reflectance against changes in illumination. Computationally, one such strategy is to infer reflectance from the retinal image, and to base the lightness percept on this inference. CNNs trained to infer reflectance from images have proven successful at solving this problem under limited conditions. To evaluate whether these CNNs provide suitable starting points for computational models of human lightness perception, we tested a state-of-the-art CNN on several lightness illusions, and compared its behaviour to prior measurements of human performance. We trained a CNN (Yu & Smith, 2019) to infer reflectance from luminance images. The network had a 30-layer hourglass architecture with skip connections. We trained the network via supervised learning on 100K images, rendered in Blender, each showing randomly placed geometric objects (surfaces, cubes, tori, etc.), with random Lambertian reflectance patterns (solid, Voronoi, or low-pass noise), under randomized point+ambient lighting. The renderer also provided the ground-truth reflectance images required for training. After training, we applied the network to several visual illusions. These included the argyle, Koffka-Adelson, snake, White’s, checkerboard assimilation, and simultaneous contrast illusions, along with their controls where appropriate. The CNN correctly predicted larger illusions in the argyle, Koffka-Adelson, and snake images than in their controls. It also correctly predicted an assimilation effect in White's illusion. It did not, however, account for the checkerboard assimilation or simultaneous contrast effects. These results are consistent with the view that at least some lightness phenomena are by-products of a rational approach to inferring stable representations of physical properties from intrinsically ambiguous retinal images. Furthermore, they suggest that CNN models may be a promising starting point for new models of human lightness perception. | |||||
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ||||
Notes | MACO; CIC | Approved | no | |||
Call Number | Admin @ si @ PFV2023 | Serial | 3890 | |||
Permanent link to this record | ||||||
Author | Marcos V Conde; Florin Vasluianu; Javier Vazquez; Radu Timofte |
Title | Perceptual image enhancement for smartphone real-time applications | Type | Conference Article | |||
Year | 2023 | Publication | Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision | Abbreviated Journal | ||
Volume | Issue | Pages | 1848-1858 | |||
Keywords | ||||||
Abstract | Recent advances in camera designs and imaging pipelines allow us to capture high-quality images using smartphones. However, due to the small size and lens limitations of the smartphone cameras, we commonly find artifacts or degradation in the processed images. The most common unpleasant effects are noise artifacts, diffraction artifacts, blur, and HDR overexposure. Deep learning methods for image restoration can successfully remove these artifacts. However, most approaches are not suitable for real-time applications on mobile devices due to their heavy computation and memory requirements. In this paper, we propose LPIENet, a lightweight network for perceptual image enhancement, with the focus on deploying it on smartphones. Our experiments show that, with much fewer parameters and operations, our model can deal with the mentioned artifacts and achieve competitive performance compared with state-of-the-art methods on standard benchmarks. Moreover, to prove the efficiency and reliability of our approach, we deployed the model directly on commercial smartphones and evaluated its performance. Our model can process 2K resolution images under 1 second in mid-level commercial smartphones. | |||||
Address | Waikoloa; Hawaii; USA; January 2023 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | WACV | |||
Notes | MACO; CIC | Approved | no | |||
Call Number | Admin @ si @ CVV2023 | Serial | 3900 | |||
Permanent link to this record | ||||||
Author | Danna Xue; Luis Herranz; Javier Vazquez; Yanning Zhang |
Title | Burst Perception-Distortion Tradeoff: Analysis and Evaluation | Type | Conference Article | |||
Year | 2023 | Publication | IEEE International Conference on Acoustics, Speech and Signal Processing | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Burst image restoration attempts to effectively utilize the complementary cues appearing in sequential images to produce a high-quality image. Most current methods use all the available images to obtain the reconstructed image. However, using more images for burst restoration is not always the best option regarding reconstruction quality and efficiency, as the images acquired by handheld imaging devices suffer from degradation and misalignment caused by the camera noise and shake. In this paper, we extend the perception-distortion tradeoff theory by introducing multiple-frame information. We propose the area of the unattainable region as a new metric for perception-distortion tradeoff evaluation and comparison. Based on this metric, we analyse the performance of burst restoration from the perspective of the perception-distortion tradeoff under both aligned bursts and misaligned bursts situations. Our analysis reveals the importance of inter-frame alignment for burst restoration and shows that the optimal burst length for the restoration model depends both on the degree of degradation and misalignment. | |||||
Address | Rhodes Island; Greece; June 2023 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | ICASSP | |||
Notes | CIC; MACO | Approved | no | |||
Call Number | Admin @ si @ XHV2023 | Serial | 3909 | |||
Permanent link to this record | ||||||
Author | Yawei Li; Yulun Zhang; Radu Timofte; Luc Van Gool; Zhijun Tu; Kunpeng Du; Hailing Wang; Hanting Chen; Wei Li; Xiaofei Wang; Jie Hu; Yunhe Wang; Xiangyu Kong; Jinlong Wu; Dafeng Zhang; Jianxing Zhang; Shuai Liu; Furui Bai; Chaoyu Feng; Hao Wang; Yuqian Zhang; Guangqi Shao; Xiaotao Wang; Lei Lei; Rongjian Xu; Zhilu Zhang; Yunjin Chen; Dongwei Ren; Wangmeng Zuo; Qi Wu; Mingyan Han; Shen Cheng; Haipeng Li; Ting Jiang; Chengzhi Jiang; Xinpeng Li; Jinting Luo; Wenjie Lin; Lei Yu; Haoqiang Fan; Shuaicheng Liu; Aditya Arora; Syed Waqas Zamir; Javier Vazquez; Konstantinos G. Derpanis; Michael S. Brown; Hao Li; Zhihao Zhao; Jinshan Pan; Jiangxin Dong; Jinhui Tang; Bo Yang; Jingxiang Chen; Chenghua Li; Xi Zhang; Zhao Zhang; Jiahuan Ren; Zhicheng Ji; Kang Miao; Suiyi Zhao; Huan Zheng; YanYan Wei; Kangliang Liu; Xiangcheng Du; Sijie Liu; Yingbin Zheng; Xingjiao Wu; Cheng Jin; Rajeev Irny; Sriharsha Koundinya; Vighnesh Kamath; Gaurav Khandelwal; Sunder Ali Khowaja; Jiseok Yoon; Ik Hyun Lee; Shijie Chen; Chengqiang Zhao; Huabin Yang; Zhongjian Zhang; Junjia Huang; Yanru Zhang |
Title | NTIRE 2023 challenge on image denoising: Methods and results | Type | Conference Article | |||
Year | 2023 | Publication | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal | ||
Volume | Issue | Pages | 1904-1920 | |||
Keywords | ||||||
Abstract | This paper reviews the NTIRE 2023 challenge on image denoising (σ = 50) with a focus on the proposed solutions and results. The aim is to obtain a network design capable of producing high-quality results with the best performance measured by PSNR for image denoising. Independent additive white Gaussian noise (AWGN) is assumed and the noise level is 50. The challenge had 225 registered participants, and 16 teams made valid submissions. Together they gauge the state of the art in image denoising. | |||||
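The challenge's degradation model and ranking metric are both simple to state in code. A numpy sketch of AWGN corruption at σ = 50 on 8-bit data and the PSNR score used for ranking (the challenge's exact evaluation harness is not reproduced):

```python
import numpy as np

def add_awgn(img, sigma=50.0, rng=None):
    """Corrupt an 8-bit image with independent additive white Gaussian
    noise of standard deviation `sigma`, clipped to the valid range."""
    if rng is None:
        rng = np.random.default_rng(0)   # fixed seed for reproducibility
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 255.0)

def psnr(clean, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a clean reference and a
    denoised estimate; higher is better."""
    mse = np.mean((clean.astype(np.float64) - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

With σ = 50 the noisy input itself sits around 14 dB against the clean reference, so submitted denoisers are scored on how far above that baseline they land.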
Address | Vancouver; Canada; June 2023 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CVPRW | |||
Notes | MACO; CIC | Approved | no | |||
Call Number | Admin @ si @ LZT2023 | Serial | 3910 | |||
Permanent link to this record | ||||||
Author | Justine Giroux; Mohammad Reza Karimi Dastjerdi; Yannick Hold-Geoffroy; Javier Vazquez; Jean François Lalonde |
Title | Towards a Perceptual Evaluation Framework for Lighting Estimation | Type | Conference Article | |||
Year | 2024 | Publication | arXiv | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | Progress in lighting estimation is tracked by computing existing image quality assessment (IQA) metrics on images from standard datasets. While this may appear to be a reasonable approach, we demonstrate that doing so does not correlate to human preference when the estimated lighting is used to relight a virtual scene into a real photograph. To study this, we design a controlled psychophysical experiment where human observers must choose their preference amongst rendered scenes lit using a set of lighting estimation algorithms selected from the recent literature, and use it to analyse how these algorithms perform according to human perception. Then, we demonstrate that none of the most popular IQA metrics from the literature, taken individually, correctly represent human perception. Finally, we show that by learning a combination of existing IQA metrics, we can more accurately represent human preference. This provides a new perceptual framework to help evaluate future lighting estimation algorithms. | |||||
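The final step, learning a combination of existing IQA metrics to predict human preference, could in its simplest form be a linear fit. A least-squares sketch under that assumption (the paper's actual combination model and metric set are not specified here):

```python
import numpy as np

def fit_metric_combination(metric_scores, human_scores):
    """Least-squares weights (plus a bias term) combining several IQA
    metric scores so the weighted sum best predicts human preference
    scores; metric_scores has one row per image, one column per metric."""
    X = np.column_stack([metric_scores, np.ones(len(metric_scores))])
    w, *_ = np.linalg.lstsq(X, human_scores, rcond=None)
    return w

def predict(metric_scores, w):
    """Apply learned weights to new metric scores."""
    X = np.column_stack([metric_scores, np.ones(len(metric_scores))])
    return X @ w
```

On synthetic scores generated by a known linear rule, the fit recovers the rule exactly, which is a quick sanity check of the setup.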
Address | Seattle; USA; June 2024 | |||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CVPR | |||
Notes | MACO; CIC | Approved | no | |||
Call Number | Admin @ si @ GDH2024 | Serial | 3999 | |||
Permanent link to this record | ||||||
Author | Trevor Canham; Javier Vazquez; D Long; Richard F. Murray; Michael S Brown |
Title | Noise Prism: A Novel Multispectral Visualization Technique | Type | Conference Article | |||
Year | 2021 | Publication | 31st Color and Imaging Conference | Abbreviated Journal | ||
Volume | Issue | Pages | ||||
Keywords | ||||||
Abstract | A novel technique for visualizing multispectral images is proposed. Inspired by how prisms work, our method spreads spectral information over a chromatic noise pattern. This is accomplished by populating the pattern with pixels representing each measurement band at a count proportional to its measured intensity. The method is advantageous because it allows for lightweight encoding and visualization of spectral information while maintaining the color appearance of the stimulus. A four-alternative forced choice (4AFC) experiment was conducted to validate the method’s information-carrying capacity in displaying metameric stimuli of varying colors and spectral basis functions. The scores ranged from 100% to 20% (less than chance given the 4AFC task), with many conditions falling somewhere in between at statistically significant intervals. Using this data, color and texture difference metrics can be evaluated and optimized to predict the legibility of the visualization technique. |
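The pattern-population step described above can be sketched directly: draw each pixel's band at random with probability proportional to that band's measured intensity. A minimal version (display colors per band and the patch size are illustrative assumptions):

```python
import numpy as np

def noise_prism_patch(band_colors, band_intensities, size=32, rng=None):
    """Fill a size x size patch with pixels drawn from each band's
    display color, at a count proportional to that band's intensity."""
    if rng is None:
        rng = np.random.default_rng(0)   # fixed seed for reproducibility
    p = np.asarray(band_intensities, dtype=np.float64)
    p = p / p.sum()                      # per-pixel band probabilities
    choice = rng.choice(len(p), size=size * size, p=p)
    patch = np.asarray(band_colors, dtype=np.float64)[choice]
    return patch.reshape(size, size, 3)
```

A band carrying three quarters of the measured energy ends up covering roughly three quarters of the patch, so relative band intensities remain readable from pixel counts.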
Address | ||||||
Corporate Author | Thesis | |||||
Publisher | Place of Publication | Editor | ||||
Language | Summary Language | Original Title | ||||
Series Editor | Series Title | Abbreviated Series Title | ||||
Series Volume | Series Issue | Edition | ||||
ISSN | ISBN | Medium | ||||
Area | Expedition | Conference | CIC | |||
Notes | MACO; CIC | Approved | no | |||
Call Number | Admin @ si @ CVL2021 | Serial | 4000 | |||
Permanent link to this record |