Records | |||||
---|---|---|---|---|---|
Author | Pau Rodriguez; Josep M. Gonfaus; Guillem Cucurull; Xavier Roca; Jordi Gonzalez | ||||
Title | Attend and Rectify: A Gated Attention Mechanism for Fine-Grained Recovery | Type | Conference Article | ||
Year | 2018 | Publication | 15th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 11212 | Issue | Pages | 357-372 | |
Keywords | Deep Learning; Convolutional Neural Networks; Attention | ||||
Abstract | We propose a novel attention mechanism to enhance Convolutional Neural Networks for fine-grained recognition. It learns to attend to lower-level feature activations without requiring part annotations and uses these activations to update and rectify the output likelihood distribution. In contrast to other approaches, the proposed mechanism is modular, architecture-independent and efficient both in terms of parameters and computation required. Experiments show that networks augmented with our approach systematically improve their classification accuracy and become more robust to clutter. As a result, Wide Residual Networks augmented with our proposal surpass state-of-the-art classification accuracies on CIFAR-10, the Adience gender recognition task, Stanford Dogs, and UEC Food-100. | ||||
Address | Munich; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV | ||
Notes | ISE; 600.098; 602.121; 600.119 | Approved | no | ||
Call Number | Admin @ si @ RGC2018 | Serial | 3139 | ||
Permanent link to this record | |||||
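The gating idea in the record above can be sketched in a few lines: attention modules produce per-layer class hypotheses whose softmax outputs are blended with the network's own prediction through learned scalar gates. The sketch below is a minimal numpy illustration of the general mechanism, not the authors' exact formulation; all shapes, values and gate weights are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_rectify(base_logits, layer_logits, gates):
    """Blend the network's own prediction with per-layer attention
    hypotheses, weighted by learned scalar gates (supplied here)."""
    # Gate 0 is reserved for the base output; softmax normalises weights.
    w = softmax(np.concatenate(([0.0], gates)))
    out = w[0] * softmax(base_logits)
    for g, logits in zip(w[1:], layer_logits):
        out = out + g * softmax(logits)
    return out

base = np.array([2.0, 1.0, 0.1])          # network output favours class 0
layers = [np.array([0.5, 2.5, 0.2]),      # attention head favours class 1
          np.array([1.0, 1.0, 1.0])]      # uninformative head
p = gated_rectify(base, layers, gates=np.array([1.0, -1.0]))
```

Because the first attention head carries a large positive gate, its class-1 hypothesis rectifies the base prediction.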
Author | Ciprian Corneanu; Meysam Madadi; Sergio Escalera | ||||
Title | Deep Structure Inference Network for Facial Action Unit Recognition | Type | Conference Article | ||
Year | 2018 | Publication | 15th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 11216 | Issue | Pages | 309-324 | |
Keywords | Computer Vision; Machine Learning; Deep Learning; Facial Expression Analysis; Facial Action Units; Structure Inference | ||||
Abstract | Facial expressions are combinations of basic components called Action Units (AU). Recognizing AUs is key for general facial expression analysis. Recently, efforts in automatic AU recognition have been dedicated to learning combinations of local features and to exploiting correlations between AUs. We propose a deep neural architecture that tackles both problems by combining learned local and global features in its initial stages and replicating a message-passing algorithm between classes, similar to graphical-model inference, in later stages. We show that by training the model end-to-end with increased supervision we improve the state of the art by 5.3% and 8.2% on the BP4D and DISFA datasets, respectively. | ||||
Address | Munich; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV | ||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ CME2018 | Serial | 3205 | ||
Permanent link to this record | |||||
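The structure-inference stage described above, in which per-AU predictions exchange messages as in graphical-model inference, can be sketched schematically. The correlation matrix, step size and update rule here are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def message_passing(unary, corr, iters=3, step=0.5):
    """Iteratively refine per-AU probabilities using pairwise
    correlations, mimicking graphical-model inference."""
    p = unary.copy()
    for _ in range(iters):
        msgs = corr @ p                       # each AU hears from the others
        p = sigmoid(np.log(p / (1 - p)) + step * msgs)
        p = p.clip(1e-6, 1 - 1e-6)
    return p

# Three AUs: AU0 and AU1 co-occur strongly, AU2 is independent.
corr = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
unary = np.array([0.90, 0.55, 0.50])
p = message_passing(unary, corr)
```

The weakly detected AU1 is pulled up by its strongly detected, correlated neighbour AU0, while the uncorrelated AU2 is untouched.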
Author | Lluis Gomez; Andres Mafla; Marçal Rusiñol; Dimosthenis Karatzas | ||||
Title | Single Shot Scene Text Retrieval | Type | Conference Article | ||
Year | 2018 | Publication | 15th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 11218 | Issue | Pages | 728-744 | |
Keywords | Image retrieval; Scene text; Word spotting; Convolutional Neural Networks; Region Proposals Networks; PHOC | ||||
Abstract | Textual information found in scene images provides high-level semantic information about the image and its context, and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model lies in its use of a single-shot CNN architecture that simultaneously predicts bounding boxes and a compact text representation of the words within them. In this way, the text-based image retrieval task can be cast as a simple nearest neighbor search of the query text representation over the outputs of the CNN for the entire image database. Our experiments demonstrate that the proposed architecture outperforms the previous state of the art while offering a significant increase in processing speed. | ||||
Address | Munich; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV | ||
Notes | DAG; 600.084; 601.338; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ GMR2018 | Serial | 3143 | ||
Permanent link to this record | |||||
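At query time, the pipeline in the record above reduces to a nearest-neighbor search of a query descriptor over the per-box descriptors predicted for every database image. The sketch below uses a toy PHOC-like descriptor and cosine similarity; it is illustrative only and is not the exact PHOC or scoring used in the paper.

```python
import numpy as np

def phoc_like(word, alphabet="abcdefghijklmnopqrstuvwxyz", levels=(1, 2)):
    """Toy pyramidal histogram of characters: for each pyramid level,
    mark which characters occur in each horizontal split of the word."""
    vec, n = [], len(word)
    for L in levels:
        for part in range(L):
            lo, hi = part * n / L, (part + 1) * n / L
            seg = {word[i] for i in range(n) if lo <= i < hi}
            vec.extend(1.0 if c in seg else 0.0 for c in alphabet)
    return np.array(vec)

def retrieve(query, image_boxes):
    """Rank images by the best cosine similarity between the query
    descriptor and any detected word-box descriptor in the image."""
    q = phoc_like(query)
    q = q / (np.linalg.norm(q) + 1e-9)
    scores = []
    for boxes in image_boxes:
        sims = [b @ q / (np.linalg.norm(b) + 1e-9) for b in boxes]
        scores.append(max(sims) if sims else 0.0)
    return np.argsort(scores)[::-1]

# Two toy "images": image 0 contains the word "cafe", image 1 "bank".
db = [[phoc_like("cafe")], [phoc_like("bank")]]
ranking = retrieve("cafe", db)
```

Since descriptors are precomputed once per image, each query costs only one descriptor encoding plus a nearest-neighbor scan.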
Author | Marc Oliu; Javier Selva; Sergio Escalera | ||||
Title | Folded Recurrent Neural Networks for Future Video Prediction | Type | Conference Article | ||
Year | 2018 | Publication | 15th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 11218 | Issue | Pages | 745-761 | |
Keywords | |||||
Abstract | Future video prediction is an ill-posed Computer Vision problem that has recently received much attention. Its main challenges are the high variability in video content, the propagation of errors through time, and the non-specificity of the future frames: given a sequence of past frames, there is a continuous distribution of possible futures. This work introduces bijective Gated Recurrent Units, a double mapping between the input and output of a GRU layer. This allows for recurrent auto-encoders with state sharing between encoder and decoder, stratifying the sequence representation and helping to prevent capacity problems. We show how, with this topology, only the encoder or the decoder needs to be applied for input encoding or prediction, respectively. This reduces the computational cost and avoids re-encoding the predictions when generating a sequence of frames, mitigating the propagation of errors. Furthermore, it is possible to remove layers from an already trained model, giving insight into the role performed by each layer and making the model more explainable. We evaluate our approach on three video datasets, outperforming state-of-the-art prediction results on MMNIST and UCF101, and obtaining competitive results on KTH with two to three times less memory usage and computational cost than the best scoring approach. | ||||
Address | Munich; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV | ||
Notes | HUPBA; no menciona | Approved | no | ||
Call Number | Admin @ si @ OSE2018 | Serial | 3204 | ||
Permanent link to this record | |||||
Author | Felipe Codevilla; Antonio Lopez; Vladlen Koltun; Alexey Dosovitskiy | ||||
Title | On Offline Evaluation of Vision-based Driving Models | Type | Conference Article | ||
Year | 2018 | Publication | 15th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 11219 | Issue | Pages | 246-262 | |
Keywords | Autonomous driving; deep learning | ||||
Abstract | Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics. | ||||
Address | Munich; September 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCV | ||
Notes | ADAS; 600.124; 600.118 | Approved | no | ||
Call Number | Admin @ si @ CLK2018 | Serial | 3162 | ||
Permanent link to this record | |||||
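The paper's central observation, that similar offline error can hide very different driving quality, is easy to reproduce in a toy simulation: a model that errs slightly everywhere and one that errs rarely but badly can have comparable mean absolute error yet very different numbers of lane departures once errors are integrated through time. The vehicle dynamics and all numbers below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two steering models with comparable offline error: model A errs a
# little everywhere, model B errs rarely but badly (toy values).
err_a = rng.normal(0.0, 0.05, size=1000)
err_b = np.zeros(1000)
err_b[::50] = 3.0

mae_a = float(np.mean(np.abs(err_a)))
mae_b = float(np.mean(np.abs(err_b)))

def off_lane_events(err, half_width=1.0, dt=0.1):
    """Integrate steering error into lateral drift and count lane
    departures; drift resets after each event (toy vehicle dynamics)."""
    pos, events = 0.0, 0
    for e in err:
        pos += e * dt
        if abs(pos) > half_width:
            events += 1
            pos = 0.0
    return events
```

Despite near-identical offline scores, model B repeatedly leaves the lane because its rare large errors accumulate faster than model A's zero-mean noise.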
Author | Santi Puch; Irina Sanchez; Aura Hernandez-Sabate; Gemma Piella; Vesna Prckovska | ||||
Title | Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation | Type | Conference Article | ||
Year | 2018 | Publication | International MICCAI Brainlesion Workshop | Abbreviated Journal | |
Volume | 11384 | Issue | Pages | 393-405 | |
Keywords | Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution | ||||
Abstract | In this work, we introduce the Global Planar Convolution module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context-perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on them, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and slow to converge. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. We finally participate in the 2018 edition of the BraTS challenge with our best-performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAIW | ||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ PSH2018 | Serial | 3251 | ||
Permanent link to this record | |||||
Author | Anguelos Nicolaou; Sounak Dey; V.Christlein; A.Maier; Dimosthenis Karatzas | ||||
Title | Non-deterministic Behavior of Ranking-based Metrics when Evaluating Embeddings | Type | Conference Article | ||
Year | 2018 | Publication | International Workshop on Reproducible Research in Pattern Recognition | Abbreviated Journal | |
Volume | 11455 | Issue | Pages | 71-82 | |
Keywords | |||||
Abstract | Embedding data into vector spaces is a very popular strategy in pattern recognition methods. When distances between embeddings are quantized, performance metrics become ambiguous. In this paper, we present an analysis of the ambiguity that quantized distances introduce and provide bounds on its effect. We demonstrate that it can have a measurable effect on empirical data in state-of-the-art systems. We also approach the phenomenon from a computer-security perspective and demonstrate how someone being evaluated by a third party can exploit this ambiguity and greatly outperform a random predictor without even having access to the input data. Finally, we suggest a simple solution that makes ranking-based performance metrics fully deterministic and impervious to such exploits. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ NDC2018 | Serial | 3178 | ||
Permanent link to this record | |||||
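The ambiguity analysed above is easy to reproduce: when quantized distances tie, two equally valid sort orders of the same retrieval list yield different average precision, and breaking ties with a fixed secondary key (here the item id, as one simple deterministic choice) removes the ambiguity. The data below is a minimal constructed example, not from the paper.

```python
import numpy as np

def average_precision(ranked_relevance):
    """AP of a ranked boolean relevance list."""
    hits, precisions = 0, []
    for i, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / max(hits, 1)

# Quantized distances: items 1 and 2 tie at distance 1.
dists = np.array([0, 1, 1, 2])
relevant = np.array([True, False, True, False])

# Two valid sorts of the same distances give different AP.
order_a = [0, 1, 2, 3]                    # tie broken one way
order_b = [0, 2, 1, 3]                    # tie broken the other way
ap_a = average_precision(relevant[order_a])
ap_b = average_precision(relevant[order_b])

# A deterministic fix: break ties by a fixed secondary key (item id),
# making the reported score reproducible across runs and libraries.
det = sorted(range(len(dists)), key=lambda i: (dists[i], i))
```

A dishonest party could always report the tie-breaking that maximises the score, which is exactly the exploit the paper describes.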
Author | Victoria Ruiz; Angel Sanchez; Jose F. Velez; Bogdan Raducanu | ||||
Title | Automatic Image-Based Waste Classification | Type | Conference Article | ||
Year | 2019 | Publication | International Work-Conference on the Interplay Between Natural and Artificial Computation. From Bioinspired Systems and Biomedical Applications to Machine Learning | Abbreviated Journal | |
Volume | 11487 | Issue | Pages | 422–431 | |
Keywords | Computer Vision; Deep learning; Convolutional neural networks; Waste classification | ||||
Abstract | The management of solid waste in large urban environments has become a complex problem due to the increasing amount of waste generated every day by citizens and companies. Current Computer Vision and Deep Learning techniques can help in the automatic detection and classification of waste types for further recycling tasks. In this work, we use the TrashNet dataset to train and compare different deep learning architectures for automatic classification of garbage types. In particular, several Convolutional Neural Network (CNN) architectures were compared: VGG, Inception and ResNet. The best classification results were obtained with a combined Inception-ResNet model that achieved 88.6% accuracy, the best result obtained on the considered dataset. | ||||
Address | Almeria; June 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IWINAC | ||
Notes | LAMP; 600.120 | Approved | no | ||
Call Number | RSV2019 | Serial | 3273 | ||
Permanent link to this record | |||||
Author | Estefania Talavera; Nicolai Petkov; Petia Radeva | ||||
Title | Unsupervised Routine Discovery in Egocentric Photo-Streams | Type | Conference Article | ||
Year | 2019 | Publication | 18th International Conference on Computer Analysis of Images and Patterns | Abbreviated Journal | |
Volume | 11678 | Issue | Pages | 576-588 | |
Keywords | Routine discovery; Lifestyle; Egocentric vision; Behaviour analysis | ||||
Abstract | The routine of a person is defined by the occurrence of activities throughout different days, and can directly affect the person's health. In this work, we address the recognition of routine-related days. To do so, we rely on egocentric images, which are recorded by a wearable camera and allow the life of the user to be monitored from a first-person perspective. We propose an unsupervised model that identifies routine-related days, following an outlier-detection approach. We test the proposed framework on a total of 72 days in the form of photo-streams covering around 2 weeks of the lives of 5 different camera wearers. Our model achieves an average of 76% accuracy and 68% weighted F-score over all users. Thus, we show that our framework is able to recognise routine-related days, opening the door to the understanding of people's behaviour. | ||||
Address | Salerno; Italy; September 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CAIP | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ TPR2019a | Serial | 3367 | ||
Permanent link to this record | |||||
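An outlier-detection view of routine discovery can be sketched with a simple median/MAD rule over per-day feature vectors: a day is routine-related when its profile lies close to the median day. The day descriptors, distance and threshold below are illustrative stand-ins, not the paper's model.

```python
import numpy as np

def routine_days(day_features, k=1.5):
    """Flag routine vs. non-routine days by outlier detection: a day
    is 'routine' if its distance to the median day profile is within
    k median-absolute-deviations of the typical distance."""
    X = np.asarray(day_features, dtype=float)
    center = np.median(X, axis=0)
    d = np.linalg.norm(X - center, axis=1)
    mad = np.median(np.abs(d - np.median(d))) + 1e-9
    return d <= np.median(d) + k * mad        # True = routine-related day

# Toy day descriptors: hours spent in [home, office, outdoors].
days = [[10, 8, 1], [9, 9, 1], [10, 8, 2], [2, 0, 14], [9, 8, 1]]
flags = routine_days(days)
```

The fourth day (an outing) sits far from the median profile and is flagged as non-routine, while the four office days are kept as routine-related.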
Author | Eduardo Aguilar; Petia Radeva | ||||
Title | Class-Conditional Data Augmentation Applied to Image Classification | Type | Conference Article | ||
Year | 2019 | Publication | 18th International Conference on Computer Analysis of Images and Patterns | Abbreviated Journal | |
Volume | 11679 | Issue | Pages | 182-192 | |
Keywords | CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition | ||||
Abstract | Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided superior results. When data is scarce, CNN models tend to overfit. To deal with this, traditional data augmentation techniques are often applied, such as affine transformations or colour-balance adjustments. However, we argue that some augmentation techniques may be more appropriate for some classes than for others. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty of the samples within each class. From our experiments, we observe that when data augmentation is applied class-conditionally, we improve accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that obtains better results and improves the robustness of the classification in the face of model uncertainty. | ||||
Address | Salerno; Italy; September 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CAIP | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ AgR2019 | Serial | 3366 | ||
Permanent link to this record | |||||
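The selection step can be sketched as: estimate epistemic uncertainty for each (class, augmentation) pair from stochastic forward passes and keep, per class, the augmentation that minimises it. Here the MC-dropout passes are simulated with random logits and predictive entropy stands in as one common uncertainty proxy; the class names, confidences and simulation are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Mean entropy of MC-averaged predictions -- a common proxy for
    model uncertainty (one of several possible choices)."""
    p = mc_probs.mean(axis=0)                 # average over T passes
    return float(-(p * np.log(p + 1e-12)).sum(axis=-1).mean())

def pick_augmentation(uncertainty_per_aug):
    """Per class, keep the augmentation giving the lowest uncertainty."""
    return {c: min(augs, key=augs.get)
            for c, augs in uncertainty_per_aug.items()}

rng = np.random.default_rng(1)

def mc_samples(confidence, T=30, n=50, C=4):
    """Simulate T stochastic forward passes over n samples: higher
    'confidence' means less logit noise and more mass on class 0."""
    logits = rng.normal(0, 1 - confidence, size=(T, n, C))
    logits[..., 0] += 3 * confidence
    e = np.exp(logits)
    return e / e.sum(axis=-1, keepdims=True)

u = {"cat": {"flip": predictive_entropy(mc_samples(0.9)),
             "color": predictive_entropy(mc_samples(0.3))}}
best = pick_augmentation(u)
```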
Author | Debora Gil; Antonio Esteban Lansaque; Sebastian Stefaniga; Mihail Gaianu; Carles Sanchez | ||||
Title | Data Augmentation from Sketch | Type | Conference Article | ||
Year | 2019 | Publication | International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging | Abbreviated Journal | |
Volume | 11840 | Issue | Pages | 155-162 | |
Keywords | Data augmentation; cycleGANs; Multi-objective optimization | ||||
Abstract | State-of-the-art machine learning methods need huge amounts of data with unambiguous annotations for their training. In the context of medical imaging this is, in general, a very difficult task due to limited access to clinical data, the time required for manual annotation and variability across experts. Simulated data could serve for data augmentation provided that its appearance is comparable to the actual appearance of intra-operative acquisitions. Generative Adversarial Networks (GANs) are a powerful tool for artistic style transfer but lack a criterion for selecting epochs that also preserve intra-operative content. We propose a multi-objective optimization strategy for selecting cycleGAN epochs that ensures a mapping between virtual images and the intra-operative domain while preserving anatomical content. Our approach has been applied to simulate intra-operative bronchoscopic videos and chest CT scans from virtual sketches generated using simple graphical primitives. | ||||
Address | Shenzhen; China; October 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CLIP | ||
Notes | IAM; 600.145; 601.337; 600.139; 600.145 | Approved | no | ||
Call Number | Admin @ si @ GES2019 | Serial | 3359 | ||
Permanent link to this record | |||||
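Multi-objective epoch selection typically amounts to shortlisting the Pareto-optimal checkpoints, those not dominated on every objective at once. The sketch below shows plain Pareto-front extraction; the objective names (style loss vs. content-preservation loss) and the checkpoint values are illustrative, not the paper's measurements.

```python
def pareto_front(points):
    """Indices of non-dominated points when minimising every objective."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and q != p
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# (style_loss, content_loss) measured at a few checkpoint epochs.
epochs = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.8), (0.6, 0.6), (0.1, 0.95)]
best = pareto_front(epochs)
```

Epoch 3 at (0.6, 0.6) is dominated by epoch 1 at (0.5, 0.5) and is discarded; the surviving epochs trade style transfer against content preservation, and a final pick is made among them.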
Author | Eduardo Aguilar; Petia Radeva | ||||
Title | Food Recognition by Integrating Local and Flat Classifiers | Type | Conference Article | ||
Year | 2019 | Publication | 9th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 11867 | Issue | Pages | 65-74 | |
Keywords | |||||
Abstract | The recognition of food images is an interesting research topic, whose applicability stands out in the creation of nutritional diaries aimed at improving the quality of life of people with a chronic disease (e.g. diabetes, heart disease) or at risk of acquiring one (e.g. people who are overweight or obese). For a food recognition system to be useful in real applications, it must recognize a huge number of different foods. We argue that for very large-scale classification, a traditional flat classifier is not enough to achieve acceptable results. To address this, we propose a method that performs prediction either with local classifiers, based on a class hierarchy, or with a flat classifier. We decide which approach to use depending on the analysis of both the epistemic uncertainty obtained for the image in the child classifiers and the prediction of the parent classifier. When our criterion is met, the final prediction is obtained with the respective local classifier; otherwise, with the flat classifier. The results show that the proposed method improves classification performance compared to using a single flat classifier. | ||||
Address | Madrid; July 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ AgR2019b | Serial | 3369 | ||
Permanent link to this record | |||||
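The routing rule between local and flat classifiers can be sketched as: trust the local (per-branch) classifier only when the parent is confident and the child's epistemic uncertainty is low, otherwise fall back to the flat classifier. The thresholds, toy classifiers and food hierarchy below are illustrative assumptions, simplified from the paper's actual criterion.

```python
def hierarchical_predict(x, parent, children, flat,
                         conf_threshold=0.8, u_threshold=0.5):
    """Route a sample to a local classifier (parent confident, child
    uncertainty low) or fall back to the flat classifier."""
    branch, parent_conf = parent(x)
    label, uncertainty = children[branch](x)
    if parent_conf > conf_threshold and uncertainty < u_threshold:
        return label
    return flat(x)

# Toy classifiers for a two-branch food hierarchy.
parent = lambda x: ("dessert" if x["sugar"] > 5 else "savoury", 0.95)
children = {
    "dessert": lambda x: ("cake", 0.2),   # (label, epistemic uncertainty)
    "savoury": lambda x: ("soup", 0.9),
}
flat = lambda x: "bread"

local_pick = hierarchical_predict({"sugar": 9}, parent, children, flat)
flat_pick = hierarchical_predict({"sugar": 1}, parent, children, flat)
```

The sugary sample satisfies the criterion and keeps the local "cake" label; the savoury sample's child classifier is too uncertain, so the flat classifier decides.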
Author | Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo | ||||
Title | Single view facial hair 3D reconstruction | Type | Conference Article | ||
Year | 2019 | Publication | 9th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 11867 | Issue | Pages | 423-436 | |
Keywords | 3D Vision; Shape Reconstruction; Facial Hair Modeling | ||||
Abstract | In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels in the image via texture analysis and then determine individual hair fibers, which are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms in order to adapt the hair parameters that best fit the image detections. The final hairs correspond to the resulting fibers after a post-processing step that encourages further realism. The resulting approach generates realistic facial hair fibers from a single RGB image, without assuming any training data or user interaction. We provide an experimental evaluation on real-world pictures in which several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches. | ||||
Address | Madrid; July 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IbPRIA | ||
Notes | ADAS; 600.086; 600.130; 600.122 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3707 | ||
Permanent link to this record | |||||
Author | Parichehr Behjati Ardakani; Diego Velazquez; Josep M. Gonfaus; Pau Rodriguez; Xavier Roca; Jordi Gonzalez | ||||
Title | Catastrophic interference in Disguised Face Recognition | Type | Conference Article | ||
Year | 2019 | Publication | 9th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 11868 | Issue | Pages | 64-75 | |
Keywords | Neural network forgetting; Face recognition; Disguised faces | ||||
Abstract | Artificial neural networks have a well-known natural tendency to completely and abruptly forget previously learned information when learning new information. We explore this behaviour in the context of face verification on the recently proposed Disguised Faces in the Wild (DFW) dataset. We empirically evaluate several commonly used DCNN architectures on face recognition and distill some insights about the effect of sequential learning on distinct identities from different datasets, showing that the catastrophic forgetting phenomenon is present even in feature embeddings fine-tuned on tasks different from the original domain. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IbPRIA | ||
Notes | ISE; 600.098; 600.119 | Approved | no | ||
Call Number | Admin @ si @ AVG2019 | Serial | 3416 | ||
Permanent link to this record | |||||
Author | Martin Menchon; Estefania Talavera; Jose M. Massa; Petia Radeva | ||||
Title | Behavioural Pattern Discovery from Collections of Egocentric Photo-Streams | Type | Conference Article | ||
Year | 2020 | Publication | ECCV Workshops | Abbreviated Journal | |
Volume | 12538 | Issue | Pages | 469-484 | |
Keywords | |||||
Abstract | The automatic discovery of behaviour is of high importance when aiming to assess and improve the quality of life of people. Egocentric images offer a rich and objective description of the daily life of the camera wearer. This work proposes a new method to identify a person's patterns of behaviour from collected egocentric photo-streams. Our model characterizes time-frames based on the context (place, activities and environment objects) that defines the composition of the images. Based on the similarity among the time-frames that describe a user's collected days, we propose a new unsupervised greedy method to discover the behavioural pattern set, based on a novel semantic clustering approach. Moreover, we present a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100k images extracted from 7 users. Results show that behavioural patterns can be discovered to characterize the routine of individuals and, consequently, their lifestyle. | ||||
Address | Virtual; August 2020 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECCVW | ||
Notes | MILAB; no proj | Approved | no | ||
Call Number | Admin @ si @ MTM2020 | Serial | 3528 | ||
Permanent link to this record |
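The greedy clustering step in the record above can be sketched as: scan the time-frames in order and attach each one to the first cluster whose representative is similar enough, otherwise open a new cluster. Jaccard similarity over sets of detected concepts and the threshold below are illustrative stand-ins for the paper's semantic similarity.

```python
def greedy_cluster(items, sim, threshold):
    """Greedy clustering: assign each item to the first cluster whose
    representative (first member) is similar enough, else start a new
    cluster. (Schematic; the paper's criterion is more elaborate.)"""
    clusters = []
    for it in items:
        for c in clusters:
            if sim(it, c[0]) >= threshold:
                c.append(it)
                break
        else:
            clusters.append([it])
    return clusters

# Toy day descriptors as sets of detected concepts.
jaccard = lambda a, b: len(a & b) / len(a | b)
days = [{"office", "laptop"}, {"office", "laptop", "coffee"},
        {"beach", "sand"}, {"office", "coffee"}]
out = greedy_cluster(days, jaccard, threshold=0.3)
```

The three office-like days collapse into one behavioural pattern and the beach day forms its own, giving two discovered patterns.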