Records | |||||
---|---|---|---|---|---|
Author | Mariella Dimiccoli; Marc Bolaños; Estefania Talavera; Maedeh Aghaei; Stavri G. Nikolov; Petia Radeva | ||||
Title | SR-Clustering: Semantic Regularized Clustering for Egocentric Photo Streams Segmentation | Type | Journal Article | ||
Year | 2017 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 155 | Issue | Pages | 55-69 | |
Keywords | |||||
Abstract | While wearable cameras are becoming increasingly popular, locating relevant information in large unstructured collections of egocentric images is still a tedious and time-consuming process. This paper addresses the problem of organizing egocentric photo streams acquired by a wearable camera into semantically meaningful segments. First, contextual and semantic information is extracted for each image by employing a Convolutional Neural Network approach. Later, by integrating language processing, a vocabulary of concepts is defined in a semantic space. Finally, by exploiting the temporal coherence in photo streams, images which share contextual and semantic attributes are grouped together. The resulting temporal segmentation is particularly suited for further analysis, ranging from activity and event recognition to semantic indexing and summarization. Experiments over egocentric sets of nearly 17,000 images show that the proposed approach outperforms state-of-the-art methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; 601.235 | Approved | no | ||
Call Number | Admin @ si @ DBT2017 | Serial | 2714 | ||
Permanent link to this record | |||||
Author | Xavier Perez Sala; Fernando De la Torre; Laura Igual; Sergio Escalera; Cecilio Angulo | ||||
Title | Subspace Procrustes Analysis | Type | Journal Article | ||
Year | 2017 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 121 | Issue | 3 | Pages | 327–343 |
Keywords | |||||
Abstract | Procrustes Analysis (PA) has been a popular technique to align and build 2-D statistical models of shapes. Given a set of 2-D shapes, PA is applied to remove rigid transformations. Then, a non-rigid 2-D model is computed by modeling (e.g., with PCA) the residual. Although PA has been widely used, it has several limitations for modeling 2-D shapes: occluded landmarks and missing data can result in local minima solutions, and there is no guarantee that the 2-D shapes provide a uniform sampling of the 3-D space of rotations for the object. To address these issues, this paper proposes Subspace PA (SPA). Given several instances of a 3-D object, SPA computes the mean and a 2-D subspace that can simultaneously model all rigid and non-rigid deformations of the 3-D object. We propose a discrete (DSPA) and a continuous (CSPA) formulation for SPA, assuming that 3-D samples of an object are provided. DSPA extends the traditional PA and produces unbiased 2-D models by uniformly sampling different views of the 3-D object. CSPA provides a continuous approach to uniformly sample the space of 3-D rotations, being more efficient in space and time. Experiments using SPA to learn 2-D models of bodies from motion capture data illustrate the benefits of our approach. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; HuPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ PTI2017 | Serial | 2841 | ||
Permanent link to this record | |||||
Author | Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen | ||||
Title | Tex-Nets: Binary Patterns Encoded Convolutional Neural Networks for Texture Recognition | Type | Conference Article | ||
Year | 2017 | Publication | 19th International Conference on Multimodal Interaction | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Convolutional Neural Networks; Texture Recognition; Local Binary Patterns | ||||
Abstract | Recognizing materials and textures in realistic imaging conditions is a challenging computer vision problem. For many years, orderless representations based on local features were a dominant approach for texture recognition. Recently, deep local features, extracted from the intermediate layers of a Convolutional Neural Network (CNN), have been used as filter banks. These dense local descriptors from a deep model, when encoded with Fisher Vectors, have been shown to provide excellent results for texture recognition. The CNN models employed in such approaches take RGB patches as input and train on a large amount of labeled images. We show that CNN models, which we call TEX-Nets, trained using mapped coded images with explicit texture information provide complementary information to the standard deep models trained on RGB patches. We further investigate two deep architectures, namely early and late fusion, to combine the texture and color information. Experiments on benchmark texture datasets clearly demonstrate that TEX-Nets provide complementary information to the standard RGB deep network. Our approach provides large accuracy gains of 4.8%, 3.5%, 2.6% and 4.1% on the DTD, KTH-TIPS-2a, KTH-TIPS-2b and Texture-10 datasets, respectively, compared to the standard RGB network of the same architecture. Further, our final combination leads to consistent improvements over the state-of-the-art on all four datasets. | ||||
Address | Glasgow; Scotland; November 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM | ||
Notes | LAMP; 600.109; 600.068; 600.120 | Approved | no | ||
Call Number | Admin @ si @ RKW2017 | Serial | 3038 | ||
Permanent link to this record | |||||
Author | Lluis Gomez; Dimosthenis Karatzas | ||||
Title | TextProposals: a Text-specific Selective Search Algorithm for Word Spotting in the Wild | Type | Journal Article | ||
Year | 2017 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 70 | Issue | Pages | 60-74 | |
Keywords | |||||
Abstract | Motivated by the success of powerful yet expensive techniques to recognize words in a holistic way (Goel et al., 2013; Almazán et al., 2014; Jaderberg et al., 2016), object proposal techniques have emerged as an alternative to traditional text detectors. In this paper we introduce a novel object proposals method that is specifically designed for text. We rely on a similarity-based region grouping algorithm that generates a hierarchy of word hypotheses. Over the nodes of this hierarchy it is possible to apply a holistic word recognition method in an efficient way. Our experiments demonstrate that the presented method is superior in its ability to produce good-quality word proposals when compared with class-independent algorithms. We show impressive recall rates with a few thousand proposals on different standard benchmarks, including focused and incidental text datasets, and multi-language scenarios. Moreover, the combination of our object proposals with existing whole-word recognizers (Almazán et al., 2014; Jaderberg et al., 2016) shows competitive performance in end-to-end word spotting and, in some benchmarks, outperforms previously published results. Concretely, on the challenging ICDAR2015 Incidental Text dataset, we outperform the best-performing method of the last ICDAR Robust Reading Competition (Karatzas, 2015) by more than 10% in F-score. Source code of the complete end-to-end system is available at https://github.com/lluisgomez/TextProposals. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG; 600.084; 601.197; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ GoK2017 | Serial | 2886 | ||
Permanent link to this record | |||||
Author | Simon Jégou; Michal Drozdzal; David Vazquez; Adriana Romero; Yoshua Bengio | ||||
Title | The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation | Type | Conference Article | ||
Year | 2017 | Publication | IEEE Conference on Computer Vision and Pattern Recognition Workshops | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Semantic Segmentation | ||||
Abstract | State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions. Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion, then the network will be more accurate and easier to train. In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets. | ||||
Address | Honolulu; USA; July 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MILAB; ADAS; 600.076; 600.085; 601.281 | Approved | no | ||
Call Number | ADAS @ adas @ JDV2016 | Serial | 2866 | ||
Permanent link to this record | |||||
Author | Dimosthenis Karatzas; Lluis Gomez; Marçal Rusiñol | ||||
Title | The Robust Reading Competition Annotation and Evaluation Platform | Type | Conference Article | ||
Year | 2017 | Publication | 1st International Workshop on Open Services and Tools for Document Analysis | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become the de facto evaluation standard for the international community. Concurrent with its second incarnation in 2011, a continuous effort started to develop an online framework to facilitate the hosting and management of competitions. This short paper briefly outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the Robust Reading Competition, comprising a collection of tools and processes that aim to simplify the management and annotation of data, and to provide online and offline performance evaluation and analysis services. | ||||
Address | Kyoto; Japan; November 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDAR-OST | ||
Notes | DAG; 600.084; 600.121; 600.129 | Approved | no | ||
Call Number | Admin @ si @ KGR2017 | Serial | 3063 | ||
Permanent link to this record | |||||
Author | Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Jorma Laaksonen | ||||
Title | Top-Down Deep Appearance Attention for Action Recognition | Type | Conference Article | ||
Year | 2017 | Publication | 20th Scandinavian Conference on Image Analysis | Abbreviated Journal | |
Volume | 10269 | Issue | Pages | 297-309 | |
Keywords | Action recognition; CNNs; Feature fusion | ||||
Abstract | Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. First, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach employs only action labels, without exploiting body part information, yet achieves competitive performance compared to state-of-the-art deep-features-based approaches. | ||||
Address | Tromso; Norway; June 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SCIA | ||
Notes | LAMP; 600.109; 600.068; 600.120 | Approved | no | ||
Call Number | Admin @ si @ RKW2017b | Serial | 3039 | ||
Permanent link to this record | |||||
Author | Carles Sanchez; Antonio Esteban Lansaque; Agnes Borras; Marta Diez-Ferrer; Antoni Rosell; Debora Gil | ||||
Title | Towards a Videobronchoscopy Localization System from Airway Centre Tracking | Type | Conference Article | ||
Year | 2017 | Publication | 12th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 352-359 | ||
Keywords | Video-bronchoscopy; Lung cancer diagnosis; Airway lumen detection; Region tracking; Guided bronchoscopy navigation | ||||
Abstract | Bronchoscopists use fluoroscopy to guide flexible bronchoscopy to the lesion to be biopsied without any kind of incision. Since fluoroscopy is an imaging technique based on X-rays, it increases the risk of developmental problems and cancer in exposed subjects, so minimizing radiation is crucial. Alternative guiding systems such as electromagnetic navigation require specific equipment, increase the cost of the clinical procedure and still require fluoroscopy. In this paper we propose an image-based guiding system based on the extraction of airway centres from intra-operative videos. Such anatomical landmarks are matched to the airway centreline extracted from a pre-planned CT to indicate the best path to the nodule. We present a feasibility study of our navigation system using simulated bronchoscopic videos and a multi-expert validation of landmark extraction in 3 intra-operative ultrathin explorations. | ||||
Address | Porto; Portugal; February 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | IAM; 600.096; 600.075; 600.145 | Approved | no | ||
Call Number | Admin @ si @ SEB2017 | Serial | 2943 | ||
Permanent link to this record | |||||
Author | Quentin Angermann; Jorge Bernal; Cristina Sanchez Montes; Gloria Fernandez Esparrach; Xavier Dray; Olivier Romain; F. Javier Sanchez; Aymeric Histace | ||||
Title | Towards Real-Time Polyp Detection in Colonoscopy Videos: Adapting Still Frame-Based Methodologies for Video Sequences Analysis | Type | Conference Article | ||
Year | 2017 | Publication | 4th International Workshop on Computer Assisted and Robotic Endoscopy | Abbreviated Journal | |
Volume | Issue | Pages | 29-41 | ||
Keywords | Polyp detection; colonoscopy; real time; spatio-temporal coherence | ||||
Abstract | Colorectal cancer is the second cause of cancer death in the United States: detection of precursor lesions (polyps) is key for patient survival. Though colonoscopy is the gold-standard screening tool, some polyps are still missed. Several computational systems have been proposed, but none of them is used in the clinical room, mainly due to computational constraints. Besides, most of them are built over still-frame databases, which decreases their performance on video analysis due to the lack of output stability and their failure to cope with the associated variability in image quality and polyp appearance. We propose a strategy to adapt these methods to video analysis by adding a spatio-temporal stability module and studying a combination of features to capture polyp appearance variability. We validate our strategy, incorporated into a real-time detection method, on a public video database. The resulting method detects all polyps under real-time constraints, with increased performance due to our adaptation strategy. | ||||
Address | Quebec; Canada; September 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CARE | ||
Notes | MV; 600.096; 600.075 | Approved | no | ||
Call Number | Admin @ si @ ABS2017b | Serial | 2977 | ||
Permanent link to this record | |||||
Author | Marc Bolaños; Mariella Dimiccoli; Petia Radeva | ||||
Title | Towards Storytelling from Visual Lifelogging: An Overview | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on Human-Machine Systems | Abbreviated Journal | THMS |
Volume | 47 | Issue | 1 | Pages | 77 - 90 |
Keywords | |||||
Abstract | Visual lifelogging consists of acquiring images that capture the daily experiences of the user by wearing a camera over a long period of time. The pictures taken offer considerable potential for knowledge mining concerning how people live their lives; hence, they open up new opportunities for many potential applications in fields including healthcare, security, leisure and the quantified self. However, automatically building a story from a huge collection of unstructured egocentric data presents major challenges. This paper provides a thorough review of advances made so far in egocentric data analysis and, in view of the current state of the art, indicates new lines of research to move us towards storytelling from visual lifelogging. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; 601.235 | Approved | no | ||
Call Number | Admin @ si @ BDR2017 | Serial | 2712 | ||
Permanent link to this record | |||||
Author | Pau Riba; Alicia Fornes; Josep Llados | ||||
Title | Towards the Alignment of Handwritten Music Scores | Type | Book Chapter | ||
Year | 2017 | Publication | International Workshop on Graphics Recognition. GREC 2015. Graphic Recognition. Current Trends and Challenges | Abbreviated Journal |
Volume | 9657 | Issue | Pages | 103-116 | |
Keywords | Optical Music Recognition; Handwritten Music Scores; Dynamic Time Warping alignment | ||||
Abstract | It is very common to find different versions of the same music work in archives of Opera Theaters. These differences correspond to modifications and annotations from the musicians. From the musicologist's point of view, these variations are very interesting and deserve study. This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulties in the recognition of handwritten music scores, our goal is to align the music scores and, at the same time, avoid the recognition of music elements as much as possible. After removing the staff lines, braces and ties, the bar lines are detected. Then, the bar units are described as a whole using the Blurred Shape Model. The bar unit alignment is performed by using Dynamic Time Warping. The analysis of the alignment path is used to detect the variations in the music scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | Bart Lamiroy; R Dueire Lins | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-319-52158-9 | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; 600.097; 602.006; 600.121 | Approved | no | ||
Call Number | Admin @ si @ RFL2017 | Serial | 2955 | ||
Permanent link to this record | |||||
Author | Antonio Lopez; Gabriel Villalonga; Laura Sellart; German Ros; David Vazquez; Jiaolong Xu; Javier Marin; Azadeh S. Mozafari | ||||
Title | Training my car to see using virtual worlds | Type | Journal Article | ||
Year | 2017 | Publication | Image and Vision Computing | Abbreviated Journal | IMAVIS |
Volume | 38 | Issue | Pages | 102-118 | |
Keywords | |||||
Abstract | Computer vision technologies are at the core of different advanced driver assistance systems (ADAS) and will play a key role in oncoming autonomous vehicles too. One of the main challenges for such technologies is to perceive the driving environment, i.e. to detect and track relevant driving information in a reliable manner (e.g. pedestrians in the vehicle route, free space to drive through). Nowadays it is clear that machine learning techniques are essential for developing such a visual perception for driving. In particular, the standard working pipeline consists of collecting data (i.e. on-board images), manually annotating the data (e.g. drawing bounding boxes around pedestrians), learning a discriminative data representation taking advantage of such annotations (e.g. a deformable part-based model, a deep convolutional neural network), and then assessing the reliability of such a representation with the acquired data. In the last two decades most of the research efforts have focused on representation learning (first, designing descriptors and learning classifiers; later, doing it end-to-end). Hence, collecting data and, especially, annotating it, is essential for learning good representations. While this has been the case from the very beginning, it only became a serious issue after the disruptive appearance of deep convolutional neural networks, due to their data-hungry nature. In this context, the problem is that manual data annotation is tiresome work that is prone to errors. Accordingly, in the late 2000s we initiated a research line consisting of training visual models using photo-realistic computer graphics, especially focusing on assisted and autonomous driving. In this paper, we summarize this work and show how it has become a new tendency with increasing acceptance. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.118 | Approved | no | ||
Call Number | Admin @ si @ LVS2017 | Serial | 2985 | ||
Permanent link to this record | |||||
Author | Alicia Fornes; Beata Megyesi; Joan Mas | ||||
Title | Transcription of Encoded Manuscripts with Image Processing Techniques | Type | Conference Article | ||
Year | 2017 | Publication | Digital Humanities Conference | Abbreviated Journal | |
Volume | Issue | Pages | 441-443 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DH | ||
Notes | DAG; 600.097; 600.121 | Approved | no | ||
Call Number | Admin @ si @ FMM2017 | Serial | 3061 | ||
Permanent link to this record | |||||
Author | Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Oliver Valero; Francesca Vidal; Zaida Sarrate | ||||
Title | Unraveling the enigmas of chromosome territoriality during spermatogenesis | Type | Conference Article | ||
Year | 2017 | Publication | IX Jornada del Departament de Biologia Cel•lular, Fisiologia i Immunologia | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | UAB; Barcelona; June 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; 600.145 | Approved | no | ||
Call Number | Admin @ si @ SBG2017b | Serial | 2959 | ||
Permanent link to this record | |||||
Author | Fernando Vilariño; Dan Norton | ||||
Title | Using multimedia tools to spread poetry collections | Type | Conference Article | ||
Year | 2017 | Publication | Internet Librarian International Conference | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | London; UK; October 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ILI | ||
Notes | MV; 600.097; SIAI | Approved | no | ||
Call Number | Admin @ si @ ViN2017 | Serial | 3031 | ||
Permanent link to this record |