Author Andres Traumann; Sergio Escalera; Gholamreza Anbarjafari
Title A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera Type Conference Article
Year 2015 Publication 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) Abbreviated Journal
Volume Issue Pages 75-79
Keywords
Abstract
Address Boston; USA; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CVPRW
Notes HuPBA;MILAB Approved no
Call Number Admin @ si @ TEA2015 Serial 2653
Permanent link to this record
 

 
Author Lluis Pere de las Heras; Oriol Ramos Terrades; Josep Llados
Title Ontology-Based Understanding of Architectural Drawings Type Book Chapter
Year 2017 Publication International Workshop on Graphics Recognition. GREC 2015. Graphic Recognition. Current Trends and Challenges Abbreviated Journal
Volume 9657 Issue Pages 75-85
Keywords Graphics recognition; Floor plan analysis; Domain ontology
Abstract In this paper we present a knowledge base of architectural documents aiming at improving existing methods of floor plan classification and understanding. It consists of an ontological definition of the domain and the inclusion of real instances coming from both automatically interpreted and manually labeled documents. The knowledge base has proven to be an effective tool to structure our knowledge and to easily maintain and upgrade it. Moreover, it is an appropriate means to automatically check the consistency of relational data and a convenient complement to hard-coded knowledge interpretation systems. (A minimal consistency-checking sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.121 Approved no
Call Number Admin @ si @ HRL2017 Serial 3086
Permanent link to this record
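
As a rough illustration of the consistency checking described in the abstract above, the following Python sketch encodes a toy floor-plan schema (which relations are allowed between which entity types) and validates a set of instances against it. The schema, entity names, and relations are hypothetical stand-ins for the ontology used in the paper, not the authors' actual model.

    # Hypothetical mini-schema: allowed (subject type, relation, object type) triples.
    SCHEMA = {
        ("door", "connects", "room"),
        ("window", "belongs_to", "wall"),
        ("wall", "bounds", "room"),
    }

    def check_consistency(instances, facts):
        """Return the facts whose entity types violate the schema.

        instances: dict mapping instance id -> entity type, e.g. {"d1": "door"}
        facts: iterable of (subject id, relation, object id) triples
        """
        violations = []
        for subj, rel, obj in facts:
            triple = (instances.get(subj), rel, instances.get(obj))
            if triple not in SCHEMA:
                violations.append((subj, rel, obj))
        return violations

    instances = {"d1": "door", "r1": "room", "w1": "wall"}
    facts = [("d1", "connects", "r1"),   # consistent
             ("w1", "connects", "r1")]   # inconsistent: walls do not "connect" rooms
    print(check_consistency(instances, facts))  # -> [('w1', 'connects', 'r1')]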
 

 
Author Marçal Rusiñol
Title Classificació semàntica i visual de documents digitals Type Journal
Year 2019 Publication Revista de biblioteconomia i documentacio Abbreviated Journal
Volume Issue Pages 75-86
Keywords
Abstract We analyze automatic processing systems that work on digitized documents with the goal of describing their contents. In this way they help make documents accessible, enable automatic indexing, and make documents reachable by search engines. The goal of these technologies is to train computational models capable of classifying, clustering, or searching over digital documents. Accordingly, the tasks of classification, clustering, and retrieval are described. When artificial intelligence technologies are used in classification systems, we expect the tool to return semantic labels; in clustering systems, to return documents grouped into meaningful clusters; and in retrieval systems, given a query, to return a list of documents ranked by relevance. We then give an overview of the methods that allow digital documents to be described, both visually (what they look like) and in terms of their semantic content (what they are about). Regarding the visual description of documents, we review the state of the art in numerical representations of digitized documents, using both classical methods and methods based on deep learning. Regarding the semantic description of the contents, we analyze techniques such as optical character recognition (OCR); basic statistics of word occurrences in a text (the bag-of-words model); and deep-learning methods such as word2vec, based on a neural network that, given a few words of a text, must predict the next word. Knowledge from engineering is being transferred into products and services in archival science, librarianship, documentation, and mass-market platforms; however, the algorithms must be efficient enough not only for recognition and literal transcription but also for interpreting the contents. (A minimal bag-of-words classification sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes DAG; 600.084; 600.135; 600.121; 600.129 Approved no
Call Number Admin @ si @ Rus2019 Serial 3282
Permanent link to this record
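
The abstract above mentions bag-of-words representations and semantic classification of digitized documents. The sketch below, assuming already-OCRed text and hypothetical labels, shows the bag-of-words idea with scikit-learn; it is only an illustration of the kind of pipeline surveyed, not the article's own system.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Hypothetical OCR output and semantic labels for a handful of documents.
    docs = ["invoice total amount due payment",
            "meeting minutes agenda attendees",
            "invoice tax payment reference",
            "agenda meeting schedule room"]
    labels = ["invoice", "minutes", "invoice", "minutes"]

    # Bag-of-words counts followed by a simple classifier.
    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(docs, labels)
    print(model.predict(["payment due for invoice"]))  # -> ['invoice']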
 

 
Author Julio C. S. Jacques Junior; Yagmur Gucluturk; Marc Perez; Umut Guçlu; Carlos Andujar; Xavier Baro; Hugo Jair Escalante; Isabelle Guyon; Marcel A. J. van Gerven; Rob van Lier; Sergio Escalera
Title First Impressions: A Survey on Vision-Based Apparent Personality Trait Analysis Type Journal Article
Year 2022 Publication IEEE Transactions on Affective Computing Abbreviated Journal TAC
Volume 13 Issue 1 Pages 75-95
Keywords Personality computing; first impressions; person perception; big-five; subjective bias; computer vision; machine learning; nonverbal signals; facial expression; gesture; speech analysis; multi-modal recognition
Abstract Personality analysis has been widely studied in psychology, neuropsychology, and signal processing, among other fields. Over the past few years it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most used cues for analyzing personality. Recently, however, there has been an increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that this sort of method could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of subjectivity in data labeling and evaluation, as well as current datasets and challenges organized to push research in the field, are reviewed.
Address 1 Jan.-March 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA Approved no
Call Number Admin @ si @ JGP2022 Serial 3724
Permanent link to this record
 

 
Author Silvio Giancola; Anthony Cioppa; Adrien Deliege; Floriane Magera; Vladimir Somers; Le Kang; Xin Zhou; Olivier Barnich; Christophe De Vleeschouwer; Alexandre Alahi; Bernard Ghanem; Marc Van Droogenbroeck; Abdulrahman Darwish; Adrien Maglo; Albert Clapes; Andreas Luyts; Andrei Boiarov; Artur Xarles; Astrid Orcesi; Avijit Shah; Baoyu Fan; Bharath Comandur; Chen Chen; Chen Zhang; Chen Zhao; Chengzhi Lin; Cheuk-Yiu Chan; Chun Chuen Hui; Dengjie Li; Fan Yang; Fan Liang; Fang Da; Feng Yan; Fufu Yu; Guanshuo Wang; H. Anthony Chan; He Zhu; Hongwei Kan; Jiaming Chu; Jianming Hu; Jianyang Gu; Jin Chen; Joao V. B. Soares; Jonas Theiner; Jorge De Corte; Jose Henrique Brito; Jun Zhang; Junjie Li; Junwei Liang; Leqi Shen; Lin Ma; Lingchi Chen; Miguel Santos Marques; Mike Azatov; Nikita Kasatkin; Ning Wang; Qiong Jia; Quoc Cuong Pham; Ralph Ewerth; Ran Song; Rengang Li; Rikke Gade; Ruben Debien; Runze Zhang; Sangrok Lee; Sergio Escalera; Shan Jiang; Shigeyuki Odashima; Shimin Chen; Shoichi Masui; Shouhong Ding; Sin-wai Chan; Siyu Chen; Tallal El-Shabrawy; Tao He; Thomas B. Moeslund; Wan-Chi Siu; Wei Zhang; Wei Li; Xiangwei Wang; Xiao Tan; Xiaochuan Li; Xiaolin Wei; Xiaoqing Ye; Xing Liu; Xinying Wang; Yandong Guo; Yaqian Zhao; Yi Yu; Yingying Li; Yue He; Yujie Zhong; Zhenhua Guo; Zhiheng Li
Title SoccerNet 2022 Challenges Results Type Conference Article
Year 2022 Publication 5th International ACM Workshop on Multimedia Content Analysis in Sports Abbreviated Journal
Volume Issue Pages 75-86
Keywords
Abstract The SoccerNet 2022 challenges were the second annual video understanding challenges organized by the SoccerNet team. In 2022, the challenges were composed of 6 vision-based tasks: (1) action spotting, focusing on retrieving action timestamps in long untrimmed videos, (2) replay grounding, focusing on retrieving the live moment of an action shown in a replay, (3) pitch localization, focusing on detecting line and goal part elements, (4) camera calibration, dedicated to retrieving the intrinsic and extrinsic camera parameters, (5) player re-identification, focusing on retrieving the same players across multiple views, and (6) multiple object tracking, focusing on tracking players and the ball through unedited video streams. Compared to last year's challenges, tasks (1-2) had their evaluation metrics redefined to consider tighter temporal accuracies, and tasks (3-6) were novel, including their underlying data and annotations. More information on the tasks, challenges and leaderboards is available on this https URL. Baselines and development kits are available on this https URL.
Address Lisboa; Portugal; October 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ACMW
Notes HUPBA; not mentioned Approved no
Call Number Admin @ si @ GCD2022 Serial 3801
Permanent link to this record
 

 
Author Pierluigi Casale; Oriol Pujol; Petia Radeva; Jordi Vitria
Title A First Approach to Activity Recognition Using Topic Models Type Conference Article
Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 202 Issue Pages 74-82
Keywords
Abstract In this work, we present a first approach to activity pattern discovery by means of topic models. Using motion data collected with a wearable device that we prototyped, TheBadge, we analyse raw accelerometer data using Latent Dirichlet Allocation (LDA), a particular instantiation of topic models. Results show that, for suitable values of the parameters needed to apply LDA to a continuous dataset, good accuracies in activity classification can be achieved. (A minimal LDA-on-sensor-data sketch follows this record.)
Address Cardona, Spain
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-60750-061-2 Medium
Area Expedition Conference CCIA
Notes OR;MILAB;HuPBA;MV Approved no
Call Number BCNPCL @ bcnpcl @ CPR2009e Serial 1231
Permanent link to this record
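
A rough sketch of the idea in the abstract above: accelerometer magnitudes are quantized into discrete "words" so that Latent Dirichlet Allocation, which expects count data, can be applied. The quantization step, window length, and number of topics are assumptions for illustration, not the paper's settings.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    signal = rng.normal(size=5000)            # stand-in for raw accelerometer magnitude

    # Quantize the continuous signal into a small symbolic vocabulary.
    n_symbols = 16
    bins = np.quantile(signal, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(signal, bins)

    # Split into fixed-length windows ("documents") and count symbol occurrences.
    window = 100
    windows = symbols[: len(symbols) // window * window].reshape(-1, window)
    counts = np.stack([np.bincount(w, minlength=n_symbols) for w in windows])

    lda = LatentDirichletAllocation(n_components=5, random_state=0)
    topic_mix = lda.fit_transform(counts)     # per-window topic proportions
    print(topic_mix.shape)                    # (number of windows, 5)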
 

 
Author Muhammad Anwer Rao; Fahad Shahbaz Khan; Joost Van de Weijer; Matthieu Molinier; Jorma Laaksonen
Title Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification Type Journal Article
Year 2018 Publication ISPRS Journal of Photogrammetry and Remote Sensing Abbreviated Journal ISPRS J
Volume 138 Issue Pages 74-85
Keywords Remote sensing; Deep learning; Scene classification; Local Binary Patterns; Texture analysis
Abstract Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and the analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distributions of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input, with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information, provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large-scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification. (A minimal LBP and late-fusion sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes LAMP; 600.109; 600.106; 600.120 Approved no
Call Number Admin @ si @ RKW2018 Serial 3158
Permanent link to this record
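
To make the LBP-encoding and late-fusion ideas from the abstract above concrete, the sketch below computes a Local Binary Pattern map with scikit-image and averages the class scores of two hypothetical streams (an RGB model and a texture model). It illustrates the general technique under those assumptions, not the TEX-Net architecture itself.

    import numpy as np
    from skimage.feature import local_binary_pattern

    rng = np.random.default_rng(0)
    rgb = rng.random((64, 64, 3))                       # stand-in for an input image
    gray = (rgb.mean(axis=2) * 255).astype(np.uint8)

    # LBP texture map (uniform patterns, 8 neighbours at radius 1).
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")

    # Hypothetical per-class scores from an RGB network and a texture (LBP) network.
    scores_rgb = np.array([0.2, 0.5, 0.3])
    scores_lbp = np.array([0.1, 0.7, 0.2])

    # Late fusion: combine the two streams' predictions, here by simple averaging.
    fused = (scores_rgb + scores_lbp) / 2
    print(lbp.shape, fused.argmax())                    # (64, 64) and the fused class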
 

 
Author Mireia Sole; Joan Blanco; Debora Gil; G. Fonseka; Richard Frodsham; Francesca Vidal; Zaida Sarrate
Title Noves perspectives en l'estudi de la territorialitat cromosòmica de cèl·lules germinals masculines: estudis tridimensionals Type Journal
Year 2017 Publication Biologia de la Reproduccio Abbreviated Journal JBR
Volume 15 Issue Pages 73-78
Keywords
Abstract In somatic cells, chromosomes occupy specific nuclear regions called chromosome territories, which are involved in the maintenance and regulation of the genome. Preliminary data in male germ cells also suggest the importance of chromosome territoriality for cell functionality. Nevertheless, the specific characteristics of testicular tissue (the presence of different cell types with different morphological characteristics, at different stages of development and with different ploidy) make it difficult to achieve conclusive results. In this study we have developed a methodology for the three-dimensional study of all chromosome territories in male germ cells from C57BL/6J mice (Mus musculus). The method includes the following steps: i) optimized cell fixation to obtain an optimal preservation of the three-dimensional cell morphology; ii) chromosome identification by FISH (Chromoprobe Multiprobe® OctoChrome™ Murine System; Cytocell) and confocal microscopy (TCS-SP5, Leica Microsystems); iii) cell type identification by immunofluorescence; iv) image analysis using Matlab scripts; v) extraction of numerical data related to chromosome features, chromosome radial position and chromosome relative position. This methodology allows the unequivocal identification and analysis of the chromosome territories at all spermatogenic stages. Results will provide information about the features that determine chromosomal position, preferred associations between chromosomes, and the relationship between chromosome positioning and genome regulation. (A minimal radial-position sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-84-697-3767-5 Medium
Area Expedition Conference
Notes IAM; 600.096; 600.145 Approved no
Call Number Admin @ si @ SBG2017c Serial 2961
Permanent link to this record
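
As a loose illustration of step v) in the abstract above (extracting chromosome radial position), the sketch below computes, with NumPy/SciPy, the normalized distance of a labeled territory's centroid from the nucleus centre in a 3D mask. The arrays and the normalization choice are assumptions for illustration, not the authors' Matlab pipeline.

    import numpy as np
    from scipy import ndimage

    # Hypothetical 3D segmentation: a spherical nucleus mask and one territory mask.
    z, y, x = np.mgrid[0:64, 0:64, 0:64]
    nucleus = (z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 <= 30 ** 2
    territory = (z - 40) ** 2 + (y - 28) ** 2 + (x - 32) ** 2 <= 6 ** 2

    # Radial position: distance from the nucleus centroid to the territory centroid,
    # normalized by the equivalent-sphere nucleus radius (0 = centre, ~1 = periphery).
    nuc_c = np.array(ndimage.center_of_mass(nucleus))
    ter_c = np.array(ndimage.center_of_mass(territory))
    radius = (3 * nucleus.sum() / (4 * np.pi)) ** (1 / 3)
    radial_position = np.linalg.norm(ter_c - nuc_c) / radius
    print(round(float(radial_position), 2))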
 

 
Author Arnau Baro; Jialuo Chen; Alicia Fornes; Beata Megyesi
Title Towards a generic unsupervised method for transcription of encoded manuscripts Type Conference Article
Year 2019 Publication 3rd International Conference on Digital Access to Textual Cultural Heritage Abbreviated Journal
Volume Issue Pages 73-78
Keywords
Abstract Historical ciphers, a special type of manuscript, contain encrypted information that is important for the interpretation of our history. The first step towards decipherment is to transcribe the images, either manually or by automatic image processing techniques. Despite the improvements in handwritten text recognition (HTR) brought by deep learning methodologies, the need for labelled training data is an important limitation. Given that ciphers often use symbol sets drawn from various alphabets, and unique symbols without any transcription scheme available, these supervised HTR techniques are not suitable for transcribing ciphers. In this paper we propose an unsupervised method for transcribing encrypted manuscripts based on clustering and label propagation, which has been successfully applied to community detection in networks. We analyze the performance on ciphers with various symbol sets, and discuss the advantages and drawbacks compared to supervised HTR methods. (A minimal clustering-plus-label-propagation sketch follows this record.)
Address Brussels; May 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference DATeCH
Notes DAG; 600.097; 600.140; 600.121 Approved no
Call Number Admin @ si @ BCF2019 Serial 3276
Permanent link to this record
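
The sketch below illustrates the general clustering-plus-label-propagation idea mentioned in the abstract above using scikit-learn: symbol descriptors (random stand-ins here) are clustered, one representative per cluster receives a seed transcription, and the labels are propagated to the rest. Feature dimensions, cluster count, and seeding are assumptions, not the paper's setup.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    # Stand-in descriptors for 300 segmented cipher symbols (three true symbol classes).
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 16)) for c in (0, 3, 6)])
    true = np.repeat([0, 1, 2], 100)                   # ground truth, for evaluation only

    # Step 1: group visually similar symbols into clusters.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    # Step 2: only one symbol per cluster gets an expert transcription (seed);
    # unlabeled samples are marked -1, as LabelSpreading expects.
    y = np.full(len(X), -1)
    for c in range(3):
        seed = np.where(clusters == c)[0][0]
        y[seed] = true[seed]

    # Step 3: propagate the seed labels over the descriptor graph.
    y_pred = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y).transduction_
    print((y_pred == true).mean())                     # fraction correctly transcribed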
 

 
Author Laura Igual; Agata Lapedriza; Ricard Borras
Title Robust Gait-Based Gender Classification using Depth Cameras Type Journal Article
Year 2013 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ
Volume 37 Issue 1 Pages 72-80
Keywords
Abstract This article presents a new approach for gait-based gender recognition using depth cameras that can run in real time. The main contribution of this study is a new fast feature extraction strategy that uses the 3D point cloud obtained from the frames in a gait cycle. For each frame, these points are aligned according to their centroid and grouped. After that, they are projected onto their PCA plane, obtaining a representation of the cycle that is particularly robust against view changes. Then, final discriminative features are computed by first making a histogram of the projected points and then using linear discriminant analysis. To test the method we have used the DGait database, which is currently the only publicly available database for gait analysis that includes depth information. We have performed experiments on manually labeled cycles and over whole video sequences, and the results show that our method improves the accuracy significantly compared with state-of-the-art systems which do not use depth information. Furthermore, our approach is insensitive to illumination changes, given that it discards the RGB information. That makes the method especially suitable for real applications, as illustrated in the last part of the experiments section. (A minimal feature-extraction sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR;MV Approved no
Call Number Admin @ si @ ILB2013 Serial 2144
Permanent link to this record
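
A minimal sketch of the feature-extraction pipeline summarized in the abstract above: centre a 3D point cloud, project it onto its first two principal components, histogram the projection, and feed per-cycle histograms to linear discriminant analysis. Point counts, histogram resolution, and labels are placeholders, not the DGait settings.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)

    def cycle_descriptor(points, bins=8):
        """Histogram of a centred point cloud projected onto its PCA plane."""
        centred = points - points.mean(axis=0)
        proj = PCA(n_components=2).fit_transform(centred)
        hist, _, _ = np.histogram2d(proj[:, 0], proj[:, 1], bins=bins)
        return (hist / hist.sum()).ravel()

    # Hypothetical gait cycles: 40 point clouds of 500 depth points each, two classes.
    X = np.array([cycle_descriptor(rng.normal(scale=1 + (i % 2), size=(500, 3)))
                  for i in range(40)])
    y = np.array([i % 2 for i in range(40)])          # placeholder gender labels

    clf = LinearDiscriminantAnalysis().fit(X, y)
    print(clf.score(X, y))                             # training accuracy of the sketch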
 

 
Author Michal Drozdzal; Santiago Segui; Carolina Malagelada; Fernando Azpiroz; Petia Radeva
Title Adaptable image cuts for motility inspection using WCE Type Journal Article
Year 2013 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG
Volume 37 Issue 1 Pages 72-80
Keywords
Abstract Wireless Capsule Endoscopy (WCE) technology allows the visualization of the whole small intestine tract. Since the capsule moves freely, mainly by means of peristalsis, the data acquired during the study give a lot of information about intestinal motility. However, due to (1) the huge number of frames, (2) the complex appearance of the intestinal scene and (3) intestinal dynamics that make the visualization of the small intestine's physiological phenomena difficult, the analysis of WCE data requires computer-aided systems to speed up the analysis. In this paper, we propose an efficient algorithm for building a novel representation of the WCE video data that is optimal for motility analysis and inspection. The algorithm transforms the 3D video data into a 2D longitudinal view by choosing the most informative part of each frame from the intestinal motility point of view. This step maximizes the lumen visibility in its longitudinal extension. The task of finding "the best longitudinal view" is defined as a cost function optimization problem whose global minimum is obtained using Dynamic Programming. Validation on both synthetic data and WCE data shows that the adaptive longitudinal view is a good alternative to traditional motility analysis done by video analysis. The proposed data representation gives a new, holistic insight into small intestine motility, allowing motility events that are difficult to spot in the WCE video to be easily defined and analyzed. Moreover, the visual inspection of small intestine motility is 4 times faster than video skimming of the WCE. (A minimal dynamic-programming sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB; OR; 600.046; 605.203 Approved no
Call Number Admin @ si @ DSM2012 Serial 2151
Permanent link to this record
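
To make the cost-optimization step in the abstract above concrete, the sketch below uses dynamic programming to pick one column per frame so that a per-column cost plus a smoothness penalty between consecutive frames is minimized. The cost matrix and penalty are invented stand-ins for the paper's actual cost function.

    import numpy as np

    def best_path(cost, smooth=1.0):
        """Choose one column per frame minimizing cost plus smooth * |column jump|.

        cost: (n_frames, n_columns) array; lower means a more informative column.
        Returns the optimal column index for every frame.
        """
        n_frames, n_cols = cost.shape
        acc = cost[0].copy()                       # best accumulated cost ending here
        back = np.zeros((n_frames, n_cols), dtype=int)
        jump = smooth * np.abs(np.arange(n_cols)[:, None] - np.arange(n_cols)[None, :])
        for t in range(1, n_frames):
            total = acc[None, :] + jump            # transition cost from every previous column
            back[t] = total.argmin(axis=1)
            acc = cost[t] + total.min(axis=1)
        # Backtrack the globally optimal column sequence.
        path = [int(acc.argmin())]
        for t in range(n_frames - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    rng = np.random.default_rng(0)
    print(best_path(rng.random((5, 8))))           # e.g. a smooth 5-frame column path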
 

 
Author Frederic Sampedro; Anna Domenech; Sergio Escalera; Ignasi Carrio
Title Computing quantitative indicators of structural renal damage in pediatric DMSA scans Type Journal Article
Year 2017 Publication Revista Española de Medicina Nuclear e Imagen Molecular Abbreviated Journal REMNIM
Volume 36 Issue 2 Pages 72-77
Keywords
Abstract OBJECTIVES:
The aim of this work is to propose, implement, and validate a computational framework for the quantification of structural renal damage from 99mTc-dimercaptosuccinic acid (DMSA) scans in an observer-independent manner.
MATERIALS AND METHODS:
From a set of 16 pediatric DMSA-positive scans and 16 matched controls, and using both expert-guided and automatic approaches, a set of image-derived quantitative indicators was computed based on the relative size, intensity and histogram distribution of the lesion. A correlation analysis was conducted in order to investigate the association of these indicators with other clinical data of interest in this scenario, including C-reactive protein (CRP), white cell count, vesicoureteral reflux, fever, relative perfusion, and the presence of renal sequelae in a 6-month follow-up DMSA scan.
RESULTS:
A fully automatic lesion detection and segmentation system was able to successfully classify DMSA-positive from negative scans (AUC=0.92, sensitivity=81% and specificity=94%). The image-computed relative size of the lesion correlated with the presence of fever and with CRP levels (p<0.05), and a measurement derived from the distribution histogram of the lesion achieved significant performance in the detection of permanent renal damage (AUC=0.86, sensitivity=100% and specificity=75%).
CONCLUSIONS:
The proposed computational framework for the quantification of structural renal damage from DMSA scans showed promising potential to complement visual diagnosis and non-imaging indicators. (A minimal indicator-computation sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA;MILAB; not mentioned Approved no
Call Number Admin @ si @ SDE2017 Serial 2842
Permanent link to this record
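
As a loose illustration of the image-derived indicators described above, the sketch below computes, with NumPy, a lesion's relative size and a simple histogram-based statistic from a kidney mask, a lesion mask, and an intensity image. The arrays and the particular statistic are hypothetical, not the indicators defined in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    intensity = rng.random((128, 128))                 # stand-in for a DMSA scan
    kidney = np.zeros((128, 128), dtype=bool)
    kidney[30:100, 40:90] = True                       # hypothetical kidney mask
    lesion = np.zeros_like(kidney)
    lesion[60:80, 50:70] = True                        # hypothetical lesion mask

    # Indicator 1: relative size of the lesion with respect to the kidney.
    relative_size = lesion.sum() / kidney.sum()

    # Indicator 2: a histogram-derived statistic, here the offset of lesion intensities
    # relative to the healthy kidney parenchyma, in units of its spread.
    healthy = intensity[kidney & ~lesion]
    indicator = (np.median(intensity[lesion]) - np.median(healthy)) / healthy.std()

    print(round(float(relative_size), 3), round(float(indicator), 3))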
 

 
Author Carles Fernandez; Pau Baiget; Xavier Roca; Jordi Gonzalez
Title Exploiting Natural Language Generation in Scene Interpretation Type Book Chapter
Year 2009 Publication Human–Centric Interfaces for Ambient Intelligence Abbreviated Journal
Volume 4 Issue Pages 71-93
Keywords
Abstract
Address
Corporate Author Thesis
Publisher Elsevier Science and Tech Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number ISE @ ise @ FBR2009 Serial 1212
Permanent link to this record
 

 
Author Jorge Bernal; F. Javier Sanchez; Fernando Vilariño
Title Depth of Valleys Accumulation Algorithm for Object Detection Type Conference Article
Year 2011 Publication 14th Congrès Català en Intel·ligencia Artificial Abbreviated Journal
Volume 1 Issue 1 Pages 71-80
Keywords Object Recognition, Object Region Identification, Image Analysis, Image Processing
Abstract This work aims at detecting in which regions of an image the objects are, using information about the intensity of valleys, which appear to surround objects in images where the light source points in the same direction as the camera. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point lies inside a valley; and second, an algorithm that marks as interior to objects those points of the image which lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas. (A minimal depth-of-valleys sketch follows this record.)
Address Lleida
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-60750-841-0 Medium
Area 800 Expedition Conference CCIA
Notes MV;SIAI Approved no
Call Number IAM @ iam @ BSV2011b Serial 1699
Permanent link to this record
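
A rough sketch of the first stage described above: a valley response (here Frangi's filter with dark structures enhanced) is combined with the morphological gradient to obtain a depth-of-valleys map. The filter choice and structuring-element size are assumptions, not necessarily the detector used by the authors.

    import numpy as np
    from skimage import data, filters, morphology

    image = data.camera() / 255.0                      # any grayscale image works here

    # Valley response: Frangi's filter tuned to dark, elongated structures (valleys).
    valleys = filters.frangi(image, black_ridges=True)

    # Morphological gradient: local contrast around each pixel.
    selem = morphology.disk(3)
    gradient = morphology.dilation(image, selem) - morphology.erosion(image, selem)

    # Depth of valleys: deep points are both valley-like and locally contrasted.
    depth_of_valleys = valleys * gradient
    print(depth_of_valleys.shape, float(depth_of_valleys.max()))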
 

 
Author G. de Oliveira; Mariella Dimiccoli; Petia Radeva
Title Egocentric Image Retrieval With Deep Convolutional Neural Networks Type Conference Article
Year 2016 Publication 19th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume Issue Pages 71-76
Keywords
Abstract
Address Barcelona; Spain; October 2016
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes MILAB Approved no
Call Number Admin @ si @ODR2016 Serial 2790
Permanent link to this record