Author P. Ricaurte; C. Chilan; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa
  Title Feature Point Descriptors: Infrared and Visible Spectra Type Journal Article
  Year 2014 Publication Sensors Abbreviated Journal SENS  
  Volume 14 Issue 2 Pages 3690-3701  
  Keywords  
  Abstract This manuscript evaluates the behavior of classical feature point descriptors when they are used on images from the long-wave infrared spectral band and compares them with the results obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results using a cross-spectral outdoor image data set are presented and conclusions from these experiments are given.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes ADAS; 600.055; 600.076 Approved no
  Call Number Admin @ si @ RCA2014a Serial 2474  
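The robustness evaluation summarized above can be approximated with off-the-shelf tools. A minimal sketch, assuming OpenCV's ORB as one classical descriptor and using the fraction of cross-checked matches that survive a synthetic in-plane rotation as a rough repeatability score (the paper's framework covers more descriptors, transformations, and metrics):

```python
# Hedged sketch: descriptor robustness to rotation, in the spirit of the
# evaluation above. File names and the choice of ORB are illustrative
# assumptions, not the authors' exact protocol.
import cv2
import numpy as np

def rotation_repeatability(img, angles=(15, 30, 45, 90)):
    """Ratio of cross-checked descriptor matches that survive a rotation."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    h, w = img.shape[:2]
    scores = {}
    for angle in angles:
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rotated = cv2.warpAffine(img, M, (w, h))
        kp_rot, des_rot = orb.detectAndCompute(rotated, None)
        if des_ref is None or des_rot is None:
            scores[angle] = 0.0
            continue
        matches = matcher.match(des_ref, des_rot)
        scores[angle] = len(matches) / max(len(kp_ref), 1)
    return scores

# Usage with one visible / long-wave infrared pair (placeholder paths):
# vis = cv2.imread("pair_visible.png", cv2.IMREAD_GRAYSCALE)
# lwir = cv2.imread("pair_lwir.png", cv2.IMREAD_GRAYSCALE)
# print(rotation_repeatability(vis), rotation_repeatability(lwir))
```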
 

 
Author Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny
  Title Word Spotting and Recognition with Embedded Attributes Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 12 Pages 2552 - 2566  
  Keywords  
  Abstract This article addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attribute learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images, showing results comparable to or better than the state of the art on spotting and recognition tasks.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium  
  Area Expedition Conference
  Notes DAG; 600.056; 600.045; 600.061; 602.006; 600.077 Approved no  
  Call Number Admin @ si @ AGF2014a Serial 2483  
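The common image/text subspace idea can be illustrated with a toy string embedding. A minimal sketch, assuming a simplified pyramidal character-occurrence embedding for query strings (in the spirit of PHOC-style attributes) and precomputed image embeddings of the same dimensionality; the actual method learns the joint subspace with attribute classifiers and subspace regression:

```python
# Hedged sketch: embed text strings as binary character-region attributes and
# rank word-image embeddings (assumed given) by cosine similarity.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789"

def string_embedding(word, levels=(1, 2, 3)):
    """Binary attribute vector: which characters occur in which word region."""
    word = word.lower()
    vec = []
    for level in levels:
        for region in range(level):
            lo, hi = region / level, (region + 1) / level
            chunk = {word[i] for i in range(len(word))
                     if lo <= (i + 0.5) / len(word) < hi}
            vec.extend(1.0 if c in chunk else 0.0 for c in ALPHABET)
    return np.asarray(vec)

def spot(query_word, image_embeddings):
    """Rank word images by cosine similarity to the embedded query string."""
    q = string_embedding(query_word)
    q = q / (np.linalg.norm(q) + 1e-8)
    X = image_embeddings / (np.linalg.norm(image_embeddings, axis=1,
                                            keepdims=True) + 1e-8)
    return np.argsort(-X @ q)   # indices of the best matches first
```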
 

 
Author Alicia Fornes; Gemma Sanchez
  Title Analysis and Recognition of Music Scores Type Book Chapter
  Year 2014 Publication Handbook of Document Image Processing and Recognition Abbreviated Journal  
  Volume E Issue Pages 749-774  
  Keywords  
  Abstract The analysis and recognition of music scores has attracted the interest of researchers for decades. Optical Music Recognition (OMR) is a classical research field of Document Image Analysis and Recognition (DIAR), whose aim is to extract information from music scores. Music scores contain both graphical and textual information, and for this reason, techniques are closely related to graphics recognition and text recognition. Since music scores use a particular diagrammatic notation that follows the rules of music theory, many approaches make use of context information to guide the recognition and solve ambiguities. This chapter overviews the main OMR approaches. Firstly, the different methods are grouped according to the OMR stages, namely, staff removal, music symbol recognition, and syntactical analysis. Secondly, specific approaches for old and handwritten music scores are reviewed. Finally, online approaches and commercial systems are also discussed.
  Address  
  Corporate Author Thesis  
  Publisher Springer London Place of Publication Editor D. Doermann; K. Tombre  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-0-85729-860-7 Medium  
  Area Expedition Conference
  Notes DAG; ADAS; 600.076; 600.077 Approved no  
  Call Number Admin @ si @ FoS2014 Serial 2484  
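One of the OMR stages surveyed in the chapter, staff removal, has a classic projection-based baseline. A minimal sketch, assuming a binarized score where ink pixels are 1 and an illustrative fill-ratio threshold:

```python
# Hedged sketch: projection-based staff removal, one of many approaches the
# chapter surveys. Thresholds are illustrative.
import numpy as np

def remove_staff_lines(binary_score, row_fill_ratio=0.6):
    """Blank out rows whose fraction of ink pixels suggests a staff line."""
    img = binary_score.copy()
    fill = img.sum(axis=1) / img.shape[1]          # ink ratio per row
    staff_rows = np.where(fill > row_fill_ratio)[0]
    img[staff_rows, :] = 0                          # crude removal
    return img, staff_rows

# A real system would then restore symbol pixels that overlapped the staff,
# e.g. by checking vertical run lengths around each removed row.
```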
 

 
Author Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny
  Title Segmentation-free Word Spotting with Exemplar SVMs Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 12 Pages 3967–3978  
  Keywords Word spotting; Segmentation-free; Unsupervised learning; Reranking; Query expansion; Compression  
  Abstract In this paper we propose an unsupervised segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding-window approach is used to locate the document regions that are most similar to the query. We use the Exemplar SVM framework to produce a better representation of the query in an unsupervised way. Then, we use a more discriminative representation based on Fisher Vectors to rerank the best regions retrieved, and the most promising ones are used to expand the Exemplar SVM training set and improve the query representation. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster, since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, both in accuracy and in speed and memory usage.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes DAG; 600.045; 600.056; 600.061; 602.006; 600.077 Approved no  
  Call Number Admin @ si @ AGF2014b Serial 2485  
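The Product Quantization step described above can be sketched with standard tools. A minimal sketch, assuming scikit-learn k-means for the per-block codebooks and illustrative settings for the number of subvectors and codebook size (not necessarily the paper's):

```python
# Hedged sketch: Product Quantization to compress descriptors and compute
# fast asymmetric distances via table lookups.
import numpy as np
from sklearn.cluster import KMeans

def train_pq(X, n_sub=8, k=256, seed=0):
    """One k-means codebook per subvector block of the descriptors in X."""
    d = X.shape[1] // n_sub
    return [KMeans(n_clusters=k, n_init=4, random_state=seed)
            .fit(X[:, i * d:(i + 1) * d]) for i in range(n_sub)]

def encode(codebooks, X):
    """Integer codes of shape (n_samples, n_sub)."""
    d = X.shape[1] // len(codebooks)
    return np.stack([cb.predict(X[:, i * d:(i + 1) * d])
                     for i, cb in enumerate(codebooks)], axis=1)

def asymmetric_distances(codebooks, query, codes):
    """Precompute query-to-centroid tables, then sum table lookups per code."""
    d = query.shape[0] // len(codebooks)
    tables = [np.linalg.norm(cb.cluster_centers_ - query[i * d:(i + 1) * d],
                             axis=1) ** 2
              for i, cb in enumerate(codebooks)]
    return sum(tables[i][codes[:, i]] for i in range(len(codebooks)))
```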
 

 
Author Michal Drozdzal
  Title Sequential image analysis for computer-aided wireless endoscopy Type Book Whole
  Year 2014 Publication PhD Thesis, Universitat de Barcelona-CVC Abbreviated Journal  
  Volume Issue Pages  
  Keywords  
  Abstract Wireless Capsule Endoscopy (WCE) is a technique for inner-visualization of the entire small intestine and, thus, offers an interesting perspective on intestinal motility. The two major drawbacks of this technique are: 1) the huge amount of data acquired by WCE makes the motility analysis tedious, and 2) since the capsule is the first tool that offers complete inner-visualization of the small intestine, the exact importance of the observed events is still an open issue. Therefore, in this thesis, a novel computer-aided system for intestinal motility analysis is presented. The goal of the system is to provide an easily comprehensible visual description of motility-related intestinal events to a physician. In order to do so, several tools based either on computer vision concepts or on machine learning techniques are presented. A method for transforming the 3D video signal into a holistic image of intestinal motility, called a motility bar, is proposed. The method calculates the optimal mapping from video to image from the intestinal motility point of view.
To characterize intestinal motility, methods for automatic extraction of motility information from WCE are presented. Two of them are based on the motility bar and two of them are based on frame-by-frame analysis. In particular, four algorithms dealing with the problems of intestinal contraction detection, lumen size estimation, intestinal content characterization and wrinkle frame detection are proposed and validated. The results of the algorithms are converted into sequential features using an online statistical test. This test is designed to work with multivariate data streams. To this end, we propose a novel formulation of a concentration inequality that is introduced into a robust adaptive windowing algorithm for multivariate data streams. The algorithm is used to obtain a robust representation of segments with constant intestinal motility activity. The obtained sequential features are shown to be discriminative in the problem of abnormal motility characterization.
Finally, we tackle the problem of efficient labeling. To this end, we incorporate active learning concepts into the problems present in WCE data and propose two approaches. The first one is based on the concepts of sequential learning, and the second one adapts partition-based active learning to an error-free labeling scheme. All these steps are sufficient to provide an extensive visual description of intestinal motility that can be used by an expert as a decision support system.
 
  Address  
  Corporate Author Thesis Ph.D. thesis  
  Publisher Ediciones Graficas Rey Place of Publication Editor Petia Radeva  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-84-940902-3-3 Medium  
  Area Expedition Conference
  Notes MILAB Approved no  
  Call Number Admin @ si @ Dro2014 Serial 2486  
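The adaptive windowing idea for multivariate streams can be sketched as follows, assuming a Hoeffding-style bound on the difference of window-half means; the thesis uses a more refined concentration inequality and split search:

```python
# Hedged sketch: an adaptive-window change detector for multivariate feature
# streams. It compares the means of the two halves of the current window and
# drops the older half when the gap exceeds a Hoeffding-like bound.
import numpy as np

class AdaptiveWindow:
    def __init__(self, delta=0.01, value_range=1.0):
        self.delta, self.range = delta, value_range
        self.window = []                       # list of feature vectors

    def add(self, x):
        """Append a new observation; return True if a change was detected."""
        self.window.append(np.asarray(x, dtype=float))
        return self._shrink_if_change()

    def _shrink_if_change(self):
        n = len(self.window)
        if n < 10:
            return False
        half = n // 2
        W = np.stack(self.window)
        gap = np.linalg.norm(W[:half].mean(axis=0) - W[half:].mean(axis=0))
        # Hoeffding-style bound on the mean difference of bounded vectors
        eps = self.range * np.sqrt(0.5 * (1 / half + 1 / (n - half))
                                   * np.log(4.0 / self.delta))
        if gap > eps:
            self.window = self.window[half:]   # keep only the recent segment
            return True
        return False
```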
 

 
Author Jorge Bernal
  Title Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps Type Journal Article
  Year 2014 Publication Electronic Letters on Computer Vision and Image Analysis Abbreviated Journal ELCVIA  
  Volume 13 Issue 2 Pages 9-10  
  Keywords Colonoscopy; polyp localization; polyp segmentation; Eye-tracking  
  Abstract Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage in which it is detected; hence the need for early colon screening. There are several screening techniques, but colonoscopy is still the gold standard nowadays, although it has some drawbacks such as the miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing a polyp localization and a polyp segmentation system based on a model of appearance for polyps. To develop both methods we define a model of appearance for polyps, which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of the image formation and we also consider the presence of other elements of the endoluminal scene, such as specular highlights and blood vessels, which have an impact on the performance of our methods. In order to develop our polyp localization method we accumulate valley information to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. As we want to explore the usability of our methods, we present a comparative analysis between physicians' fixations obtained via an eye-tracking device and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is far from expert physicians.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor Alicia Fornes; Volkmar Frinken  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes MV Approved no  
  Call Number Admin @ si @ Ber2014 Serial 2487  
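The valley-accumulation energy map can be caricatured with basic morphology. A minimal sketch, assuming a grayscale frame, a black-hat filter as the valley detector and Gaussian smoothing as the accumulation step; the actual method also models image formation, specular highlights and blood vessels:

```python
# Hedged sketch: emphasise intensity valleys (dark boundaries enclosing a
# polyp), accumulate the evidence into an energy map, and take its maximum
# as a polyp-location guess. Kernel and sigma values are illustrative.
import cv2
import numpy as np

def polyp_energy_map(gray, valley_kernel=15, accumulation_sigma=25):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (valley_kernel, valley_kernel))
    valleys = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    energy = cv2.GaussianBlur(valleys.astype(np.float32), (0, 0),
                              accumulation_sigma)
    y, x = np.unravel_index(np.argmax(energy), energy.shape)
    return energy, (x, y)                      # energy map + localization
```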
 

 
Author Joan Marc Llargues Asensio; Juan Peralta; Raul Arrabales; Manuel Gonzalez Bedia; Paulo Cortez; Antonio Lopez
  Title Artificial Intelligence Approaches for the Generation and Assessment of Believable Human-Like Behaviour in Virtual Characters Type Journal Article
  Year 2014 Publication Expert Systems With Applications Abbreviated Journal EXSY  
  Volume 41 Issue 16 Pages 7281–7290  
  Keywords Turing test; Human-like behaviour; Believability; Non-player characters; Cognitive architectures; Genetic algorithm; Artificial neural networks  
  Abstract Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem nowadays. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell apart humans and non-human agents. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game, where both human players and non-player characters interact in complex virtual environments. Based on our past experience both in the BotPrize competition and in other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA–CRANIUM and SOAR cognitive architectures and a third based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition (Arrabales et al., 2012), and have shown a significant improvement in the humanness ratio. Additionally, we have subjected all these bots to both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes ADAS; 600.055; 600.057; 600.076 Approved no  
  Call Number Admin @ si @ LPA2014 Serial 2500  
 

 
Author Jose Manuel Alvarez; Antonio Lopez; Theo Gevers; Felipe Lumbreras
  Title Combining Priors, Appearance and Context for Road Detection Type Journal Article
  Year 2014 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume 15 Issue 3 Pages 1168-1178  
  Keywords Illuminant invariance; lane markings; road detection; road prior; road scene understanding; vanishing point; 3-D scene layout  
  Abstract Detecting the free road surface ahead of a moving vehicle is an important research topic in different areas of computer vision, such as autonomous driving or car collision warning.
Current vision-based road detection methods are usually based solely on low-level features. Furthermore, they generally assume structured roads, road homogeneity, and uniform lighting conditions, constraining their applicability in real-world scenarios. In this paper, road priors and contextual information are introduced for road detection. First, we propose an algorithm to estimate road priors online using geographical information, providing relevant initial information about the road location. Then, contextual cues, including horizon lines, vanishing points, lane markings, 3-D scene layout, and road geometry, are used in addition to low-level cues derived from the appearance of roads. Finally, a generative model is used to combine these cues and priors, leading to a road detection method that is, to a large degree, robust to varying imaging conditions, road types, and scenarios.
 
  Address  
  Corporate Author Thesis  
  Publisher IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC Place of Publication Editor
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1524-9050 ISBN Medium  
  Area Expedition Conference
  Notes ADAS; 600.076; ISE Approved no
  Call Number Admin @ si @ ALG2014 Serial 2501  
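The generative combination of cues can be sketched as naive-Bayes fusion of per-pixel probability maps. A minimal sketch, assuming each cue (appearance, geographical road prior, context) is already expressed as a probability map in [0, 1]:

```python
# Hedged sketch: combine independent per-pixel road cues in log-odds space,
# standing in for the paper's generative combination of priors, appearance
# and context. The independence assumption is a simplification.
import numpy as np

def fuse_road_cues(cue_maps, prior=0.5, eps=1e-6):
    """Return a per-pixel posterior road probability from several cue maps."""
    log_odds = np.full(cue_maps[0].shape, np.log(prior / (1 - prior)))
    for p in cue_maps:
        p = np.clip(p, eps, 1 - eps)
        log_odds += np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-log_odds))

# Usage: fuse_road_cues([appearance_map, geo_prior_map, horizon_map]) > 0.5
# gives a binary road mask under the (strong) independence assumption.
```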
 

 
Author Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu
  Title Robust Head Gestures Recognition for Assistive Technology Type Book Chapter
  Year 2014 Publication Pattern Recognition Abbreviated Journal  
  Volume 8495 Issue Pages 152-161  
  Keywords  
  Abstract This paper presents a system capable of recognizing six head gestures: nodding, shaking, turning right, turning left, looking up, and looking down. The main difference of our system compared to other methods is that the Hidden Markov Models presented in this paper are fully connected and consider all possible states in any given order, which provides the following advantages: (1) it allows unconstrained movement of the head, and (2) it can be easily integrated into a wearable device (e.g., glasses, neck-hung devices), in which case it can robustly recognize gestures in the presence of ego-motion. Experimental results show that this approach outperforms common methods that use restricted HMMs for each gesture.
  Address  
  Corporate Author Thesis  
  Publisher Springer International Publishing Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title LNCS  
  Series Volume Series Issue Edition  
  ISSN 0302-9743 ISBN 978-3-319-07490-0 Medium  
  Area Expedition Conference
  Notes LAMP; Approved no  
  Call Number Admin @ si @ TSR2014b Serial 2505  
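Scoring a head-pose sequence against per-gesture HMMs can be sketched with the forward algorithm. A minimal sketch, assuming discretised pose observations and fully connected transition matrices; the paper's features, training and topology details are its own:

```python
# Hedged sketch: classify a discretised head-pose sequence by picking the
# gesture HMM with the highest forward-algorithm log-likelihood.
import numpy as np

def forward_loglik(obs, start, trans, emis):
    """log P(obs | HMM), with scaling for numerical stability.

    start: (S,) initial probabilities, trans: (S, S) row-stochastic and
    fully connected, emis: (S, V) emission probabilities, obs: int sequence.
    """
    alpha = start * emis[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emis[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

def classify_gesture(obs, models):
    """models: dict gesture_name -> (start, trans, emis) matrices."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```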
 

 
Author Fahad Shahbaz Khan; Joost Van de Weijer; Muhammad Anwer Rao; Michael Felsberg; Carlo Gatta
  Title Semantic Pyramids for Gender and Action Recognition Type Journal Article
  Year 2014 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP  
  Volume 23 Issue 8 Pages 3633-3645  
  Keywords  
  Abstract Person description is a challenging problem in computer vision. We investigated two major aspects of person description: 1) gender and 2) action recognition in still images. Most state-of-the-art approaches for gender and action recognition rely on the description of a single body part, such as the face or the full body. However, relying on a single body part is suboptimal due to significant variations in scale, viewpoint, and pose in real-world images. This paper proposes a semantic pyramid approach for pose normalization. Our approach is fully automatic and based on combining information from full-body, upper-body, and face regions for gender and action recognition in still images. The proposed approach does not require any annotations for the upper body and face of a person. Instead, we rely on pretrained state-of-the-art upper-body and face detectors to automatically extract semantic information of a person. Given multiple bounding boxes from each body part detector, we then propose a simple method to select the best candidate bounding box, which is used for feature extraction. Finally, the extracted features from the full-body, upper-body, and face regions are combined into a single representation for classification. To validate the proposed approach for gender recognition, experiments are performed on three large data sets, namely: 1) human attribute; 2) head-shoulder; and 3) proxemics. For action recognition, we perform experiments on four data sets most used for benchmarking action recognition in still images: 1) Sports; 2) Willow; 3) PASCAL VOC 2010; and 4) Stanford-40. Our experiments clearly demonstrate that the proposed approach, despite its simplicity, outperforms state-of-the-art methods for gender and action recognition.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1057-7149 ISBN Medium  
  Area Expedition Conference
  Notes CIC; LAMP; 601.160; 600.074; 600.079; MILAB Approved no
  Call Number Admin @ si @ KWR2014 Serial 2507  
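The per-region feature combination can be sketched with standard descriptors. A minimal sketch, assuming detector-provided boxes for the full body, upper body and face, and HOG as a stand-in for the paper's richer feature set:

```python
# Hedged sketch: concatenate per-region HOG descriptors from full-body,
# upper-body and face crops into one fixed-length representation.
import numpy as np
from skimage.feature import hog
from skimage.transform import resize

def region_descriptor(gray, box, out_size=(128, 64)):
    """HOG of one region; box = (x, y, w, h) from a pretrained detector."""
    x, y, w, h = box
    crop = resize(gray[y:y + h, x:x + w], out_size, anti_aliasing=True)
    return hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

def semantic_pyramid(gray, full_box, upper_box, face_box):
    """Single fixed-length vector fed to a linear classifier downstream."""
    parts = [region_descriptor(gray, b)
             for b in (full_box, upper_box, face_box)]
    return np.concatenate(parts)
```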
 

 
Author Fahad Shahbaz Khan; Shida Beigpour; Joost Van de Weijer; Michael Felsberg
  Title Painting-91: A Large Scale Database for Computational Painting Categorization Type Journal Article
  Year 2014 Publication Machine Vision and Applications Abbreviated Journal MVAP  
  Volume 25 Issue 6 Pages 1385-1397  
  Keywords  
  Abstract Computer analysis of visual art, especially paintings, is an interesting cross-disciplinary research domain. Most of the research in the analysis of paintings involves small to medium-sized datasets with their own specific settings. Interestingly, significant progress has been made in the field of object and scene recognition lately. A key factor in this success is the introduction and availability of benchmark datasets for evaluation. Surprisingly, such a benchmark setup is still missing in the area of computational painting categorization. In this work, we propose a novel large-scale dataset of digital paintings. The dataset consists of paintings from 91 different painters. We further show three applications of our dataset, namely artist categorization, style classification and saliency detection. We investigate how local and global features popular in image classification perform for the tasks of artist and style categorization. For both categorization tasks, our experimental results suggest that combining multiple features significantly improves the final performance. We show that state-of-the-art computer vision methods can correctly classify 50% of unseen paintings to their painter in a large dataset and correctly attribute their artistic style in over 60% of the cases. Additionally, we explore the task of saliency detection on paintings and show experimental findings using state-of-the-art saliency estimation algorithms.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0932-8092 ISBN Medium  
  Area Expedition Conference
  Notes CIC; LAMP; 600.074; 600.079 Approved no  
  Call Number Admin @ si @ KBW2014 Serial 2510  
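The finding that combining multiple features improves categorization suggests a simple late-fusion baseline. A minimal sketch, assuming one linear classifier per feature type whose decision scores are summed (not necessarily the paper's fusion scheme):

```python
# Hedged sketch: late fusion of per-feature linear classifiers for artist or
# style categorization. Assumes more than two classes, so decision_function
# returns one score column per class.
import numpy as np
from sklearn.svm import LinearSVC

def train_late_fusion(feature_sets, labels):
    """feature_sets: list of (n_samples, d_i) matrices, one per feature type."""
    return [LinearSVC(C=1.0).fit(X, labels) for X in feature_sets]

def predict_late_fusion(classifiers, feature_sets):
    scores = sum(clf.decision_function(X)
                 for clf, X in zip(classifiers, feature_sets))
    return classifiers[0].classes_[np.argmax(scores, axis=1)]
```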
 

 
Author C. Alejandro Parraga; Jordi Roca; Dimosthenis Karatzas; Sophie Wuerger
  Title Limitations of visual gamma corrections in LCD displays Type Journal Article
  Year 2014 Publication Displays Abbreviated Journal Dis  
  Volume 35 Issue 5 Pages 227–239  
  Keywords Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration  
  Abstract A method for estimating the non-linear gamma transfer function of liquid–crystal displays (LCDs) without the need for a photometric measurement device was described by Xiao et al. (2011) [1]. It relies on observers' judgments of visual luminance, obtained by presenting eight half-tone patterns with luminances from 1/9 to 8/9 of the maximum value of each colour channel. These half-tone patterns were distributed over the screen along both the vertical and horizontal viewing axes. We conducted a series of photometric and psychophysical measurements (consisting of the simultaneous presentation of half-tone patterns in each trial) to evaluate whether the angular dependency of the light generated by three different LCD technologies would bias the results of these gamma transfer function estimations. Our results show that there are significant differences between the gamma transfer functions measured and produced by observers at different viewing angles. We suggest appropriate modifications to the Xiao et al. paradigm to counterbalance these artefacts, which also have the advantage of shortening the amount of time spent collecting the psychophysical measurements.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes CIC; DAG; 600.052; 600.077; 600.074 Approved no  
  Call Number Admin @ si @ PRK2014 Serial 2511  
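The visual gamma-correction logic can be illustrated with a small fitting example. A minimal sketch, assuming each observer match pairs a half-tone pattern of known relative luminance with the uniform digital level judged equally bright, and fitting a single power-law exponent in log space (the paper's procedure and analysis are more elaborate):

```python
# Hedged sketch: recover a display gamma from observer luminance matches by
# fitting (d/255)**gamma = p in log space. Digital levels must be > 0.
import numpy as np

def fit_gamma(digital_levels, matched_luminances):
    d = np.asarray(digital_levels, dtype=float) / 255.0
    p = np.asarray(matched_luminances, dtype=float)
    # least squares on log p = gamma * log d
    return float(np.sum(np.log(p) * np.log(d)) / np.sum(np.log(d) ** 2))

# Example with synthetic matches for a display of gamma ~= 2.2:
# levels = np.array([64, 96, 128, 160, 192, 224])
# fit_gamma(levels, (levels / 255.0) ** 2.2)   # -> approximately 2.2
```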
 

 
Author C. Alejandro Parraga
  Title Color Vision, Computational Methods for Type Book Chapter
  Year 2014 Publication Encyclopedia of Computational Neuroscience Abbreviated Journal  
  Volume Issue Pages 1-11  
  Keywords Color computational vision; Computational neuroscience of color  
  Abstract The study of color vision has been aided by a whole battery of computational methods that attempt to describe the mechanisms that lead to our perception of colors in terms of the information-processing properties of the visual system. Their scope is highly interdisciplinary, linking apparently dissimilar disciplines such as mathematics, physics, computer science, neuroscience, cognitive science, and psychology. Since the sensation of color is a feature of our brains, computational approaches usually include biological features of neural systems in their descriptions, from retinal light-receptor interaction to subcortical color opponency, cortical signal decoding, and color categorization. They produce hypotheses that are usually tested by behavioral or psychophysical experiments.  
  Address  
  Corporate Author Thesis  
  Publisher Springer-Verlag Berlin Heidelberg Place of Publication Editor Dieter Jaeger; Ranu Jung  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN 978-1-4614-7320-6 Medium  
  Area Expedition Conference
  Notes CIC; 600.074 Approved no  
  Call Number Admin @ si @ Par2014 Serial 2512  
 

 
Author Svebor Karaman; Giuseppe Lisanti; Andrew Bagdanov; Alberto del Bimbo
  Title From re-identification to identity inference: Labeling consistency by local similarity constraints Type Book Chapter
  Year 2014 Publication Person Re-Identification Abbreviated Journal  
  Volume 2 Issue Pages 287-307  
  Keywords re-identification; Identity inference; Conditional random fields; Video surveillance  
  Abstract In this chapter, we introduce the problem of identity inference as a generalization of person re-identification. It is most appropriate to distinguish identity inference from re-identification in situations where a large number of observations must be identified without knowing a priori that groups of test images represent the same individual. The standard single-shot and multi-shot person re-identification settings common in the literature are special cases of our formulation. We present an approach to solving identity inference by modeling it as a labeling problem in a Conditional Random Field (CRF). The CRF model ensures that the final labeling gives similar labels to detections that are similar in feature space. Experimental results are given on the ETHZ, i-LIDS and CAVIAR datasets. Our approach yields state-of-the-art performance for multi-shot re-identification, and our results on the more general identity inference problem demonstrate that we are able to infer the identity of many examples even with very few labeled images in the gallery.
  Address  
  Corporate Author Thesis  
  Publisher Springer London Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 2191-6586 ISBN 978-1-4471-6295-7 Medium  
  Area Expedition Conference
  Notes LAMP; 600.079 Approved no  
  Call Number Admin @ si @ KLB2014b Serial 2521
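The label-consistency idea can be approximated with off-the-shelf graph-based semi-supervised learning. A minimal sketch, assuming scikit-learn's LabelSpreading on a k-nearest-neighbour graph as a simplified stand-in for the CRF (hyperparameters are illustrative); the same sketch applies to the related journal article that follows:

```python
# Hedged sketch: propagate identity labels over a kNN graph so that detections
# close in feature space receive consistent labels. LabelSpreading is not the
# authors' CRF model, but it captures the same local-similarity intuition.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

def infer_identities(features, labels):
    """labels: identity index per detection, -1 for the unlabelled ones."""
    model = LabelSpreading(kernel="knn", n_neighbors=7, alpha=0.2)
    model.fit(features, labels)
    return model.transduction_                 # inferred identity per detection
```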
 

 
Author Svebor Karaman; Giuseppe Lisanti; Andrew Bagdanov; Alberto del Bimbo
  Title Leveraging local neighborhood topology for large scale person re-identification Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 12 Pages 3767–3778  
  Keywords Re-identification; Conditional random field; Semi-supervised; ETHZ; CAVIAR; 3DPeS; CMV100  
  Abstract In this paper we describe a semi-supervised approach to person re-identification that combines discriminative models of person identity with a Conditional Random Field (CRF) to exploit the local manifold approximation induced by the nearest neighbor graph in feature space. The linear discriminative models learned on a few gallery images provide coarse separation of probe images into identities, while a graph topology defined by distances between all person images in feature space leverages local support for label propagation in the CRF. We evaluate our approach using multiple scenarios on several publicly available datasets, where the number of identities varies from 28 to 191 and the number of images ranges between 1003 and 36,171. We demonstrate that the discriminative model and the CRF are complementary and that the combination of both leads to significant improvement over state-of-the-art approaches. We further demonstrate how the performance of our approach improves with increasing test data and also with increasing amounts of additional unlabeled data.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference
  Notes LAMP; 601.240; 600.079 Approved no  
  Call Number Admin @ si @ KLB2014a Serial 2522  