Records
Author Fahad Shahbaz Khan; Jiaolong Xu; Muhammad Anwer Rao; Joost Van de Weijer; Andrew Bagdanov; Antonio Lopez
Title Recognizing Actions through Action-specific Person Detection Type Journal Article
Year 2015 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 24 Issue 11 Pages 4422-4432
Keywords
Abstract Action recognition in still images is a challenging problem in computer vision. To facilitate comparative evaluation independently of person detection, the standard evaluation protocol for action recognition uses an oracle person detector to obtain perfect bounding box information at both training and test time. The assumption is that, in practice, a general person detector will provide candidate bounding boxes for action recognition. In this paper, we argue that this paradigm is suboptimal and that action class labels should already be considered during the detection stage. Motivated by the observation that body pose is strongly conditioned on action class, we show that: 1) the existing state-of-the-art generic person detectors are not adequate for proposing candidate bounding boxes for action classification; 2) due to limited training examples, the direct training of action-specific person detectors is also inadequate; and 3) using only a small number of labeled action examples, transfer learning is able to adapt an existing detector to propose higher quality bounding boxes for subsequent action classification. To the best of our knowledge, we are the first to investigate transfer learning for the task of action-specific person detection in still images. We perform extensive experiments on two benchmark data sets: 1) Stanford-40 and 2) PASCAL VOC 2012. For the action detection task (i.e., both person localization and classification of the action performed), our approach outperforms methods based on general person detection by 5.7% mean average precision (MAP) on Stanford-40 and 2.1% MAP on PASCAL VOC 2012. Our approach also significantly outperforms the state of the art with a MAP of 45.4% on Stanford-40 and 31.4% on PASCAL VOC 2012. We also evaluate our action detection approach for the task of action classification (i.e., recognizing actions without localizing them). For this task, our approach, without using any ground-truth person localization at test time, outperforms state-of-the-art methods on both data sets, even though those methods do use person locations.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes ADAS; LAMP; 600.076; 600.079 Approved no
Call Number Admin @ si @ KXR2015 Serial 2668
 

 
Author Lluis Garrido; M.Guerrieri; Laura Igual
Title Image Segmentation with Cage Active Contours Type Journal Article
Year 2015 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 24 Issue 12 Pages 5557 - 5566
Keywords Level sets; Mean value coordinates; Parametrized active contours; level sets; mean value coordinates
Abstract In this paper, we present a framework for image segmentation based on parametrized active contours. The evolving contour is parametrized by a reduced set of control points that form a closed polygon (the cage) and have a clear visual interpretation. The parametrization, called mean value coordinates, stems from the techniques used in computer graphics to animate virtual models. Our framework makes it easy to formulate region-based energies to segment an image. In particular, we present three different local region-based energy terms: 1) the mean model; 2) the Gaussian model; and 3) the histogram model. We show the behavior of our method on synthetic and real images and compare its performance with state-of-the-art level set methods. (An illustrative sketch of the parametrization follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @ GGI2015 Serial 2673
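
For readers unfamiliar with the parametrization named in the abstract above, a minimal sketch of the standard mean value coordinates construction (Floater's formula) follows; the notation (cage vertices v_i, interior point p, angles alpha_i) is generic and not taken from the paper itself.

    C(p) = \sum_{i=1}^{N} \phi_i(p)\, v_i, \qquad
    \phi_i(p) = \frac{w_i(p)}{\sum_{j=1}^{N} w_j(p)}, \qquad
    w_i(p) = \frac{\tan\big(\alpha_{i-1}(p)/2\big) + \tan\big(\alpha_{i}(p)/2\big)}{\lVert v_i - p \rVert},

where \alpha_i(p) is the angle at p in the triangle [p, v_i, v_{i+1}]. Displacing the small set of cage vertices v_i then moves every contour point through these smooth affine weights, which is what gives the control points their clear visual interpretation.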
 

 
Author Mikhail Mozerov; Joost Van de Weijer
Title Global Color Sparseness and a Local Statistics Prior for Fast Bilateral Filtering Type Journal Article
Year 2015 Publication IEEE Transactions on Image Processing Abbreviated Journal TIP
Volume 24 Issue 12 Pages 5842-5853
Keywords
Abstract The property of smoothing while preserving edges makes the bilateral filter a very popular image processing tool. However, its non-linear nature results in a computationally costly operation. Various works propose fast approximations to the bilateral filter, but the majority do not generalize to vector-valued input, as is the case for color images. We propose a fast approximation to the bilateral filter for color images. The filter is based on two ideas. First, the number of colors that occur in a single natural image is limited. We exploit this color sparseness to rewrite the initial non-linear bilateral filter as a number of linear filter operations. Second, we impose a statistical prior on the image values that are locally present within the filter window. We show that this statistical prior leads to a closed-form solution of the bilateral filter. Finally, we combine both ideas into a single fast and accurate bilateral filter for color images. Experimental results show that our bilateral filter based on the local prior yields an extremely fast bilateral filter approximation, but with limited accuracy, which has potential application in real-time video filtering. Our bilateral filter that combines color sparseness and local statistics yields a fast and accurate bilateral filter approximation and obtains state-of-the-art results. (The standard bilateral filter definition is recalled after this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1057-7149 ISBN Medium
Area Expedition Conference
Notes LAMP; 600.079;ISE Approved no
Call Number Admin @ si @ MoW2015b Serial 2689
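
For context, the exact bilateral filter that the above approximation targets is commonly written as below, with G_sigma a Gaussian, S the spatial window, and sigma_s, sigma_r the spatial and range bandwidths (standard textbook notation, not the paper's own):

    BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}\big(\lVert p - q \rVert\big)\, G_{\sigma_r}\big(\lVert I_p - I_q \rVert\big)\, I_q, \qquad
    W_p = \sum_{q \in S} G_{\sigma_s}\big(\lVert p - q \rVert\big)\, G_{\sigma_r}\big(\lVert I_p - I_q \rVert\big).

For color images I_p is a three-dimensional vector (e.g. in RGB or Lab), so the range kernel couples the channels and the filter cannot be decomposed per channel, which is what motivates the color-sparseness rewriting into a set of linear filters described in the abstract.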
 

 
Author Debora Gil; David Roche; Agnes Borras; Jesus Giraldo
Title Terminating Evolutionary Algorithms at their Steady State Type Journal Article
Year 2015 Publication Computational Optimization and Applications Abbreviated Journal COA
Volume 61 Issue 2 Pages 489-515
Keywords Evolutionary algorithms; Termination condition; Steady state; Differential evolution
Abstract Assessing the reliability of termination conditions for evolutionary algorithms (EAs) is of prime importance. An erroneous or weak stop criterion can negatively affect both the computational effort and the final result. We introduce a statistical framework for assessing whether a termination condition is able to stop an EA at its steady state, so that its results cannot be improved any further. We use a regression model to determine the requirements ensuring that a measure derived from the EA's evolving population is related to the distance to the optimum in decision variable space. Our framework is analyzed, for the differential evolution (DE) paradigm, across 24 benchmark test functions and two standard termination criteria: one based on the function fitness value in objective function space and one based on the distribution of the population in decision variable space. Results validate our framework as a powerful tool for determining the capability of a measure to terminate an EA, and also identify the decision variable space distribution as the measure best suited to accurately terminate DE in real-world applications. (A toy steady-state termination check is sketched after this record.)
Address
Corporate Author Thesis
Publisher Springer US Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0926-6003 ISBN Medium
Area Expedition Conference
Notes IAM; 600.044; 605.203; 600.060; 600.075 Approved no
Call Number Admin @ si @ GRB2015 Serial 2560
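
As a loose illustration only, and not the statistical framework of the paper, the sketch below stops a differential evolution run once the spread of the population in decision variable space has stayed below a tolerance for a number of consecutive generations; all names and thresholds are hypothetical.

    import numpy as np

    def population_spread(population):
        """Mean distance of the individuals to the population centroid in decision space."""
        centroid = population.mean(axis=0)
        return np.linalg.norm(population - centroid, axis=1).mean()

    def steady_state_reached(spread_history, tol=1e-6, patience=20):
        """Declare steady state once the spread has stayed below `tol`
        for the last `patience` generations."""
        if len(spread_history) < patience:
            return False
        return all(s < tol for s in spread_history[-patience:])

    # Inside a DE loop, append population_spread(pop) to a list every generation
    # and break out of the loop once steady_state_reached(history) returns True.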
 

 
Author Alvaro Cepero; Albert Clapes; Sergio Escalera
Title Automatic non-verbal communication skills analysis: a quantitative evaluation Type Journal Article
Year 2015 Publication AI Communications Abbreviated Journal AIC
Volume 28 Issue 1 Pages 87-101
Keywords Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning
Abstract Oral communication competence ranks among the most relevant skills for one's professional and personal life. Because of the importance of communication in our activities of daily living, it is crucial to study methods that evaluate these capabilities and provide the feedback needed to improve them and, therefore, to learn how to express ourselves better. In this work, we propose a system capable of automatically and quantitatively evaluating the quality of oral presentations. The system is based on a multi-modal RGB, depth, and audio data description and a fusion approach in order to recognize behavioral cues and train classifiers able to predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor thesis defenses, presentations from 8th-semester Bachelor courses, and Master course presentations at the Universitat de Barcelona. Using the marks assigned by actual instructors as ground truth, our system achieves high performance in categorizing and ranking presentations by quality, as well as in predicting real-valued marks.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0921-7126 ISBN Medium
Area Expedition Conference
Notes HUPBA;MILAB Approved no
Call Number Admin @ si @ CCE2015 Serial 2549
 

 
Author Jorge Bernal; F. Javier Sanchez; Gloria Fernandez Esparrach; Debora Gil; Cristina Rodriguez de Miguel; Fernando Vilariño
Title WM-DOVA Maps for Accurate Polyp Highlighting in Colonoscopy: Validation vs. Saliency Maps from Physicians Type Journal Article
Year 2015 Publication Computerized Medical Imaging and Graphics Abbreviated Journal CMIG
Volume 43 Issue Pages 99-111
Keywords Polyp localization; Energy Maps; Colonoscopy; Saliency; Valley detection
Abstract We introduce in this paper a novel polyp localization method for colonoscopy videos. Our method is based on a model of appearance for polyps which defines polyp boundaries in terms of valley information. We propose to integrate valley information in a robust way, fostering the complete, concave and continuous boundaries typically associated with polyps. This integration is done by using a window of radial sectors which accumulate valley information to create WM-DOVA energy maps related to the likelihood of polyp presence. We perform a double validation of our maps, which includes the introduction of two new databases, one of which is, to our knowledge, the first fully annotated database with associated clinical metadata. First, we assess that the highest value of the map corresponds to the location of the polyp in the image. Second, we show that WM-DOVA energy maps are comparable with saliency maps derived from physicians' fixations recorded with an eye-tracker. Finally, we prove that our method outperforms state-of-the-art computational saliency results. Our method shows good performance, particularly for small polyps, which are reported to be the main source of missed polyps, indicating the potential applicability of our method in clinical practice.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0895-6111 ISBN Medium
Area Expedition Conference
Notes MV; IAM; 600.047; 600.060; 600.075;SIAI Approved no
Call Number Admin @ si @ BSF2015 Serial 2609
 

 
Author Marc Bolaños; Maite Garolera; Petia Radeva
Title Object Discovery using CNN Features in Egocentric Videos Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 67-74
Keywords Object discovery; Egocentric videos; Lifelogging; CNN
Abstract Photo/video-based lifelogging devices are spreading faster every day. This growth creates an opportunity to develop methods for extracting meaningful information about the user wearing the device and his/her environment. In this paper, we propose a semi-supervised strategy for easily discovering objects relevant to the person wearing a first-person camera. From the egocentric video sequence acquired by the camera, the method uses both the appearance extracted by means of a deep convolutional neural network and an object refill methodology that allows objects to be discovered even when they appear only sparsely in the collection of images. We validate our method on a sequence of 1000 egocentric daily images and obtain an F-measure of 0.5, which is 0.17 higher than the state-of-the-art approach.
Address Santiago de Compostela; Spain; June 2015
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ BGR2015 Serial 2596
 

 
Author Estefania Talavera; Mariella Dimiccoli; Marc Bolaños; Maedeh Aghaei; Petia Radeva
Title R-clustering for egocentric video segmentation Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 327-336
Keywords Temporal video segmentation; Egocentric videos; Clustering
Abstract In this paper, we present a new method for egocentric video temporal segmentation based on integrating a statistical mean change detector and agglomerative clustering (AC) within an energy-minimization framework. Given the tendency of most AC methods to oversegment video sequences when clustering their frames, we combine the clustering with a concept drift detection technique (ADWIN) that has rigorous performance guarantees. ADWIN serves as a statistical upper bound for the clustering-based video segmentation. We integrate both techniques in an energy-minimization framework that serves to disambiguate their decisions and to complete the segmentation, taking into account the temporal continuity of the video frame descriptors. We present experiments over egocentric sets of more than 13,000 images acquired with different wearable cameras, showing that our method outperforms state-of-the-art clustering methods. (A minimal clustering example follows this record.)
Address Santiago de Compostela; Spain; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MILAB Approved no
Call Number Admin @ si @ TDB2015 Serial 2597
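
The sketch below illustrates only the agglomerative clustering component on frame descriptors, using SciPy; the ADWIN change detector and the energy-minimization step described in the abstract are not reproduced, and the file name and cluster count are placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # One descriptor per video frame, shape (n_frames, n_features); placeholder file name.
    frames = np.load("frame_descriptors.npy")

    # Agglomerative (complete-linkage) clustering of the frame descriptors.
    Z = linkage(frames, method="complete", metric="euclidean")
    labels = fcluster(Z, t=8, criterion="maxclust")  # cut the dendrogram into 8 clusters

    # Temporal event boundaries: frames where the cluster label changes.
    boundaries = [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]
    print(boundaries)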
 

 
Author Pau Riba; Josep Llados; Alicia Fornes; Anjan Dutta
Title Large-scale Graph Indexing using Binary Embeddings of Node Contexts Type Conference Article
Year 2015 Publication 10th IAPR-TC15 Workshop on Graph-based Representations in Pattern Recognition Abbreviated Journal
Volume 9069 Issue Pages 208-217
Keywords Graph matching; Graph indexing; Application in document analysis; Word spotting; Binary embedding
Abstract Graph-based representations are seeing growing use in visual recognition and retrieval due to their representational power compared with classical appearance-based representations in terms of feature vectors. Retrieving a query graph from a large dataset of graphs has the drawback of the high computational complexity required to compare the query and the target graphs. The most important property for large-scale retrieval is that the search time complexity be sub-linear in the number of database examples. In this paper we propose a fast indexing formalism for graph retrieval. A binary embedding is defined as the hashing key for graph nodes. Given a database of labeled graphs, graph nodes are complemented with vectors of attributes representing their local context. Each attribute counts the length of a walk of order k originating in a vertex with label l. Each attribute vector is converted to a binary code by applying a binary-valued hash function. Graph retrieval is then formulated in terms of finding target graphs in the database whose nodes have a small Hamming distance from the query nodes, easily computed with bitwise logical operators. As an application example, we validate the performance of the proposed methods in a handwritten word spotting scenario on images of historical documents. (A small Hamming-matching example follows this record.)
Address Beijing; China; May 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor C.-L.Liu; B.Luo; W.G.Kropatsch; J.Cheng
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-18223-0 Medium
Area Expedition Conference GbRPR
Notes DAG; 600.061; 602.006; 600.077 Approved no
Call Number Admin @ si @ RLF2015a Serial 2618
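
The bitwise Hamming matching of binarized node contexts mentioned above can be illustrated in a few lines of Python; the code length, threshold and random data are arbitrary choices for the example, not values from the paper.

    import numpy as np

    def binarize(context_vectors, threshold=0.5):
        """Hash real-valued node-context vectors into binary codes by thresholding."""
        return (np.asarray(context_vectors) > threshold).astype(np.uint8)

    def hamming(a, b):
        """Hamming distance between two binary codes (XOR + popcount)."""
        return int(np.count_nonzero(np.bitwise_xor(a, b)))

    rng = np.random.default_rng(0)
    db_codes = binarize(rng.random((4, 8)))  # four database node contexts, 8 bits each
    query = binarize(rng.random(8))          # one query node context
    nearest = min(range(len(db_codes)), key=lambda i: hamming(db_codes[i], query))
    print(nearest, hamming(db_codes[nearest], query))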
 

 
Author Onur Ferhat; Arcadi Llanza; Fernando Vilariño
Title A Feature-Based Gaze Estimation Algorithm for Natural Light Scenarios Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 569-576
Keywords Eye tracking; Gaze estimation; Natural light; Webcam
Abstract We present an eye tracking system that works with regular webcams. We base our work on the open-source CVC Eye Tracker [7] and propose a number of improvements as well as a novel gaze estimation method. The new method uses features extracted from iris segmentation and does not fall into the traditional appearance-based/model-based categorization. Our experiments show that our approach reduces the gaze estimation error by 34% in the horizontal direction and by 12% in the vertical direction compared to the baseline system.
Address Santiago de Compostela; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes MV;SIAI Approved no
Call Number Admin @ si @ FLV2015a Serial 2646
 

 
Author Dennis G.Romero; Anselmo Frizera; Angel Sappa; Boris X. Vintimilla; Teodiano F.Bastos
Title A predictive model for human activity recognition by observing actions and context Type Conference Article
Year 2015 Publication Advanced Concepts for Intelligent Vision Systems, Proceedings of 16th International Conference, ACIVS 2015 Abbreviated Journal
Volume 9386 Issue Pages 323-333
Keywords
Abstract This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the use of Recurrent Neural Networks (RNNs) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources: external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describing the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
Address Catania; Italy; October 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-25902-4 Medium
Area Expedition Conference ACIVS
Notes ADAS; 600.076 Approved no
Call Number Admin @ si @ RFS2015 Serial 2661
 

 
Author Fahad Shahbaz Khan; Muhammad Anwer Rao; Joost Van de Weijer; Michael Felsberg; J.Laaksonen
Title Deep semantic pyramids for human attributes and action recognition Type Conference Article
Year 2015 Publication Image Analysis, Proceedings of 19th Scandinavian Conference, SCIA 2015 Abbreviated Journal
Volume 9127 Issue Pages 341-353
Keywords Action recognition; Human attributes; Semantic pyramids
Abstract Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] to pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs), or deep features, have been shown to improve performance over conventional shallow features.
We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNNs of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide a significant gain of 17.2%, 13.9%, 24.3% and 22.6% compared to the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.
Address Copenhagen; Denmark; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19664-0 Medium
Area Expedition Conference SCIA
Notes LAMP; 600.068; 600.079;ADAS Approved no
Call Number Admin @ si @ KRW2015b Serial 2672
 

 
Author Suman Ghosh; Ernest Valveny
Title A Sliding Window Framework for Word Spotting Based on Word Attributes Type Conference Article
Year 2015 Publication Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015 Abbreviated Journal
Volume 9117 Issue Pages 652-661
Keywords Word spotting; Sliding window; Word attributes
Abstract In this paper we propose a segmentation-free approach to word spotting. Word images are first encoded into feature vectors using the Fisher Vector. Then, these feature vectors are used together with pyramidal histogram of characters (PHOC) labels to learn SVM-based attribute models. Documents are represented by these PHOC-based word attributes. To efficiently compute the word attributes over a sliding window, we propose to use an integral image representation of the document computed with a simplified version of the attribute model. Finally, we re-rank the top word candidates using the more discriminative full version of the word attributes. We show state-of-the-art results for segmentation-free query-by-example word spotting on standard single-writer and multi-writer datasets. (A toy PHOC construction is sketched after this record.)
Address Santiago de Compostela; June 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-19389-2 Medium
Area Expedition Conference IbPRIA
Notes DAG; 600.077 Approved no
Call Number Admin @ si @ GhV2015b Serial 2716
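
A much simplified version of the pyramidal histogram of characters (PHOC) representation used above can be sketched as follows; this toy variant uses only the lowercase Latin alphabet, pyramid levels 1-3 and a character-center assignment rule, whereas the original formulation uses more levels plus digit and bigram attributes and a region-overlap rule.

    import string

    ALPHABET = string.ascii_lowercase  # toy alphabet: a-z only

    def phoc(word, levels=(1, 2, 3)):
        """Simplified PHOC: at each pyramid level, split the word into equal regions
        and mark which alphabet characters occur in each region."""
        word = word.lower()
        vector = []
        for level in levels:
            regions = [[0] * len(ALPHABET) for _ in range(level)]
            for k, ch in enumerate(word):
                if ch not in ALPHABET:
                    continue
                center = (k + 0.5) / len(word)            # normalized character center
                region = min(int(center * level), level - 1)
                regions[region][ALPHABET.index(ch)] = 1
            for r in regions:
                vector.extend(r)
        return vector

    v = phoc("beyond")
    print(len(v), sum(v))  # a 156-dimensional binary attribute vector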
 

 
Author Hanne Kause; Aura Hernandez-Sabate; Patricia Marquez; Andrea Fuster; Luc Florack; Hans van Assen; Debora Gil
Title Confidence Measures for Assessing the HARP Algorithm in Tagged Magnetic Resonance Imaging Type Book Chapter
Year 2015 Publication Statistical Atlases and Computational Models of the Heart: Imaging and Modelling Challenges. Revised Selected Papers of the 6th International Workshop, STACOM 2015, Held in Conjunction with MICCAI 2015 Abbreviated Journal
Volume 9534 Issue Pages 69-79
Keywords
Abstract Cardiac deformation and changes therein have been linked to pathologies. Both can be extracted in detail from tagged Magnetic Resonance Imaging (tMRI) using harmonic phase (HARP) images. Although point tracking algorithms have been shown to achieve high accuracy on HARP images, this accuracy varies with position. Detecting and discarding areas with unreliable results is crucial for use in clinical support systems. This paper assesses the capability of two confidence measures (CMs), based on energy and image structure, to detect locations with reduced accuracy in motion tracking results. These CMs were tested on a database of simulated tMRI images containing the most common artifacts that may affect tracking accuracy. CM performance is assessed in terms of the capability to bound the HARP tracking error, and compared by means of significant differences detected using a multiple-comparison analysis of variance that takes into account the most influential factors in HARP tracking performance. Results showed that the CM based on image structure was better suited to detecting unreliable optical flow vectors. In addition, it was shown that CMs can be used to detect optical flow vectors with large errors in order to improve the optical flow obtained with the HARP tracking algorithm.
Address Munich; Germany; January 2015
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-28711-9 Medium
Area Expedition Conference STACOM
Notes ADAS; IAM; 600.075; 600.076; 600.060; 601.145 Approved no
Call Number Admin @ si @ KHM2015 Serial 2734
 

 
Author Aleksandr Setkov; Fabio Martinez Carillo; Michele Gouiffes; Christian Jacquemin; Maria Vanrell; Ramon Baldrich
Title DAcImPro: A Novel Database of Acquired Image Projections and Its Application to Object Recognition Type Conference Article
Year 2015 Publication Advances in Visual Computing. Proceedings of 11th International Symposium, ISVC 2015 Part II Abbreviated Journal
Volume 9475 Issue Pages 463-473
Keywords Projector-camera systems; Feature descriptors; Object recognition
Abstract Projector-camera systems are designed to improve projection quality by comparing original images with their captured projections, which is usually complicated by high photometric and geometric variations. Many research works address this problem using their own test data, which makes it extremely difficult to compare different proposals. This paper has two main contributions. Firstly, we introduce a new database of acquired image projections (DAcImPro) that, covering a range of photometric and geometric conditions and providing data for ground-truth computation, can serve to evaluate different algorithms for projector-camera systems. Secondly, a new object recognition scenario based on acquired projections is presented, which could be of great interest in domains such as home video projection and public presentations. We show that the task is more challenging than the classical recognition problem and thus requires additional pre-processing, such as color compensation or projection area selection.
Address
Corporate Author Thesis
Publisher Springer International Publishing Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-319-27862-9 Medium
Area Expedition Conference ISVC
Notes CIC Approved no
Call Number Admin @ si @ SMG2015 Serial 2736