Author Antonio Clavelli; Dimosthenis Karatzas; Josep Llados; Mario Ferraro; Giuseppe Boccignone
Title Towards Modelling an Attention-Based Text Localization Process Type Conference Article
Year 2013 Publication 6th Iberian Conference on Pattern Recognition and Image Analysis Abbreviated Journal
Volume 7887 Issue Pages 296-303
Keywords text localization; visual attention; eye guidance
Abstract This note introduces a visual attention model of text localization in real-world scenes. The core of the model, built upon the proto-object concept, is discussed. It is shown how such a dynamic mid-level representation of the scene can be derived in the framework of an action-perception loop engaging salience, text information value computation, and eye guidance mechanisms. Preliminary results that compare model-generated scanpaths with those eye-tracked from human subjects are presented. (An illustrative scanpath-comparison sketch follows this record.)
Address Madeira; Portugal; June 2013
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-38627-5 Medium
Area Expedition Conference IbPRIA
Notes DAG Approved no
Call Number Admin @ si @ CKL2013 Serial 2291
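The record above evaluates the model by comparing model-generated scanpaths with human eye-tracking data, but does not state which comparison metric is used. As a hedged illustration only, the Python sketch below quantizes fixations onto a coarse grid and compares two scanpaths with a string-edit (Levenshtein) distance, a common scanpath-similarity baseline; the grid size, the toy fixation coordinates and the function names are assumptions made for this sketch.

def quantize(scanpath, img_w, img_h, cols=5, rows=5):
    """Map (x, y) fixations to grid-cell indices."""
    cells = []
    for x, y in scanpath:
        c = min(int(x / img_w * cols), cols - 1)
        r = min(int(y / img_h * rows), rows - 1)
        cells.append(r * cols + c)
    return cells

def edit_distance(a, b):
    """Plain dynamic-programming Levenshtein distance between two symbol lists."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(a)][len(b)]

model_path = [(120, 80), (300, 90), (310, 200)]   # toy fixations (pixels)
human_path = [(118, 75), (305, 95), (500, 210)]
print(edit_distance(quantize(model_path, 640, 480), quantize(human_path, 640, 480)))  # -> 1

A lower distance means the two fixation sequences visit similar image regions in a similar order.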
 

 
Author David Geronimo; Joan Serrat; Antonio Lopez; Ramon Baldrich
Title Traffic sign recognition for computer vision project-based learning Type Journal Article
Year 2013 Publication IEEE Transactions on Education Abbreviated Journal T-EDUC
Volume 56 Issue 3 Pages 364-371
Keywords traffic signs
Abstract This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students, given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' backgrounds and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0018-9359 ISBN Medium
Area Expedition Conference
Notes ADAS; CIC Approved no
Call Number Admin @ si @ GSL2013; ADAS @ adas @ Serial 2160
 

 
Author Andrew Nolan; Daniel Serrano; Aura Hernandez-Sabate; Daniel Ponsa; Antonio Lopez
Title Obstacle mapping module for quadrotors on outdoor Search and Rescue operations Type Conference Article
Year 2013 Publication International Micro Air Vehicle Conference and Flight Competition Abbreviated Journal
Volume Issue Pages
Keywords UAV
Abstract Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), due to their limited payload capacity for advanced sensors. Unlike larger vehicles, MAVs can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse the performance of PADE and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise, and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show that PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential of PADE for MAVs to perform safe autonomous navigation in unknown and unstructured environments. (An illustrative TTC sketch follows this record.)
Address Toulouse; France; September 2013
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference IMAV
Notes ADAS; 600.054; 600.057;IAM Approved no
Call Number Admin @ si @ NSH2013 Serial 2371
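The abstract above benchmarks the proposed PADE against Time To Collision (TTC) computed from optical flow. PADE itself is not described here in enough detail to reproduce, so the sketch below only illustrates the textbook TTC estimate r / (dr/dt) from tracked points, assuming pure forward motion with the focus of expansion (FOE) at the image centre; the tracked points, frame interval and function name are placeholders, not the paper's implementation.

def ttc_from_tracks(pts_prev, pts_curr, foe, dt):
    """Per-point time to collision (seconds) from two sets of tracked positions."""
    ttcs = []
    for (x0, y0), (x1, y1) in zip(pts_prev, pts_curr):
        r0 = ((x0 - foe[0]) ** 2 + (y0 - foe[1]) ** 2) ** 0.5   # distance to FOE, previous frame
        r1 = ((x1 - foe[0]) ** 2 + (y1 - foe[1]) ** 2) ** 0.5   # distance to FOE, current frame
        dr = (r1 - r0) / dt                                     # radial expansion rate
        if dr > 1e-6:                                           # only expanding points give a finite, positive TTC
            ttcs.append(r1 / dr)
    return ttcs

prev_pts = [(300, 200), (340, 260)]          # toy tracked features (pixels)
curr_pts = [(305, 198), (348, 266)]
print(ttc_from_tracks(prev_pts, curr_pts, foe=(320, 240), dt=1 / 30))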
 

 
Author Ferran Diego; Joan Serrat; Antonio Lopez
Title Joint spatio-temporal alignment of sequences Type Journal Article
Year 2013 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM
Volume 15 Issue 6 Pages 1377-1387
Keywords video alignment
Abstract Video alignment is important in different areas of computer vision such as wide-baseline matching, action recognition, change detection, video copy detection and frame dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras or simultaneous acquisition. In this paper we therefore propose a joint method for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is a formulation that folds the spatial and the temporal alignment into a single framework, simultaneously satisfying a frame-correspondence and a frame-alignment similarity while exploiting the knowledge among neighboring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independent moving cameras that follow a similar trajectory, and also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times. (A toy temporal-alignment sketch follows this record.)
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1520-9210 ISBN Medium
Area Expedition Conference
Notes ADAS Approved no
Call Number Admin @ si @ DSL2013; ADAS @ adas @ Serial 2228
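The paper above folds spatial and temporal alignment into one pairwise MRF. As a much simpler, temporal-only analogue (not the paper's model), the sketch below assigns each frame of one sequence to a frame of the other by minimizing a unary frame-dissimilarity term plus a pairwise smoothness term with dynamic programming; the frame descriptors, cost definitions and smoothness weight are assumptions made for this sketch.

import numpy as np

def align_temporal(desc_a, desc_b, smooth_weight=0.1):
    """For each frame of A, pick a frame of B via a min-cost chain (Viterbi)."""
    A, B = np.asarray(desc_a, float), np.asarray(desc_b, float)
    unary = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # frame dissimilarity, shape (nA, nB)
    n_a, n_b = unary.shape
    idx = np.arange(n_b)
    jump = np.abs(idx[None, :] - idx[:, None])                      # pairwise term: |j_t - j_{t-1}|
    cost = unary[0].copy()
    back = np.zeros((n_a, n_b), dtype=int)
    for t in range(1, n_a):
        total = cost[:, None] + smooth_weight * jump                # previous state x new state
        back[t] = np.argmin(total, axis=0)
        cost = unary[t] + np.min(total, axis=0)
    match = [int(np.argmin(cost))]                                  # backtrack the best chain
    for t in range(n_a - 1, 0, -1):
        match.append(int(back[t, match[-1]]))
    return match[::-1]

a = [[0.0], [0.3], [0.7], [1.0]]   # toy per-frame descriptors
b = [[0.0], [0.5], [1.0]]
print(align_temporal(a, b))        # -> [0, 1, 1, 2]

The paper's formulation additionally handles the spatial alignment within the same framework.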
 

 
Author Carles Sanchez; Debora Gil; Antoni Rosell; Albert Andaluz; F. Javier Sanchez
Title Segmentation of Tracheal Rings in Videobronchoscopy combining Geometry and Appearance Type Conference Article
Year 2013 Publication Proceedings of the International Conference on Computer Vision Theory and Applications Abbreviated Journal
Volume 1 Issue Pages 153-161
Keywords Video-bronchoscopy; tracheal ring segmentation; trachea geometric and appearance model
Abstract Videobronchoscopy is a medical imaging technique that allows interactive navigation inside the respiratory pathways and minimally invasive interventions. Tracheal procedures are ordinary interventions that require measuring the percentage of obstructed pathway for injury (stenosis) assessment. Visual assessment of stenosis in videobronchoscopic sequences requires high expertise in tracheal anatomy and is prone to human error. Accurate detection of tracheal rings is the basis for automated estimation of the size of a stenosed trachea. Processing of videobronchoscopic images acquired in the operating room is a challenging task due to the wide range of artifacts and acquisition conditions. We present a model of the geometry and appearance of tracheal rings for their detection in videobronchoscopic videos. Experiments on sequences acquired in the operating room show a performance close to inter-observer variability. (An illustrative stenosis computation follows this record.)
Address Barcelona; February 2013
Corporate Author Thesis
Publisher SciTePress Place of Publication Portugal Editor Sebastiano Battiato and José Braz
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN 978-989-8565-47-1 Medium
Area 800 Expedition Conference VISAPP
Notes IAM;MV; 600.044; 600.047; 600.060; 605.203 Approved no
Call Number IAM @ iam @ SGR2013 Serial 2123
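The clinical quantity that the ring segmentation above ultimately supports is the degree of stenosis, i.e. the percentage of obstructed pathway mentioned in the abstract. The paper's exact formula is not given here; a minimal sketch, assuming stenosis is expressed as the relative reduction of luminal cross-section with respect to a healthy reference section, is:

def stenosis_percentage(stenosed_area_mm2, healthy_area_mm2):
    """Percent obstruction of the airway lumen (illustrative definition)."""
    return 100.0 * (1.0 - stenosed_area_mm2 / healthy_area_mm2)

print(stenosis_percentage(stenosed_area_mm2=95.0, healthy_area_mm2=240.0))   # -> ~60.4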
 

 
Author Albert Gordo; Florent Perronnin; Ernest Valveny
Title Large-scale document image retrieval and classification with runlength histograms and binary embeddings Type Journal Article
Year 2013 Publication Pattern Recognition Abbreviated Journal PR
Volume 46 Issue 7 Pages 1898-1905
Keywords visual document descriptor; compression; large-scale; retrieval; classification
Abstract We present a new document image descriptor based on multi-scale runlength histograms. This descriptor does not rely on layout analysis and can be computed efficiently. We show how this descriptor can achieve state-of-the-art results on two very different public datasets in classification and retrieval tasks. Moreover, we show how we can compress and binarize these descriptors to make them suitable for large-scale applications. We can achieve state-of-the-art results in classification using binary descriptors of as few as 16 to 64 bits. (An illustrative run-length histogram sketch follows this record.)
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0031-3203 ISBN Medium
Area Expedition Conference
Notes DAG; 600.042; 600.045; 605.203 Approved no
Call Number Admin @ si @ GPV2013 Serial 2306
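The descriptor above is built from multi-scale runlength histograms of the document image. As an illustration of the basic ingredient only, the sketch below computes a normalized histogram of horizontal run lengths on a tiny binary patch; the paper's full descriptor is multi-scale, considers several directions and both black and white runs, and is subsequently compressed and binarized, and the bin edges and function name below are assumptions.

import numpy as np

def runlength_histogram(binary_img, value=1, bin_edges=(1, 2, 4, 8, 16, 32, np.inf)):
    """Normalized histogram of horizontal run lengths of `value` pixels (log-spaced bins)."""
    runs = []
    for row in binary_img:
        length = 0
        for px in row:
            if px == value:
                length += 1
            elif length:
                runs.append(length)
                length = 0
        if length:                       # close a run that ends at the row border
            runs.append(length)
    hist, _ = np.histogram(runs, bins=[0] + list(bin_edges))
    return hist / max(len(runs), 1)

patch = np.array([[1, 1, 0, 1, 1, 1, 1, 0],
                  [0, 0, 1, 1, 0, 0, 0, 1]])
print(runlength_histogram(patch))        # runs 2, 4, 2, 1 -> [0, 0.25, 0.5, 0.25, 0, 0, 0]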
 

 
Author Fadi Dornaika; Abdelmalik Moujahid; Bogdan Raducanu
Title Facial expression recognition using tracked facial actions: Classifier performance analysis Type Journal Article
Year 2013 Publication Engineering Applications of Artificial Intelligence Abbreviated Journal EAAI
Volume 26 Issue 1 Pages 467-477
Keywords Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction
Abstract In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously provides the 3D head pose and the facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments quantifying the performance of the different schemes were carried out on CMU video sequences and home-made video sequences. The results show that the use of dimension reduction techniques on the extracted time series can improve the classification performance. Moreover, these experiments show that the best recognition rate can be above 90%. (A generic DTW classification sketch follows this record.)
Address
Corporate Author Thesis
Publisher Elsevier Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes OR; 600.046;MV Approved no
Call Number Admin @ si @ DMR2013 Serial 2185
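The first scheme in the record above recognizes an expression by comparing its temporal signature of facial actions against training signatures with dynamic time warping (DTW). The sketch below is a generic DTW distance combined with a 1-nearest-neighbour decision, not the paper's implementation; the toy signatures, their feature layout and the function names are placeholders.

import numpy as np

def dtw_distance(seq_a, seq_b):
    """Classic DTW cost between two multivariate time series."""
    A, B = np.asarray(seq_a, float), np.asarray(seq_b, float)
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)   # pairwise frame distances
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

def classify(query, train_signatures):
    """1-NN over DTW distance; train_signatures maps label -> list of example sequences."""
    best = min((dtw_distance(query, s), label)
               for label, seqs in train_signatures.items() for s in seqs)
    return best[1]

train = {"smile":    [[[0.0], [0.4], [0.9], [0.8]]],     # toy 1-D facial-action signatures
         "surprise": [[[0.0], [0.9], [0.9], [0.1]]]}
print(classify([[0.1], [0.5], [0.8], [0.7]], train))     # -> smile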