Publicacions CVC
Abstract
Searching for text objects in real scene images is an open problem and a very active computer vision research area. A large number of methods have been proposed that tackle text search either as an extension of techniques from the document analysis field or by drawing on general-purpose object detection methods. However, the general problem of object search in real scene images remains extremely challenging due to the huge variability in object appearance. This thesis builds on the most recent findings in the visual attention literature, presenting a novel computational model of eye guidance that aims to better describe text object search in real scene images.

First, we present the relevant state-of-the-art results from the visual attention literature regarding eye movements and visual search. Relevant models of attention are discussed and integrated with recent observations on the role of top-down constraints and the emerging need for a layered model of attention in which saliency is not the only factor guiding attention. Visual attention is then explained by the interaction of several modulating factors, such as objects, value, plans and saliency. We then introduce our probabilistic formulation of attention deployment in real scenes. The model is based on the rationale that oculomotor control depends on two interacting but distinct processes: an attentional process that assigns value to the sources of information, and a motor process that flexibly links information with action. In this framework, the choice of where to look next is task-dependent and oriented towards classes of objects embedded within pictures of complex scenes. The dependence on the task is taken into account by exploiting the value and the reward of gazing at certain image patches, or proto-objects, which provide a sparse representation of the scene objects.

In the experimental section, the model is tested under laboratory conditions by comparing model simulations with data from eye-tracking experiments. The comparison is qualitative, in terms of observable scan paths, and quantitative, in terms of the statistical similarity of gaze-shift amplitudes. Experiments are performed using eye-tracking data both from a publicly available dataset of faces and text and from newly performed eye-tracking experiments on a dataset of street-view pictures containing text. The last part of the thesis is dedicated to studying the extent to which the proposed model can account for human eye movements in a loosely constrained setting. We used a mobile eye-tracking device and a purpose-built methodology to compare simulated eye data with human eye data from mobile eye-tracking recordings. This setting allows the model to be tested under incomplete visual information, reproducing a close-to-real-life search task.
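The abstract describes the model only at a high level. As a rough illustrative sketch of the two-process rationale (an attentional process assigning value to proto-objects, a motor process choosing where to look next), the following Python snippet samples a scan path by soft-max selection over patch values discounted by saccade amplitude. All names, values and parameters here are hypothetical placeholders, not the thesis implementation.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse scene representation: each proto-object is a patch
# centre (x, y) with a task-dependent value (e.g. higher for text-like patches).
proto_objects = np.array([[120, 80], [300, 210], [50, 400], [260, 330]], dtype=float)
values = np.array([0.9, 0.3, 0.6, 0.2])  # attentional process: value per patch

def next_fixation(gaze, proto_objects, values, beta=4.0, motor_cost=0.002):
    """Motor process (illustrative): sample the next fixation from a soft-max
    over patch values, discounted by saccade amplitude from the current gaze."""
    amplitudes = np.linalg.norm(proto_objects - gaze, axis=1)
    utility = beta * values - motor_cost * amplitudes
    p = np.exp(utility - utility.max())
    p /= p.sum()
    idx = rng.choice(len(proto_objects), p=p)
    return proto_objects[idx], amplitudes[idx]

# Simulate a short scan path and record gaze-shift amplitudes, the statistic
# the abstract compares against human eye-tracking data.
gaze = np.array([200.0, 200.0])
for _ in range(5):
    gaze, amp = next_fixation(gaze, proto_objects, values)
    print(f"fixation at {gaze}, shift amplitude {amp:.1f}px")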