Author: Fadi Dornaika; Bogdan Raducanu
Title: Single Snapshot 3D Head Pose Initialization for Tracking in Human Robot Interaction Scenario
Type: Conference Article
Year: 2010
Publication: 1st International Workshop on Computer Vision for Human-Robot Interaction
Pages: 32–39
Keywords: 1st International Workshop on Computer Vision for Human-Robot Interaction, in conjunction with IEEE CVPR 2010
Abstract: This paper presents an automatic 3D head pose initialization scheme for a real-time face tracker with application to human-robot interaction. It has two main contributions. First, we propose an automatic 3D head pose and person-specific face shape estimation based on a 3D deformable model, which serves to initialize our real-time 3D face tracker. What makes this contribution very attractive is that the initialization step can cope with faces under arbitrary pose, so it is not limited to near-frontal views. Second, the previous framework is used to develop an application in which the orientation of an AIBO's camera can be controlled through the imitation of the user's head pose. In our scenario, this application is used to build panoramic images from overlapping snapshots. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: CVPRW
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DoR2010a
Serial: 1309
 

 
Author: Bogdan Raducanu; Fadi Dornaika
Title: Dynamic Facial Expression Recognition Using Laplacian Eigenmaps-Based Manifold Learning
Type: Conference Article
Year: 2010
Publication: IEEE International Conference on Robotics and Automation
Pages: 156–161
Abstract: In this paper, we propose an integrated framework for tracking, modelling and recognition of facial expressions. The main contributions are: (i) a view- and texture-independent scheme that exploits facial action parameters estimated by an appearance-based 3D face tracker; (ii) the complexity of the non-linear facial expression space is modelled through a manifold, whose structure is learned using Laplacian Eigenmaps; the projected facial expressions are afterwards recognized with a Nearest Neighbor classifier; (iii) with the proposed approach, we developed an application for an AIBO robot, in which it mirrors the perceived facial expression.
Address: Anchorage, AK, USA
ISSN: 1050-4729
ISBN: 978-1-4244-5038-1
Conference: ICRA
Notes: OR; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ RaD2010
Serial: 1310
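The recognition pipeline summarized in the abstract above (Laplacian Eigenmaps manifold learning followed by Nearest Neighbor classification) can be illustrated with a minimal Python sketch. This is not the authors' implementation: it assumes the facial action parameters from the 3D face tracker are already available as fixed-length vectors, uses random stand-in data, and relies on scikit-learn's SpectralEmbedding (an implementation of Laplacian Eigenmaps) and KNeighborsClassifier.

import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: rows are facial action parameter vectors from a
# 3D face tracker, labels are expression classes (toy values only).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))        # 300 frames, 12 tracker parameters each
y = rng.integers(0, 6, size=300)      # 6 expression classes

# Learn a low-dimensional manifold of the expression space with
# Laplacian Eigenmaps (SpectralEmbedding is scikit-learn's implementation).
embedder = SpectralEmbedding(n_components=3, n_neighbors=10, random_state=0)
X_embedded = embedder.fit_transform(X)

# Recognize expressions in the embedded space with a Nearest Neighbor classifier.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_embedded, y)
print("training accuracy:", clf.score(X_embedded, y))

Since SpectralEmbedding provides no out-of-sample transform, a practical system would need an extension such as a Nystrom approximation to embed frames unseen at training time.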
 

 
Author: David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo
Title: Fast and Robust Object Segmentation with the Integral Linear Classifier
Type: Conference Article
Year: 2010
Publication: 23rd IEEE Conference on Computer Vision and Pattern Recognition
Pages: 1046–1053
Abstract: We propose an efficient method, built on the popular Bag of Features approach, that obtains robust multiclass pixel-level object segmentation of an image in less than 500 ms, with results comparable to or better than most state-of-the-art methods. We introduce the Integral Linear Classifier (ILC), which can readily obtain the classification score for any image sub-window with only 6 additions and 1 product by fusing the accumulation and classification steps in a single operation. In order to design a method as efficient as possible, our building blocks are carefully selected from the quickest in the state of the art. More precisely, we evaluate the performance of three popular local descriptors, which can be computed very efficiently using integral images, and two fast quantization methods: the Hierarchical K-Means and the Extremely Randomized Forest. Finally, we explore the utility of adding spatial bins to the Bag of Features histograms and that of cascade classifiers to improve the obtained segmentation. Our method is compared to the state of the art on the difficult Graz-02 and PASCAL 2007 Segmentation Challenge datasets.
Address: San Francisco, CA, USA; June 2010
ISSN: 1063-6919
ISBN: 978-1-4244-6984-0
Conference: CVPR
Notes: ADAS
Approved: no
Call Number: Admin @ si @ ARL2010a
Serial: 1311
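The integral-image trick behind an integral linear classifier can be sketched as follows. This is a reconstruction of the idea stated in the abstract, not the paper's exact formulation (the published operation count includes normalization details not reproduced here): each pixel contributes the linear weight of its quantized visual word, so a single integral image yields the linear score of any sub-window from four lookups. The word map, weights, and window below are placeholders.

import numpy as np

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy lookups."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    """Sum of the original image over the inclusive window [y0..y1] x [x0..x1]."""
    return ii[y1 + 1, x1 + 1] - ii[y0, x1 + 1] - ii[y1 + 1, x0] + ii[y0, x0]

# Hypothetical inputs: a map of visual-word indices (one quantized local
# descriptor per pixel) and linear classifier weights per visual word.
rng = np.random.default_rng(0)
H, W, n_words = 120, 160, 50
word_map = rng.integers(0, n_words, size=(H, W))
w = rng.normal(size=n_words)          # linear weights for one object class
bias = -0.1

# Fuse accumulation and classification: each pixel contributes the weight of
# its visual word, so one integral image gives the (unnormalized) linear score
# of any sub-window from four lookups.
contrib = w[word_map]
ii = integral_image(contrib)

# Score of an arbitrary window; normalization by the window area plays the
# role of histogram normalization (this is where a product/division appears).
y0, x0, y1, x1 = 20, 30, 80, 100
area = (y1 - y0 + 1) * (x1 - x0 + 1)
score = box_sum(ii, y0, x0, y1, x1) / area + bias
print("window score:", score)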
 

 
Author: Michal Drozdzal; Laura Igual; Petia Radeva; Jordi Vitria; C. Malagelada; Fernando Azpiroz
Title: Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy
Type: Conference Article
Year: 2010
Publication: IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis
Pages: 117–124
Abstract: Intestinal motility analysis is an important examination for the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns, taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and will thus provide valuable information on intestinal motility problems. This work addresses, for the first time, the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in the sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) histograms of SIFT Flow directions to describe the flow course, (2) SIFT descriptors to represent the image structure of the intestine, and (3) SIFT Flow magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and that the proposed method allows the analysis of WCE sequences.
Address: San Francisco, CA, USA; June 2010
ISSN: 2160-7508
ISBN: 978-1-4244-7029-7
Conference: MMBIA
Notes: OR; MILAB; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ DIR2010
Serial: 1316
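Two of the motility descriptors mentioned above (a histogram of flow directions and the flow magnitude) are simple to compute once a dense flow field is available. The sketch below assumes such a field is already given as a (H, W, 2) array of per-pixel displacements; the SIFT Flow computation itself, the SIFT descriptors, and the ranking methodology are not reproduced.

import numpy as np

def flow_descriptors(flow, n_bins=8):
    """Direction histogram and mean magnitude of a dense flow field.

    flow: array of shape (H, W, 2) holding per-pixel (dy, dx) displacements,
    e.g. the output of a SIFT Flow / optical flow algorithm (assumed given).
    """
    dy, dx = flow[..., 0], flow[..., 1]
    angles = np.arctan2(dy, dx)                      # in (-pi, pi]
    magnitude = np.hypot(dy, dx)

    # Histogram of flow directions, weighted by magnitude so that strong
    # motions dominate the description of the flow course.
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    hist = hist / (hist.sum() + 1e-12)               # normalize to sum to 1

    return hist, magnitude.mean()

# Toy usage with a random flow field in place of real capsule-endoscopy flow.
rng = np.random.default_rng(0)
toy_flow = rng.normal(size=(64, 64, 2))
direction_hist, mean_mag = flow_descriptors(toy_flow)
print(direction_hist, mean_mag)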
 

 
Author: Albert Gordo; Alicia Fornes; Ernest Valveny; Josep Llados
Title: A Bag of Notes Approach to Writer Identification in Old Handwritten Music Scores
Type: Conference Article
Year: 2010
Publication: 9th IAPR International Workshop on Document Analysis Systems
Pages: 247–254
Abstract: Determining the authorship of a document, namely writer identification, can be an important source of information for document categorization. Contrary to text documents, the identification of the writer of graphical documents is still a challenge. In this paper we present a robust approach for writer identification in a particular kind of graphical documents, old music scores. This approach adapts the bag of visual terms method for coping with graphic documents. The identification is performed only using the graphical music notation. For this purpose, we generate a graphic vocabulary without recognizing any music symbols, and consequently, avoiding the difficulties in the recognition of hand-drawn symbols in old and degraded documents. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving very high identification rates.
Address: Boston, USA
ISBN: 978-1-60558-773-8
Conference: DAS
Notes: DAG
Approved: no
Call Number: DAG @ dag @ GFV2010
Serial: 1320
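A generic bag-of-visual-terms pipeline of the kind adapted in the record above can be sketched as follows. The local descriptor, vocabulary size, writer labels, and data are all hypothetical placeholders, and the clustering and classification components are scikit-learn stand-ins rather than the authors' choices.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical data: each score page yields a set of local descriptors computed
# around the music notation; the writer label is known for the training pages.
rng = np.random.default_rng(0)
n_pages, n_writers, vocab_size = 40, 5, 32
writers = rng.integers(0, n_writers, size=n_pages)
pages = [rng.normal(loc=w, scale=1.0, size=(200, 16)) for w in writers]

# Build the graphic vocabulary by clustering all descriptors (no symbol
# recognition involved), then describe each page as a bag of visual terms.
vocabulary = KMeans(n_clusters=vocab_size, n_init=10, random_state=0)
vocabulary.fit(np.vstack(pages))

def bag_of_terms(descriptors):
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / hist.sum()

X = np.array([bag_of_terms(p) for p in pages])

# Writer identification by nearest-neighbour matching of the histograms.
clf = KNeighborsClassifier(n_neighbors=1).fit(X[:30], writers[:30])
print("identification accuracy:", clf.score(X[30:], writers[30:]))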
 

 
Author: Alicia Fornes; Josep Llados
Title: A Symbol-dependent Writer Identification Approach in Old Handwritten Music Scores
Type: Conference Article
Year: 2010
Publication: 12th International Conference on Frontiers in Handwriting Recognition
Pages: 634–639
Abstract: Writer identification consists in determining the writer of a piece of handwriting from a set of writers. In this paper we introduce a symbol-dependent approach for identifying the writer of old music scores, which is based on two symbol recognition methods. The main idea is to use the Blurred Shape Model descriptor and a DTW-based method for detecting, recognizing and describing the music clefs and notes. The proposed approach has been evaluated on a database of old music scores, achieving very high writer identification rates.
Address: Kolkata, India
ISBN: 978-1-4244-8353-2
Conference: ICFHR
Notes: DAG
Approved: no
Call Number: DAG @ dag @ FoL2010
Serial: 1321
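The DTW-based matching mentioned in the abstract relies on standard dynamic time warping. Below is a minimal, generic DTW distance between two sequences of feature vectors (Euclidean cost, standard recurrence); the Blurred Shape Model descriptor and the way symbols are extracted are not reproduced here.

import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of feature vectors.

    a: array of shape (n, d), b: array of shape (m, d). Uses Euclidean cost and
    the standard (match / insertion / deletion) recurrence.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Toy usage: two contour-like sequences of 2D points (hypothetical data).
rng = np.random.default_rng(0)
s1 = rng.normal(size=(30, 2))
s2 = s1[::2] + 0.05 * rng.normal(size=(15, 2))   # a warped, noisy version
print("DTW distance:", dtw_distance(s1, s2))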
 

 
Author: C. Alejandro Parraga; Ramon Baldrich; Maria Vanrell
Title: Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset
Type: Conference Article
Year: 2010
Publication: 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science
Pages: 50–57
Abstract: The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measures and a study on how the choice of training reflectances for the polynomial may alter the results.
Address: Joensuu, Finland
ISBN: 9781617388897
Conference: CGIV/MCS
Notes: CIC
Approved: no
Call Number: CAT @ cat @ PBV2010a
Serial: 1322
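The camera characterization described above (mapping camera RGB responses to LMS cone activations through a polynomial transformation fitted on training reflectances) can be sketched with ordinary least squares. The polynomial order and terms below are one common choice, not necessarily the ones used in the paper, and the training data are simulated.

import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB values (one possible choice)."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b,
                     r * g, r * b, g * b], axis=1)

# Hypothetical training data: camera RGB responses and the corresponding LMS
# cone activations for a set of training reflectances (here simulated).
rng = np.random.default_rng(0)
rgb_train = rng.uniform(0.0, 1.0, size=(200, 3))
true_map = rng.uniform(0.0, 1.0, size=(10, 3))         # stand-in ground truth
lms_train = poly_features(rgb_train) @ true_map
lms_train += 0.01 * rng.normal(size=lms_train.shape)   # measurement noise

# Fit the RGB -> LMS polynomial transformation with linear least squares.
M, *_ = np.linalg.lstsq(poly_features(rgb_train), lms_train, rcond=None)

# Apply the characterization and check the fitting error.
lms_pred = poly_features(rgb_train) @ M
print("RMS error:", np.sqrt(np.mean((lms_pred - lms_train) ** 2)))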
 

 
Author: Javier Vazquez; G. D. Finlayson; Maria Vanrell
Title: A compact singularity function to predict WCS data and unique hues
Type: Conference Article
Year: 2010
Publication: 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science
Pages: 33–38
Abstract: Understanding how colour is used by the human visual system is a widely studied research field. The field, though quite advanced, still faces important unanswered questions. One of them is the explanation of unique hues and the assignment of colour names, a problem that concerns the different perceptual status of different colours. Recently, Philipona and O'Regan have proposed a biological model that allows the reflection properties of any surface to be extracted independently of the lighting conditions. These invariant properties are the basis for computing a singularity index that predicts the asymmetries present in psychophysical data on unique hues and basic colour categories, thereby taking a further step towards their explanation. In this paper we build on their formulation and propose a new singularity index. The new formulation equally accounts for the location of the four peaks of the World Colour Survey data and has two main advantages. First, it is a simple, elegant numerical measure (the Philipona measurement is a rather cumbersome formula). Second, we develop a colour-based explanation for the measure.
Address: Joensuu, Finland
ISBN: 9781617388897
Conference: CGIV/MCS
Notes: CIC
Approved: no
Call Number: CAT @ cat @ VFV2010
Serial: 1324
 

 
Author: Robert Benavente; C. Alejandro Parraga; Maria Vanrell
Title: La influencia del contexto en la definicion de las fronteras entre las categorias cromaticas (The influence of context on the definition of the boundaries between chromatic categories)
Type: Conference Article
Year: 2010
Publication: 9th Congreso Nacional del Color
Pages: 92–95
Keywords: Colour categorization; Colour appearance; Influence of context; Mondrian patterns; Parametric models
Abstract: In this paper we present the results of a colour categorization experiment in which the samples were shown on a multicoloured (Mondrian) background in order to simulate the effects of context. The results are compared with those of a previous experiment which, using a different paradigm, determined the category boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment exhibit less confusion than those obtained in the experiment without context.
Address: Alicante, Spain
ISBN: 978-84-9717-144-1
Conference: CNC
Notes: CIC
Approved: no
Call Number: CAT @ cat @ BPV2010
Serial: 1327
 

 
Author: Javier Vazquez; Maria Vanrell; Robert Benavente
Title: Color names as a constraint for Computer Vision problems
Type: Conference Article
Year: 2010
Publication: Proceedings of The CREATE 2010 Conference
Pages: 324–328
Abstract: Computer vision problems are usually ill-posed, so constraining the gamut of possible solutions is a necessary step. Many constraints for different problems have been developed over the years. In this paper, we present a different way of constraining some of these problems: the use of color names. In particular, we focus on segmentation, representation and constancy.
Address: Gjovik, Norway
Conference: CREATE
Notes: CIC
Approved: no
Call Number: CAT @ cat @ VVB2010
Serial: 1328
 

 
Author: Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell
Title: Who Painted this Painting?
Type: Conference Article
Year: 2010
Publication: Proceedings of The CREATE 2010 Conference
Pages: 329–333
Address: Gjovik, Norway
Conference: CREATE
Notes: CIC
Approved: no
Call Number: CAT @ cat @ KWV2010
Serial: 1329
 

 
Author: Shida Beigpour; Joost Van de Weijer
Title: Photo-Realistic Color Alteration for Architecture and Design
Type: Conference Article
Year: 2010
Publication: Proceedings of The CREATE 2010 Conference
Pages: 84–88
Abstract: As color is one of the strongest stimuli we receive from the exterior world, choosing the right color can prove crucial in creating the desired architecture and design. We propose a framework to apply realistic color changes to both objects and their illuminating lights in snapshots of architectural designs, in order to visualize and choose the right color before actually applying the change in the real world. The proposed framework is based on the laws of physics in order to achieve realistic and physically plausible results.
Address: Gjovik, Norway
Conference: CREATE
Notes: CIC
Approved: no
Call Number: CAT @ cat @ BeW2010
Serial: 1330
 

 
Author: Ignasi Rius
Title: Motion Priors for Efficient Bayesian Tracking in Human Sequence Evaluation
Type: Book Whole
Year: 2010
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Recovering human motion by visual analysis is a challenging computer vision research area with many potential applications. Model-based tracking approaches, and in particular particle filters, formulate the problem as a Bayesian inference task whose aim is to sequentially estimate the distribution of the parameters of a human body model over time. These approaches strongly rely on good dynamical and observation models to predict and update configurations of the human body according to measurements from the image data. However, it is very difficult to design observation models which extract useful and reliable information from image sequences robustly. This is especially challenging in monocular tracking, given that only one viewpoint of the scene is available. Therefore, to overcome these limitations, strong motion priors are needed to guide the exploration of the state space.

The work presented in this Thesis aims to retrieve the 3D motion parameters of a human body model from incomplete and noisy measurements of a monocular image sequence. These measurements consist of the 2D positions of a reduced set of joints in the image plane. Towards this end, we present a novel action-specific model of human motion which is trained from several databases of real motion-captured performances of an action, and is used as a priori knowledge within a particle filtering scheme.

Body postures are represented by means of a simple and compact stick figure model which uses direction cosines to represent the direction of body limbs in 3D Cartesian space. Then, for a given action, Principal Component Analysis is applied to the training data to perform dimensionality reduction over the highly correlated input data. Before the learning stage of the action model, the input motion performances are synchronized by means of a novel dense matching algorithm based on Dynamic Programming. The algorithm synchronizes all the motion sequences of the same action class, finding an optimal solution in real time.

Then, a probabilistic action model is learnt from the synchronized motion examples, which captures the variability and temporal evolution of full-body motion within a specific action. In particular, for each action, the parameters learnt are: a representative manifold for the action consisting of its mean performance, the standard deviation from the mean performance, the mean observed direction vectors from each motion subsequence of a given length, and the expected error at a given time instant.

Subsequently, the action-specific model is used as a priori knowledge on human motion, which improves the efficiency and robustness of the overall particle-filtering tracking framework. First, the dynamic model guides the particles according to similar situations previously learnt. Then, the state space is constrained so that only feasible human postures are accepted as valid solutions at each time step. As a result, the state space is explored more efficiently, as the particle set covers the most probable body postures.

Finally, experiments are carried out using test sequences from several motion databases. The results show that our tracking scheme is able to estimate the rough 3D configuration of a full-body model given only the 2D positions of a reduced set of joints. Separate tests on the sequence synchronization method and the subsequence probabilistic matching technique are also provided.

Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Jordi Gonzalez; Xavier Roca
ISBN: 978-84-937261-9-5
Approved: no
Call Number: Admin @ si @ Riu2010
Serial: 1331
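The dimensionality-reduction step of the thesis above (PCA over direction-cosine pose vectors before learning the action model) can be sketched as follows, assuming a matrix of pose vectors is already available; the sequence synchronization, the probabilistic action model, and the particle filter are not reproduced.

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical training data: each row is a body posture, encoded as the
# concatenated direction cosines of the limbs of a stick-figure model
# (e.g. 12 limbs x 3 cosines = 36 values), over many frames of one action.
rng = np.random.default_rng(0)
n_frames, n_dims = 500, 36
poses = rng.normal(size=(n_frames, n_dims))

# PCA reduces the highly correlated pose space to a compact subspace in which
# the action model (mean performance, deviations, etc.) can be learnt.
pca = PCA(n_components=0.95)          # keep 95% of the variance
low_dim = pca.fit_transform(poses)
print("reduced dimensionality:", low_dim.shape[1])

# A new posture proposed by the tracker can be projected and reconstructed;
# a large reconstruction error is one simple way to flag infeasible postures.
candidate = rng.normal(size=(1, n_dims))
reconstruction = pca.inverse_transform(pca.transform(candidate))
print("reconstruction error:", np.linalg.norm(candidate - reconstruction))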
 

 
Author: Ivan Huerta
Title: Foreground Object Segmentation and Shadow Detection for Video Sequences in Uncontrolled Environments
Type: Book Whole
Year: 2010
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: This Thesis is mainly divided in two parts. The first one presents a study of motion segmentation problems. Based on this study, a novel algorithm for mobile-object segmentation from a static background scene is also presented. This approach is demonstrated to be robust and accurate under most of the common problems in motion segmentation. The second one tackles the problem of shadows in depth. Firstly, a bottom-up approach based on a chromatic shadow detector is presented to deal with umbra shadows. Secondly, a top-down approach based on a tracking system has been developed in order to enhance the chromatic shadow detection.

In our first contribution, a case analysis of motion segmentation problems is presented by taking into account the problems associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which handles the main problems observed in such a case analysis by fusing (i) the knowledge from these three cues and (ii) a temporal difference algorithm. On the one hand, we enhance the colour and edge models to solve both global/local illumination changes (shadows and highlights) and camouflage in intensity. In addition, local information is exploited to cope with a very challenging problem such as camouflage in chroma. On the other hand, the intensity cue is also applied when the colour and edge cues are not available, such as when they are beyond the dynamic range. Additionally, temporal difference is included to segment motion when these three cues are not available, such as when the background is not visible during the training period. Lastly, the approach is enhanced to allow ghost detection. As a result, our approach obtains very accurate and robust motion segmentation in both indoor and outdoor scenarios, as quantitatively and qualitatively demonstrated in the experimental results by comparing our approach with the best-known state-of-the-art approaches.

Motion segmentation has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows. Therefore, such techniques cannot cope well with umbra shadows, which are consequently usually detected as part of moving objects.

Firstly, a bottom-up approach for the detection and removal of chromatic moving shadows in surveillance scenarios is proposed. Secondly, a top-down approach based on Kalman filters to detect and track shadows has been developed in order to enhance the chromatic shadow detection. In the bottom-up part, the shadow detection approach applies a novel technique based on gradient and colour models for separating chromatic moving shadows from moving objects. Well-known colour and gradient models are extended and improved into an invariant colour cone model and an invariant gradient model, respectively, to perform automatic segmentation while detecting potential shadows. Hereafter, the regions corresponding to potential shadows are grouped by considering “a bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between local gradient structures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows.

In the top-down process, after the detection of objects and shadows, both are tracked using Kalman filters in order to enhance the chromatic shadow detection when it fails to detect a shadow. Firstly, this implies a data association between the blobs (foreground and shadow) and the Kalman filters. Secondly, an event analysis of the different data association cases is performed, and occlusion handling is managed by a Probabilistic Appearance Model (PAM). Based on this association, temporal consistency is sought between foregrounds and shadows and their respective Kalman filters. From this association several cases are studied, and as a result lost chromatic shadows are correctly detected. Finally, the tracking results are used as feedback to improve the shadow and object detection.

Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, or the shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach on different shadowed materials and under different illumination conditions.

Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Jordi Gonzalez; Xavier Roca
ISBN: 978-84-937261-3-3
Approved: no
Call Number: ISE @ ise @ Hue2010
Serial: 1332
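As a rough illustration of background-model-based shadow reasoning, the sketch below marks pixels whose brightness drops while their chromaticity stays close to the background, the classic cue for cast shadows. This is a generic textbook heuristic, not the thesis's invariant colour cone and gradient models (which specifically target chromatic umbra shadows), and all thresholds are illustrative.

import numpy as np

def shadow_candidates(frame, background, bright_lo=0.4, bright_hi=0.95,
                      chroma_tol=0.05):
    """Mark pixels whose brightness drops but whose chromaticity matches
    the background model.

    frame, background: float RGB images in [0, 1], shape (H, W, 3).
    The thresholds are illustrative, not values from the thesis.
    """
    eps = 1e-6
    # Brightness ratio: shadows darken a surface without changing its colour much.
    ratio = (frame.sum(axis=2) + eps) / (background.sum(axis=2) + eps)
    darker = (ratio > bright_lo) & (ratio < bright_hi)

    # Chromaticity (normalized rgb) distortion between frame and background.
    chroma_f = frame / (frame.sum(axis=2, keepdims=True) + eps)
    chroma_b = background / (background.sum(axis=2, keepdims=True) + eps)
    similar_chroma = np.abs(chroma_f - chroma_b).sum(axis=2) < chroma_tol

    return darker & similar_chroma

# Toy usage with synthetic images in place of a real background model.
rng = np.random.default_rng(0)
bg = rng.uniform(0.2, 0.9, size=(120, 160, 3))
fr = bg.copy()
fr[40:80, 50:110] *= 0.6          # simulate a cast shadow darkening a region
mask = shadow_candidates(fr, bg)
print("shadow pixels detected:", int(mask.sum()))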
 

 
Author: Carles Fernandez
Title: Understanding Image Sequences: the Role of Ontologies in Cognitive Vision
Type: Book Whole
Year: 2010
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: The increasing ubiquity of digital information in our daily lives has positioned video as a favored information vehicle, and given rise to an astonishing generation of social media and surveillance footage. This raises a series of technological demands for automatic video understanding and management, which, together with the attentional limitations of human operators, have motivated the research community to guide its steps towards a better attainment of such capabilities. As a result, current trends in cognitive vision promise to recognize complex events and self-adapt to different environments, while managing and integrating several types of knowledge. Future directions suggest reinforcing the multi-modal fusion of information sources and the communication with end-users.

In this thesis we tackle the problem of recognizing and describing meaningful events in video sequences from different domains, and communicating the resulting knowledge to end-users by means of advanced interfaces for human-computer interaction. This problem is addressed by designing the high-level modules of a cognitive vision framework exploiting ontological knowledge. Ontologies allow us to define the relevant concepts in a domain and the relationships among them; we prove that the use of ontologies to organize, centralize, link, and reuse different types of knowledge is a key factor in the materialization of our objectives.

The proposed framework contributes to: (i) automatically learn the characteristics of different scenarios in a domain; (ii) reason about uncertain, incomplete, or vague information from visual (camera) or linguistic (end-user) inputs; (iii) derive plausible interpretations of complex events from basic spatiotemporal developments; (iv) facilitate natural interfaces that adapt to the needs of end-users, and allow them to communicate efficiently with the system at different levels of interaction; and finally, (v) find mechanisms to guide modeling processes, maintain and extend the resulting models, and exploit multimodal resources synergically to enhance the former tasks.

We describe a holistic methodology to achieve these goals. First, the use of prior taxonomical knowledge is proved useful to guide MAP-MRF inference processes in the automatic identification of semantic regions, with independence of a particular scenario. Towards the recognition of complex video events, we combine fuzzy metric-temporal reasoning with SGTs, thus assessing high-level interpretations from spatiotemporal data. Here, ontological resources like T-Boxes, onomasticons, or factual databases become useful to derive video indexing and retrieval capabilities, and also to forward highlighted content to smart user interfaces. There, we explore the application of ontologies to discourse analysis and cognitive linguistic principles, or scene augmentation techniques towards advanced communication by means of natural language dialogs and synthetic visualizations. Ontologies become fundamental to coordinate, adapt, and reuse the different modules in the system.

The suitability of our ontological framework is demonstrated by a series of applications that especially benefit the field of smart video surveillance, viz. automatic generation of linguistic reports about the content of video sequences in multiple natural languages; content-based filtering and summarization of these reports; dialogue-based interfaces to query and browse video contents; automatic learning of semantic regions in a scenario; and tools to evaluate the performance of components and models in the system, via simulation and augmented reality.

Thesis: Ph.D. thesis
Publisher: Ediciones Graficas Rey
Editor: Jordi Gonzalez; Xavier Roca
ISBN: 978-84-937261-2-6
Approved: no
Call Number: Admin @ si @ Fer2010a
Serial: 1333