|
C. Santa-Marta, Jaume Garcia, A. Bajo, J.J. Vaquero, M. Ledesma-Carbayo, & Debora Gil. (2008). Influence of the Temporal Resolution on the Quantification of Displacement Fields in Cardiac Magnetic Resonance Tagged Images. In R. Hornero (Ed.), XXVI Congreso Anual de la Sociedad Española de Ingeniería Biomédica (352–353).
Abstract: It is difficult to acquire tagged cardiac MR images with a high temporal and spatial resolution using clinical MR scanners. However, if such images are used for quantifying scores based on motion, a resolution as high as possible is essential. This paper explores the influence of the temporal resolution of a tagged series on the quantification of myocardial dynamic parameters. To this purpose we have designed a SPAMM (Spatial Modulation of Magnetization) sequence allowing acquisition of sequences at simple and double temporal resolution. Sequences are processed to compute myocardial motion by an automatic technique based on the tracking of the harmonic phase of tagged images (the Harmonic Phase Flow, HPF). The results have been compared to manual tracking of myocardial tags. The error in displacement fields for double resolution sequences is reduced by 17%.
|
|
|
C. Sbert, & A.F. Sole. (2000). Stereo reconstruction of 3D curves. In 15th International Conference on Pattern Recognition (Vol. 1, 912–915).
|
|
|
Carles Fernandez. (2010). Understanding Image Sequences: the Role of Ontologies in Cognitive Vision (Jordi Gonzalez, & Xavier Roca, Eds.). Ph.D. thesis, Ediciones Gráficas Rey.
Abstract: The increasing ubiquitousness of digital information in our daily lives has positioned video as a favored information vehicle, and given rise to an astonishing generation of social media and surveillance footage. This raises a series of technological demands for automatic video understanding and management, which, together with the compromising attentional limitations of human operators, have motivated the research community to guide its steps towards a better attainment of such capabilities. As a result, current trends in cognitive vision promise to recognize complex events and self-adapt to different environments, while managing and integrating several types of knowledge. Future directions suggest reinforcing the multi-modal fusion of information sources and the communication with end-users.
In this thesis we tackle the problem of recognizing and describing meaningful events in video sequences from different domains, and of communicating the resulting knowledge to end-users by means of advanced interfaces for human-computer interaction. This problem is addressed by designing the high-level modules of a cognitive vision framework that exploits ontological knowledge. Ontologies allow us to define the relevant concepts in a domain and the relationships among them; we prove that the use of ontologies to organize, centralize, link, and reuse different types of knowledge is a key factor in the materialization of our objectives.
The proposed framework contributes to: (i) automatically learning the characteristics of different scenarios in a domain; (ii) reasoning about uncertain, incomplete, or vague information from visual (camera) or linguistic (end-user) inputs; (iii) deriving plausible interpretations of complex events from basic spatiotemporal developments; (iv) facilitating natural interfaces that adapt to the needs of end-users and allow them to communicate efficiently with the system at different levels of interaction; and finally, (v) finding mechanisms to guide modeling processes, to maintain and extend the resulting models, and to exploit multimodal resources synergistically to enhance the former tasks.
We describe a holistic methodology to achieve these goals. First, the use of prior taxonomical knowledge proves useful to guide MAP-MRF inference processes in the automatic identification of semantic regions, independently of the particular scenario. Towards the recognition of complex video events, we combine fuzzy metric-temporal reasoning with Situation Graph Trees (SGTs), thus assessing high-level interpretations from spatiotemporal data. Here, ontological resources like T-Boxes, onomasticons, or factual databases become useful to derive video indexing and retrieval capabilities, and also to forward highlighted content to smart user interfaces. There, we explore the application of ontologies to discourse analysis, cognitive linguistic principles, and scene augmentation techniques towards advanced communication by means of natural language dialogs and synthetic visualizations. Ontologies become fundamental to coordinate, adapt, and reuse the different modules in the system.
The suitability of our ontological framework is demonstrated by a series of applications that especially benefit the field of smart video surveillance, viz. automatic generation of linguistic reports about the content of video sequences in multiple natural languages; content-based filtering and summarization of these reports; dialogue-based interfaces to query and browse video contents; automatic learning of semantic regions in a scenario; and tools to evaluate the performance of components and models in the system via simulation and augmented reality.
|
|
|
Carles Fernandez. (2007). Natural Language for Human Behavior Evaluation in Video Sequences.
|
|
|
Carles Fernandez, & Jordi Gonzalez. (2008). A Multilingually-Extensible Module for Natural Language Generation.
|
|
|
Carles Fernandez, & Jordi Gonzalez. (2007). Ontology for Semantic Integration in a Cognitive Surveillance System. In Semantic Multimedia, 2nd International Conference on Semantics and Digital Media Technologies (Vol. 4816, 263–263). LNCS.
|
|
|
Carles Fernandez, Jordi Gonzalez, Joao Manuel R. S. Tavares, & Xavier Roca. (2013). Towards Ontological Cognitive System. In Topics in Medical Image Processing and Computational Vision (Vol. 8, pp. 87–99). Springer Netherlands.
Abstract: The increasing ubiquitousness of digital information in our daily lives has positioned video as a favored information vehicle, and given rise to an astonishing generation of social media and surveillance footage. This raises a series of technological demands for automatic video understanding and management, which, together with the compromising attentional limitations of human operators, have motivated the research community to guide its steps towards a better attainment of such capabilities. As a result, current trends in cognitive vision promise to recognize complex events and self-adapt to different environments, while managing and integrating several types of knowledge. Future directions suggest reinforcing the multi-modal fusion of information sources and the communication with end-users.
|
|
|
Carles Fernandez, Jordi Gonzalez, & Xavier Roca. (2010). Automatic Learning of Background Semantics in Generic Surveilled Scenes. In 11th European Conference on Computer Vision (Vol. 6313, 678–692). LNCS. Springer Berlin Heidelberg.
Abstract: Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, and such inferences are typically limited to abnormality. Thus, it is interesting to design contributions that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence, and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios, and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis.
|
|
|
Carles Fernandez, Pau Baiget, & Jordi Gonzalez. (2008). Cognitive-Guided Semantic Exploitation in Video Surveillance Interfaces. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences, BMVC 2008 (53–60).
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2011). Determining the Best Suited Semantic Events for Cognitive Surveillance. Expert Systems with Applications, 38(4), 4068–4079.
Abstract: State-of-the-art systems on cognitive surveillance identify and describe complex events in selected domains, thus providing end-users with tools to easily access the contents of massive video footage. Nevertheless, as the complexity of events increases in semantics and the types of indoor/outdoor scenarios diversify, it becomes difficult to assess which events best describe the scene, and how to model them at a pixel level to fulfill natural language requests. We present an ontology-based methodology that guides the identification, step-by-step modeling, and generalization of the most relevant events to a specific domain. Our approach considers three steps: (1) end-users provide textual evidence from surveilled video sequences; (2) transcriptions are analyzed top-down to build the knowledge bases for event description; and (3) the obtained models are used to generalize event detection to different image sequences from the surveillance domain. This framework produces user-oriented knowledge that improves on existing advanced interfaces for video indexing and retrieval, by determining the best suited events for video understanding according to end-users. We have conducted experiments with outdoor and indoor scenes showing thefts, chases, and vandalism, demonstrating the feasibility and generalization of this proposal.
Keywords: Cognitive surveillance; Event modeling; Content-based video retrieval; Ontologies; Advanced user interfaces
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2011). Augmenting Video Surveillance Footage with Virtual Agents for Incremental Event Evaluation. Pattern Recognition Letters, 32(6), 878–889.
Abstract: The fields of segmentation, tracking and behavior analysis demand challenging video resources to test, in a scalable manner, complex scenarios like crowded environments or scenes with high semantics. Nevertheless, existing public databases cannot scale the number of appearing agents, which would be useful to study long-term occlusions and crowds. Moreover, creating these resources is expensive and often too particularized to specific needs. We propose an augmented reality framework to increase the complexity of image sequences in terms of occlusions and crowds, in a scalable and controllable manner. Existing datasets can be extended with augmented sequences containing virtual agents. Such sequences are automatically annotated, thus facilitating evaluation in terms of segmentation, tracking, and behavior recognition. In order to easily specify the desired contents, we propose a natural language interface to convert input sentences into virtual agent behaviors. Experimental tests and validation in indoor, street, and soccer environments are provided to show the feasibility of the proposed approach in terms of robustness, scalability, and semantics.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2009). Exploiting Natural Language Generation in Scene Interpretation. In Human-Centric Interfaces for Ambient Intelligence (Vol. 4, 71–93). Elsevier Science and Technology.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2008). Interpretation of Complex Situations in a Semantic-based Surveillance Framework. Signal Processing: Image Communication, Special Issue on Semantic Analysis for Interactive Multimedia Services, 554–569.
Abstract: The integration of cognitive capabilities in computer vision systems requires both enabling high semantic expressiveness and dealing with high computational costs, as large amounts of data are involved in the analysis. This contribution describes a cognitive vision system conceived to automatically provide high-level interpretations of complex real-time situations in outdoor and indoor scenarios, and to eventually maintain communication with casual end users in multiple languages. The main contributions are: (i) the design of an integrative multilevel architecture for cognitive surveillance purposes; (ii) the proposal of a coherent taxonomy of knowledge to guide the process of interpretation, which leads to the conception of a situation-based ontology; (iii) the use of situational analysis for content detection and a progressive interpretation of semantically rich scenes, by managing incomplete or uncertain knowledge; and (iv) the use of such an ontological background to enable multilingual capabilities and advanced end-user interfaces. Experimental results are provided to show the feasibility of the proposed approach.
Keywords: Cognitive vision system; Situation analysis; Applied ontologies
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2007). Natural Language Descriptions of Human Behavior from Video Sequences. In Advances in Artificial Intelligence, 30th Annual Conference on Artificial Intelligence (Vol. 4667, 279–292). LNCS.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2007). Semantic Annotation of Complex Human Scenes for Multimedia Surveillance. In AI*IA 2007: Artificial Intelligence and Human-Oriented Computing, 10th Congress of the Italian Association for Artificial Intelligence (Vol. 4733, 698–709). LNCS.
|
|