Oriol Rodriguez-Leon, Josefina Mauri, Eduard Fernandez-Nofrerias, M. Gomez, Antonio Tovar, L. Cano, et al. (2002). Ecografia Intracoronaria: Segmentacio Automatica de Area de la Llum [Intracoronary Ultrasound: Automatic Segmentation of the Lumen Area]. Revista Societat Catalana de Cardiologia, 42.
|
Matthias S. Keil. (2006). Smooth Gradient Representations as a Unifying Account of Chevreul’s Illusion, Mach Bands, and a Variant of the Ehrenstein Disk. Neural Computation, 871–903.
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2006). On the Use of External Face Features for Identity Verification. Journal of Multimedia, 1(4), 11–20.
Abstract: In general automatic face classification applications, images are captured in natural environments. In these cases, performance is affected by variations in facial images related to illumination, pose, occlusion, or expression. Most existing face classification systems use only the internal features, composed of the eyes, nose, and mouth, since these are more difficult to imitate. Nevertheless, many applications unrelated to security are now being developed, and in these cases the information located in the head, chin, or ear zones (external features) can be useful for improving current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract such features with classic bottom-up methods. In this paper, we propose a complete scheme based on a top-down reconstruction algorithm to extract external features from face images. To test our system we have performed face verification experiments on public databases, given that identity verification is a general task with many real-life applications. We have considered uniformly illuminated images, images with occlusions, and images with strong local changes in illumination. The results show that the information contributed by the external features can be useful for verification purposes, especially when faces are partially occluded.
Keywords: Face Verification, Computer Vision, Machine Learning
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2008). Rank Estimation in 3D Multibody Motion Segmentation. Electronics Letters, 44(4), 279–280.
Abstract: A novel technique for rank estimation in 3D multibody motion segmentation is proposed. It is based on the study of the frequency spectra of moving rigid objects and does not assume prior knowledge of the objects contained in the scene (i.e., their number or motion). The significance of rank estimation for multibody motion segmentation results is shown by applying two motion segmentation algorithms to both synthetic and real data.
|
David Masip, & Jordi Vitria. (2008). Shared Feature Extraction for Nearest Neighbor Face Recognition. IEEE Transactions on Neural Networks, 586–595.
|
C. Malagelada, Fosca De Iorio, Fernando Azpiroz, Anna Accarino, Santiago Segui, Petia Radeva, et al. (2008). New Insight Into Intestinal Motor Function via Noninvasive Endoluminal Image Analysis. Gastroenterology, 1155–1162.
|
Hugo Berti, Angel Sappa, & Osvaldo Agamennoni. (2008). Improved Dynamic Window Approach by Using Lyapunov Stability Criteria. Latin American Applied Research, 289–298.
|
Pau Baiget, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2009). Generation of Augmented Video Sequences Combining Behavioral Animation and Multi Object Tracking. Computer Animation and Virtual Worlds, 20(4), 473–489.
Abstract: In this paper we present a novel approach to generating augmented video sequences in real time, involving interactions between virtual and real agents in real scenarios. On the one hand, real agent motion is estimated by means of a multi-object tracking algorithm, which determines each real object's position in the scenario at every time step. On the other hand, virtual agents are provided with behavior models that take into account their interaction with the environment and with other agents. The resulting framework makes it possible to generate video sequences involving behavior-based virtual agents that react to real agent behavior, and has applications in education, simulation, and the game and movie industries. We show the performance of the proposed approach in an indoor and an outdoor scenario simulating human and vehicle agents. Copyright © 2009 John Wiley & Sons, Ltd.
|
Fadi Dornaika, & Bogdan Raducanu. (2009). Three-Dimensional Face Pose Detection and Tracking Using Monocular Videos: Tool and Application. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 39(4), 935–944.
Abstract: Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low-quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed initialization and tracking methods to enhance the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation, such as telepresence, virtual reality, and video games, can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
|
Arnau Ramisa, Adriana Tapus, David Aldavert, Ricardo Toledo, & Ramon Lopez de Mantaras. (2009). Robust Vision-Based Localization using Combinations of Local Feature Regions Detectors. Autonomous Robots, 27(4), 373–385.
Abstract: This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature. This signature consists of a constellation of descriptors computed over different types of local affine covariant regions extracted from an omnidirectional image, acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and to find out whether combining complementary local feature region detectors improves localization over using a single region detector. Our experimental results show that, provided false matches are effectively rejected, the combination of different affine covariant region detectors notably increases the performance of the approach by combining the strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure, and using a standard perspective view with a field of view of 45°.
As a further contribution, and in order to systematically test topological localization methods, this work proposes a novel method to measure the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. To be effective, such testing must be performed in several varied environments covering all the situations in which the robot may have to perform localization.
|
Miquel Ferrer, Ernest Valveny, F. Serratosa, K. Riesen, & Horst Bunke. (2010). Generalized Median Graph Computation by Means of Graph Embedding in Vector Spaces. Pattern Recognition, 43(4), 1642–1655.
Abstract: The median graph has been presented as a useful tool to represent a set of graphs. Nevertheless, its computation is very complex, and the existing algorithms are restricted to limited amounts of data. In this paper we propose a new approach for the computation of the median graph based on graph embedding. Graphs are embedded into a vector space and the median is computed in the vector domain. We have designed a procedure based on the weighted mean of a pair of graphs to go from the vector domain back to the graph domain and obtain a final approximation of the median graph. Experiments on three different databases containing large graphs show that we succeed in computing good approximations of the median graph. We have also applied the median graph to perform some basic classification tasks, achieving reasonably good results. These experiments on real data open the door to the application of the median graph in more complex machine learning algorithms where a representative of a set of graphs is needed.
Keywords: Graph matching; Weighted mean of graphs; Median graph; Graph embedding; Vector spaces
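The embedding step described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it assumes the graphs are available only through a pairwise distance matrix (e.g. graph edit distances), embeds them with classical MDS, and returns the set median (the graph whose embedded point is closest to the mean), rather than reconstructing a generalized median via weighted means of graph pairs.

```python
import numpy as np

def classical_mds(D, dims=2):
    """Embed n items into R^dims from an n-by-n pairwise distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dims]      # keep the largest eigenvalues
    vals = np.clip(vals[idx], 0.0, None)     # guard against tiny negatives
    return vecs[:, idx] * np.sqrt(vals)

def median_index(D, dims=2):
    """Index of the set median: the item embedded closest to the centroid."""
    X = classical_mds(D, dims)
    center = X.mean(axis=0)
    return int(np.argmin(np.linalg.norm(X - center, axis=1)))
```

With a distance matrix for three graphs lying "on a line" at mutual distances 1 and 2, the middle graph (index 1) is returned as the median.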
|
Simone Balocco, O. Camara, E. Vivas, T. Sola, L. Guimaraens, H. A. van Andel, et al. (2010). Feasibility of Estimating Regional Mechanical Properties of Cerebral Aneurysms In Vivo. Medical Physics, 37(4), 1689–1706.
Abstract: PURPOSE:
In this article, the authors studied the feasibility of estimating regional mechanical properties in cerebral aneurysms, integrating information extracted from imaging and physiological data with generic computational models of the arterial wall behavior.
METHODS:
A data assimilation framework was developed to incorporate patient-specific geometries into a given biomechanical model, while wall motion estimates, obtained by applying registration techniques to a pair of simulated MR images, guided the mechanical parameter estimation. A simple incompressible, linear, and isotropic Hookean model coupled with computational fluid dynamics was employed as a first approximation for computational purposes. Additionally, an automatic clustering technique was developed to reduce the number of parameters to assimilate at the optimization stage, which considerably accelerated the convergence of the simulations. Several in silico experiments were designed to assess the influence of aneurysm geometrical characteristics and the accuracy of wall motion estimates on the mechanical property estimates. The proposed methodology was then applied to six real cerebral aneurysms and tested against a varying number of regions with different elasticity, different mesh discretizations, imaging resolutions, and registration configurations.
RESULTS:
Several in silico experiments were conducted to investigate the feasibility of the proposed workflow, and the results suggest that the estimation of the mechanical properties was mainly influenced by the image spatial resolution and the chosen registration configuration. According to the in silico experiments, the minimal spatial resolution needed to extract wall pulsation measurements accurately enough to guide the proposed data assimilation framework was 0.1 mm.
CONCLUSIONS:
Current routine imaging modalities do not have such a high spatial resolution, and therefore the proposed data assimilation framework cannot currently be used on in vivo data to reliably estimate regional properties in cerebral aneurysms. In addition, it was observed that the incorporation of fluid-structure interaction in a biomechanical model with linear and isotropic material properties did not have a substantial influence on the final results.
|
Alicia Fornes, Josep Llados, Gemma Sanchez, Xavier Otazu, & Horst Bunke. (2010). A Combination of Features for Symbol-Independent Writer Identification in Old Music Scores. International Journal on Document Analysis and Recognition, 13(4), 243–259.
Abstract: The aim of writer identification is to determine the writer of a piece of handwriting from a set of known writers. In this paper, we present an architecture for writer identification in old handwritten music scores. Even though a significant number of music compositions contain handwritten text, the aim of our work is to use only the music notation to determine the author. The main contribution is therefore the use of features extracted from graphical alphabets. Our proposal combines the identification results of two different approaches, based on line features and textural features, respectively. The steps of the ensemble architecture are the following. First, the music sheet is preprocessed to remove the staff lines. Then, music lines and texture images are generated in order to compute the line and textural features. Finally, the classification results are combined to identify the writer. The proposed method has been tested on a database of old music scores from the seventeenth to nineteenth centuries, achieving a recognition rate of about 92% with 20 writers.
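The final combination step of the ensemble lends itself to a short late-fusion sketch. The min-max normalization and the sum rule below are assumptions for illustration, not the authors' actual combination scheme: each approach is taken to yield one similarity score per candidate writer, the two score lists are normalized to a common range, and the writer with the highest fused score is selected.

```python
def normalize(scores):
    """Min-max normalize a list of scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def combine_and_identify(line_scores, texture_scores):
    """Fuse per-writer similarity scores (higher = more similar) from the
    line-feature and texture-feature classifiers with a sum rule; return
    the index of the best-scoring writer."""
    fused = [a + b for a, b in zip(normalize(line_scores),
                                   normalize(texture_scores))]
    return max(range(len(fused)), key=fused.__getitem__)
```

For example, a writer ranked mid-range by the line-feature classifier but clearly first by the texture classifier can still win after fusion, which is the point of combining complementary feature sets.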
|
Olivier Penacchio. (2011). Mixed Hodge Structures and Equivariant Sheaves on the Projective Plane. Mathematische Nachrichten, 284(4), 526–542.
Abstract: We describe an equivalence of categories between the category of mixed Hodge structures and a category of equivariant vector bundles on a toric model of the complex projective plane that satisfy a certain semistability condition. We then apply this correspondence to define an invariant that generalizes the notion of an R-split mixed Hodge structure, and give calculations for the first cohomology group of possibly non-smooth or non-complete curves of genus 0 and 1. Finally, we describe some extension groups of mixed Hodge structures in terms of equivariant extensions of coherent sheaves. © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Keywords: Mixed Hodge structures, equivariant sheaves, MSC (2010) Primary: 14C30, Secondary: 14F05, 14M25
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2011). Determining the Best Suited Semantic Events for Cognitive Surveillance. Expert Systems with Applications, 38(4), 4068–4079.
Abstract: State-of-the-art systems for cognitive surveillance identify and describe complex events in selected domains, thus providing end-users with tools to easily access the contents of massive video footage. Nevertheless, as the complexity of events increases in semantics and the types of indoor/outdoor scenarios diversify, it becomes difficult to assess which events best describe the scene, and how to model them at the pixel level to fulfill natural language requests. We present an ontology-based methodology that guides the identification, step-by-step modeling, and generalization of the events most relevant to a specific domain. Our approach considers three steps: (1) end-users provide textual evidence from surveilled video sequences; (2) transcriptions are analyzed top-down to build the knowledge bases for event description; and (3) the obtained models are used to generalize event detection to different image sequences from the surveillance domain. This framework produces user-oriented knowledge that improves on existing advanced interfaces for video indexing and retrieval by determining the events best suited for video understanding according to end-users. We have conducted experiments with outdoor and indoor scenes showing thefts, chases, and vandalism, demonstrating the feasibility and generalization of this proposal.
Keywords: Cognitive surveillance; Event modeling; Content-based video retrieval; Ontologies; Advanced user interfaces
|