Murad Al Haj, Francisco Javier Orozco, Jordi Gonzalez, & Juan J. Villanueva. (2008). Automatic Face and Facial Features Initialization for Robust and Accurate Tracking. In 19th International Conference on Pattern Recognition (pp. 1–4).
Jose Luis Alba, A. Pujol, & Juan J. Villanueva. (2001). Novel SOM-PCA Network for Face Identification.
Jose Luis Alba, A. Pujol, & Juan J. Villanueva. (2001). ST-SOM: A Shape+Texture Self Organizing Map.
Jose Luis Alba, A. Pujol, & Juan J. Villanueva. (2001). Separating Geometry from Texture to Improve Face Analysis.
A. Auge, X. Varona, & Juan J. Villanueva. (1997). Tumour Segmentation in Mammographies with Neural Networks. Application to Tumoural Volume Approximation. In Proceedings of the VII NSPRIA (Vol. II). CVC–UAB.
Pau Baiget, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2007). Automatic Learning of Conceptual Knowledge for the Interpretation of Human Behavior in Video Sequences. In 3rd Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA 2007), J. Marti et al. (Eds.), LNCS 4477, pp. 507–514.
Pau Baiget, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2009). Generation of Augmented Video Sequences Combining Behavioral Animation and Multi Object Tracking. Computer Animation and Virtual Worlds, 20(4), 473–489.
Abstract: In this paper we present a novel approach to generate augmented video sequences in real-time, involving interactions between virtual and real agents in real scenarios. On the one hand, real agent motion is estimated by means of a multi-object tracking algorithm, which determines real objects' position over the scenario for each time step. On the other hand, virtual agents are provided with behavior models considering their interaction with the environment and with other agents. The resulting framework allows the generation of video sequences involving behavior-based virtual agents that react to real agent behavior, and has applications in education, simulation, and in the game and movie industries. We show the performance of the proposed approach in an indoor and an outdoor scenario simulating human and vehicle agents. Copyright © 2009 John Wiley & Sons, Ltd.
Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2008). Autonomous Virtual Agents for Performance Evaluation of Tracking Algorithms. In Articulated Motion and Deformable Objects, 5th International Conference AMDO 2008 (Vol. 5098, pp. 299–308). LNCS.
Nicola Bellotto, Eric Sommerlade, Ben Benfold, Charles Bibby, I. Reid, Daniel Roth, et al. (2009). A Distributed Camera System for Multi-Resolution Surveillance. In 3rd ACM/IEEE International Conference on Distributed Smart Cameras.
Abstract: We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communications and archiving of data are achieved in a simple and effective way via a central repository, implemented using an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines if active zoom cameras should be dispatched to observe a particular target, and this message is effected via writing demands into another database table. We show results from a real implementation of the system comprising one static camera overviewing the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its feasibility to multi-camera systems for intelligent surveillance.
DOI: 10.1109/ICDSC.2009.5289413
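The abstract above describes asynchronous inter-process communication through a central SQL repository: tracking processes write target positions into one table, and a supervisor writes PTZ-dispatch demands into another. A minimal sketch of that pattern, using Python's sqlite3 as a stand-in for the SQL server (table and column names are illustrative, not taken from the paper):

```python
import sqlite3

# In-memory database stands in for the central SQL repository.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tracks (cam_id INTEGER, target_id INTEGER, x REAL, y REAL)")
db.execute("CREATE TABLE demands (ptz_id INTEGER, target_id INTEGER)")

# A static-camera tracking process writes target positions into 'tracks'.
db.execute("INSERT INTO tracks VALUES (0, 7, 12.5, 3.2)")
db.commit()

# The supervisor polls 'tracks' and dispatches a PTZ camera (id 1 here)
# by writing a demand row, which the PTZ process reads asynchronously.
for cam_id, target_id, x, y in db.execute("SELECT * FROM tracks"):
    db.execute("INSERT INTO demands VALUES (?, ?)", (1, target_id))
db.commit()

demand = db.execute("SELECT * FROM demands").fetchone()
print(demand)  # (1, 7)
```

Because all coordination goes through database tables, the camera processes never talk to each other directly, which is what makes the architecture simple to distribute across processors.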
Pau Baiget, Joan Soto, Xavier Roca, & Jordi Gonzalez. (2007). Automatic Generation of Computer-Animated Sequences based on Human Behaviour Modelling. In 10th International Conference on Computer Graphics and Artificial Intelligence.
Pau Baiget, Eric Sommerlade, I. Reid, & Jordi Gonzalez. (2008). Finding Prototypes to Estimate Trajectory Development in Outdoor Scenarios. In First International Workshop on Tracking Humans for the Evaluation of their Motion in Image Sequences (BMVC 2008) (pp. 27–34).
Bhaskar Chakraborty, Marco Pedersoli, & Jordi Gonzalez. (2008). View-Invariant Human Action Detection using Component-Wise HMM of Body Parts. In Articulated Motion and Deformable Objects, 5th International Conference (Vol. 5098, pp. 208–217). LNCS.
Bhaskar Chakraborty, Ognjen Rudovic, & Jordi Gonzalez. (2008). View-Invariant Human-Body Detection with Extension to Human Action Recognition using Component-Wise HMM of Body Parts. In 8th IEEE International Conference on Automatic Face and Gesture Recognition.
Fadi Dornaika, Francisco Javier Orozco, & Jordi Gonzalez. (2006). Combined Head, Lips, Eyebrows, and Eyelids Tracking Using Adaptive Appearance Models. In IV Conference on Articulated Motion and Deformable Objects (AMDO'06), LNCS 4069, pp. 110–119.
Paramveer S. Dhillon, Francisco Javier Orozco, & Jordi Gonzalez. (2008). Real-Time Monocular Face Tracking Using an Active Camera.