Francisco Javier Orozco, Xavier Roca, & Jordi Gonzalez. (2008). Real-Time Gaze Tracking with Appearance-Based Models. Machine Vision and Applications, 20(6), 353–364.
Abstract: Psychological evidence has emphasized the importance of eye gaze analysis in human–computer interaction and emotion interpretation. To this end, current image analysis algorithms detect eyelid and iris motion using colour information and edge detectors. However, eye movement is fast and hence difficult to track precisely and robustly. Instead, our method describes eyelid and iris movements as continuous variables using appearance-based tracking. This approach combines the strengths of adaptive appearance models, optimization methods and backtracking techniques. Thus, in the proposed method, textures are learned on-line from near-frontal images, and illumination changes, occlusions and fast movements are handled. The method achieves real-time performance by coupling two appearance-based trackers with a backtracking algorithm, one for eyelid estimation and another for iris estimation. These contributions represent a significant advance towards a reliable gaze motion description for HCI and expression analysis, where the strengths of complementary methodologies are combined to avoid the need for high-quality images, colour information, texture training, camera settings and other time-consuming processes.
Keywords: Eyelid and iris tracking, Appearance models, Blinking, Iris saccade, Real-time gaze tracking
Maria Vanrell, Jordi Vitria, & Xavier Roca. (1997). A multidimensional scaling approach to explore the behavior of a texture perception algorithm. Machine Vision and Applications, 9, 262–271.
Dani Rowe, Jordi Gonzalez, Marco Pedersoli, & Juan J. Villanueva. (2010). On Tracking Inside Groups. Machine Vision and Applications, 21(2), 113–127.
Abstract: This work develops a new architecture for multiple-target tracking in unconstrained dynamic scenes, consisting of a detection level that feeds a two-stage tracking system. A remarkable characteristic of the system is its ability to track several targets while they group and split, without using 3D information. Thus, special attention is given to the feature-selection and appearance-computation modules, and to those modules involved in tracking through groups. The system aims to work as a stand-alone application in complex and dynamic scenarios. No a-priori knowledge about either the scene or the targets, based on a previous training period, is used; hence, the scenario is completely unknown beforehand. Successful tracking has been demonstrated on well-known databases of both indoor and outdoor scenarios, yielding accurate and robust localisations during long-term target merging and occlusions.
Thierry Brouard, Jordi Gonzalez, Caifeng Shan, Massimo Piccardi, & Larry S. Davis. (2014). Special issue on background modeling for foreground detection in real-world dynamic scenes. Machine Vision and Applications, 25(5), 1101–1103.
Abstract: Although background modeling and foreground detection are not mandatory steps for computer vision applications, they may prove useful as they separate the primal objects, usually called "foreground", from the remaining part of the scene, called "background", and permit different algorithmic treatments in video processing applications such as video surveillance, optical motion capture, multimedia applications, teleconferencing and human–computer interfaces. Conventional background modeling methods exploit the temporal variation of each pixel to model the background, and foreground detection is performed using change detection. The last decade witnessed very significant publications on background modeling, but recently new applications in which the background is not static, such as recordings taken from mobile devices or Internet videos, need new developments to robustly detect moving objects in challenging environments. Thus, effective methods for robustness to deal both with dynamic backgrounds, i…
Yecong Wan, Yuanshuo Cheng, Mingwen Shao, & Jordi Gonzalez. (2022). Image rain removal and illumination enhancement done in one go. Knowledge-Based Systems, 252, 109244.
Abstract: Rain removal plays an important role in the restoration of degraded images. Recently, CNN-based methods have achieved remarkable success. However, these approaches neglect that real-world rain often appears under low-light conditions, which further degrade image quality and thereby hinder the restoration mission. It is therefore indispensable to jointly remove rain and enhance illumination for real-world rain image restoration. To this end, we propose a novel spatially-adaptive network, dubbed SANet, which can remove rain and enhance illumination in one go under the guidance of a degradation mask. Meanwhile, to fully utilize negative samples, a contrastive loss is proposed to preserve more natural textures and consistent illumination. In addition, we present a new synthetic dataset, named DarkRain, to boost the development of rain image restoration algorithms in practical scenarios. DarkRain not only contains different degrees of rain but also considers different lighting conditions, simulating real-world rainfall scenarios more realistically. SANet is extensively evaluated on the proposed dataset and attains new state-of-the-art performance against other combined methods. Moreover, after a simple transformation, our SANet surpasses existing state-of-the-art algorithms in both rain removal and low-light image enhancement.