A. Pujol, & Juan J. Villanueva. (2002). A Supervised Modification of the Hausdorff Distance for Visual Shape Classification. International Journal of Pattern Recognition and Artificial Intelligence, 349–359.
V. Kober, Mikhail Mozerov, J. Alvarez-Borrego, & I.A. Ovseyevich. (2006). Adaptive Correlation Filters for Pattern Recognition. Pattern Recognition and Image Analysis, 425–431.
Abstract: Adaptive correlation filters based on synthetic discriminant functions (SDFs) for reliable pattern recognition are proposed. A given value of discrimination capability can be achieved by adapting a SDF filter to the input scene. This can be done by iterative training. Computer simulation results obtained with the proposed filters are compared with those of various correlation filters in terms of recognition performance.
Keywords: Pattern recognition, Correlation filters, Adaptive filters
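The classic (non-adaptive) synthetic discriminant function underlying the filters in this paper can be illustrated in a few lines. This is a minimal sketch of the standard linear SDF solution only, not the authors' iterative adaptation scheme; the training matrix and target vector are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: each column is one flattened training image.
X = rng.normal(size=(64, 3))      # 64-pixel images, 3 training views
u = np.array([1.0, 1.0, 1.0])     # desired correlation-peak value per view

# Classic SDF filter: h = X (X^T X)^{-1} u, so that X^T h = u exactly.
h = X @ np.linalg.solve(X.T @ X, u)

# Each training image now correlates to its prescribed peak value.
peaks = X.T @ h
print(np.round(peaks, 6))
```

The adaptive variant proposed in the paper would iteratively retrain such a filter against the input scene until a target discrimination capability is reached.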
A. Diplaros, N. Vlassis, & Theo Gevers. (2007). A Spatially Constrained Generative Model and an EM Algorithm for Image Segmentation. IEEE Transactions on Neural Networks, 798–808.
Eduard Vazquez, Theo Gevers, M. Lucassen, Joost Van de Weijer, & Ramon Baldrich. (2010). Saliency of Color Image Derivatives: A Comparison between Computational Models and Human Perception. Journal of the Optical Society of America A, 27(3), 613–621.
Abstract: In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The computational methods are evaluated on bottom-up saliency in a psychophysical experiment, and on a more complex task of salient object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model and that the proposed methods are significantly better in predicting color saliency (with a human-method correspondence up to 74.75% and an observer agreement of 86.8%) than state-of-the-art models. Furthermore, results from salient object detection confirm that an early fusion of color and contrast provides accurate performance to compute visual saliency with a hit rate up to 95.2%.
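The core idea of information-theoretic edge saliency can be sketched compactly: rare derivative responses carry more information, so saliency is taken as the negative log-probability of the local edge strength. This is an illustrative simplification assuming a plain RGB finite-difference gradient and a histogram density estimate, not the paper's actual color-derivative model.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32, 3))  # hypothetical RGB image with values in [0, 1]

# Color derivatives: finite-difference gradients per channel,
# combined into one edge-strength map.
gx = np.diff(img, axis=1, prepend=img[:, :1])
gy = np.diff(img, axis=0, prepend=img[:1])
edge = np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

# Information content: saliency(x) = -log p(edge(x)),
# with p estimated from a histogram of edge strengths.
hist, bins = np.histogram(edge, bins=32)
p = hist / hist.sum()
idx = np.clip(np.digitize(edge, bins[1:-1]), 0, 31)
saliency = -np.log(p[idx] + 1e-12)
print(saliency.shape)
```

Under this scheme, pixels whose edge strength falls in sparsely populated histogram bins receive high saliency, matching the intuition that statistically rare color transitions attract attention.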
Zeynep Yucel, Albert Ali Salah, Çetin Meriçli, Tekin Meriçli, Roberto Valenti, & Theo Gevers. (2013). Joint Attention by Gaze Interpolation and Saliency. IEEE Transactions on Cybernetics, 829–842.
Abstract: Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention.
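The regression step described above, interpolating gaze direction from the easier-to-track head pose, can be sketched with a bare-bones Gaussian process posterior mean. This is a one-dimensional toy version with made-up calibration data and kernel hyperparameters, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical calibration data: head-pose yaw (deg) -> gaze yaw (deg).
X_train = np.linspace(-30, 30, 9)[:, None]
y_train = 0.8 * X_train[:, 0] + rng.normal(0, 0.5, 9)

def rbf(a, b, length=10.0):
    # Squared-exponential kernel between two column vectors of inputs.
    return np.exp(-0.5 * (a - b.T) ** 2 / length ** 2)

# GP posterior mean: K(x*, X) (K(X, X) + sigma_n^2 I)^{-1} y
K = rbf(X_train, X_train) + 0.25 * np.eye(9)
X_test = np.array([[5.0]])
mean = rbf(X_test, X_train) @ np.linalg.solve(K, y_train)
print(round(mean.item(), 2))  # close to 0.8 * 5 = 4.0
```

In the paper's setting, such a regressor (or a neural network, which the authors compare against) maps head pose to gaze direction, and the resulting target estimate is then refined with image saliency.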