Mikhail Mozerov, Ariel Amato, Xavier Roca, & Jordi Gonzalez. (2008). Trajectory Occlusion Handling with Multiple View Distance Minimisation Clustering. Optical Engineering, 47(4), 047002. DOI:10.1117/1.2909665.
Mikhail Mozerov, Ariel Amato, Xavier Roca, & Jordi Gonzalez. (2009). Solving the Multi Object Occlusion Problem in a Multiple Camera Tracking System. Pattern Recognition and Image Analysis, 165–171.
Abstract: An efficient method to overcome the adverse effects of occlusion on object tracking is presented. The method is based on matching object paths over time and solves a complex occlusion-induced problem: merging separate segments of the same path.
Mikhail Mozerov. (2013). Constrained Optical Flow Estimation as a Matching Problem. TIP - IEEE Transactions on Image Processing, 22(5), 2044–2055.
Abstract: In general, discretization in the motion vector domain yields an intractable number of labels. In this paper we propose an approach that reduces general optical flow to a constrained matching problem by pre-estimating a 2D disparity labeling map of the desired discrete motion vector function. One goal of this paper is to estimate a coarse distribution of motion vectors and then utilize this distribution as a global constraint for discrete optical flow estimation. This pre-estimation is done with a simple frame-to-frame correlation technique also known as the digital symmetric phase-only filter (SPOF). We discover a strong correlation between the output of the SPOF and the motion vector distribution of the related optical flow. A two-step matching paradigm is applied to optical flow estimation: pixel-accuracy (integer flow) estimation, followed by subpixel-accuracy estimation. The matching problem is solved by global optimization. Experiments on the Middlebury optical flow datasets confirm our intuitive assumptions about the strong correlation between the motion vector distribution of optical flow and the maximal peaks of SPOF outputs. The overall performance of the proposed method is promising and achieves state-of-the-art results on the Middlebury benchmark.
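The SPOF pre-estimation described above is closely related to classical phase-only (phase-correlation) matching. As an illustrative sketch only, not the paper's implementation, the dominant integer motion vector between two frames can be recovered from the peak of the inverse FFT of the phase-normalized cross-power spectrum; the function name below is hypothetical:

```python
import numpy as np

def phase_correlation_peak(f1, f2):
    # Phase-only correlation: keep only the phase of the cross-power
    # spectrum, then locate the correlation peak. This is a generic
    # stand-in for an SPOF-style coarse motion pre-estimation step.
    F1 = np.fft.fft2(f1)
    F2 = np.fft.fft2(f2)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12          # discard magnitude, keep phase
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Convert the (wrapped) peak location into a signed shift (dy, dx)
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

In a full pipeline, the strongest peaks of such a correlation surface would define the reduced label set over which the discrete matching problem is then solved.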
Michael Holte, Bhaskar Chakraborty, Jordi Gonzalez, & Thomas B. Moeslund. (2012). A Local 3D Motion Descriptor for Multi-View Human Action Recognition from 4D Spatio-Temporal Interest Points. J-STSP - IEEE Journal of Selected Topics in Signal Processing, 6(5), 553–565.
Abstract: In this paper, we address the problem of human action recognition in reconstructed 3-D data acquired by multi-camera systems. We contribute to this field by introducing a novel 3-D action recognition approach based on detection of 4-D (3-D space + time) spatio-temporal interest points (STIPs) and local description of 3-D motion features. STIPs are detected in multi-view images and extended to 4-D using 3-D reconstructions of the actors and pixel-to-vertex correspondences of the multi-camera setup. Local 3-D motion descriptors, histogram of optical 3-D flow (HOF3D), are extracted from estimated 3-D optical flow in the neighborhood of each 4-D STIP and made view-invariant. The local HOF3D descriptors are divided using 3-D spatial pyramids to capture and improve the discrimination between arm- and leg-based actions. Based on these pyramids of HOF3D descriptors we build a bag-of-words (BoW) vocabulary of human actions, which is compressed and classified using agglomerative information bottleneck (AIB) and support vector machines (SVMs), respectively. Experiments on the publicly available i3DPost and IXMAS datasets show promising state-of-the-art results and validate the performance and view-invariance of the approach.
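The core of a HOF3D-style descriptor is binning 3-D flow vectors by direction, weighted by magnitude. The following is a minimal, hypothetical sketch of that idea (azimuth/elevation binning), not the paper's exact quantization scheme:

```python
import numpy as np

def flow_direction_histogram(flow, n_azimuth=8, n_elevation=4):
    # flow: (N, 3) array of 3-D motion vectors around one interest point.
    # Bin each vector's direction into an azimuth x elevation grid,
    # weight by magnitude, and L1-normalize the result.
    mag = np.linalg.norm(flow, axis=1)
    az = np.arctan2(flow[:, 1], flow[:, 0])                      # [-pi, pi]
    el = np.arcsin(np.clip(flow[:, 2] / np.maximum(mag, 1e-12), -1.0, 1.0))
    ai = np.clip(((az + np.pi) / (2 * np.pi) * n_azimuth).astype(int),
                 0, n_azimuth - 1)
    ei = np.clip(((el + np.pi / 2) / np.pi * n_elevation).astype(int),
                 0, n_elevation - 1)
    hist = np.zeros((n_azimuth, n_elevation))
    np.add.at(hist, (ai, ei), mag)                               # magnitude-weighted
    return hist.ravel() / max(hist.sum(), 1e-12)
```

In the paper's pipeline, such local histograms are further organized into 3-D spatial pyramids and pooled into a BoW vocabulary.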
Meysam Madadi, Sergio Escalera, Xavier Baro, & Jordi Gonzalez. (2022). End-to-end Global to Local CNN Learning for Hand Pose Recovery in Depth Data. IETCV - IET Computer Vision, 16(1), 50–66.
Abstract: Despite recent advances in 3D pose estimation of human hands, especially thanks to the advent of CNNs and depth cameras, this task is still far from being solved. This is mainly due to the highly non-linear dynamics of fingers, which make hand model training a challenging task. In this paper, we exploit a novel hierarchical tree-like structured CNN, in which branches are trained to become specialized in predefined subsets of hand joints, called local poses. We further fuse local pose features, extracted from hierarchical CNN branches, to learn higher-order dependencies among joints in the final pose by end-to-end training. In addition, the loss function is defined to incorporate appearance and physical constraints on feasible hand motion and deformation. Finally, we introduce a non-rigid data augmentation approach to increase the amount of training depth data. Experimental results suggest that feeding a tree-shaped CNN, specialized in local poses, into a fusion network that models joint correlations and dependencies helps to increase the precision of the final estimations, outperforming state-of-the-art results on the NYU and SyntheticHand datasets.
Keywords: Computer vision; data acquisition; human computer interaction; learning (artificial intelligence); pose estimation
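The global-to-local architecture can be illustrated with a toy model: a shared trunk, one branch per predefined joint subset ("local pose"), and a fusion layer over the concatenated branch outputs. This is a hypothetical minimal sketch of the structural idea, not the paper's network; all sizes and names are invented:

```python
import torch
import torch.nn as nn

class TreePoseNet(nn.Module):
    # Toy tree-structured pose network: trunk -> per-subset branches -> fusion.
    # Each branch regresses 3-D coordinates for its own subset of joints;
    # the fusion layer learns dependencies among all joints end-to-end.
    def __init__(self, subsets=((0, 1, 2), (3, 4, 5)), feat=32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.branches = nn.ModuleList(
            nn.Linear(feat * 16, len(s) * 3) for s in subsets)
        n_joints = sum(len(s) for s in subsets)
        self.fusion = nn.Linear(n_joints * 3, n_joints * 3)

    def forward(self, depth):                    # depth: (B, 1, H, W)
        h = self.trunk(depth)
        local = torch.cat([b(h) for b in self.branches], dim=1)
        return self.fusion(local).view(-1, local.shape[1] // 3, 3)
```

A real model would use deeper branches and a loss with the appearance and physical constraints mentioned in the abstract.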