|
Wenjuan Gong, Y. Huang, Jordi Gonzalez, & Liang Wang. (2015). An Effective Solution to Double Counting Problem in Human Pose Estimation.
Abstract: The mixture of parts model has been successfully applied to solve the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for pedestrian detection. Even in the era of massive applications of deep learning techniques, the mixture of parts model is still effective in solving certain problems, especially when the number of training samples is limited. In this paper, we consider using the mixture of parts model for pose estimation, wherein a tree structure is utilized for representing relations between connected body parts. This strategy facilitates training and inference of the model but suffers from the double counting problem, where one detected body part is counted twice due to the lack of constraints among unconnected body parts. To solve this problem, we propose a generalized solution in which various part attributes are captured by multiple features so as to avoid the double counting problem. Qualitative and quantitative experimental results on a publicly available dataset demonstrate the effectiveness of our proposed method.
Keywords: Pose estimation; double counting problem; mixture of parts model
|
|
|
Wenjuan Gong, Xuena Zhang, Jordi Gonzalez, Andrews Sobral, Thierry Bouwmans, Changhe Tu, et al. (2016). Human Pose Estimation from Monocular Images: A Comprehensive Survey. SENS - Sensors, 16(12), 1966.
Abstract: Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.
Keywords: human pose estimation; human body models; generative methods; discriminative methods; top-down methods; bottom-up methods
|
|
|
Wenjuan Gong, W. Zhang, Jordi Gonzalez, Y. Ren, & Z. Li. (2015). Enhanced Asymmetric Bilinear Model for Face Recognition. IJDSN - International Journal of Distributed Sensor Networks, Article ID 218514.
Abstract: Bilinear models have been successfully applied to separate two factors, for example, pose variances and different identities in face recognition problems. The asymmetric model is a type of bilinear model which models a system in the most concise way, but few works have explored the application of asymmetric bilinear models to face recognition under illumination changes. In this work, we propose an enhanced asymmetric model for illumination-robust face recognition. Instead of initializing the factor probabilities randomly, we initialize them with the nearest neighbor method and optimize them for the test data. In addition, we update the factor model to be identified. We validate the proposed method on a designed data sample and the extended Yale B dataset. The experimental results show that the enhanced asymmetric models give promising results and good recognition accuracies.
|
|
|
Wenjuan Gong, Jordi Gonzalez, & Xavier Roca. (2012). Human Action Recognition based on Estimated Weak Poses. EURASIPJ - EURASIP Journal on Advances in Signal Processing.
Abstract: We present a novel method for human action recognition (HAR) based on estimated poses from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space while still keeping the most discriminative information for a given pose. With predicted poses from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag of words pipeline by building the vocabulary based on the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion is shown to be more efficient and robust against the inherent challenges of action recognition. Moreover, since the ordering of poses is discriminative for action recognition, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help improve action recognition accuracies. The proposed method is scene-independent and is comparable with the state-of-the-art methods.
|
|
|
Wenjuan Gong, Jordi Gonzalez, Joao Manuel R. S. Tavares, & Xavier Roca. (2012). A New Image Dataset on Human Interactions. In 7th Conference on Articulated Motion and Deformable Objects (Vol. 7378, pp. 204–209). Springer Berlin Heidelberg.
Abstract: This article describes a new still-image dataset dedicated to interactions between people. Human action recognition from still images has recently been a hot topic, but most existing work covers actions performed by a single person, such as running, walking, riding a bike, or phoning, with no interactions between people in one image. The dataset collected in this paper concentrates on human interaction between two people, aiming to explore this new topic in the research area of action recognition from still images.
|
|
|
Wenjuan Gong, Jürgen Brauer, Michael Arens, & Jordi Gonzalez. (2011). Modeling vs. Learning Approaches for Monocular 3D Human Pose Estimation. In 1st IEEE International Workshop on Performance Evaluation on Recognition of Human Actions and Pose Estimation Methods.
|
|
|
Wenjuan Gong, Andrew Bagdanov, Xavier Roca, & Jordi Gonzalez. (2010). Automatic Key Pose Selection for 3D Human Action Recognition. In 6th International Conference on Articulated Motion and Deformable Objects (Vol. 6169, pp. 290–299). Springer Verlag.
Abstract: This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as histograms of key-pose occurrences over the course of a video sequence. Actions are first represented as 3D poses using a sequence of 36 direction cosines corresponding to the angles that 12 joints form with the world coordinate frame in an articulated human body model. These pose representations are then projected to three-dimensional, action-specific principal eigenspaces which we refer to as aSpaces. We introduce a method for key-pose selection based on a local-motion energy optimization criterion and we show that this method is more stable and more resistant to noisy data than other key-pose selection criteria for action recognition.
|
|
|
Wenjuan Gong. (2013). 3D Motion Data aided Human Action Recognition and Pose Estimation (Jordi Gonzalez, & Xavier Roca, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: In this work, we explore human action recognition and pose estimation problems. Different from traditional works that learn from 2D images or video sequences and their annotated output, we seek to solve the problems with additional 3D motion capture information, which helps to fill the gap between 2D image features and human interpretations. We first compare two different schools of approaches commonly used for 3D pose estimation from 2D pose configurations: modeling methods and learning methods. Based on the experimental results and the problems considered, we adopt a learning method for pose estimation in the following approaches. We then establish a framework by adding a module that detects 2D pose configurations from images with varied backgrounds, which widely extends the applicability of the approach. We also seek to estimate 3D poses directly from image features, instead of estimating 2D poses as an intermediate module. We explore a robust input feature which, combined with the proposed distance measure, provides a solution for noisy or corrupted inputs. We further utilize the above method to estimate weak poses, a concise representation of the original poses obtained by dimensionality reduction, from image features. In the weak pose space, we compute a vocabulary and label action types using a bag of words pipeline. Temporal information of an action is taken into account by treating several consecutive frames as a single unit when computing the vocabulary and histogram assignments.
|
|
|
Wenjuan Gong. (2009). Action priors for human pose tracking by particle filter. Master's thesis, Bellaterra, Barcelona.
|
|
|
Weiqing Min, Shuqiang Jiang, Jitao Sang, Huayang Wang, Xinda Liu, & Luis Herranz. (2017). Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration. TMM - IEEE Transactions on Multimedia, 19(5), 1100–1113.
Abstract: This paper considers the problem of recipe-oriented image-ingredient correlation learning with multi-attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both midlevel visual representation between images and visible ingredients and nonvisual representation. Furthermore, in order to utilize different attributes to improve the intermodality correlation, M3TDBN incorporates multitask learning to make different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations for three extended novel applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset and the evaluation results have validated the effectiveness of the proposed approach.
|
|
|
Weijia Wu, Yuzhong Zhao, Zhuang Li, Jiahong Li, Mike Zheng Shou, Umapada Pal, et al. (2023). ICDAR 2023 Competition on Video Text Reading for Dense and Small Text. In 17th International Conference on Document Analysis and Recognition (Vol. 14188, pp. 405–419). LNCS.
Abstract: Recently, video text detection, tracking, and recognition in natural scenes have become very popular in the computer vision community. However, most existing algorithms and benchmarks focus on common text cases (e.g., normal size and density) and single scenarios, while ignoring extreme video text challenges, i.e., dense and small text in various scenarios. In this competition report, we establish a video text reading benchmark, named DSText, which focuses on the dense and small text reading challenge in videos with various scenarios. Compared with previous datasets, the proposed dataset mainly includes three new challenges: 1) dense video texts, a new challenge for video text spotters; 2) high-proportioned small texts; 3) various new scenarios, e.g., ‘Game’, ‘Sports’, etc. The proposed DSText includes 100 video clips from 12 open scenarios, supporting two tasks, i.e., video text tracking (Task 1) and end-to-end video text spotting (Task 2). During the competition period (opened on 15th February, 2023 and closed on 20th March, 2023), a total of 24 teams participated in the proposed tasks with around 30 valid submissions. In this article, we describe detailed statistical information of the dataset, the tasks, the evaluation protocols, and the result summaries of the ICDAR 2023 DSText competition. Moreover, we hope the benchmark will promote video text research in the community.
Keywords: Video Text Spotting; Small Text; Text Tracking; Dense Text
|
|
|
W. Win, B. Bao, Q. Xu, Luis Herranz, & Shuqiang Jiang. (2019). Editorial Note: Efficient Multimedia Processing Methods and Applications (Vol. 78).
|
|
|
W. Niessen, Antonio Lopez, W. Van Enk, P. Van Roermund, Bart M. Ter Haar Romeny, & M. Viergever. (1997). Multiscale Trabecular Bone Orientation Analysis. In 7th Spanish National Symposium on Pattern Recognition and Image Analysis (SNRFAI'97) (pp. 19–24).
|
|
|
W. Niessen, Antonio Lopez, W. Van Enk, P. Van Roermund, Bart M. Ter Haar Romeny, & M. Viergever. (1997). In Vivo Analysis of Trabecular Bone Architecture. In Information Processing in Medical Imaging. IPMI 1997 (Vol. 1230, pp. 435–440). LNCS.
|
|
|
W. Liu, & Josep Llados. (2006). Graphics Recognition. Ten Years Review and Future Perspectives (Vol. 3926). LNCS.
|
|