Records
Author Wenjuan Gong
Title Action priors for human pose tracking by particle filter Type Report
Year 2009 Publication CVC Technical Report Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address
Corporate Author Computer Vision Center Thesis Master's thesis
Publisher Place of Publication Bellaterra, Barcelona Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ Gon2009 Serial 2401
 
Author Wenjuan Gong; Andrew Bagdanov; Xavier Roca; Jordi Gonzalez
Title Automatic Key Pose Selection for 3D Human Action Recognition Type Conference Article
Year 2010 Publication 6th International Conference on Articulated Motion and Deformable Objects Abbreviated Journal
Volume 6169 Issue Pages 290–299
Keywords
Abstract This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as histograms of key-pose occurrences over the course of a video sequence. Actions are first represented as 3D poses using a sequence of 36 direction cosines corresponding to the angles 12 joints form with the world coordinate frame in an articulated human body model. These pose representations are then projected to three-dimensional, action-specific principal eigenspaces which we refer to as aSpaces. We introduce a method for key-pose selection based on a local-motion energy optimization criterion and we show that this method is more stable and more resistant to noisy data than other key-pose selection criteria for action recognition.
Address
Corporate Author Thesis
Publisher Springer Verlag Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-14060-0 Medium
Area Expedition Conference AMDO
Notes ISE Approved no
Call Number DAG @ dag @ GBR2010 Serial 1317
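A minimal sketch of the “bag of poses” representation described in the abstract above, assuming poses are already projected into the action-specific aSpace; the limb layout and the key-pose set are illustrative assumptions, not the authors' implementation (in the paper the key poses are selected with a local-motion energy criterion, which is omitted here).

    import numpy as np

    def direction_cosines(limb_vectors):
        """limb_vectors: (12, 3) limb direction vectors in world coordinates.
        Returns a 36-D descriptor of cosines with the world x, y, z axes."""
        norms = np.linalg.norm(limb_vectors, axis=1, keepdims=True)
        return (limb_vectors / norms).reshape(-1)

    def bag_of_poses(pose_sequence, key_poses):
        """pose_sequence: (T, d) pose descriptors already projected to the aSpace.
        key_poses: (K, d) selected key poses. Returns a normalized K-bin histogram."""
        dists = np.linalg.norm(pose_sequence[:, None, :] - key_poses[None, :, :],
                               axis=2)
        assignments = dists.argmin(axis=1)       # nearest key pose per frame
        hist = np.bincount(assignments, minlength=len(key_poses)).astype(float)
        return hist / hist.sum()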
 
Author Wenjuan Gong; Jürgen Brauer; Michael Arens; Jordi Gonzalez
Title Modeling vs. Learning Approaches for Monocular 3D Human Pose Estimation Type Conference Article
Year 2011 Publication 1st IEEE International Workshop on Performance Evaluation on Recognition of Human Actions and Pose Estimation Methods Abbreviated Journal
Volume Issue Pages
Keywords
Abstract
Address London, United Kingdom
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference PERHAPS
Notes ISE Approved no
Call Number Admin @ si @ GBA2011 Serial 1812
 
Author Wenjuan Gong; Jordi Gonzalez; Joao Manuel R. S. Tavares; Xavier Roca
Title A New Image Dataset on Human Interactions Type Conference Article
Year 2012 Publication 7th Conference on Articulated Motion and Deformable Objects Abbreviated Journal
Volume 7378 Issue Pages 204-209
Keywords
Abstract This article describes a new still-image dataset dedicated to interactions between people. Human action recognition from still images has recently become a hot topic, but most existing work addresses actions performed by a single person, such as running, walking, riding bikes or phoning, with no interaction between people in a single image. The dataset collected in this paper concentrates on interactions between two people, aiming to explore this new topic in the research area of action recognition from still images.
Address Mallorca
Corporate Author Thesis
Publisher Springer Berlin Heidelberg Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0302-9743 ISBN 978-3-642-31566-4 Medium
Area Expedition Conference AMDO
Notes ISE Approved no
Call Number Admin @ si @ GGT2012 Serial 2030
 
Author Wenjuan Gong; Jordi Gonzalez; Xavier Roca
Title Human Action Recognition based on Estimated Weak Poses Type Journal Article
Year 2012 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ
Volume Issue Pages
Keywords
Abstract We present a novel method for human action recognition (HAR) based on poses estimated from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space that still keeps the most discriminative information for a given pose. With poses predicted from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag-of-words pipeline in which the vocabulary is built from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion proves more efficient and more robust against the inherent challenges of action recognition. Moreover, since the ordering of poses is discriminative for action recognition, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and the assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help to improve action recognition accuracy. The proposed method is scene-independent and comparable with the state of the art.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ GGR2012 Serial 2003
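A minimal sketch of the “weak pose” projection described in the abstract above, under the assumption that it amounts to a plain PCA of full 3D pose descriptors; the target dimensionality used here is an illustrative choice, not the paper's setting.

    import numpy as np

    def fit_weak_pose_basis(train_poses, n_dims=5):
        """train_poses: (N, D) full 3D pose descriptors. Returns (mean, basis),
        where the basis rows are the top principal directions."""
        mean = train_poses.mean(axis=0)
        _, _, vt = np.linalg.svd(train_poses - mean, full_matrices=False)
        return mean, vt[:n_dims]

    def to_weak_pose(pose, mean, basis):
        """Project one D-dimensional pose onto the low-dimensional weak-pose space."""
        return basis @ (pose - mean)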
 
Author Wenjuan Gong; W. Zhang; Jordi Gonzalez; Y. Ren; Z. Li
Title Enhanced Asymmetric Bilinear Model for Face Recognition Type Journal Article
Year 2015 Publication International Journal of Distributed Sensor Networks Abbreviated Journal IJDSN
Volume Issue Pages Article ID 218514
Keywords
Abstract Bilinear models have been successfully applied to separate two factors, for example, pose variation and identity, in face recognition problems. The asymmetric model is a type of bilinear model that describes a system in the most concise way, but few works have explored its application to face recognition under illumination changes. In this work, we propose an enhanced asymmetric model for illumination-robust face recognition. Instead of initializing the factor probabilities randomly, we initialize them with a nearest-neighbor method and optimize them for the test data. In addition, we update the factor model to be identified. We validate the proposed method on a designed data sample and on the Extended Yale B dataset. The experimental results show that the enhanced asymmetric model gives promising results and good recognition accuracies.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.063; 600.078 Approved no
Call Number Admin @ si @ GZG2015 Serial 2592
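A hedged sketch of how classification with an asymmetric bilinear model can proceed, in the spirit of the abstract above: each illumination condition ("style") has a linear map A_s, and an observation y is explained as y ≈ A_s b with an identity ("content") vector b. The least-squares content estimate and the nearest-neighbour gallery match are assumptions about the general approach, not the paper's exact enhanced procedure.

    import numpy as np

    def best_style_and_content(y, style_maps):
        """style_maps: list of (D, J) matrices A_s, one per illumination condition.
        Returns (best style index, content vector) minimizing ||y - A_s b||."""
        best = None
        for s, A in enumerate(style_maps):
            b = np.linalg.pinv(A) @ y            # least-squares content estimate
            err = np.linalg.norm(y - A @ b)
            if best is None or err < best[0]:
                best = (err, s, b)
        return best[1], best[2]

    def identify(y, style_maps, gallery):
        """gallery: (N_id, J) known identity (content) vectors."""
        _, b = best_style_and_content(y, style_maps)
        return int(np.argmin(np.linalg.norm(gallery - b, axis=1)))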
 
Author Wenjuan Gong; Xuena Zhang; Jordi Gonzalez; Andrews Sobral; Thierry Bouwmans; Changhe Tu; El-hadi Zahzah
Title Human Pose Estimation from Monocular Images: A Comprehensive Survey Type Journal Article
Year 2016 Publication Sensors Abbreviated Journal SENS
Volume 16 Issue 12 Pages 1966
Keywords human pose estimation; human body models; generative methods; discriminative methods; top-down methods; bottom-up methods
Abstract Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.098; 600.119 Approved no
Call Number Admin @ si @ GZG2016 Serial 2933
 
Author Wenjuan Gong; Y. Huang; Jordi Gonzalez; Liang Wang
Title An Effective Solution to Double Counting Problem in Human Pose Estimation Type Miscellaneous
Year 2015 Publication Arxiv Abbreviated Journal
Volume Issue Pages
Keywords Pose estimation; double counting problem; mixture of parts model
Abstract The mixture of parts model has been successfully applied to the 2D human pose estimation problem, either as an explicitly trained body part model or as latent variables for pedestrian detection. Even in the era of massive application of deep learning techniques, the mixture of parts model is still effective for certain problems, especially when the number of training samples is limited. In this paper, we consider using the mixture of parts model for pose estimation, where a tree structure represents the relations between connected body parts. This strategy facilitates training and inference of the model, but suffers from the double counting problem, in which one detected body part is counted twice due to the lack of constraints among unconnected body parts. To solve this problem, we propose a generalized solution in which various part attributes are captured by multiple features so as to avoid double counting. Qualitative and quantitative experimental results on a publicly available dataset demonstrate the effectiveness of the proposed method.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.078 Approved no
Call Number Admin @ si @ GHG2015 Serial 2590
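A minimal sketch of the tree-structured inference that the mixture-of-parts formulation in the abstract above relies on: unary appearance scores per part, pairwise terms along tree edges, and dynamic programming from the leaves to the root. The data layout is an assumption for illustration; the double-counting problem arises precisely because unconnected parts never constrain each other in this recursion.

    import numpy as np

    def tree_map_inference(unary, edges, pairwise):
        """unary: dict part -> (L_part,) appearance scores over candidate locations.
        edges: list of (child, parent) pairs ordered from the leaves towards the
        root (part 0), so a child's subtree is complete before it sends its message.
        pairwise: dict (child, parent) -> (L_child, L_parent) compatibility scores.
        Returns the best total configuration score (argmax decoding omitted)."""
        msg = {p: np.asarray(s, dtype=float).copy() for p, s in unary.items()}
        for child, parent in edges:
            # best child location for every candidate parent location
            contrib = (msg[child][:, None] + pairwise[(child, parent)]).max(axis=0)
            msg[parent] = msg[parent] + contrib
        return float(msg[0].max())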
 
Author Wenjuan Gong; Yue Zhang; Wei Wang; Peng Cheng; Jordi Gonzalez
Title Meta-MMFNet: Meta-learning-based Multi-model Fusion Network for Micro-expression Recognition Type Journal Article
Year 2023 Publication ACM Transactions on Multimedia Computing, Communications, and Applications Abbreviated Journal TMCCA
Volume 20 Issue 2 Pages 1–20
Keywords
Abstract Despite its wide applications in criminal investigations and clinical communications with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class imbalance problem. In this study, we proposed a meta-learning-based multi-model fusion network (Meta-MMFNet) to solve the existing problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. The frame difference and optical flow features were fused, deep features were extracted from the fused feature, and finally, within the meta-learning-based framework, a weighted-sum model fusion method was applied for micro-expression classification. Meta-MMFNet achieved better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ GZW2023 Serial 3862
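A hedged sketch of the weighted-sum model fusion mentioned in the abstract above: per-model class probabilities are combined with learned weights. The softmax normalization of the weights is an illustrative assumption, not necessarily the paper's exact scheme.

    import numpy as np

    def fuse_predictions(prob_list, weights):
        """prob_list: list of (N, C) class-probability arrays, one per model.
        weights: (M,) raw fusion weights, normalized here with a softmax.
        Returns the (N, C) fused class probabilities."""
        w = np.asarray(weights, dtype=float)
        w = np.exp(w - w.max())
        w = w / w.sum()                                   # one weight per model
        return sum(wi * p for wi, p in zip(w, prob_list))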
 
Author Wenjuan Gong; Zhang Yue; Wei Wang; Cheng Peng; Jordi Gonzalez
Title Meta-MMFNet: Meta-Learning Based Multi-Model Fusion Network for Micro-Expression Recognition Type Journal Article
Year 2022 Publication ACM Transactions on Multimedia Computing, Communications, and Applications Abbreviated Journal ACMTMC
Volume Issue Pages
Keywords Feature Fusion; Model Fusion; Meta-Learning; Micro-Expression Recognition
Abstract Despite its wide applications in criminal investigations and clinical communications with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class imbalance problem. In this study, we proposed a meta-learning-based multi-model fusion network (Meta-MMFNet) to solve the existing problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. The frame difference and optical flow features were fused, deep features were extracted from the fused feature, and finally, within the meta-learning-based framework, a weighted-sum model fusion method was applied for micro-expression classification. Meta-MMFNet achieved better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
Address May 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE; 600.157 Approved no
Call Number Admin @ si @ GYW2022 Serial 3692
 
Author Wenlong Deng; Yongli Mou; Takahiro Kashiwa; Sergio Escalera; Kohei Nagai; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
Title Vision based Pixel-level Bridge Structural Damage Detection Using a Link ASPP Network Type Journal Article
Year 2020 Publication Automation in Construction Abbreviated Journal AC
Volume 110 Issue Pages 102973
Keywords Semantic image segmentation; Deep learning
Abstract Structural Health Monitoring (SHM) has greatly benefited from computer vision. Recently, deep learning approaches have been widely used to accurately estimate the state of deterioration of infrastructure. In this work, we focus on the problem of bridge surface structural damage detection, such as delamination and rebar exposure. It is well known that the quality of a deep learning model is highly dependent on the quality of the training dataset. Bridge damage detection, our application domain, has the following main challenges: (i) labeling the damages requires knowledgeable civil engineering professionals, which makes it difficult to collect a large annotated dataset; (ii) the damage area could be very small, whereas the background area is large, which creates an unbalanced training environment; (iii) due to the difficulty of exactly determining the extent of the damage, there is often variation among different labelers who perform pixel-wise labeling. In this paper, we propose a novel model for bridge structural damage detection to address the first two challenges. Following the idea of an atrous spatial pyramid pooling (ASPP) module, a novel network for bridge damage detection is designed. Further, we introduce a weight-balanced Intersection over Union (IoU) loss function to achieve accurate segmentation on a highly unbalanced small dataset. The experimental results show that (i) the IoU loss function improves the overall performance of damage detection, as compared to cross entropy loss or focal loss, and (ii) the proposed model has a better ability to detect a minority class than other light segmentation networks.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; no proj Approved no
Call Number Admin @ si @ DMK2020 Serial 3314
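A minimal sketch of what a class-weighted soft IoU loss of the kind described in the abstract above can look like; the exact weighting scheme used in the paper is not specified here, so the per-class weights are an assumption.

    import numpy as np

    def weighted_soft_iou_loss(probs, targets, class_weights, eps=1e-6):
        """probs, targets: (N, C, H, W) predicted class probabilities and one-hot
        ground truth. class_weights: (C,) weights, larger for rare damage classes."""
        inter = (probs * targets).sum(axis=(0, 2, 3))
        union = (probs + targets - probs * targets).sum(axis=(0, 2, 3))
        iou = (inter + eps) / (union + eps)              # soft IoU per class
        w = np.asarray(class_weights, dtype=float)
        return float((w * (1.0 - iou)).sum() / w.sum())  # weighted IoU loss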
 
Author Wenwen Fu; Zhihong An; Wendong Huang; Haoran Sun; Wenjuan Gong; Jordi Gonzalez
Title A Spatio-Temporal Spotting Network with Sliding Windows for Micro-Expression Detection Type Journal Article
Year 2023 Publication Electronics Abbreviated Journal ELEC
Volume 12 Issue 18 Pages 3947
Keywords micro-expression spotting; sliding window; key frame extraction
Abstract Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)2 database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)2 and 23.98% for the SAMM Long Videos according to overall F-scores.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes ISE Approved no
Call Number Admin @ si @ FAH2023 Serial 3864
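A minimal sketch of the per-frame spotting formulation in the abstract above: frame-level scores are smoothed with a sliding window and consecutive frames above a threshold are merged into detected intervals. The window size and threshold are illustrative assumptions, not the paper's settings.

    import numpy as np

    def spot_intervals(frame_scores, window=9, threshold=0.5):
        """frame_scores: (T,) per-frame micro-expression probabilities.
        Returns a list of (start, end) frame indices, end inclusive."""
        kernel = np.ones(window) / window
        smoothed = np.convolve(frame_scores, kernel, mode="same")  # sliding window
        active = smoothed >= threshold
        intervals, start = [], None
        for t, is_active in enumerate(active):
            if is_active and start is None:
                start = t
            elif not is_active and start is not None:
                intervals.append((start, t - 1))
                start = None
        if start is not None:
            intervals.append((start, len(active) - 1))
        return intervals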
 
Author Wenwen Yu; Chengquan Zhang; Haoyu Cao; Wei Hua; Bohan Li; Huang Chen; Mingyu Liu; Mingrui Chen; Jianfeng Kuang; Mengjun Cheng; Yuning Du; Shikun Feng; Xiaoguang Hu; Pengyuan Lyu; Kun Yao; Yuechen Yu; Yuliang Liu; Wanxiang Che; Errui Ding; Cheng-Lin Liu; Jiebo Luo; Shuicheng Yan; Min Zhang; Dimosthenis Karatzas; Xing Sun; Jingdong Wang; Xiang Bai
Title ICDAR 2023 Competition on Structured Text Extraction from Visually-Rich Document Images Type Conference Article
Year 2023 Publication 17th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume 14188 Issue Pages 536–552
Keywords
Abstract Structured text extraction is one of the most valuable and challenging application directions in the field of Document AI. However, the scenarios of past benchmarks are limited, and the corresponding evaluation protocols usually focus on the submodules of the structured text extraction scheme. In order to eliminate these problems, we organized the ICDAR 2023 competition on Structured text extraction from Visually-Rich Document images (SVRD). We set up two tracks for SVRD: Track 1, HUST-CELL, which aims to evaluate the end-to-end performance of Complex Entity Linking and Labeling; and Track 2, Baidu-FEST, which focuses on evaluating the performance and generalization of zero-shot/few-shot structured text extraction from an end-to-end perspective. Compared to current document benchmarks, our two competition tracks greatly enrich the scenarios and contain more than 50 types of visually-rich document images (mainly from actual enterprise applications). The competition opened on 30th December, 2022 and closed on 24th March, 2023. There were 35 participants and 91 valid submissions for Track 1, and 15 participants and 26 valid submissions for Track 2. In this report we present the motivation, competition datasets, task definition, evaluation protocol, and submission summaries. According to the performance of the submissions, we believe there is still a large gap to the expected information extraction performance for complex and zero-shot scenarios. It is hoped that this competition will attract many researchers in the fields of CV and NLP, and bring some new thoughts to the field of Document AI.
Address San Jose; CA; USA; August 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG Approved no
Call Number Admin @ si @ YZC2023 Serial 3896
 
Author Wenwen Yu; Mingyu Liu; Mingrui Chen; Ning Lu; Yinlong We; Yuliang Liu; Dimosthenis Karatzas; Xiang Bai
Title ICDAR 2023 Competition on Reading the Seal Title Type Conference Article
Year 2023 Publication 17th International Conference on Document Analysis and Recognition Abbreviated Journal
Volume 14188 Issue Pages 522–535
Keywords
Abstract Reading seal title text is a challenging task due to the variable shapes of seals, curved text, background noise, and overlapped text. However, this important element is commonly found in official and financial scenarios, and has not received the attention it deserves in the field of OCR technology. To promote research in this area, we organized the ICDAR 2023 competition on reading the seal title (ReST), which included two tasks: seal title text detection (Task 1) and end-to-end seal title recognition (Task 2). We constructed a dataset of 10,000 real seal data, covering the most common classes of seals, and labeled all seal title texts with text polygons and text contents. The competition opened on 30th December, 2022 and closed on 20th March, 2023. The competition attracted 53 participants and received 135 submissions from academia and industry, including 28 participants and 72 submissions for Task 1, and 25 participants and 63 submissions for Task 2, which demonstrated significant interest in this challenging task. In this report, we present an overview of the competition, including the organization, challenges, and results. We describe the dataset and tasks, and summarize the submissions and evaluation results. The results show that significant progress has been made in the field of seal title text reading, and we hope that this competition will inspire further research and development in this important area of OCR technology.
Address San Jose; CA; USA; August 2023
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title LNCS
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference ICDAR
Notes DAG Approved no
Call Number Admin @ si @ YLC2023 Serial 3897
 
Author X. Binefa; F. Javier Sanchez; F.X. Perez; Xavier Roca; Jordi Vitria; Juan J. Villanueva
Title Using defocus in optical inspection of integrated circuits Type Conference Article
Year 1993 Publication Institute of Physics Conferences Series Abbreviated Journal
Volume 135 Issue 10 Pages 389-392
Keywords
Abstract
Address Bristol
Corporate Author Thesis
Publisher Institute of Physics Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MV;OR;ISE Approved no
Call Number BCNPCL @ bcnpcl @ BSP1993; IAM @ iam @ BSP1993 Serial 151