Author Egils Avots; Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Baro; Paul Pallin; Gholamreza Anbarjafari
  Title From 2D to 3D geodesic-based garment matching Type Journal Article
  Year 2019 Publication Multimedia Tools and Applications Abbreviated Journal MTAP  
  Volume 78 Issue 18 Pages 25829–25853
  Keywords Shape matching; Geodesic distance; Texture mapping; RGBD image processing; Gaussian mixture model  
  Abstract A new approach for 2D to 3D garment retexturing is proposed based on Gaussian mixture models and thin plate splines (TPS). An automatically segmented garment of an individual is matched to a new source garment and rendered, resulting in augmented images in which the target garment has been retextured using the texture of the source garment. We divide the problem into garment boundary matching based on Gaussian mixture models and then interpolate inner points using surface topology extracted through geodesic paths, which leads to a more realistic result than standard approaches. We evaluated and compared our system quantitatively by root mean square error (RMS) and qualitatively using the mean opinion score (MOS), showing the benefits of the proposed methodology on our gathered dataset.  
  Notes HuPBA; ISE; 600.098; 600.119; 602.133 Approved no  
  Call Number Admin @ si @ AME2019 Serial 3317  
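The abstract above relies on geodesic paths over the garment surface to place inner points. As a minimal, hypothetical sketch (not the authors' code), geodesic distances on a discretised region can be computed with Dijkstra's algorithm over 4-connected pixels:

```python
import heapq

def geodesic_distances(mask, source):
    """Dijkstra over 4-connected pixels inside a binary region.

    mask: a set of (row, col) pixels belonging to the region -- a toy
    stand-in for the segmented garment in the paper.
    source: (row, col) start pixel.
    Returns a dict mapping each reachable pixel to its geodesic distance.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if nxt in mask and d + 1.0 < dist.get(nxt, float("inf")):
                dist[nxt] = d + 1.0
                heapq.heappush(heap, (d + 1.0, nxt))
    return dist

# Toy L-shaped region: the geodesic distance follows the region,
# not the straight line through empty space.
region = {(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)}
d = geodesic_distances(region, (0, 0))
```

Here the geodesic distance from (0, 0) to (2, 2) is 4 (around the L), whereas the Euclidean distance would cut across pixels outside the region.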

 
Author Wenwen Fu; Zhihong An; Wendong Huang; Haoran Sun; Wenjuan Gong; Jordi Gonzalez
  Title A Spatio-Temporal Spotting Network with Sliding Windows for Micro-Expression Detection Type Journal Article
  Year 2023 Publication Electronics Abbreviated Journal ELEC  
  Volume 12 Issue 18 Pages 3947
  Keywords micro-expression spotting; sliding window; key frame extraction  
  Abstract Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)2 database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)2 and 23.98% for the SAMM Long Videos according to overall F-scores.  
  Notes ISE Approved no  
  Call Number Admin @ si @ FAH2023 Serial 3864  
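The paper above formulates spotting as per-frame classification combined with a sliding-window mechanism. A toy illustration of the windowing step, turning per-frame scores into spotted intervals (window length and threshold are invented for the example, not taken from the paper):

```python
def spot_intervals(frame_scores, window=3, threshold=0.5):
    """Slide a fixed-length window over per-frame classifier scores and
    report intervals whose mean score exceeds a threshold.  Overlapping
    positive windows are merged into a single spotted interval.
    """
    intervals = []
    for start in range(len(frame_scores) - window + 1):
        mean = sum(frame_scores[start:start + window]) / window
        if mean > threshold:
            end = start + window - 1
            if intervals and start <= intervals[-1][1] + 1:
                intervals[-1] = (intervals[-1][0], end)  # merge overlap
            else:
                intervals.append((start, end))
    return intervals

# Frames 2-4 carry high scores; the merged spotted interval covers
# every frame touched by a positive window.
scores = [0.1, 0.2, 0.9, 0.8, 0.7, 0.1, 0.0]
spotted = spot_intervals(scores)
```

A real spotter would feed spatio-temporal network scores into this loop rather than hand-written numbers, but the merge-and-threshold structure is the same.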

 
Author Dani Rowe; Jordi Gonzalez; Marco Pedersoli; Juan J. Villanueva
  Title On Tracking Inside Groups Type Journal Article
  Year 2010 Publication Machine Vision and Applications Abbreviated Journal MVA  
  Volume 21 Issue 2 Pages 113–127
  Abstract This work develops a new architecture for multiple-target tracking in unconstrained dynamic scenes, which consists of a detection level which feeds a two-stage tracking system. A remarkable characteristic of the system is its ability to track several targets while they group and split, without using 3D information. Thus, special attention is given to the feature-selection and appearance-computation modules, and to those modules involved in tracking through groups. The system aims to work as a stand-alone application in complex and dynamic scenarios. No a-priori knowledge about either the scene or the targets, based on a previous training period, is used. Hence, the scenario is completely unknown beforehand. Successful tracking has been demonstrated in well-known databases of both indoor and outdoor scenarios. Accurate and robust localisations have been yielded during long-term target merging and occlusions.  
  Publisher Springer-Verlag
  ISSN 0932-8092
  Notes ISE Approved no  
  Call Number ISE @ ise @ RGP2010 Serial 1158  
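Tracking targets through grouping and splitting rests on associating new detections with existing tracks frame to frame. A deliberately simplified 1-D nearest-neighbour association sketch (the paper's two-stage tracker is far richer; the gate value here is illustrative only):

```python
def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour association between existing track
    positions and new detections (1-D positions for brevity).  Pairs
    farther apart than `gate` stay unmatched -- a stand-in for the
    gating used in typical multi-target trackers.
    """
    pairs = []
    free = set(range(len(detections)))
    used_tracks = set()
    # Enumerate candidate pairs sorted by distance, assign greedily.
    cands = sorted(
        (abs(t - detections[j]), i, j)
        for i, t in enumerate(tracks)
        for j in range(len(detections))
    )
    for dist, i, j in cands:
        if dist <= gate and i not in used_tracks and j in free:
            pairs.append((i, j))
            used_tracks.add(i)
            free.discard(j)
    return pairs, sorted(free)

# Two tracks, three detections; the far-away detection stays unmatched
# and would typically seed a new track.
tracks = [0.0, 10.0]
detections = [9.5, 0.4, 30.0]
pairs, unmatched = associate(tracks, detections)
```

Greedy assignment is a common baseline; production trackers usually replace it with Hungarian assignment and an appearance term.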

 
Author Jasper Uijlings; Koen E.A. van de Sande; Theo Gevers; Arnold Smeulders
  Title Selective Search for Object Recognition Type Journal Article
  Year 2013 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
  Volume 104 Issue 2 Pages 154–171
  Abstract This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 % recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html).  
  ISSN 0920-5691
  Notes ALTRES;ISE Approved no  
  Call Number Admin @ si @ USG2013 Serial 2362  
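Selective search's core loop is hierarchical grouping: start from an oversegmentation, repeatedly merge the most similar pair of regions, and keep every intermediate region as a candidate object location. A toy box-level sketch, using a crude "wasted area" criterion in place of the paper's colour, texture, size, and fill similarities:

```python
def merge(a, b):
    """Bounding box of two boxes given as (x1, y1, x2, y2)."""
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def hierarchical_grouping(boxes):
    """Greedily merge the pair of boxes whose union wastes the least
    area, keeping every intermediate box as a candidate location --
    a toy stand-in for selective search's region similarities.
    """
    regions = list(boxes)
    proposals = list(boxes)
    while len(regions) > 1:
        i, j = min(
            ((i, j) for i in range(len(regions)) for j in range(i + 1, len(regions))),
            key=lambda ij: area(merge(regions[ij[0]], regions[ij[1]]))
                           - area(regions[ij[0]]) - area(regions[ij[1]]),
        )
        merged = merge(regions[i], regions[j])
        regions = [r for k, r in enumerate(regions) if k not in (i, j)]
        regions.append(merged)
        proposals.append(merged)
    return proposals

# Two adjacent boxes merge first (zero wasted area); the final merge
# spans the whole scene, so proposals cover all scales.
boxes = [(0, 0, 2, 2), (2, 0, 4, 2), (10, 10, 12, 12)]
props = hierarchical_grouping(boxes)
```

The real algorithm operates on pixel-level segments and diversifies over several colour spaces and similarity combinations; only the greedy merge-and-collect structure is reproduced here.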

 
Author Bhaskar Chakraborty; Andrew Bagdanov; Jordi Gonzalez; Xavier Roca
  Title Human Action Recognition Using an Ensemble of Body-Part Detectors Type Journal Article
  Year 2013 Publication Expert Systems Abbreviated Journal EXSY  
  Volume 30 Issue 2 Pages 101–114
  Keywords Human action recognition; body-part detection; hidden Markov model
  Abstract This paper describes an approach to human action recognition based on a probabilistic optimization model of body parts using hidden Markov model (HMM). Our method is able to distinguish between similar actions by only considering the body parts having major contribution to the actions, for example, legs for walking, jogging and running; arms for boxing, waving and clapping. We apply HMMs to model the stochastic movement of the body parts for action recognition. The HMM construction uses an ensemble of body-part detectors, followed by grouping of part detections, to perform human identification. Three example-based body-part detectors are trained to detect three components of the human body: the head, legs and arms. These detectors cope with viewpoint changes and self-occlusions through the use of ten sub-classifiers that detect body parts over a specific range of viewpoints. Each sub-classifier is a support vector machine trained on features selected for the discriminative power for each particular part/viewpoint combination. Grouping of these detections is performed using a simple geometric constraint model that yields a viewpoint-invariant human detector. We test our approach on three publicly available action datasets: the KTH dataset, Weizmann dataset and HumanEva dataset. Our results illustrate that with a simple and compact representation we can achieve robust recognition of human actions comparable to the most complex, state-of-the-art methods.  
  Notes ISE Approved no  
  Call Number Admin @ si @ CBG2013 Serial 1809  
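The HMMs above score sequences of body-part observations per action class; classification then picks the class whose model assigns the highest likelihood. A minimal forward-algorithm sketch with invented toy parameters (two states, two observation symbols; not the paper's models):

```python
def forward_likelihood(obs, pi, A, B):
    """Forward algorithm: likelihood of an observation sequence under a
    discrete HMM.  pi: initial state probabilities, A[i][j]: transition
    probabilities, B[i][o]: emission probabilities.
    """
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(len(pi))) * B[j][o]
            for j in range(len(pi))
        ]
    return sum(alpha)

# Toy two-state model; in the paper's setting, each action class has
# its own HMM and the symbols would encode body-part configurations.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
lik = forward_likelihood([0, 1, 0], pi, A, B)
```

In practice one works in log space to avoid underflow on long sequences; the plain-probability form above keeps the recursion easy to follow.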