Author: Francisco Javier Orozco; Ognjen Rudovic; Jordi Gonzalez; Maja Pantic
Title: Hierarchical On-line Appearance-Based Tracking for 3D Head Pose, Eyebrows, Lips, Eyelids and Irises
Type: Journal Article
Year: 2013
Publication: Image and Vision Computing
Abbreviated Journal: IMAVIS
Volume: 31
Issue: 4
Pages: 322-340
Keywords: On-line appearance models; Levenberg–Marquardt algorithm; Line-search optimization; 3D face tracking; Facial action tracking; Eyelid tracking; Iris tracking
Abstract: In this paper, we propose an On-line Appearance-Based Tracker (OABT) for the simultaneous tracking of 3D head pose, lips, eyebrows, eyelids and irises in monocular video sequences. In contrast to previously proposed approaches, which handle face and gaze tracking separately, our OABT also performs eyelid and iris tracking in addition to tracking the 3D head pose and the lip and eyebrow facial actions. Furthermore, our approach learns changes in the appearance of the tracked target on-line, so the prior training of appearance models, which usually requires a large amount of labeled facial images, is avoided. Moreover, the proposed method is built upon a hierarchical combination of three OABTs, optimized with a Levenberg–Marquardt Algorithm (LMA) enhanced with line-search procedures. This, in turn, makes the method robust to changes in lighting conditions, occlusions and translucent textures, as evidenced by our experiments. Finally, the proposed method tracks the head and facial actions in real time.
Publisher: Elsevier
Notes: ISE; 605.203; 302.012; 302.018; 600.049
Approved: no
Call Number: ORG2013
Serial: 2221
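
The abstract above names a Levenberg–Marquardt optimizer enhanced with line-search procedures. As a minimal, hypothetical sketch of that kind of update (not the authors' implementation), the snippet below assumes a residual_fn returning appearance residuals and a jacobian_fn returning their Jacobian with respect to the pose/facial-action parameters; both names are placeholders.

import numpy as np

def lm_line_search_step(params, residual_fn, jacobian_fn, lam=1e-3):
    """One Levenberg-Marquardt step followed by a backtracking line search.

    residual_fn(p): appearance residuals (observed minus predicted texture).
    jacobian_fn(p): Jacobian of the residuals w.r.t. the parameters p.
    Both callables are assumptions for illustration only.
    """
    r = residual_fn(params)
    J = jacobian_fn(params)
    A = J.T @ J + lam * np.eye(J.shape[1])   # damped normal equations
    delta = np.linalg.solve(A, -J.T @ r)     # LM search direction

    # Backtracking line search along the LM direction.
    cost0 = 0.5 * r @ r
    step = 1.0
    for _ in range(10):
        cand = params + step * delta
        rc = residual_fn(cand)
        if 0.5 * rc @ rc < cost0:
            return cand                      # accept the improving step
        step *= 0.5                          # otherwise shrink the step
    return params                            # no improving step found
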
Author: Mikhail Mozerov; Joost Van de Weijer
Title: One-view occlusion detection for stereo matching with a fully connected CRF model
Type: Journal Article
Year: 2019
Publication: IEEE Transactions on Image Processing
Abbreviated Journal: TIP
Volume: 28
Issue: 6
Pages: 2936-2947
Keywords: Stereo matching; energy minimization; fully connected MRF model; geodesic distance filter
Abstract: In this paper, we extend the standard sequential belief propagation (BP) technique proposed in the tree-reweighted sequential method [15] to fully connected CRF models with a geodesic distance affinity. The proposed method is applied to the stereo matching problem. We also propose a new approach to the BP marginal solution, which we call one-view occlusion detection (OVOD). In contrast to the standard winner-takes-all (WTA) estimation, the proposed OVOD solution finds occluded regions in the disparity map and simultaneously improves the matching result. As a consequence, we perform only one energy minimization process and avoid both the cost calculation for the second view and the left-right check procedure. We show that the OVOD approach considerably improves the results of cost-augmentation and energy-minimization techniques in comparison with the standard one-view affinity-space implementation. We apply our method to the Middlebury data set and reach state-of-the-art results, especially for the median, average and mean squared error metrics.
Notes: LAMP; 600.098; 600.109; 602.133; 600.120; ISE
Approved: no
Call Number: Admin @ si @ MoW2019
Serial: 3221
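
To make the contrast with winner-takes-all estimation concrete, here is a toy Python sketch of disparity selection on a cost volume. The ambiguity test used to flag unreliable pixels is a simple best-versus-second-best ratio, which is only a stand-in for the paper's marginal-based OVOD rule; the threshold and the assumption of non-negative costs are mine, not the authors'.

import numpy as np

def wta_with_ambiguity(cost_volume, ratio_thresh=0.9):
    """Toy winner-takes-all disparity selection with an ambiguity flag.

    cost_volume: array of shape (H, W, D) with a non-negative matching
    cost per candidate disparity d. The WTA label is the argmin over d;
    pixels where the best and second-best costs are nearly equal are
    flagged as unreliable (a crude proxy for occlusion detection).
    """
    order = np.argsort(cost_volume, axis=2)
    best = np.take_along_axis(cost_volume, order[:, :, :1], axis=2)[..., 0]
    second = np.take_along_axis(cost_volume, order[:, :, 1:2], axis=2)[..., 0]
    disparity = order[:, :, 0]                  # WTA disparity map
    ambiguous = best > ratio_thresh * second    # near-tied minima
    return disparity, ambiguous
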
Author: Ignasi Rius; Jordi Gonzalez; Javier Varona; Xavier Roca
Title: Action-specific motion prior for efficient Bayesian 3D human body tracking
Type: Journal Article
Year: 2009
Publication: Pattern Recognition
Abbreviated Journal: PR
Volume: 42
Issue: 11
Pages: 2907-2921
Abstract: In this paper, we aim to reconstruct the 3D motion parameters of a human body model from the known 2D positions of a reduced set of joints in the image plane. To this end, an action-specific motion model is trained from a database of real motion-captured performances. The learnt motion model is used within a particle-filtering framework as a priori knowledge of human motion. First, our dynamic model guides the particles according to similar situations learnt previously. Then, the solution space is constrained so that only feasible human postures are accepted as valid solutions at each time step. As a result, we are able to track the 3D configuration of the full human body over several cycles of walking motion sequences using only the 2D positions of a very reduced set of joints, from lateral or frontal viewpoints.
ISSN: 0031-3203
Notes: ISE
Approved: no
Call Number: ISE @ ise @ RGV2009
Serial: 1159
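
The abstract describes a particle filter whose prediction step is driven by a learned, action-specific motion model. The generic bootstrap-filter step below illustrates that structure; dynamics_fn (the learned motion prior) and likelihood_fn (the score of projected 2D joints against the observation) are assumed callables, not the authors' trained models.

import numpy as np

def particle_filter_step(particles, weights, dynamics_fn, likelihood_fn,
                         noise_std=0.05):
    """One bootstrap particle-filter update (generic illustration).

    particles: (n, d) array of body-pose hypotheses.
    weights: (n,) array of normalized particle weights (sums to 1).
    """
    n = len(particles)
    # Resample particles proportionally to their current weights.
    idx = np.random.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Propagate through the learned dynamics plus diffusion noise.
    particles = np.array([dynamics_fn(p) for p in particles])
    particles += np.random.normal(0.0, noise_std, size=particles.shape)
    # Re-weight by the image likelihood of the projected 2D joints.
    weights = np.array([likelihood_fn(p) for p in particles])
    weights /= weights.sum()
    return particles, weights
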
Author: Michael Holte; Bhaskar Chakraborty; Jordi Gonzalez; Thomas B. Moeslund
Title: A Local 3D Motion Descriptor for Multi-View Human Action Recognition from 4D Spatio-Temporal Interest Points
Type: Journal Article
Year: 2012
Publication: IEEE Journal of Selected Topics in Signal Processing
Abbreviated Journal: J-STSP
Volume: 6
Issue: 5
Pages: 553-565
Abstract: In this paper, we address the problem of human action recognition in reconstructed 3-D data acquired by multi-camera systems. We contribute to this field by introducing a novel 3-D action recognition approach based on the detection of 4-D (3-D space + time) spatio-temporal interest points (STIPs) and the local description of 3-D motion features. STIPs are detected in multi-view images and extended to 4-D using 3-D reconstructions of the actors and pixel-to-vertex correspondences of the multi-camera setup. Local 3-D motion descriptors, histograms of optical 3-D flow (HOF3D), are extracted from the estimated 3-D optical flow in the neighborhood of each 4-D STIP and made view-invariant. The local HOF3D descriptors are divided using 3-D spatial pyramids to capture and improve the discrimination between arm- and leg-based actions. From these pyramids of HOF3D descriptors we build a bag-of-words (BoW) vocabulary of human actions, which is compressed and classified using the agglomerative information bottleneck (AIB) and support vector machines (SVMs), respectively. Experiments on the publicly available i3DPost and IXMAS datasets show promising state-of-the-art results and validate the performance and view-invariance of the approach.
ISSN: 1932-4553
Notes: ISE
Approved: no
Call Number: Admin @ si @ HCG2012
Serial: 1994
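
The final stage of the pipeline in the abstract is a bag-of-words vocabulary over local descriptors, classified with SVMs. The sketch below shows that generic BoW-plus-SVM stage; it uses k-means for the vocabulary as a common stand-in (the paper compresses the vocabulary with AIB instead), and train_desc, test_desc and train_labels are hypothetical inputs.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def bow_histograms(descriptor_sets, vocab):
    """Quantize each sequence's local descriptors against a visual
    vocabulary and return L1-normalized bag-of-words histograms."""
    k = vocab.n_clusters
    hists = []
    for desc in descriptor_sets:            # desc: (n_points, dim)
        words = vocab.predict(desc)         # nearest visual word per point
        h, _ = np.histogram(words, bins=np.arange(k + 1))
        hists.append(h / max(h.sum(), 1))
    return np.array(hists)

# Hypothetical usage: train_desc / test_desc are lists of local
# HOF3D-style descriptor arrays, train_labels the action classes.
# vocab = KMeans(n_clusters=200).fit(np.vstack(train_desc))
# clf = SVC(kernel="rbf").fit(bow_histograms(train_desc, vocab), train_labels)
# predictions = clf.predict(bow_histograms(test_desc, vocab))
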
Author: Eduard Vazquez; Theo Gevers; M. Lucassen; Joost Van de Weijer; Ramon Baldrich
Title: Saliency of Color Image Derivatives: A Comparison between Computational Models and Human Perception
Type: Journal Article
Year: 2010
Publication: Journal of the Optical Society of America A
Abbreviated Journal: JOSA A
Volume: 27
Issue: 3
Pages: 613-621
Abstract: In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The methods are evaluated on bottom-up saliency in a psychophysical experiment, and on the more complex task of salient-object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model and shows that the proposed methods predict color saliency significantly better (with a human-method correspondence of up to 74.75% and an observer agreement of 86.8%) than state-of-the-art models. Furthermore, the salient-object detection results confirm that an early fusion of color and contrast yields accurate visual saliency, with a hit rate of up to 95.2%.
Notes: ISE; CIC
Approved: no
Call Number: CAT @ cat @ VGL2010
Serial: 1275
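
The core idea in the abstract is that an edge's saliency follows from its information content: rare derivative responses are more informative. The toy Python sketch below applies that idea to a single-channel gradient-magnitude map via a histogram-based -log p estimate; the binning scheme and the single-channel simplification (the paper works with color derivatives) are assumptions for illustration.

import numpy as np

def edge_information_saliency(gradient_magnitudes, n_bins=64):
    """Information-content saliency: -log probability of each gradient
    magnitude under its image-wide histogram, so rarer edges score higher."""
    g = gradient_magnitudes.ravel()
    hist, edges = np.histogram(g, bins=n_bins)
    prob = hist / hist.sum()                          # empirical p(g)
    bin_idx = np.clip(np.digitize(g, edges[1:-1]), 0, n_bins - 1)
    saliency = -np.log(prob[bin_idx] + 1e-12)         # information content
    return saliency.reshape(gradient_magnitudes.shape)
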