PT Journal
AU Antonio Hernandez
   Miguel Reyes
   Victor Ponce
   Sergio Escalera
TI GrabCut-Based Human Segmentation in Video Sequences
SO Sensors
JI SENS
PY 2012
BP 15376
EP 15393
VL 12
IS 11
DI 10.3390/s121115376
DE segmentation; human pose recovery; GrabCut; GraphCut; Active Appearance Models; Conditional Random Field
AB In this paper, we present a fully automatic Spatio-Temporal GrabCut human segmentation methodology that combines tracking and segmentation. GrabCut initialization is performed by HOG-based subject detection, face detection, and a skin color model. Spatial information is included by Mean Shift clustering, whereas temporal coherence is considered through the history of Gaussian Mixture Models. Moreover, full face and pose recovery is obtained by combining human segmentation with Active Appearance Models and Conditional Random Fields. Results on public datasets and on a new Human Limb dataset show robust segmentation and recovery of both face and pose using the presented methodology.
ER
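
The GrabCut initialization step described in the abstract (seeding the segmentation from a HOG-based person detection) can be illustrated with a short sketch. This is not the authors' implementation, only a minimal OpenCV example; the input file name frame.png, the detector parameters, and the choice of the first detection are illustrative assumptions.

import cv2
import numpy as np

# Minimal sketch: seed GrabCut with a HOG-detected person bounding box.
img = cv2.imread("frame.png")  # hypothetical input frame
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
rects, _ = hog.detectMultiScale(img, winStride=(8, 8), padding=(8, 8))

if len(rects) > 0:
    x, y, w, h = rects[0]  # take the first detected subject (assumption)
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # background GMM parameters
    fgd_model = np.zeros((1, 65), np.float64)  # foreground GMM parameters
    cv2.grabCut(img, mask, (x, y, w, h), bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground pixels form the person segment.
    fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                  255, 0).astype(np.uint8)
    segmented = cv2.bitwise_and(img, img, mask=fg)
    cv2.imwrite("segmented.png", segmented)

The paper additionally refines this initialization with face detection, a skin color model, Mean Shift clustering, and temporal GMM history; the sketch covers only the rectangle-based seeding of a single frame.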