%0 Conference Proceedings %T ChaLearn Looking at People RGB-D Isolated and Continuous Datasets for Gesture Recognition %A Jun Wan %A Yibing Zhao %A Shuai Zhou %A Isabelle Guyon %A Sergio Escalera %B 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops %D 2016 %F Jun Wan2016 %O HuPBA;MILAB; %O exported from refbase (http://refbase.cvc.uab.es/show.php?record=2771), last updated on Mon, 21 Jan 2019 14:12:34 +0100 %X In this paper, we present two large multi-modal video datasets for RGB and RGB-D gesture recognition: the ChaLearn LAP RGB-D Isolated Gesture Dataset (IsoGD) and the Continuous Gesture Dataset (ConGD). Both datasets are derived from the ChaLearn Gesture Dataset (CGD), which contains more than 50000 gestures in total and was built for the “one-shot-learning” competition. To increase the potential of the old dataset, we designed new well-curated datasets comprising 249 gesture labels and 47933 gestures, with the begin and end frames of each gesture in the sequences manually labeled. Using these datasets, we will open two competitions on the CodaLab platform so that researchers can test and compare their methods for “user independent” gesture recognition. The first challenge is designed for gesture spotting and recognition in continuous sequences of gestures, while the second is designed for gesture classification from segmented data. A baseline method based on the bag-of-visual-words model is also presented. %U http://refbase.cvc.uab.es/files/WZZ2016.pdf %U http://dx.doi.org/10.1109/CVPRW.2016.100