%0 Journal Article
%T Beyond one-hot encoding: Lower dimensional target embedding
%A Pau Rodriguez
%A Miguel Angel Bautista
%A Sergio Escalera
%A Jordi Gonzalez
%J Image and Vision Computing
%D 2018
%V 75
%F Pau Rodriguez2018
%O ISE; HuPBA; 600.098; 602.133; 602.121; 600.119; MILAB
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3120), last updated on Thu, 16 Feb 2023 12:03:17 +0100
%X Target encoding plays a central role when learning Convolutional Neural Networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, thus ignoring rich relationships among labels that can be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies in a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) we show that random projections of the label space are a valid tool to find such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving the accuracy of random-projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates.
%K Error correcting output codes
%K Output embeddings
%K Deep learning
%K Computer vision
%U https://doi.org/10.1016/j.imavis.2018.04.004
%U http://refbase.cvc.uab.es/files/RBE2018.pdf
%U http://dx.doi.org/10.1016/j.imavis.2018.04.004
%P 21-31
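As a rough illustration of contribution (i) in the abstract, the sketch below replaces one-hot targets with a random low-dimensional projection of the label space and decodes network outputs by nearest class code. This is not the authors' released code; the embedding dimension k, the Gaussian projection, and the helper names are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's implementation): random-projection target
# encoding of class labels, with nearest-neighbour decoding of predictions.
# k, the Gaussian projection, and all names are illustrative assumptions.
import numpy as np

num_classes = 100          # e.g. CIFAR-100
k = 32                     # assumed embedding dimension, k << num_classes

rng = np.random.default_rng(0)
# Random projection: class c is represented by the code P[c] in R^k.
P = rng.standard_normal((num_classes, k)) / np.sqrt(k)

def encode(labels):
    """Map integer class labels to their k-dimensional target codes."""
    return P[labels]

def decode(outputs):
    """Assign each k-dimensional output vector to the nearest class code."""
    # Squared Euclidean distance to every class code, argmin over classes.
    d = ((outputs[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Usage: train the CNN with a regression loss (e.g. MSE) against encode(y),
# then recover class predictions with decode(network_outputs).
y = np.array([3, 17, 99])
targets = encode(y)                    # shape (3, k)
assert (decode(targets) == y).all()    # decoding the exact codes recovers labels
```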