PT Unknown
AU Alvaro Peris
   Marc Bolaños
   Petia Radeva
   Francisco Casacuberta
TI Video Description Using Bidirectional Recurrent Neural Networks
BT 25th International Conference on Artificial Neural Networks
PY 2016
BP 3
EP 11
VL 2
DE Video description; Neural Machine Translation; Bidirectional Recurrent Neural Networks; LSTM; Convolutional Neural Networks
AB Although traditionally used in the machine translation field, the encoder-decoder framework has recently been applied to the generation of video and image descriptions. The combination of Convolutional and Recurrent Neural Networks in these models has been shown to outperform the previous state of the art, producing more accurate video descriptions. In this work we propose to push this model further by introducing two contributions into the encoding stage: first, producing richer image representations by combining object and location information from Convolutional Neural Networks; and second, introducing Bidirectional Recurrent Neural Networks to capture both forward and backward temporal relationships in the input frames.
ER
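
Note: the abstract describes an encoder-decoder architecture in which a bidirectional recurrent encoder runs over per-frame CNN features and a recurrent decoder generates the description. The following is a minimal sketch of that general architecture, assuming PyTorch; all dimensions, the feature-pooling step, and the names (VideoCaptioner, frame_feats, etc.) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=10000, emb=300):
        super().__init__()
        # Bidirectional LSTM encoder over the sequence of per-frame CNN features,
        # capturing forward and backward temporal relationships.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.embed = nn.Embedding(vocab_size, emb)
        # LSTM decoder conditioned on the encoded video representation.
        self.decoder = nn.LSTM(emb + 2 * hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim), e.g. per-frame CNN features
        # (object and location information concatenated beforehand).
        # captions: (batch, seq_len) token ids of the reference description.
        enc_states, _ = self.encoder(frame_feats)         # (batch, n_frames, 2*hidden)
        video_ctx = enc_states.mean(dim=1)                # simple pooling (an assumption)
        word_emb = self.embed(captions)                   # (batch, seq_len, emb)
        ctx = video_ctx.unsqueeze(1).expand(-1, word_emb.size(1), -1)
        dec_out, _ = self.decoder(torch.cat([word_emb, ctx], dim=-1))
        return self.out(dec_out)                          # logits over the vocabulary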