PT Unknown
AU Tiwari, Adarsh
   Biswas, Sanket
   Lladós, Josep
TI Can Pre-trained Language Models Help in Understanding Handwritten Symbols?
BT 17th International Conference on Document Analysis and Recognition
PY 2023
BP 199–211
VL 14193
AB The emergence of transformer models such as BERT, GPT-2, GPT-3, RoBERTa, and T5 for natural language understanding tasks has opened the floodgates towards solving a wide array of machine learning tasks in other modalities, such as images, audio, music, and sketches. These language models are domain-agnostic and can therefore be applied to 1-D sequences of any kind. The key challenge, however, lies in bridging the modality gap so that they can generate strong features beneficial for out-of-domain tasks. This work focuses on leveraging the power of such pre-trained language models and discusses the challenges in predicting handwritten symbols and alphabets.
ER