%0 Conference Proceedings
%T ICDAR 2023 Competition on Video Text Reading for Dense and Small Text
%A Weijia Wu
%A Yuzhong Zhao
%A Zhuang Li
%A Jiahong Li
%A Mike Zheng Shou
%A Umapada Pal
%A Dimosthenis Karatzas
%A Xiang Bai
%B 17th International Conference on Document Analysis and Recognition
%D 2023
%V 14188
%F Weijia Wu2023
%O DAG
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3898), last updated on Tue, 30 Jan 2024 15:38:37 +0100
%X Recently, video text detection, tracking, and recognition in natural scenes have become very popular in the computer vision community. However, most existing algorithms and benchmarks focus on common text cases (e.g., normal size and density) and single scenarios, while ignoring extreme video text challenges, i.e., dense and small text in various scenarios. In this competition report, we establish a video text reading benchmark, named DSText, which focuses on the dense and small text reading challenge in videos with various scenarios. Compared with previous datasets, the proposed dataset mainly includes three new challenges: 1) dense video texts, a new challenge for video text spotters; 2) a high proportion of small texts; 3) various new scenarios, e.g., ‘Game’, ‘Sports’, etc. The proposed DSText includes 100 video clips from 12 open scenarios, supporting two tasks, i.e., video text tracking (Task 1) and end-to-end video text spotting (Task 2). During the competition period (opened on 15th February 2023 and closed on 20th March 2023), a total of 24 teams participated in the two proposed tasks with around 30 valid submissions. In this article, we describe detailed statistical information of the dataset, the tasks, the evaluation protocols, and the result summaries of the ICDAR 2023 DSText competition. Moreover, we hope the benchmark will promote video text research in the community.
%K Video Text Spotting
%K Small Text
%K Text Tracking
%K Dense Text
%U https://link.springer.com/chapter/10.1007/978-3-031-41679-8_23
%U http://refbase.cvc.uab.es/files/WZL2023.pdf
%P 405–419