TY - JOUR
AU - Zhengying Liu
AU - Zhen Xu
AU - Sergio Escalera
AU - Isabelle Guyon
AU - Julio C. S. Jacques Junior
AU - Meysam Madadi
AU - Adrien Pavao
AU - Sebastien Treguer
AU - Wei-Wei Tu
PY - 2020//
TI - Towards automated computer vision: analysis of the AutoCV challenges 2019
T2 - PRL
JO - Pattern Recognition Letters
SP - 196
EP - 203
VL - 135
KW - Computer vision
KW - AutoML
KW - Deep learning
N2 - We present the results of recent challenges in Automated Computer Vision (AutoCV, renamed here for clarity AutoCV1 and AutoCV2, 2019), which are part of a series of challenges on Automated Deep Learning (AutoDL). These two competitions aimed at finding fully automated solutions for classification tasks in computer vision, with an emphasis on any-time performance. The first competition was limited to image classification, while the second included both images and videos. Our design required participants to submit their code to a challenge platform for blind testing on five datasets, covering both training and testing, without any human intervention whatsoever. Winning solutions adopted deep learning techniques based on already published architectures, such as AutoAugment, MobileNet and ResNet, to reach state-of-the-art performance within the time budget of the challenge (only 20 minutes of GPU time). The novel contributions include strategies to deliver good preliminary results at any time during the learning process, such that a method can be stopped early and still deliver good performance. This feature is key for the adoption of such techniques by data analysts who wish to obtain preliminary results rapidly on large datasets and to speed up the development process. The soundness of our design was verified in several aspects: (1) Little overfitting of the on-line leaderboard providing feedback on 5 development datasets was observed, compared to the final blind testing on the 5 (separate) final test datasets, suggesting that winning solutions might generalize to other computer vision classification tasks; (2) Error bars on the winners’ performance allow us to say with confidence that they performed significantly better than the baseline solutions we provided; (3) The ranking of participants according to the any-time metric we designed, namely the Area under the Learning Curve, was different from that of the fixed-time metric, i.e. AUC at the end of the fixed time budget. We released all winning solutions under open-source licenses. At the end of the AutoDL challenge series, all challenge data will be made publicly available, thus providing a collection of uniformly formatted datasets that can serve further research, particularly on meta-learning.
UR - https://doi.org/10.1016/j.patrec.2020.04.030
L1 - http://refbase.cvc.uab.es/files/LXE2020.pdf
N1 - HuPBA; no proj
ID - Zhengying Liu2020
ER -