%0 Conference Proceedings
%T Learning Multi-Object Tracking and Segmentation from Automatic Annotations
%A Lorenzo Porzi
%A Markus Hofinger
%A Idoia Ruiz
%A Joan Serrat
%A Samuel Rota Bulo
%A Peter Kontschieder
%B 33rd IEEE Conference on Computer Vision and Pattern Recognition
%D 2020
%F Lorenzo Porzi2020
%O ADAS; 600.124; 600.118
%X In this work we contribute a novel pipeline to automatically generate training data and to improve over state-of-the-art multi-object tracking and segmentation (MOTS) methods. Our proposed track mining algorithm turns raw street-level videos into high-fidelity MOTS training data; it is scalable and overcomes the need for expensive, time-consuming manual annotation. We leverage state-of-the-art instance segmentation results in combination with optical flow predictions, also trained on automatically harvested data. Our second major contribution is MOTSNet, a deep-learning, tracking-by-detection architecture for MOTS that deploys a novel mask-pooling layer for improved object association over time. Training MOTSNet on our automatically extracted data leads to significantly improved sMOTSA scores on the novel KITTI MOTS dataset (+1.9%/+7.5% on cars/pedestrians), and MOTSNet improves by +4.1% over the previous best methods on the MOTSChallenge dataset. Most notably, we improve over the previous best-performing works even in the complete absence of manually annotated MOTS training data.
%U https://ieeexplore.ieee.org/abstract/document/9157138
%U http://refbase.cvc.uab.es/files/PHR2020.pdf
%U http://dx.doi.org/10.1109/cvpr42600.2020.00688
%P 6845-6854