%0 Conference Proceedings
%T Learning the Model Update for Siamese Trackers
%A Lichao Zhang
%A Abel Gonzalez-Garcia
%A Joost van de Weijer
%A Martin Danelljan
%A Fahad Shahbaz Khan
%B 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
%D 2019
%F Lichao Zhang2019
%O LAMP; 600.109; 600.141; 600.120
%X Siamese approaches address the visual tracking problem by extracting an appearance template from the current frame, which is used to localize the target in the next frame. In general, this template is linearly combined with the accumulated template from the previous frame, resulting in an exponential decay of information over time. While such an approach to updating has led to improved results, its simplicity limits the potential gain that could be obtained by learning to update. We therefore propose to replace the handcrafted update function with a method that learns to update. We use a convolutional neural network, called UpdateNet, which, given the initial template, the accumulated template, and the template of the current frame, aims to estimate the optimal template for the next frame. UpdateNet is compact and can easily be integrated into existing Siamese trackers. We demonstrate the generality of the proposed approach by applying it to two Siamese trackers, SiamFC and DaSiamRPN. Extensive experiments on the VOT2016, VOT2018, LaSOT, and TrackingNet datasets demonstrate that our UpdateNet effectively predicts the new target template, outperforming the standard linear update. On the large-scale TrackingNet dataset, our UpdateNet improves the results of DaSiamRPN with an absolute gain of 3.9% in terms of success score.
%U https://ieeexplore.ieee.org/document/9008117
%U http://refbase.cvc.uab.es/files/ZGW2019.pdf
%U http://dx.doi.org/10.1109/ICCV.2019.00411
%P 4009-4018
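
The two update mechanisms contrasted in the abstract can be sketched in a few lines. The Python/PyTorch snippet below is a minimal, illustrative sketch, not the authors' released code: linear_update implements the standard handcrafted rule (a running average whose old content decays exponentially), while UpdateNetSketch mimics the described learned update mapping the initial, accumulated, and current templates to the next template. The layer widths, the update rate, the 256-channel 6x6 template shape, and the residual skip to the initial template are assumptions for illustration.

    import torch
    import torch.nn as nn

    def linear_update(accumulated, current, rate=0.01):
        # Handcrafted update: linear combination whose older content
        # decays exponentially over time (rate value is illustrative).
        return (1.0 - rate) * accumulated + rate * current

    class UpdateNetSketch(nn.Module):
        # Learned update: estimates the next template from the initial
        # (ground-truth) template, the accumulated template, and the
        # template extracted from the current frame.
        def __init__(self, channels=256):  # channel count is an assumption
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3 * channels, 96, kernel_size=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(96, channels, kernel_size=1),
            )

        def forward(self, initial, accumulated, current):
            stacked = torch.cat([initial, accumulated, current], dim=1)
            # Residual skip to the initial template (assumed here) keeps
            # the prediction anchored to the ground-truth appearance.
            return self.net(stacked) + initial

    # Illustrative usage with SiamFC-style feature templates (shape assumed):
    t0 = torch.randn(1, 256, 6, 6)    # initial template from the first frame
    acc = t0.clone()                  # accumulated template so far
    cur = torch.randn(1, 256, 6, 6)   # template from the current frame
    acc_linear = linear_update(acc, cur)           # handcrafted rule
    acc_learned = UpdateNetSketch()(t0, acc, cur)  # learned rule

In a tracker, one of these updates would run once per frame, and the resulting template would be used to localize the target in the next frame.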