%0 Generic
%T Simple and effective localized attribute representations for zero-shot learning
%A Shiqi Yang
%A Kai Wang
%A Luis Herranz
%A Joost Van de Weijer
%D 2020
%F Shiqi Yang2020
%O LAMP; 600.120
%X arXiv:2006.05938. Zero-shot learning (ZSL) aims to discriminate images from unseen classes by exploiting their relations to seen classes via semantic descriptions. Some recent papers have shown the importance of localized features, together with fine-tuning the feature extractor, for obtaining discriminative and transferable features. However, these methods require complex attention or part detection modules to perform explicit localization in the visual space. In contrast, in this paper we propose localizing representations in the semantic/attribute space, with a simple but effective pipeline in which localization is implicit. Focusing on attribute representations, we show that our method obtains state-of-the-art performance on the CUB and SUN datasets and achieves competitive results on the AWA2 dataset, outperforming generally more complex methods that perform explicit localization in the visual space. Our method is easy to implement and can serve as a new baseline for zero-shot learning. In addition, our localized representations are highly interpretable as attribute-specific heatmaps.
%9 miscellaneous
%U http://refbase.cvc.uab.es/files/YWH2020.pdf