%0 Conference Proceedings
%T Generalized Zero-shot Learning with Multi-source Semantic Embeddings for Scene Recognition
%A Xinhang Song
%A Haitao Zeng
%A Sixian Zhang
%A Luis Herranz
%A Shuqiang Jiang
%B 28th ACM International Conference on Multimedia
%D 2020
%F Xinhang Song2020
%O LAMP; 600.141; 600.120
%O exported from refbase (http://refbase.cvc.uab.es/show.php?record=3465), last updated on Tue, 09 Feb 2021 09:53:03 +0100
%X Recognizing visual categories from semantic descriptions is a promising way to extend the capability of a visual classifier beyond the concepts represented in the training data (i.e. seen categories). This problem is addressed by (generalized) zero-shot learning methods (GZSL), which leverage semantic descriptions (e.g. label embeddings, attributes) that connect unseen categories to seen ones. Conventional GZSL methods are designed mostly for object recognition. In this paper we focus on zero-shot scene recognition, a more challenging setting with hundreds of categories whose differences can be subtle and often localized in certain objects or regions. Conventional GZSL representations are not rich enough to capture these local discriminative differences. Addressing these limitations, we propose a feature generation framework with two novel components: 1) multiple sources of semantic information (i.e. attributes, word embeddings and descriptions), and 2) region descriptions that can enhance scene discrimination. To generate synthetic visual features we propose a two-step generative approach, where local descriptions are sampled and used as conditions to generate visual features. The generated features are then aggregated and used together with real features to train a joint classifier. In order to evaluate the proposed method, we introduce a new dataset for zero-shot scene recognition with multi-semantic annotations. Experimental results on the proposed dataset and the SUN Attribute dataset illustrate the effectiveness of the proposed method.
%U https://dl.acm.org/doi/abs/10.1145/3394171.3413568