PT Journal
AU Martins, Pedro
   Carvalho, Paulo
   Gatta, Carlo
TI Context-aware features and robust image representations
SO Journal of Visual Communication and Image Representation
JI JVCIR
PY 2014
VL 25
IS 2
BP 339
EP 348
DI 10.1016/j.jvcir.2013.10.006
AB Local image features are often used to efficiently represent image content. The limited number of types of features that a local feature extractor responds to might be insufficient to provide a robust image representation. To overcome this limitation, we propose a context-aware feature extraction formulated under an information theoretic framework. The algorithm does not respond to a specific type of features; the idea is to retrieve complementary features which are relevant within the image context. We empirically validate the method by investigating the repeatability, the completeness, and the complementarity of context-aware features on standard benchmarks. In a comparison with strictly local features, we show that our context-aware features produce more robust image representations. Furthermore, we study the complementarity between strictly local features and context-aware ones to produce an even more robust representation.
ER