PT Unknown
AU Gibert, Jaume
   Valveny, Ernest
   Bunke, Horst
TI Dimensionality Reduction for Graph of Words Embedding
BT 8th IAPR-TC-15 International Workshop on Graph-Based Representations in Pattern Recognition
PY 2011
BP 22
EP 31
VL 6658
DI http://dx.doi.org/10.1007/978-3-642-20844-7_3
AB The Graph of Words Embedding consists of mapping every graph of a given dataset to a feature vector by counting unary and binary relations between node attributes of the graph. While it shows good properties in classification problems, it suffers from high dimensionality and sparsity. These two issues are addressed in this article. Two well-known techniques for dimensionality reduction, kernel principal component analysis (kPCA) and independent component analysis (ICA), are applied to the embedded graphs. We discuss their classification performance compared to that of the original vectors on three different public graph databases.
ER
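
A minimal sketch of the dimensionality-reduction step described in the abstract, assuming scikit-learn's KernelPCA and FastICA as stand-ins for the kPCA and ICA methods; the embedding matrix X is a random placeholder for the unary/binary attribute counts of each graph, and the component count and RBF kernel are assumptions, not values from the paper.

# Hedged sketch: reduce high-dimensional, sparse graph-of-words embedding
# vectors with kernel PCA and ICA (scikit-learn stand-ins).
import numpy as np
from sklearn.decomposition import KernelPCA, FastICA

rng = np.random.default_rng(0)
# Placeholder embedding: 100 graphs, 2000-dimensional sparse count vectors.
X = rng.poisson(0.05, size=(100, 2000)).astype(float)

# Kernel PCA with an RBF kernel (kernel choice is an assumption).
kpca = KernelPCA(n_components=50, kernel="rbf")
X_kpca = kpca.fit_transform(X)

# Independent component analysis.
ica = FastICA(n_components=50, random_state=0)
X_ica = ica.fit_transform(X)

print(X_kpca.shape, X_ica.shape)  # (100, 50) (100, 50)

The reduced matrices X_kpca and X_ica would then be fed to a classifier in place of the original sparse vectors, which is the comparison the abstract describes.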