Ivet Rafegas, & Maria Vanrell. (2018). Color encoding in biologically-inspired convolutional neural networks. VR - Vision Research, 151, 7–17.
Abstract: Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. We do so by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We have determined that all five convolutional layers of the network contain a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this set is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
Keywords: Color coding; Computer vision; Deep learning; Convolutional neural networks
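A minimal sketch of how a per-neuron color selectivity index of this kind could be computed, assuming access to a neuron's activations on its most-exciting images and on grayscale versions of the same images; the function name, the 1 - gray/color formulation and the 0.25 threshold are illustrative assumptions, not the paper's exact definition:

```python
import numpy as np

def color_selectivity_index(act_color: np.ndarray, act_gray: np.ndarray) -> np.ndarray:
    """Index in [0, 1]: 0 = neuron responds equally to the grayscale version,
    1 = neuron responds only when color is present.

    act_color: activations of one neuron on the N images that excite it most.
    act_gray:  activations on grayscale versions of the same N images.
    """
    eps = 1e-8  # avoid division by zero for silent neurons
    ratio = np.clip(act_gray, 0, None) / (np.clip(act_color, 0, None) + eps)
    return np.clip(1.0 - ratio, 0.0, 1.0)

# Flag a neuron as color selective when its mean index exceeds a threshold.
idx = color_selectivity_index(np.array([3.2, 2.9, 4.1]), np.array([0.4, 0.6, 0.5]))
print(idx.mean() > 0.25)  # True: this hypothetical neuron loses most activity without color
```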
|
|
Lu Yu, Xialei Liu, & Joost Van de Weijer. (2022). Self-Training for Class-Incremental Semantic Segmentation. TNNLS - IEEE Transactions on Neural Networks and Learning Systems.
Abstract: In class-incremental semantic segmentation, we have no access to the labeled data of previous tasks. Therefore, when incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose to apply a self-training approach that leverages unlabeled data, which is used for rehearsal of previous knowledge. Specifically, we first learn a temporary model for the current task, and then pseudo labels for the unlabeled data are computed by fusing information from the old model of the previous task and the current temporary model. In addition, conflict reduction is proposed to resolve the conflicts of pseudo labels generated from both the old and temporary models. We show that maximizing self-entropy can further improve results by smoothing overconfident predictions. Interestingly, in the experiments we show that the auxiliary data can be different from the training data and that even general-purpose, but diverse, auxiliary data can lead to large performance gains. The experiments demonstrate state-of-the-art results, with a relative gain of up to 114% on Pascal-VOC 2012 and 8.5% on the more challenging ADE20K compared to previous state-of-the-art methods.
Keywords: Class-incremental learning; Self-training; Semantic segmentation.
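A rough sketch of the pseudo-label fusion step described above, assuming per-pixel logits from the old model and the temporary current-task model; the confidence threshold, the ignore value 255 and the specific conflict rule are simplified assumptions rather than the paper's exact procedure:

```python
import torch

def fuse_pseudo_labels(logits_old, logits_tmp, n_old_classes, conf_thresh=0.7):
    """Per-pixel pseudo labels for unlabeled auxiliary images.

    logits_old: (C_old, H, W) scores from the model of the previous task.
    logits_tmp: (C_all, H, W) scores from the temporary current-task model.
    Pixels left unresolved are set to 255 (a standard ignore index).
    """
    conf_old, lab_old = logits_old.softmax(dim=0).max(dim=0)
    conf_tmp, lab_tmp = logits_tmp.softmax(dim=0).max(dim=0)

    pseudo = torch.full_like(lab_tmp, 255)
    # Trust the temporary model on confidently predicted *new* classes,
    # and the old model on confidently predicted *old* classes.
    take_tmp = (lab_tmp >= n_old_classes) & (conf_tmp > conf_thresh)
    take_old = (lab_tmp < n_old_classes) & (conf_old > conf_thresh)
    pseudo[take_tmp] = lab_tmp[take_tmp]
    pseudo[take_old] = lab_old[take_old]
    # Conflict reduction (simplified): if both models are confident but
    # disagree on new vs. old class, ignore the pixel instead of guessing.
    conflict = take_tmp & (conf_old > conf_thresh)
    pseudo[conflict] = 255
    return pseudo
```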
|
|
Naila Murray, Sandra Skaff, Luca Marchesotti, & Florent Perronnin. (2011). Towards Automatic Concept Transfer. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Non-Photorealistic Animation and Rendering (pp. 167–176). ACM Press.
Abstract: This paper introduces a novel approach to automatic concept transfer; examples of concepts are “romantic”, “earthy”, and “luscious”. The approach modifies the color content of an input image given only a concept specified by a user in natural language, thereby requiring minimal user input. This approach is particularly useful for users who are aware of the message they wish to convey in the transferred image while being unsure of the color combination needed to achieve the corresponding transfer. The user may adjust the intensity level of the concept transfer to his/her liking with a single parameter. The proposed approach uses a convex clustering algorithm, with a novel pruning mechanism, to automatically set the complexity of models of chromatic content. It also uses the Earth-Mover's Distance to compute a mapping between the models of the input image and the target chromatic concept. Results show that our approach yields transferred images which effectively represent concepts, as confirmed by a user study.
Keywords: chromatic modeling, color concepts, color transfer, concept transfer
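A compact sketch of an Earth-Mover's-Distance mapping between two chromatic models, assuming each model is summarized by weighted cluster centers (e.g., in Lab space, with weights summing to 1); emd_transport and its linear-programming formulation are an illustrative implementation, not the paper's code:

```python
import numpy as np
from scipy.optimize import linprog

def emd_transport(src_colors, src_w, dst_colors, dst_w):
    """Optimal-transport flow between two weighted color palettes.

    src_colors: (m, 3) cluster centers of the input image (e.g., Lab values),
    src_w: (m,) cluster weights summing to 1; dst_* likewise for the concept.
    Returns the (m, n) flow matrix minimizing total ground distance.
    """
    m, n = len(src_colors), len(dst_colors)
    cost = np.linalg.norm(src_colors[:, None, :] - dst_colors[None, :, :], axis=2).ravel()
    rows = []
    for i in range(m):                      # each source cluster ships all its mass
        r = np.zeros(m * n); r[i * n:(i + 1) * n] = 1; rows.append(r)
    for j in range(n):                      # each target cluster receives its mass
        c = np.zeros(m * n); c[j::n] = 1; rows.append(c)
    res = linprog(cost, A_eq=np.vstack(rows), b_eq=np.concatenate([src_w, dst_w]),
                  bounds=(0, None))
    return res.x.reshape(m, n)
```

Each input cluster can then be shifted toward the target colors in proportion to the resulting flows, with the user's single intensity parameter interpolating between the original and shifted palettes.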
|
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2010). La influencia del contexto en la definición de las fronteras entre las categorías cromáticas [The influence of context on the definition of the boundaries between chromatic categories]. In 9th Congreso Nacional del Color (pp. 92–95).
Abstract: In this paper we present the results of a color categorization experiment in which the samples were shown on a multicolored (Mondrian) background to simulate the effects of context. The results are compared with those of a previous experiment that, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment exhibit less confusion than those obtained in the experiment without context.
Keywords: Color categorization; Color appearance; Context influence; Mondrian patterns; Parametric models
|
|
Domicele Jonauskaite, Lucia Camenzind, C. Alejandro Parraga, Cecile N Diouf, Mathieu Mercapide Ducommun, Lauriane Müller, et al. (2021). Colour-emotion associations in individuals with red-green colour blindness. PeerJ, 9, e11180.
Abstract: Colours and emotions are associated in languages and traditions. Some of us may convey sadness by saying we are feeling blue or by wearing black clothes at funerals. The first example is a conceptual experience of colour and the second example is an immediate perceptual experience of colour. To investigate whether one or the other type of experience more strongly drives colour-emotion associations, we tested 64 congenitally red-green colour-blind men and 66 non-colour-blind men. All participants associated 12 colours, presented as terms or patches, with 20 emotion concepts, and rated intensities of the associated emotions. We found that colour-blind and non-colour-blind men associated similar emotions with colours, irrespective of whether colours were conveyed via terms (r = .82) or patches (r = .80). The colour-emotion associations and the emotion intensities were not modulated by participants' severity of colour blindness. Hinting at some additional, although minor, role of actual colour perception, the consistencies in associations for colour terms and patches were higher in non-colour-blind than colour-blind men. Together, these results suggest that colour-emotion associations in adults do not require immediate perceptual colour experiences, as conceptual experiences are sufficient.
Keywords: Affect; Chromotherapy; Colour cognition; Colour vision deficiency; Cross-modal correspondences; Daltonism; Deuteranopia; Dichromatic; Emotion; Protanopia.
|
|
Fahad Shahbaz Khan, Muhammad Anwer Rao, Joost Van de Weijer, Michael Felsberg, & Jorma Laaksonen. (2015). Deep semantic pyramids for human attributes and action recognition. In Image Analysis, Proceedings of the 19th Scandinavian Conference, SCIA 2015 (Vol. 9127, pp. 341–353). Springer International Publishing.
Abstract: Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs) or deep features have been shown to improve performance over conventional shallow features.
We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNN features of different part locations. These pyramids are then combined into a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide significant gains of 17.2%, 13.9%, 24.3% and 22.6% over standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNN features computed on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.
Keywords: Action recognition; Human attributes; Semantic pyramids
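A bare-bones sketch of combining part-based CNN features into one semantic representation, assuming a convolutional feature map and pre-detected part boxes (e.g., full person, upper body, head); the names and the average-pooling choice are illustrative assumptions rather than the authors' pipeline:

```python
import torch

def deep_semantic_pyramid(feat_map, part_boxes):
    """Concatenate average-pooled CNN features over body-part regions.

    feat_map: (C, H, W) convolutional feature map of the image.
    part_boxes: list of (x0, y0, x1, y1) boxes in feature-map coordinates,
                e.g. full person, upper body, head.
    """
    parts = []
    for x0, y0, x1, y1 in part_boxes:
        region = feat_map[:, y0:y1, x0:x1]
        parts.append(region.mean(dim=(1, 2)))  # one C-dim vector per part
    return torch.cat(parts)  # single pose-normalized semantic representation
```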
|
|
Muhammad Anwer Rao, Fahad Shahbaz Khan, Joost Van de Weijer, & Jorma Laaksonen. (2017). Top-Down Deep Appearance Attention for Action Recognition. In 20th Scandinavian Conference on Image Analysis (Vol. 10269, pp. 297–309). LNCS.
Abstract: Recognizing human actions in videos is a challenging problem in computer vision. Recently, convolutional neural network based deep features have shown promising results for action recognition. In this paper, we investigate the problem of fusing deep appearance and motion cues for action recognition. We propose a video representation which combines deep appearance and motion based local convolutional features within the bag-of-deep-features framework. Firstly, dense deep appearance and motion based local convolutional features are extracted from spatial (RGB) and temporal (flow) networks, respectively. Both visual cues are processed in parallel by constructing separate visual vocabularies for appearance and motion. A category-specific appearance map is then learned to modulate the weights of the deep motion features. The proposed representation is discriminative and binds the deep local convolutional features to their spatial locations. Experiments are performed on two challenging datasets: the JHMDB dataset with 21 action classes and the ACT dataset with 43 categories. The results clearly demonstrate that our approach outperforms both standard approaches of early and late feature fusion. Further, our approach employs only action labels, without exploiting body part information, yet achieves competitive performance compared to state-of-the-art deep features based approaches.
Keywords: Action recognition; CNNs; Feature fusion
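One way the appearance-based modulation of deep motion features could look, assuming local motion descriptors and a category-specific appearance weight per spatial position; this is a simplified illustration of the weighting idea, not the authors' implementation:

```python
import numpy as np

def modulate_motion_features(motion_feats, appearance_map):
    """Pool local deep motion descriptors under appearance-based weights.

    motion_feats: (N, D) motion descriptors at N spatial positions.
    appearance_map: (N,) category-specific appearance weights in [0, 1].
    """
    weighted = motion_feats * appearance_map[:, None]
    # Attention-style pooling: positions that look like the category dominate.
    return weighted.sum(axis=0) / (appearance_map.sum() + 1e-8)
```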
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, Muhammad Anwer Rao, Andrew Bagdanov, Michael Felsberg, & Jorma Laaksonen. (2018). Scale coding bag of deep features for human attribute and action recognition. MVAP - Machine Vision and Applications, 29(1), 55–71.
Abstract: Most approaches to human attribute and action recognition in still images are based on image representation in which multi-scale local features are pooled across scale into a single, scale-invariant encoding. Both in bag-of-words and the recently popular representations based on convolutional neural networks, local features are computed at multiple scales. However, these multi-scale convolutional features are pooled into a single scale-invariant representation. We argue that entirely scale-invariant image representations are sub-optimal and investigate approaches to scale coding within a bag of deep features framework. Our approach encodes multi-scale information explicitly during the image encoding stage. We propose two strategies to encode multi-scale information explicitly in the final image representation. We validate our two scale coding techniques on five datasets: Willow, PASCAL VOC 2010, PASCAL VOC 2012, Stanford-40 and Human Attributes (HAT-27). On all datasets, the proposed scale coding approaches outperform both the scale-invariant method and the standard deep features of the same network. Further, combining our scale coding approaches with standard deep features leads to consistent improvement over the state of the art.
Keywords: Action recognition; Attribute recognition; Bag of deep features
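A small sketch of explicit scale coding in a bag-of-deep-features spirit: instead of pooling local features across all scales into one scale-invariant vector, features are pooled per scale bin and the bins concatenated. The bin edges and mean pooling are illustrative assumptions, not the paper's exact encoding:

```python
import numpy as np

def scale_coded_encoding(features, scales,
                         bins=((0.0, 0.5), (0.5, 1.0), (1.0, np.inf))):
    """Pool local deep features separately per scale bin, then concatenate.

    features: (N, D) local descriptors; scales: (N,) extraction scale of each.
    Coarse and fine structure stay in separate slots of the final vector
    instead of being collapsed into one scale-invariant pool.
    """
    d = features.shape[1]
    parts = []
    for lo, hi in bins:
        mask = (scales >= lo) & (scales < hi)
        parts.append(features[mask].mean(axis=0) if mask.any() else np.zeros(d))
    return np.concatenate(parts)
```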
|
|
Felipe Lumbreras, Ramon Baldrich, Maria Vanrell, Joan Serrat, & Juan J. Villanueva. (1999). Multiresolution colour texture representations for tile classification.
|
|
Xavier Roca, Jordi Vitria, Maria Vanrell, & Juan J. Villanueva. (1999). Visual behaviours for binocular navigation with autonomous systems.
|
|
A. Pujol, Jordi Vitria, Petia Radeva, Xavier Binefa, Robert Benavente, Ernest Valveny, et al. (1999). Real time pharmaceutical product recognition using color and shape indexing. In Proceedings of the 2nd International Workshop on European Scientific and Industrial Collaboration (WESIC'99), Promoting Advanced Technologies in Manufacturing.
|
|
Maria Vanrell, Jordi Vitria, & Xavier Roca. (1997). A multidimensional scaling approach to explore the behavior of a texture perception algorithm. Machine Vision and Applications, 9, 262–271.
|
|
Xavier Roca, Jordi Vitria, Maria Vanrell, & Juan J. Villanueva. (1999). Gaze control in a binocular robot system. In 7th IEEE International Conference on Emerging Technologies and Factory Automation, Proceedings ETFA '99.
|
|
Felipe Lumbreras, Ramon Baldrich, Maria Vanrell, Joan Serrat, & Juan J. Villanueva. (1999). Multiresolution texture classification of ceramic tiles. In Recent Research Developments in Optical Engineering (Vol. 2, pp. 213–228). Research Signpost.
|
|
Robert Benavente, M.C. Olive, Maria Vanrell, & Ramon Baldrich. (1999). Colour Perception: A Simple Method for Colour Naming. In Proc. 2nd Catalan Congress on Artificial Intelligence (CCIA'99) (pp. 340–347).
|
|
Robert Benavente. (1999). Dealing with colour variability: application to a colour naming task.
|
|
Jordi Vitria, Petia Radeva, Xavier Binefa, A. Pujol, Ernest Valveny, Robert Benavente, et al. (1999). Real time recognition of pharmaceutical products by subspace methods.
|
|
Ramon Baldrich. (2001). Perceptual approach to a computational colour-texture representation for surface inspection.
|
|