Maria Vanrell. (1997). Exploring the space of behaviour of a texture perception algorithm.
|
|
Anna Salvatella, & Maria Vanrell. (2002). Towards a texture representation database.
|
|
Aleix Martinez, & Robert Benavente. (1998). The AR Face Database.
|
|
Javier Vazquez. (2007). Content-based Colour Space.
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J. M. Ogier, W. Liu, & J. Llados (Eds.), Seventh IAPR International Workshop on Graphics Recognition (pp. 79–78).
|
|
Marc Serra, Olivier Penacchio, Robert Benavente, Maria Vanrell, & Dimitris Samaras. (2014). The Photometry of Intrinsic Images. In 27th IEEE Conference on Computer Vision and Pattern Recognition (pp. 1494–1501).
Abstract: Intrinsic characterization of scenes is often the best way to overcome the illumination variability artifacts that complicate most computer vision problems, from 3D reconstruction to object or material recognition. This paper examines the deficiency of existing intrinsic image models to accurately account for the effects of illuminant color and sensor characteristics in the estimation of intrinsic images and presents a generic framework which incorporates insights from color constancy research to the intrinsic image decomposition problem. The proposed mathematical formulation includes information about the color of the illuminant and the effects of the camera sensors, both of which modify the observed color of the reflectance of the objects in the scene during the acquisition process. By modeling these effects, we get a “truly intrinsic” reflectance image, which we call absolute reflectance, which is invariant to changes of illuminant or camera sensors. This model allows us to represent a wide range of intrinsic image decompositions depending on the specific assumptions on the geometric properties of the scene configuration and the spectral properties of the light source and the acquisition system, thus unifying previous models in a single general framework. We demonstrate that even partial information about sensors improves significantly the estimated reflectance images, thus making our method applicable for a wide range of sensors. We validate our general intrinsic image framework experimentally with both synthetic data and natural images.
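Illustrative formulation (an assumption for this note, not notation taken from the paper): under a single illuminant and Lambertian shading, the value recorded by sensor channel k at pixel x can be written as
I_k(x) = s(x) · e_k · R_k(x)
where s(x) is the shading, e_k the illuminant color as seen through channel k, and R_k(x) the absolute reflectance. A classical decomposition I = S · R that ignores e_k folds the illuminant color into the recovered reflectance; dividing out an estimate of e_k, as in color constancy, and accounting for the sensor sensitivities yields a reflectance invariant to both light source and camera, which is the kind of invariance the abstract describes.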
|
|
Carlo Gatta, Adriana Romero, & Joost Van de Weijer. (2014). Unrolling loopy top-down semantic feedback in convolutional deep networks. In Workshop on Deep Vision: Deep Learning for Computer Vision (pp. 498–505).
Abstract: In this paper, we propose a novel way to perform top-down semantic feedback in convolutional deep networks for efficient and accurate image parsing. We also show how to add global appearance/semantic features, which have been shown to improve image parsing performance in state-of-the-art methods but were not present in previous convolutional approaches. The proposed method is characterised by efficient training and sufficiently fast testing. We use the well-known SIFTflow dataset to numerically show the advantages provided by our contributions and to compare with state-of-the-art convolutional image parsing approaches.
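Illustrative sketch (hypothetical Python, not the authors' implementation; cnn_forward and classifier are assumed callables): one way to unroll a loopy top-down semantic feedback for a fixed number of iterations, feeding the class-probability map predicted at one pass back into the input of the next.

import numpy as np

def unrolled_feedback(image, cnn_forward, classifier, n_unroll=3):
    """Unrolled top-down feedback: the semantic map predicted at one
    iteration is concatenated to the network input of the next one."""
    h, w, _ = image.shape
    n_classes = classifier.n_classes
    # Start from a uniform, uninformative semantic feedback map.
    feedback = np.full((h, w, n_classes), 1.0 / n_classes)
    probs = feedback
    for _ in range(n_unroll):
        # Bottom-up pass conditioned on the current top-down prediction.
        features = cnn_forward(np.concatenate([image, feedback], axis=-1))
        probs = classifier.predict_proba(features)    # (h, w, n_classes)
        feedback = probs                              # feed the prediction back down
    return probs.argmax(axis=-1)                      # per-pixel class labels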
|
|
Naila Murray, Maria Vanrell, Xavier Otazu, & C. Alejandro Parraga. (2011). Saliency Estimation Using a Non-Parametric Low-Level Vision Model. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 433–440).
Abstract: Many successful models for predicting attention in a scene involve three main steps: convolution with a set of filters, a center-surround mechanism and spatial pooling to construct a saliency map. However, integrating spatial information and justifying the choice of various parameter values remain open problems. In this paper we show that an efficient model of color appearance in human vision, which contains a principled selection of parameters as well as an innate spatial pooling mechanism, can be generalized to obtain a saliency model that outperforms state-of-the-art models. Scale integration is achieved by an inverse wavelet transform over the set of scale-weighted center-surround responses. The scale-weighting function (termed ECSF) has been optimized to better replicate psychophysical data on color appearance, and the appropriate sizes of the center-surround inhibition windows have been determined by training a Gaussian Mixture Model on eye-fixation data, thus avoiding ad-hoc parameter selection. Additionally, we conclude that the extension of a color appearance model to saliency estimation adds to the evidence for a common low-level visual front-end for different visual tasks.
Keywords: Gaussian mixture model; ad hoc parameter selection; center-surround inhibition windows; center-surround mechanism; color appearance model; convolution; eye-fixation data; human vision; innate spatial pooling mechanism; inverse wavelet transform; low-level visual front-end; nonparametric low-level vision model; saliency estimation; saliency map; scale integration; scale-weighted center-surround response; scale-weighting function; visual task; Gaussian processes; biology; biology computing; colour vision; computer vision; visual perception; wavelet transforms
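Illustrative sketch (assumed structure in Python; the wavelet, window sizes and weighting function are placeholders for the paper's trained ECSF and inhibition windows): wavelet decomposition, scale-weighted center-surround responses per subband, and an inverse wavelet transform to build the saliency map for one opponent channel.

import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def scale_weight(scale):
    # Placeholder for the scale-weighting function (ECSF in the paper).
    return np.exp(-((scale - 2.0) ** 2) / 2.0)

def saliency_map(channel, wavelet="haar", levels=4):
    """Scale-weighted center-surround wavelet reconstruction for one channel."""
    coeffs = pywt.wavedec2(channel, wavelet, level=levels)
    weighted = [coeffs[0]]
    for s, detail in enumerate(coeffs[1:], start=1):
        bands = []
        for band in detail:
            center = band ** 2
            surround = uniform_filter(center, size=2 * s + 3)  # surround window
            response = center / (surround + 1e-8)              # center-surround contrast
            bands.append(band * scale_weight(s) * response)    # scale weighting
        weighted.append(tuple(bands))
    sal = np.abs(pywt.waverec2(weighted, wavelet))
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)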
|
|
Joost Van de Weijer, & Fahad Shahbaz Khan. (2013). Fusing Color and Shape for Bag-of-Words Based Object Recognition. In 4th Computational Color Imaging Workshop (Vol. 7786, pp. 25–34). Springer Berlin Heidelberg.
Abstract: In this article we provide an analysis of existing methods for incorporating color into bag-of-words based image representations. We propose a list of desired properties on the basis of which fusing methods can be compared. We discuss existing methods and indicate shortcomings of the two well-known fusing methods, namely early and late fusion. Several recent works have addressed these shortcomings by exploiting top-down information in the bag-of-words pipeline: color attention, which is motivated by human vision, and Portmanteau vocabularies, which are based on information-theoretic compression of product vocabularies. We point out several remaining challenges in cue fusion and provide directions for future research.
Keywords: Object Recognition; color features; bag-of-words; image classification
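Illustrative sketch (simplified, assumed setup in Python; in practice the vocabularies are learned over a whole training set rather than per image): the two baseline schemes the abstract contrasts, early fusion over one joint shape+color vocabulary versus late fusion of two separately built bag-of-words histograms.

import numpy as np
from sklearn.cluster import KMeans

def bow_histogram(descriptors, vocab):
    # Assign each local descriptor to its nearest visual word and count.
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

def early_fusion(shape_desc, color_desc, k=200):
    # One joint descriptor per patch, quantized against a single joint vocabulary.
    joint = np.hstack([shape_desc, color_desc])
    vocab = KMeans(n_clusters=k, n_init=4).fit(joint)
    return bow_histogram(joint, vocab)

def late_fusion(shape_desc, color_desc, k=100):
    # Separate vocabulary per cue; the two histograms are concatenated afterwards.
    shape_vocab = KMeans(n_clusters=k, n_init=4).fit(shape_desc)
    color_vocab = KMeans(n_clusters=k, n_init=4).fit(color_desc)
    return np.hstack([bow_histogram(shape_desc, shape_vocab),
                      bow_histogram(color_desc, color_vocab)])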
|
|
Xavier Otazu, Olivier Penacchio, & Xim Cerda-Company. (2015). Brightness and colour induction through contextual influences in V1. In Scottish Vision Group 2015, SVG2015 (Vol. 12, pp. 1208–2012).
|
|
Bojana Gajic, Ariel Amato, Ramon Baldrich, & Carlo Gatta. (2019). Bag of Negatives for Siamese Architectures. In 30th British Machine Vision Conference.
Abstract: Training a Siamese architecture for re-identification with a large number of identities is a challenging task due to the difficulty of finding relevant negative samples efficiently. In this work we present Bag of Negatives (BoN), a method for accelerated and improved training of Siamese networks that scales well on datasets with a very large number of identities. BoN is an efficient and loss-independent method, able to select a bag of high quality negatives, based on a novel online hashing strategy.
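Illustrative sketch (hypothetical Python, including the random-projection hash; not the authors' code): one way an online hashing strategy can select hard negatives, drawing them from other identities that land in the same hash bucket as the anchor embedding.

import numpy as np
from collections import defaultdict

class NegativeBag:
    """Online hash-bucket negative sampling for a Siamese/triplet setup (sketch)."""

    def __init__(self, dim, n_bits=12, seed=0):
        rng = np.random.default_rng(seed)
        self.proj = rng.normal(size=(dim, n_bits))   # random hyperplanes (LSH-style)
        self.buckets = defaultdict(list)             # hash code -> [(embedding, identity)]

    def _hash(self, emb):
        return tuple((emb @ self.proj > 0).astype(int))

    def update(self, embeddings, identities):
        # Keep a running bag of recent embeddings, indexed by their hash code.
        for emb, ident in zip(embeddings, identities):
            self.buckets[self._hash(emb)].append((emb, ident))

    def sample_negative(self, anchor_emb, anchor_identity):
        # Hard-ish negatives: same bucket (nearby in embedding space), different identity.
        candidates = [e for e, i in self.buckets[self._hash(anchor_emb)]
                      if i != anchor_identity]
        if not candidates:
            return None   # a real implementation would fall back to random sampling
        return candidates[np.random.randint(len(candidates))]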
|
|
Ozan Caglayan, Adrien Bardet, Fethi Bougares, Loic Barrault, Kai Wang, Marc Masana, et al. (2018). LIUM-CVC Submissions for WMT18 Multimodal Translation Task. In 3rd Conference on Machine Translation.
Abstract: This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for the WMT18 Shared Task on Multimodal Translation. This year we propose several modifications to our previous multimodal attention architecture in order to better integrate convolutional features and refine them using encoder-side information. Our final constrained submissions ranked first for the English→French and second for the English→German language pair among the constrained submissions, according to the automatic evaluation metric METEOR.
|
|
Aitor Alvarez-Gila, Joost Van de Weijer, Yaxing Wang, & Estibaliz Garrote. (2022). MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation. In 29th IEEE International Conference on Image Processing.
Abstract: We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes, captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and a high density of objects, which lead to large disparities, heavy occlusions and view-dependent object appearance. Single-view semantic segmentation is hindered by self- and inter-object occlusions, which additional viewpoints could help resolve. We therefore expect MVMO to propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines showing that new research is needed in these fields to exploit the complementary information of multi-view setups.
Keywords: multi-view; cross-view; semantic segmentation; synthetic dataset
|
|
Felipe Lumbreras, Ramon Baldrich, Maria Vanrell, Joan Serrat, & Juan J. Villanueva. (1999). Multiresolution colour texture representations for tile classification.
|
|
Xavier Roca, Jordi Vitria, Maria Vanrell, & Juan J. Villanueva. (1999). Visual behaviours for binocular navigation with autonomous systems.
|
|
Arash Akbarinia, C. Alejandro Parraga, Marta Exposito, Bogdan Raducanu, & Xavier Otazu. (2017). Can biological solutions help computers detect symmetry? In 40th European Conference on Visual Perception.
|
|
Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garcia-Martinez, Fethi Bougares, et al. (2016). Does Multimodality Help Human and Machine for Translation and Image Captioning? In 1st Conference on Machine Translation.
Abstract: This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
|
|
Eduard Vazquez, & Maria Vanrell. (2008). Eines per al desenvolupament de competencies de enginyeria en un assignatura de Intel·ligencia Artificial [Tools for developing engineering competencies in an Artificial Intelligence course].
|
|