Ramon Baldrich, Maria Vanrell, Robert Benavente, & Anna Salvatella. (2003). Color Enhancement based on perceptual sharpening.
|
|
Ramon Baldrich. (2001). Perceptual approach to a computational colour-texture representation for surface inspection.
|
|
Rahma Kalboussi, Aymen Azaza, Joost Van de Weijer, Mehrez Abdellaoui, & Ali Douik. (2020). Object proposals for salient object segmentation in videos. MTAP - Multimedia Tools and Applications, 79(13), 8677–8693.
Abstract: Salient object segmentation in videos is generally broken up in a video segmentation part and a saliency assignment part. Recently, object proposals, which are used to segment the image, have had significant impact on many computer vision applications, including image segmentation, object detection, and recently saliency detection in still images. However, their usage has not yet been evaluated for salient object segmentation in videos. Therefore, in this paper, we investigate the application of object proposals to salient object segmentation in videos. In addition, we propose a new motion feature derived from the optical flow structure tensor for video saliency detection. Experiments on two standard benchmark datasets for video saliency show that the proposed motion feature improves saliency estimation results, and that object proposals are an efficient method for salient object segmentation. Results on the challenging SegTrack v2 and Fukuchi benchmark data sets show that we significantly outperform the state-of-the-art.
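The optical-flow structure tensor mentioned in this abstract lends itself to a compact illustration. The sketch below is only an assumption about the general recipe, not the authors' implementation: it takes dense flow components u and v from any off-the-shelf optical-flow estimator, builds the smoothed 2x2 structure tensor per pixel, and uses its larger eigenvalue as a simple motion cue.

# Illustrative sketch (not the paper's code): a generic optical-flow
# structure tensor and one scalar motion cue derived from it.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def flow_structure_tensor(u, v, sigma=2.0):
    """u, v: H x W optical-flow components. Returns the three entries of the
    per-pixel 2x2 structure tensor, smoothed with a Gaussian of width sigma."""
    ux, uy = sobel(u, axis=1), sobel(u, axis=0)
    vx, vy = sobel(v, axis=1), sobel(v, axis=0)
    J11 = gaussian_filter(ux * ux + vx * vx, sigma)
    J12 = gaussian_filter(ux * uy + vx * vy, sigma)
    J22 = gaussian_filter(uy * uy + vy * vy, sigma)
    return J11, J12, J22

def motion_cue(u, v, sigma=2.0):
    """Larger eigenvalue of the tensor per pixel: large where the flow field
    varies strongly, e.g. at motion boundaries."""
    J11, J12, J22 = flow_structure_tensor(u, v, sigma)
    trace = J11 + J22
    det = J11 * J22 - J12 ** 2
    disc = np.sqrt(np.maximum(trace ** 2 / 4.0 - det, 0.0))
    return trace / 2.0 + disc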
|
|
Rahat Khan, Joost Van de Weijer, Fahad Shahbaz Khan, Damien Muselet, Christophe Ducottet, & Cecile Barat. (2013). Discriminative Color Descriptors. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 2866–2873).
Abstract: Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop of discriminative power of the color description. In this paper we take an information theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective to minimize the drop of mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on other data sets than the one at hand, can obtain competing performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that combined with shape description these color descriptors obtain excellent results on four challenging datasets, namely, PASCAL VOC 2007, Flowers-102, Stanford dogs-120 and Birds-200.
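As a rough, hypothetical illustration of the information-theoretic clustering idea summarized above (not the paper's algorithm or code), the following sketch greedily merges color bins so that the mutual information between the merged representation and the class labels drops as little as possible at each step.

# Minimal sketch, assuming a joint count table over (color bin, class).
import numpy as np

def mutual_information(joint):
    """joint: (bins, classes) table of counts or probabilities."""
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return np.nansum(terms)  # zero-probability cells contribute nothing

def merge_color_bins(joint, n_clusters):
    """Greedily merge rows of `joint` down to n_clusters rows, at each step
    choosing the merge whose result retains the most mutual information.
    A real implementation would also record the bin-to-cluster assignment."""
    rows = [joint[i].astype(float) for i in range(joint.shape[0])]
    while len(rows) > n_clusters:
        best = None
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                merged = rows[:i] + rows[i + 1:j] + rows[j + 1:] + [rows[i] + rows[j]]
                mi = mutual_information(np.vstack(merged))
                if best is None or mi > best[0]:
                    best = (mi, i, j)
        _, i, j = best
        rows = rows[:i] + rows[i + 1:j] + rows[j + 1:] + [rows[i] + rows[j]]
    return np.vstack(rows)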
|
|
Rahat Khan, Joost Van de Weijer, Dimosthenis Karatzas, & Damien Muselet. (2013). Towards multispectral data acquisition with hand-held devices. In 20th IEEE International Conference on Image Processing (pp. 2053–2057).
Abstract: We propose a method to acquire multispectral data with hand-held devices with front-mounted RGB cameras. We propose to use the display of the device as an illuminant while the camera captures images illuminated by the red, green and blue primaries of the display. Three illuminants and three response functions of the camera lead to nine response values which are used for reflectance estimation. Results are promising and show that the accuracy of the spectral reconstruction improves in the range of 30-40% over the spectral reconstruction based on a single illuminant. Furthermore, we propose to compute a sensor-illuminant aware linear basis by discarding the part of the reflectances that falls in the sensor-illuminant null-space. We show experimentally that optimizing reflectance estimation on these new basis functions decreases the RMSE significantly over basis functions that are independent of the sensor-illuminant combination. We conclude that multispectral data acquisition is potentially possible with consumer hand-held devices such as tablets, mobiles, and laptops, opening up applications which are currently considered to be unrealistic.
Keywords: Multispectral; mobile devices; color measurements
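A minimal sketch of the nine-response reconstruction described above, assuming known camera sensitivities, known spectra for the display's three primaries, and a generic low-dimensional reflectance basis (e.g., from PCA). The paper's sensor-illuminant aware basis, obtained by removing the null-space component, is not reproduced here.

# Illustrative sketch only: least-squares reflectance estimation from nine
# responses (three sensors x three display primaries).
import numpy as np

def estimate_reflectance(responses, cam_sens, primaries, basis):
    """
    responses : (9,)   camera values, ordered as the three sensors under the
                       R primary, then under G, then under B
    cam_sens  : (3, N) camera sensitivities sampled at N wavelengths
    primaries : (3, N) spectral power of the display's R, G, B primaries
    basis     : (N, d) low-dimensional reflectance basis
    Returns the reconstructed reflectance, shape (N,).
    """
    # One row per (sensor, illuminant) pair: 9 x N forward model.
    M = np.vstack([cam_sens * primaries[i] for i in range(3)])
    # Solve for the basis weights in a least-squares sense.
    w, *_ = np.linalg.lstsq(M @ basis, responses, rcond=None)
    return basis @ w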
|
|
Rada Deeb, Joost Van de Weijer, Damien Muselet, Mathieu Hebert, & Alain Tremeau. (2019). Deep spectral reflectance and illuminant estimation from self-interreflections. JOSA A - Journal of the Optical Society of America A, 36(1), 105–114.
Abstract: In this work, we propose a convolutional neural network based approach to estimate the spectral reflectance of a surface and spectral power distribution of light from a single RGB image of a V-shaped surface. Interreflections happening in a concave surface lead to gradients of RGB values over its area. These gradients carry a lot of information concerning the physical properties of the surface and the illuminant. Our network is trained with only simulated data constructed using a physics-based interreflection model. Coupling interreflection effects with deep learning helps to retrieve the spectral reflectance under an unknown light and to estimate spectral power distribution of this light as well. In addition, it is more robust to the presence of image noise than classical approaches. Our results show that the proposed approach outperforms state-of-the-art learning-based approaches on simulated data. In addition, it gives better results on real data compared to other interreflection-based approaches.
|
|
Rada Deeb, Damien Muselet, Mathieu Hebert, Alain Tremeau, & Joost Van de Weijer. (2017). 3D color charts for camera spectral sensitivity estimation. In 28th British Machine Vision Conference.
Abstract: Estimating spectral data such as camera sensor responses or illuminant spectral power distribution from raw RGB camera outputs is crucial in many computer vision applications. Usually, 2D color charts with various patches of known spectral reflectance are used as references for this purpose. Deducing n-D spectral data (n ≫ 3) from 3D RGB inputs is an ill-posed problem that requires a high number of inputs. Unfortunately, most natural color surfaces have spectral reflectances that are well described by low-dimensional linear models, i.e. each spectral reflectance can be approximated by a weighted sum of the others. It has been shown that adding patches to color charts does not help in practice, because the information they add is redundant with the information provided by the first set of patches. In this paper, we propose to use spectral data of higher dimensionality by using 3D color charts that create inter-reflections between the surfaces. These inter-reflections produce multiplications between natural spectral curves and so provide non-linear spectral curves. We show that such data provide enough information for accurate spectral data estimation.
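A worked form of the interreflection argument, under a simplifying assumption of ours (two facing Lambertian panels coupled by a single geometric form factor F) rather than the paper's exact model: the radiance leaving one panel is approximately

L(λ) ≈ E(λ)·r(λ)·(1 + F·r(λ) + F²·r(λ)² + …) = E(λ)·r(λ) / (1 − F·r(λ)),

so each additional bounce contributes a higher power of r(λ), which is why the measured spectra are no longer confined to the low-dimensional linear span of flat-chart reflectances.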
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J.-M. Ogier, W. Liu, & J. Llados (Eds.), Seventh IAPR International Workshop on Graphics Recognition (pp. 79–78).
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2008). A System to Segment Text and Symbols from Color Maps. In Graphics Recognition. Recent Advances and New Opportunities (Vol. 5046, pp. 245–256). LNCS.
|
|
Ozan Caglayan, Walid Aransa, Yaxing Wang, Marc Masana, Mercedes Garcia-Martinez, Fethi Bougares, et al. (2016). Does Multimodality Help Human and Machine for Translation and Image Captioning? In 1st Conference on Machine Translation.
Abstract: This paper presents the systems developed by LIUM and CVC for the WMT16 Multimodal Machine Translation challenge. We explored various comparative methods, namely phrase-based systems and attentional recurrent neural network models trained using monomodal or multimodal data. We also performed a human evaluation in order to estimate the usefulness of multimodal data for human machine translation and image description generation. Our systems obtained the best results for both tasks according to the automatic evaluation metrics BLEU and METEOR.
|
|
Ozan Caglayan, Walid Aransa, Adrien Bardet, Mercedes Garcia-Martinez, Fethi Bougares, Loic Barrault, et al. (2017). LIUM-CVC Submissions for WMT17 Multimodal Translation Task. In 2nd Conference on Machine Translation.
Abstract: This paper describes the monomodal and multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT17 Shared Task on Multimodal Translation. We mainly explored two multimodal architectures where either global visual features or convolutional feature maps are integrated in order to benefit from visual context. Our final systems ranked first for both En-De and En-Fr language pairs according to the automatic evaluation metrics METEOR and BLEU.
|
|
Ozan Caglayan, Adrien Bardet, Fethi Bougares, Loic Barrault, Kai Wang, Marc Masana, et al. (2018). LIUM-CVC Submissions for WMT18 Multimodal Translation Task. In 3rd Conference on Machine Translation.
Abstract: This paper describes the multimodal Neural Machine Translation systems developed by LIUM and CVC for WMT18 Shared Task on Multimodal Translation. This year we propose several modifications to our previous multimodal attention architecture in order to better integrate convolutional features and refine them using encoder-side information. Our final constrained submissions ranked first for English→French and second for English→German language pairs among the constrained submissions according to the automatic evaluation metric METEOR.
|
|
Olivier Penacchio, Xavier Otazu, & Laura Dempere-Marco. (2013). A Neurodynamical Model of Brightness Induction in V1. PLoS - PLoS ONE, 8(5), e64086.
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Recent neurophysiological evidence suggests that brightness information might be explicitly represented in V1, in contrast to the more common assumption that the striate cortex is an area mostly responsive to sensory information. Here we investigate possible neural mechanisms that offer a plausible explanation for such phenomenon. To this end, a neurodynamical model which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences is presented. The proposed computational model successfully accounts for well known psychophysical effects for static contexts and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. This work suggests that intra-cortical interactions in V1 could, at least partially, explain brightness induction effects and reveals how a common general architecture may account for several different fundamental processes, such as visual saliency and brightness induction, which emerge early in the visual processing pathway.
|
|
Olivier Penacchio, Xavier Otazu, Arnold J. Wilkins, & Sarah M. Haigh. (2023). A mechanistic account of visual discomfort. FN - Frontiers in Neuroscience, 17.
Abstract: Much of the neural machinery of the early visual cortex, from the extraction of local orientations to contextual modulations through lateral interactions, is thought to have developed to provide a sparse encoding of contour in natural scenes, allowing the brain to process efficiently most of the visual scenes we are exposed to. Certain visual stimuli, however, cause visual stress, a set of adverse effects ranging from simple discomfort to migraine attacks, and epileptic seizures in the extreme, all phenomena linked with an excessive metabolic demand. The theory of efficient coding suggests a link between excessive metabolic demand and images that deviate from natural statistics. Yet, the mechanisms linking energy demand and image spatial content in discomfort remain elusive. Here, we used theories of visual coding that link image spatial structure and brain activation to characterize the response to images observers reported as uncomfortable in a biologically based neurodynamic model of the early visual cortex that included excitatory and inhibitory layers to implement contextual influences. We found three clear markers of aversive images: a larger overall activation in the model, a less sparse response, and a more unbalanced distribution of activity across spatial orientations. When the ratio of excitation over inhibition was increased in the model, a phenomenon hypothesised to underlie interindividual differences in susceptibility to visual discomfort, the three markers of discomfort progressively shifted toward values typical of the response to uncomfortable stimuli. Overall, these findings propose a unifying mechanistic explanation for why there are differences between images and between observers, suggesting how visual input and idiosyncratic hyperexcitability give rise to abnormal brain responses that result in visual stress.
|
|
Olivier Penacchio, Xavier Otazu, A. Wilkins, & J. Harris. (2015). Uncomfortable images prevent lateral interactions in the cortex from providing a sparse code. In European Conference on Visual Perception (ECVP 2015).
|
|
Olivier Penacchio, Laura Dempere-Marco, & Xavier Otazu. (2012). Switching off brightness induction through induction-reversed images. In Perception (Vol. 41, p. 208).
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Although V1 is traditionally regarded as an area mostly responsive to retinal information, neurophysiological evidence suggests that it may explicitly represent brightness information. In this work, we investigate possible neural mechanisms underlying brightness induction. To this end, we consider the model by Z. Li (1999, Network: Computation in Neural Systems, 10, 187–212), which is constrained by neurophysiological data and focuses on the part of V1 responsible for contextual influences. This model, which has proven to account for phenomena such as contour detection and preattentive segmentation, shares with brightness induction the relevant effect of contextual influences. Importantly, the input to our network model derives from a complete multiscale and multiorientation wavelet decomposition, which makes it possible to recover an image reflecting the perceived luminance and successfully accounts for well known psychophysical effects for both static and dynamic contexts. By further considering inverse problem techniques we define induction-reversed images: given a target image, we build an image whose perceived luminance matches the actual luminance of the original stimulus, thus effectively canceling out brightness induction effects. We suggest that induction-reversed images may help remove undesired perceptual effects and can find potential applications in fields such as radiological image interpretation.
|
|
Olivier Penacchio, Laura Dempere-Marco, & Xavier Otazu. (2012). A Neurodynamical Model Of Brightness Induction In V1 Following Static And Dynamic Contextual Influences. In 8th FENS Forum of Neuroscience (Vol. 6, pp. 63–64).
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Although striate cortex is traditionally regarded as an area mostly responsive to sensory (i.e. retinal) information, neurophysiological evidence suggests that perceived brightness information might be explicitly represented in V1. Such evidence has been observed both in anesthetised cats, where neuronal response modulations have been found to follow luminance changes outside the receptive fields, and in human fMRI measurements. In this work, possible neural mechanisms that offer a plausible explanation for such phenomenon are investigated. To this end, we consider the model proposed by Z. Li (Li, Network: Comput. Neural Syst., 10 (1999)) which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences, i.e. layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has reproduced other phenomena such as contour detection and preattentive segmentation, which share with brightness induction the relevant effect of contextual influences. We have extended the original model such that the input to the network is obtained from a complete multiscale and multiorientation wavelet decomposition, thereby allowing the recovery of an image reflecting the perceived intensity. The proposed model successfully accounts for well known psychophysical effects for static contexts (among them: the White's and modified White's effects, the Todorovic, Chevreul, achromatic ring patterns, and grating induction effects) and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas (e.g. the brightness of a static central area is perceived to vary in antiphase to the sinusoidal luminance changes of its surroundings). This work thus suggests that intra-cortical interactions in V1 could partially explain perceptual brightness induction effects and reveals how a common general architecture may account for several different fundamental processes emerging early in the visual processing pathway.
|
|
Olivier Penacchio, C. Alejandro Parraga, & Maria Vanrell. (2010). Natural Scene Statistics account for Human Cones Ratios. PER - Perception. ECVP Abstract Supplement, 39, 101.
Abstract: In two previous experiments [Parraga et al., 2009, J. of Im. Sci. and Tech. 53(3) 031106; Benavente et al., 2009, Perception 38, ECVP Supplement, 36] the boundaries of basic colour categories were measured. In the first experiment, samples were presented in isolation (i.e. on a dark background) and boundaries were measured using a yes/no paradigm. In the second, subjects adjusted the chromaticity of a sample presented on a random Mondrian background to find the boundary between pairs of adjacent colours. Results from these experiments showed significant differences, but it was not possible to conclude whether this discrepancy was due to the absence/presence of a colourful background or to the differences in the paradigms used. In this work, we settle this question by repeating the first experiment (i.e. samples presented on a dark background) using the second paradigm. A comparison of results shows that although boundary locations are very similar, boundaries measured in context are significantly different (more diffuse) than those measured in isolation (confirmed by a Student's t-test analysis on the statistical distributions of the subjects' answers). In addition, we completed the mapping of colour name space by measuring the boundaries between chromatic colours and the achromatic centre. With these results we completed our parametric fuzzy-sets model of colour naming space.
|
|