|
Bojana Gajic, Ariel Amato, Ramon Baldrich, Joost Van de Weijer, & Carlo Gatta. (2022). Area Under the ROC Curve Maximization for Metric Learning. In CVPR 2022 Workshop on Efficient Deep Learning for Computer Vision (ECV 2022, 5th Edition).
Abstract: Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is a typical performance measure of recognition systems) can induce an implicit ranking suitable for retrieval problems. This hypothesis is supported by previous work that proved that a curve dominates in ROC space if and only if it dominates in Precision-Recall space. To test this hypothesis, we design and maximize an approximated, derivable relaxation of the area under the ROC curve. The proposed AUC loss achieves state-of-the-art results on two large scale retrieval benchmark datasets (Stanford Online Products and DeepFashion In-Shop). Moreover, the AUC loss achieves comparable performance to more complex, domain specific, state-of-the-art methods for vehicle re-identification.
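The abstract describes maximizing a smooth, differentiable relaxation of the AUC. As a minimal illustrative sketch (not the authors' exact relaxation), the 0/1 step in the pairwise definition of AUC can be replaced by a sigmoid over score differences; the function name and temperature parameter below are hypothetical:

```python
import numpy as np

def soft_auc(pos_scores, neg_scores, tau=1.0):
    """Smooth AUC surrogate: mean sigmoid of pairwise score differences.

    Exact AUC is the fraction of (positive, negative) pairs ranked
    correctly; replacing the 0/1 step with a sigmoid of temperature
    `tau` makes the quantity differentiable and thus maximizable by
    gradient ascent.
    """
    diffs = pos_scores[:, None] - neg_scores[None, :]   # all P x N pairs
    return float(np.mean(1.0 / (1.0 + np.exp(-diffs / tau))))

# Well-separated scores give a surrogate value close to 1.
pos = np.array([5.0, 6.0, 7.0])
neg = np.array([-5.0, -6.0])
```

In a metric-learning setting, the scores would be (negated) embedding distances, and the surrogate would be maximized over the embedding parameters.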
Keywords: Training; Computer vision; Conferences; Area measurement; Benchmark testing; Pattern recognition
|
|
|
Miquel Ferrer, Robert Benavente, Ernest Valveny, J. Garcia, Agata Lapedriza, & Gemma Sanchez. (2008). Aprendizaje Cooperativo Aplicado a la Docencia de las Asignaturas de Programación en Ingeniería Informática [Cooperative Learning Applied to the Teaching of Programming Courses in Computer Engineering].
|
|
|
Xavier Otazu, Olivier Penacchio, & Laura Dempere-Marco. (2012). An investigation into plausible neural mechanisms related to the CIWaM computational model for brightness induction. In 2nd Joint AVA / BMVA Meeting on Biological and Machine Vision.
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. From a purely computational perspective, we built a low-level computational model (CIWaM) of early sensory processing based on multi-resolution wavelets with the aim of replicating brightness and colour (Otazu et al., 2010, Journal of Vision, 10(12):5) induction effects. Furthermore, we successfully used the CIWaM architecture to define a computational saliency model (Murray et al, 2011, CVPR, 433-440; Vanrell et al, submitted to AVA/BMVA'12). From a biological perspective, neurophysiological evidence suggests that perceived brightness information may be explicitly represented in V1. In this work we investigate possible neural mechanisms that offer a plausible explanation for such effects. To this end, we consider the model by Z.Li (Li, 1999, Network:Comput. Neural Syst., 10, 187-212) which is based on biological data and focuses on the part of V1 responsible for contextual influences, namely, layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has proven to account for phenomena such as visual saliency, which share with brightness induction the relevant effect of contextual influences (the ones modelled by CIWaM). In the proposed model, the input to the network is derived from a complete multiscale and multiorientation wavelet decomposition taken from the computational model (CIWaM).
This model successfully accounts for well-known psychophysical effects (among them: the White's and modified White's effects, the Todorovic, Chevreul, achromatic ring patterns, and grating induction effects) for static contexts, and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas. From a methodological point of view, we conclude that the results obtained by the computational model (CIWaM) are compatible with the ones obtained by the neurodynamical model proposed here.
|
|
|
Christophe Rigaud, Dimosthenis Karatzas, Joost Van de Weijer, Jean-Christophe Burie, & Jean-Marc Ogier. (2013). An active contour model for speech balloon detection in comics. In 12th International Conference on Document Analysis and Recognition (pp. 1240–1244).
Abstract: Comic books constitute an important cultural heritage asset in many countries. Digitization combined with subsequent comic book understanding would enable a variety of new applications, including content-based retrieval and content retargeting. Document understanding in this domain is challenging as comics are semi-structured documents, combining semantically important graphical and textual parts. Few studies have been done in this direction. In this work we detail a novel approach for closed and non-closed speech balloon localization in scanned comic book pages, an essential step towards a fully automatic comic book understanding. The approach is compared with existing methods for closed balloon localization found in the literature and results are presented.
|
|
|
Xavier Otazu, & J. Nuñez. (2001). Algoritmo de Clasificación no Supervisada Basado en Wavelets [Unsupervised Classification Algorithm Based on Wavelets].
|
|
|
M. Danelljan, Fahad Shahbaz Khan, Michael Felsberg, & Joost Van de Weijer. (2014). Adaptive color attributes for real-time visual tracking. In 27th IEEE Conference on Computer Vision and Pattern Recognition (pp. 1090–1097).
Abstract: Visual tracking is a challenging problem in computer vision. Most state-of-the-art visual trackers either rely on luminance information or use simple color representations for image description. Contrary to visual tracking, for object recognition and detection, sophisticated color features combined with luminance have been shown to provide excellent performance. Due to the complexity of the tracking problem, the desired color feature should be computationally efficient and possess a certain amount of photometric invariance while maintaining high discriminative power. This paper investigates the contribution of color in a tracking-by-detection framework. Our results suggest that color attributes provide superior performance for visual tracking. We further propose an adaptive low-dimensional variant of color attributes. Both quantitative and attribute-based evaluations are performed on 41 challenging benchmark color sequences. The proposed approach improves the baseline intensity-based tracker by 24% in median distance precision. Furthermore, we show that our approach outperforms state-of-the-art tracking methods while running at more than 100 frames per second.
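The "adaptive low-dimensional variant of color attributes" in this abstract refers to reducing the dimensionality of per-pixel color-attribute features. As a hedged sketch of the general idea (a plain PCA stand-in, not the paper's adaptive scheme; all names below are illustrative):

```python
import numpy as np

def low_dim_projection(features, dim=2):
    """Project D-dimensional per-pixel features down to `dim` dimensions.

    A standard PCA stand-in for adaptive dimensionality reduction of
    color-attribute features (e.g. the 11-D color-name representation):
    center the data, then project onto the top-`dim` eigenvectors of
    the covariance matrix.
    """
    x = features - features.mean(axis=0, keepdims=True)
    cov = x.T @ x / len(x)
    vals, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    basis = vecs[:, ::-1][:, :dim]        # top-`dim` principal components
    return x @ basis

rng = np.random.default_rng(0)
feats = rng.random((500, 11))             # 500 pixels, 11 color attributes
proj = low_dim_projection(feats, dim=2)
```

A compressed feature like this keeps the tracker's per-frame cost low, which is what enables the reported real-time frame rates.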
|
|
|
Ernest Valveny, Robert Benavente, Agata Lapedriza, Miquel Ferrer, Jaume Garcia, & Gemma Sanchez. (2012). Adaptation of a computer programming course to the EXHE requirements: evaluation five years later (Vol. 37).
|
|
|
C. Alejandro Parraga, Ramon Baldrich, & Maria Vanrell. (2010). Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset. In 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science (pp. 50–57).
Abstract: The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measures and a study on how the choice of training reflectances for the polynomial may alter the results.
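The polynomial RGB-to-LMS mapping described here can be sketched as an ordinary least-squares fit over an expanded polynomial basis. This is an illustrative second-order example, not the paper's exact basis or training set; the function names and term choice are assumptions:

```python
import numpy as np

def poly_basis(rgb):
    """Second-order polynomial expansion of camera RGB responses."""
    r, g, b = rgb.T
    return np.stack([r, g, b, r*g, r*b, g*b, r**2, g**2, b**2,
                     np.ones_like(r)], axis=1)

def fit_rgb_to_lms(rgb, lms):
    """Least-squares fit of each LMS channel on the polynomial basis.

    `rgb` and `lms` are (N, 3) arrays of camera responses and target
    cone activations for the training reflectances; the choice of
    training set "customizes" the mapping, as the abstract notes.
    """
    coeffs, *_ = np.linalg.lstsq(poly_basis(rgb), lms, rcond=None)
    return coeffs

def apply_rgb_to_lms(rgb, coeffs):
    return poly_basis(rgb) @ coeffs

# Synthetic sanity check: if the true mapping is linear, the fit recovers it.
rng = np.random.default_rng(0)
rgb = rng.random((200, 3))
M = np.array([[0.4, 0.5, 0.1], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]])
lms = rgb @ M.T
pred = apply_rgb_to_lms(rgb, fit_rgb_to_lms(rgb, lms))
```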
|
|
|
J. Nuñez, O. Fors, Xavier Otazu, Vicenç Pala, Roman Arbiol, & M.T. Merino. (2006). A Wavelet-Based Method for the Determination of the Relative Resolution Between Remotely Sensed Images. IEEE Transactions on Geoscience and Remote Sensing, 44(9): 2539–2548.
|
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2008). A System to Segment Text and Symbols from Color Maps. In Graphics Recognition. Recent Advances and New Opportunities (Vol. 5046, pp. 245–256). LNCS.
|
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J.M. Ogier W. L. J. Llados (Ed.), Seventh IAPR International Workshop on Graphics Recognition (79–78).
|
|
|
Xavier Otazu, & Maria Vanrell. (2005). A surround-induction function to unify assimilation and contrast in a computational model of color appearance.
|
|
|
Yasuko Sugito, Trevor Canham, Javier Vazquez, & Marcelo Bertalmio. (2021). A Study of Objective Quality Metrics for HLG-Based HDR/WCG Image Coding. SMPTE Motion Imaging Journal, 53–65.
Abstract: In this work, we study the suitability of high dynamic range, wide color gamut (HDR/WCG) objective quality metrics to assess the perceived deterioration of compressed images encoded using the hybrid log-gamma (HLG) method, which is the standard for HDR television. Several image quality metrics have been developed to deal specifically with HDR content, although in previous work we showed that the best results (i.e., better matches to the opinion of human expert observers) are obtained by an HDR metric that consists simply of applying a given standard dynamic range metric, called visual information fidelity (VIF), directly to HLG-encoded images. However, all these HDR metrics ignore the chroma components for their calculations, that is, they consider only the luminance channel. For this reason, in the current work, we conduct subjective evaluation experiments in a professional setting using compressed HDR/WCG images encoded with HLG and analyze the ability of the best HDR metric to detect perceivable distortions in the chroma components, as well as the suitability of popular color metrics (including ΔEITP, which supports parameters for HLG) to correlate with the opinion scores. Our first contribution is to show that there is a need to consider the chroma components in HDR metrics, as there are color distortions that subjects perceive but that the best HDR metric fails to detect. Our second contribution is the surprising result that VIF, which utilizes only the luminance channel, correlates much better with the subjective evaluation scores than the metrics investigated that do consider the color components.
|
|
|
Robert Benavente. (2007). A Parametric Model for Computational Colour Naming (Maria Vanrell, Ed.). Ph.D. thesis, Ediciones Gráficas Rey.
|
|
|
O. Fors, A. Richichi, Xavier Otazu, & J. Nuñez. (2008). A new wavelet-based approach for the automated treatment of large sets of lunar occultation data. Astronomy and Astrophysics, 297–304.
|
|
|
Javier Vazquez, J. Kevin O'Regan, Maria Vanrell, & Graham D. Finlayson. (2012). A new spectrally sharpened basis to predict colour naming, unique hues, and hue cancellation. Journal of Vision, 12(6):7, 1–14.
Abstract: When light is reflected off a surface, there is a linear relation between the three human photoreceptor responses to the incoming light and the three photoreceptor responses to the reflected light. Different colored surfaces have different linear relations. Recently, Philipona and O'Regan (2006) showed that when this relation is singular in a mathematical sense, then the surface is perceived as having a highly nameable color. Furthermore, white light reflected by that surface is perceived as corresponding precisely to one of the four psychophysically measured unique hues. However, Philipona and O'Regan's approach seems unrelated to classical psychophysical models of color constancy. In this paper we make this link. We begin by transforming cone sensors to spectrally sharpened counterparts. In sharp color space, illumination change can be modeled by simple von Kries type scalings of response values within each of the spectrally sharpened response channels. In this space, Philipona and O'Regan's linear relation is captured by a simple Land-type color designator defined by dividing reflected light by incident light. This link between Philipona and O'Regan's theory and Land's notion of color designator gives the model biological plausibility. We then show that Philipona and O'Regan's singular surfaces are surfaces which are very close to activating only one or only two of such newly defined spectrally sharpened sensors, instead of the usual three. Closeness to zero is quantified in a new simplified measure of singularity which is also shown to relate to the chromaticness of colors. As in Philipona and O'Regan's original work, our new theory accounts for a large variety of psychophysical color data.
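The core mechanism in this abstract — von Kries scaling and the Land-type color designator in a spectrally sharpened basis — can be written in a few lines. This is a schematic sketch under stated assumptions (the sharpening matrix is taken as given; the paper derives a specific one, and the function names are illustrative):

```python
import numpy as np

def von_kries_designator(reflected, incident, sharpening):
    """Land-type color designator in a spectrally sharpened basis.

    Both the incident and reflected lights (length-3 cone responses)
    are mapped through a 3x3 sharpening matrix; the designator is the
    channel-wise ratio reflected / incident. In the sharpened basis an
    illuminant change reduces to independent per-channel (von Kries)
    scalings, so the designator characterizes the surface itself.
    """
    r = sharpening @ np.asarray(reflected, dtype=float)
    i = sharpening @ np.asarray(incident, dtype=float)
    return r / i

# With an identity "sharpening" and a purely diagonal surface
# interaction, the designator recovers the per-channel scalings.
incident = np.array([1.0, 2.0, 4.0])
scales = np.array([0.5, 0.25, 2.0])
designator = von_kries_designator(scales * incident, incident, np.eye(3))
```

In the paper's terms, a "singular" surface would be one whose designator has one or two channels close to zero, i.e. it activates only one or two of the sharpened sensors.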
|
|
|
C. Alejandro Parraga, Javier Vazquez, & Maria Vanrell. (2009). A new cone activation-based natural images dataset. Perception, 36, 180.
Abstract: We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which allows one to compute (and remove, if necessary) the illuminant's colour and intensity. The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to consider other non-human visual systems. The dataset is available to download from our website.
|
|
|
Olivier Penacchio, Laura Dempere-Marco, & Xavier Otazu. (2012). A Neurodynamical Model Of Brightness Induction In V1 Following Static And Dynamic Contextual Influences. In 8th Federation of European Neurosciences (Vol. 6, pp. 63–64).
Abstract: Brightness induction is the modulation of the perceived intensity of an area by the luminance of surrounding areas. Although striate cortex is traditionally regarded as an area mostly responsive to sensory (i.e. retinal) information, neurophysiological evidence suggests that perceived brightness information might be explicitly represented in V1. Such evidence has been observed both in anesthetised cats, where neuronal response modulations have been found to follow luminance changes outside the receptive fields, and in human fMRI measurements. In this work, possible neural mechanisms that offer a plausible explanation for such phenomena are investigated. To this end, we consider the model proposed by Z. Li (Li, Network: Comput. Neural Syst., 10 (1999)), which is based on neurophysiological evidence and focuses on the part of V1 responsible for contextual influences, i.e. layer 2-3 pyramidal cells, interneurons, and horizontal intracortical connections. This model has reproduced other phenomena such as contour detection and preattentive segmentation, which share with brightness induction the relevant effect of contextual influences. We have extended the original model such that the input to the network is obtained from a complete multiscale and multiorientation wavelet decomposition, thereby allowing the recovery of an image reflecting the perceived intensity. The proposed model successfully accounts for well-known psychophysical effects for static contexts (among them: the White's and modified White's effects, the Todorovic, Chevreul, achromatic ring patterns, and grating induction effects) and also for brightness induction in dynamic contexts defined by modulating the luminance of surrounding areas (e.g. the brightness of a static central area is perceived to vary in antiphase to the sinusoidal luminance changes of its surroundings). This work thus suggests that intra-cortical interactions in V1 could partially explain perceptual brightness induction effects and reveals how a common general architecture may account for several different fundamental processes emerging early in the visual processing pathway.
|
|