|
Jaime Moreno, Xavier Otazu, & Maria Vanrell. (2010). Contribution of CIWaM in JPEG2000 Quantization for Color Images. In Proceedings of The CREATE 2010 Conference (132–136).
Abstract: The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and improve the JPEG2000 compressor. The approach consists of quantizing wavelet transform coefficients using some behavioral properties of the human visual system. Noise is fatal to image compression performance: it is annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces unperceivable details and thus improves both visual impression and transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is not favorable in PSNR, but the recovered image is more compressed at the same or even better visual quality measured with a weighted PSNR. Perceptual criteria were taken from the CIWaM (Chromatic Induction Wavelet Model).
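The comparison the abstract describes, plain PSNR versus a perceptually weighted PSNR, can be sketched as follows. The actual CIWaM-derived weights are not given in the abstract, so the weight map below is a purely illustrative placeholder; only the two formulas themselves are standard:

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Standard PSNR (dB) between a reference and a reconstructed image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def weighted_psnr(ref, img, weights, peak=255.0):
    """PSNR with a per-pixel perceptual weight map (weights sum to 1).

    With uniform weights this reduces to standard PSNR; a perceptual
    model such as CIWaM would instead emphasize visible errors.
    """
    err = (ref.astype(np.float64) - img.astype(np.float64)) ** 2
    wmse = np.sum(weights * err)
    return float("inf") if wmse == 0 else 10.0 * np.log10(peak ** 2 / wmse)
```

A pre-quantizer can thus lose PSNR while gaining weighted PSNR, since errors placed in perceptually unimportant regions contribute less to the weighted mean squared error.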
|
|
|
Javier Vazquez, Maria Vanrell, & Robert Benavente. (2010). Color names as a constraint for Computer Vision problems. In Proceedings of The CREATE 2010 Conference (324–328).
Abstract: Computer Vision problems are usually ill-posed, so constraining the gamut of possible solutions is a necessary step. Many constraints for different problems have been developed over the years. In this paper, we present a different way of constraining some of these problems: the use of color names. In particular, we focus on segmentation, representation and constancy.
|
|
|
Fahad Shahbaz Khan, Joost Van de Weijer, & Maria Vanrell. (2010). Who Painted this Painting? In Proceedings of The CREATE 2010 Conference (329–333).
|
|
|
Shida Beigpour, & Joost Van de Weijer. (2010). Photo-Realistic Color Alteration for Architecture and Design. In Proceedings of The CREATE 2010 Conference (84–88).
Abstract: As color is a strong stimulus we receive from the exterior world, choosing the right color can prove crucial in creating the desired architecture and design. We propose a framework to apply a realistic color change to both objects and their illuminating lights in snapshots of architectural designs, in order to visualize and choose the right color before actually applying the change in the real world. The proposed framework is based on the laws of physics in order to accomplish realistic and physically plausible results.
|
|
|
Karel Paleček, David Geronimo, & Frederic Lerasle. (2012). Pre-attention cues for person detection. In Cognitive Behavioural Systems, COST 2102 International Training School (pp. 225–235). LNCS. Springer Berlin Heidelberg.
Abstract: Current state-of-the-art person detectors have proven reliable and achieve very good detection rates. However, their performance is often far from real time, which limits their use to low-resolution images only. In this paper, we deal with the candidate window generation problem for person detection, i.e. we want to reduce the computational complexity of a person detector by reducing the number of regions that have to be evaluated. We base our work on Alexe's paper [1], which introduced several pre-attention cues for generic object detection. We evaluate these cues in the context of person detection and show that their performance degrades rapidly for scenes containing multiple objects of interest, such as pictures of urban environments. We extend this set with new cues that better suit our class-specific task. The cues are designed to be simple and efficient, so that they can be used in the pre-attention phase of a more complex sliding-window-based person detector.
|
|
|
Sergio Alloza, Flavio Escribano, Sergi Delgado, Ciprian Corneanu, & Sergio Escalera. (2017). XBadges: Identifying and training soft skills with commercial video games. Improving persistence, risk taking & spatial reasoning with commercial video games and a facial and emotional recognition system. In 4th Congreso de la Sociedad Española para las Ciencias del Videojuego (Vol. 1957, pp. 13–28).
Abstract: XBadges is a research project based on the hypothesis that commercial video games (non-serious games) can train soft skills. We measure persistence, spatial reasoning and risk taking before and after subjects participate in controlled game playing sessions.
In addition, we have developed an automatic facial expression recognition system capable of inferring their emotions while playing, allowing us to study the role of emotions in soft skills acquisition. We have used Flappy Bird, Pacman and Tetris for assessing changes in persistence, risk taking and spatial reasoning respectively.
Results show how playing Tetris significantly improves spatial reasoning and how playing Pacman significantly improves prudence in certain areas of behavior. As for emotions, they reveal that being concentrated helps to improve performance and skills acquisition. Frustration is also shown as a key element. With the results obtained we are able to glimpse multiple applications in areas which need soft skills development.
Keywords: Video Games; Soft Skills; Training; Skill Development; Emotions; Cognitive Abilities; Flappy Bird; Pacman; Tetris
|
|
|
Alexey Dosovitskiy, German Ros, Felipe Codevilla, Antonio Lopez, & Vladlen Koltun. (2017). CARLA: An Open Urban Driving Simulator. In 1st Annual Conference on Robot Learning. Proceedings of Machine Learning Research (Vol. 78, pp. 1–16).
Abstract: We introduce CARLA, an open-source simulator for autonomous driving research. CARLA has been developed from the ground up to support development, training, and validation of autonomous urban driving systems. In addition to open-source code and protocols, CARLA provides open digital assets (urban layouts, buildings, vehicles) that were created for this purpose and can be used freely. The simulation platform supports flexible specification of sensor suites and environmental conditions. We use CARLA to study the performance of three approaches to autonomous driving: a classic modular pipeline, an end-to-end model trained via imitation learning, and an end-to-end model trained via reinforcement learning. The approaches are evaluated in controlled scenarios of increasing difficulty, and their performance is examined via metrics provided by CARLA, illustrating the platform's utility for autonomous driving research.
Keywords: Autonomous driving; sensorimotor control; simulation
|
|
|
Yi Xiao, Felipe Codevilla, Christopher Pal, & Antonio Lopez. (2020). Action-Based Representation Learning for Autonomous Driving. In Conference on Robot Learning.
Abstract: Human drivers produce a vast amount of data which could, in principle, be used to improve autonomous driving systems. Unfortunately, seemingly straightforward approaches for creating end-to-end driving models that map sensor data directly into driving actions are problematic in terms of interpretability, and typically have significant difficulty dealing with spurious correlations. Alternatively, we propose to use this kind of action-based driving data for learning representations. Our experiments show that an affordance-based driving model pre-trained with this approach can leverage a relatively small amount of weakly annotated imagery and outperform pure end-to-end driving models, while being more interpretable. Further, we demonstrate how this strategy outperforms previous methods based on learning inverse dynamics models as well as other methods based on heavy human supervision (ImageNet).
|
|
|
Maya Dimitrova, Ch. Roumenin, Siya Lozanova, David Rotger, & Petia Radeva. (2007). An Interface System Based on Multimodal Principle for Cardiological Diagnosis Assistance. In International Conference On Computer Systems And Technologies (Vol. IIIB.4, 1–6).
|
|
|
Albin Soutif, Antonio Carta, & Joost Van de Weijer. (2023). Improving Online Continual Learning Performance and Stability with Temporal Ensembles. In 2nd Conference on Lifelong Learning Agents.
Abstract: Neural networks are very effective when trained on large datasets for a large number of iterations. However, when they are trained on non-stationary streams of data in an online fashion, their performance is reduced (1) by the online setup, which limits the availability of data, and (2) by catastrophic forgetting due to the non-stationary nature of the data. Furthermore, several recent works (Caccia et al., 2022; Lange et al., 2023) showed that replay methods used in continual learning suffer from the stability gap, encountered when evaluating the model continually (rather than only at task boundaries). In this article, we study the effect of model ensembling as a way to improve performance and stability in online continual learning. We notice that naively ensembling models coming from a variety of training tasks increases performance in online continual learning considerably. Starting from this observation, and drawing inspiration from semi-supervised learning ensembling methods, we use a lightweight temporal ensemble that computes the exponential moving average (EMA) of the weights at test time, and show that it can drastically increase performance and stability when used in combination with several methods from the literature.
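The temporal ensemble described here, an exponential moving average of the model weights maintained during training and used at evaluation time, can be sketched in a framework-agnostic way. The `decay` value and the per-step update schedule below are illustrative assumptions; the abstract does not specify them:

```python
import numpy as np

class EMAWeights:
    """Temporal ensemble of model weights via an exponential moving average.

    `weights` is a dict mapping parameter names to numpy arrays. After each
    training step, call update() with the current weights; evaluate with
    self.ema instead of the raw (noisier) online weights.
    """
    def __init__(self, weights, decay=0.99):
        self.decay = decay
        self.ema = {k: v.copy() for k, v in weights.items()}

    def update(self, weights):
        # ema <- decay * ema + (1 - decay) * current
        for k, v in weights.items():
            self.ema[k] = self.decay * self.ema[k] + (1.0 - self.decay) * v
```

Because the EMA changes slowly even when the online weights jump at a task boundary, evaluating the EMA copy smooths out the stability gap the abstract refers to.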
|
|
|
David Berga, & Xavier Otazu. (2019). Computations of inhibition of return mechanisms by modulating V1 dynamics. In 28th Annual Computational Neuroscience Meeting.
Abstract: In this study we present a unified model of the visual cortex for predicting visual attention in real image scenes. Feedforward mechanisms from the RGC and LGN have been functionally modeled using wavelet filters at distinct orientations and scales for each chromatic pathway (Magno-, Parvo-, Konio-cellular) and polarity (ON-/OFF-center), by processing image components in the CIE Lab space. In V1, we process cortical interactions with an excitatory-inhibitory network of firing-rate neurons, initially proposed by (Li, 1999) and later extended by (Penacchio et al., 2013). Firing rates from the model's output have been used as predictors of neuronal activity to be projected onto a map in the superior colliculus (with WTA-like computations), determining the locations of visual fixations. These locations are considered already-visited areas for future saccades; therefore we integrated a spatiotemporal function of inhibition of return mechanisms (for which LIP/FEF is responsible) to feed the model with spatial memory for subsequent saccades. Foveation mechanisms have been simulated with a cortical magnification function, which distorts spatial viewing properties for each fixation. Results show lower prediction errors than in cases without IoR (Fig. 1), and the model is functionally consistent with human psychophysical measurements. Our model follows a biologically-constrained architecture, previously shown to reproduce visual saliency (Berga & Otazu, 2018), visual discomfort (Penacchio et al., 2016), brightness (Penacchio et al., 2013) and chromatic induction (Cerda & Otazu, 2016).
|
|
|
Robert Benavente, & Maria Vanrell. (2007). Parametrización del Espacio de Categorías de Color [Parameterization of the Color Category Space].
|
|
|
Robert Benavente, C. Alejandro Parraga, & Maria Vanrell. (2010). La influencia del contexto en la definición de las fronteras entre las categorías cromáticas [The influence of context on the definition of the boundaries between chromatic categories]. In 9th Congreso Nacional del Color (92–95).
Abstract: In this paper we present the results of a color categorization experiment in which the samples were presented on a multicolored background (a Mondrian) to simulate the effects of context. The results are compared with those of a previous experiment that, using a different paradigm, determined the boundaries without taking context into account. The analysis of the results shows that the boundaries obtained in the in-context experiment present less confusion than those obtained in the experiment without context.
Keywords: Color categorization; Color appearance; Influence of context; Mondrian patterns; Parametric models
|
|
|
Debora Gil, Jaume Garcia, Ruth Aris, Guillaume Houzeaux, & Manuel Vazquez. (2009). A Riemannian approach to cardiac fiber architecture modelling. In R. L. R. V. L. Nithiarasu (Ed.), 1st International Conference on Mathematical & Computational Biomedical Engineering (pp. 59–62). Swansea (UK).
Abstract: There is general consensus that myocardial fiber architecture should be modelled in order to fully understand the electromechanical properties of the Left Ventricle (LV). Diffusion Tensor magnetic resonance Imaging (DTI) is the reference image modality for rapid measurement of fiber orientations by means of the tensor principal eigenvectors. In this work, we present a mathematical framework for across subject comparison of the local geometry of the LV anatomy including the fiber architecture from the statistical analysis of DTI studies. We use concepts of differential geometry for defining a parametric domain suitable for statistical analysis of a low number of samples. We use Riemannian metrics to define a consistent computation of DTI principal eigenvector modes of variation. Our framework has been applied to build an atlas of the LV fiber architecture from 7 DTI normal canine hearts.
Keywords: cardiac fiber architecture; diffusion tensor magnetic resonance imaging; differential (Riemannian) geometry.
|
|
|
Ali Furkan Biten, R. Tito, Andres Mafla, Lluis Gomez, Marçal Rusiñol, M. Mathew, et al. (2019). ICDAR 2019 Competition on Scene Text Visual Question Answering. In 3rd Workshop on Closing the Loop Between Vision and Language, in conjunction with ICCV2019.
Abstract: This paper presents the final results of the ICDAR 2019 Scene Text Visual Question Answering competition (ST-VQA). ST-VQA introduces an important aspect not addressed by any Visual Question Answering system to date, namely the incorporation of scene text to answer questions asked about an image. The competition introduces a new dataset comprising 23,038 images annotated with 31,791 question/answer pairs, where the answer is always grounded on text instances present in the image. The images are taken from 7 different public computer vision datasets, covering a wide range of scenarios. The competition was structured in three tasks of increasing difficulty that require reading the text in a scene and understanding it in the context of the scene to correctly answer a given question. A novel evaluation metric is presented, which elegantly assesses both key capabilities expected from an optimal model: text recognition and image understanding. A detailed analysis of results from different participants is showcased, providing insight into the current capabilities of VQA systems that can read. We firmly believe the dataset proposed in this challenge will be an important milestone on the path towards more robust and general models that can exploit scene text to achieve holistic image understanding.
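The evaluation metric is not named in the abstract; the ST-VQA challenge is commonly associated with Average Normalized Levenshtein Similarity (ANLS), which scores a prediction by edit-distance similarity to the closest ground-truth answer and zeroes out scores below a threshold. A minimal sketch under that assumption (normalization and thresholding details may differ from the official implementation):

```python
def levenshtein(a, b):
    """Classic edit distance via dynamic programming, O(len(a)*len(b))."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # deletion
                         cur[j - 1] + 1,                        # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[n]

def anls_score(prediction, answers, threshold=0.5):
    """Normalized Levenshtein similarity against the best-matching
    ground-truth answer; similarities below the threshold score zero,
    so near-misses count but unrelated strings do not."""
    best = 0.0
    for ans in answers:
        p, a = prediction.lower().strip(), ans.lower().strip()
        nl = levenshtein(p, a) / max(len(p), len(a), 1)
        best = max(best, 1.0 - nl)
    return best if best >= threshold else 0.0
```

The soft threshold is what lets the metric reward both capabilities at once: an OCR slip ("hotel" vs "hotei") keeps most of its score, while an answer grounded on the wrong text instance drops to zero.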
|
|