|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). Depth of Valleys Accumulation Algorithm for Object Detection. In 14th Congrès Català en Intel·ligencia Artificial (Vol. 1, pp. 71–80).
Abstract: This work aims at detecting the regions of the image in which objects appear, using information about the intensity of valleys, which appear surrounding objects in images where the light source is aligned with the camera's viewing direction. We present our depth of valleys accumulation method, which consists of two stages: first, the definition of the depth of valleys image, which combines the output of a ridges and valleys detector with the morphological gradient to measure how deep a point lies inside a valley; and second, an algorithm that labels as interior to objects those points of the image which lie inside complete or incomplete boundaries in the depth of valleys image. To evaluate the performance of our method we have tested it on several application domains. Our results on object region identification are promising, especially in the field of polyp detection in colonoscopy videos, and we also show its applicability in different areas.
Keywords: Object Recognition, Object Region Identification, Image Analysis, Image Processing
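A minimal sketch of the first stage described in the abstract, in NumPy/SciPy. A crude Hessian-trace valley detector stands in for the authors' ridge-and-valley detector, and the function name is ours; this illustrates the combination with the morphological gradient, not the paper's exact implementation.

```python
import numpy as np
from scipy import ndimage

def depth_of_valleys(image, size=3):
    """Depth-of-valleys image: valley response weighted by local contrast."""
    img = image.astype(float)
    # Morphological gradient: dilation minus erosion (local contrast).
    grad = (ndimage.grey_dilation(img, size=(size, size))
            - ndimage.grey_erosion(img, size=(size, size)))
    # Crude valley detector: a positive Hessian trace responds inside
    # dark intensity valleys (stand-in for a ridge/valley detector).
    gy, gx = np.gradient(img)
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    valleys = np.clip(gxx + gyy, 0.0, None)
    valleys /= valleys.max() + 1e-12        # normalize to [0, 1]
    # Points deep inside a valley score high on both terms.
    return valleys * grad
```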
|
|
|
Carme Julia, Felipe Lumbreras, & Angel Sappa. (2011). A Factorization-based Approach to Photometric Stereo. IJIST - International Journal of Imaging Systems and Technology, 21(1), 115–119.
Abstract: This article presents an adaptation of a factorization technique to tackle the photometric stereo problem, that is, to recover the surface normals and reflectance of an object from a set of images obtained under different lighting conditions. The main contribution of the proposed approach is to treat pixels in shadowed and saturated regions as missing data, in order to reduce their influence on the result. Concretely, an adapted Alternation technique is used to deal with the missing data. Experimental results on both synthetic and real images show the viability of the proposed factorization-based strategy. © 2011 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 21, 115–119, 2011.
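As a rough illustration of the missing-data Alternation idea (not the authors' exact implementation; all names are ours), the sketch below alternates per-pixel and per-image least-squares solves on an intensity matrix, masking shadowed or saturated entries:

```python
import numpy as np

def photometric_stereo_alternation(I, W, n_iter=50, rank=3):
    """Factorize I (pixels x images) into S (pixels x rank, scaled normals)
    and L (rank x images, lights), treating masked entries (W == 0,
    shadows or saturation) as missing data."""
    rng = np.random.default_rng(0)
    n_pix, n_img = I.shape
    L = rng.standard_normal((rank, n_img))
    S = np.zeros((n_pix, rank))
    for _ in range(n_iter):
        # Solve each pixel's scaled normal using only its observed images.
        for p in range(n_pix):
            obs = W[p] > 0
            S[p] = np.linalg.lstsq(L[:, obs].T, I[p, obs], rcond=None)[0]
        # Solve each image's light direction using only observed pixels.
        for j in range(n_img):
            obs = W[:, j] > 0
            L[:, j] = np.linalg.lstsq(S[obs], I[obs, j], rcond=None)[0]
    albedo = np.linalg.norm(S, axis=1)
    normals = S / (albedo[:, None] + 1e-12)
    return normals, albedo, L
```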
|
|
|
Koen E.A. van de Sande, Theo Gevers, & Cees G.M. Snoek. (2011). Empowering Visual Categorization with the GPU. TMM - IEEE Transactions on Multimedia, 13(1), 60–70.
Abstract: Visual categorization is important to manage large collections of digital images and video, where textual meta-data is often incomplete or simply unavailable. The bag-of-words model has become the most powerful method for visual categorization of images and video. Despite its high accuracy, a severe drawback of this model is its high computational cost. As newer CPU and GPU architectures increase their computational power chiefly by increasing their level of parallelism, exploiting this parallelism becomes an important direction for handling the computational cost of the bag-of-words approach. When optimizing a system based on the bag-of-words approach, the goal is to minimize the time it takes to process batches of images. Additionally, we also consider power usage as an evaluation metric. In this paper, we analyze the bag-of-words model for visual categorization in terms of computational cost and identify two major bottlenecks: the quantization step and the classification step. We address these two bottlenecks by proposing two efficient algorithms for quantization and classification that exploit the GPU hardware and the CUDA parallel programming model. The algorithms are designed to (1) keep categorization accuracy intact, (2) decompose the problem, and (3) give the same numerical results. Experiments on large-scale datasets show that, by using a parallel implementation on the GeForce GTX 260 GPU, classifying unseen images is 4.8 times faster than a quad-core CPU version on the Core i7 920, while giving the exact same numerical results. In addition, we show how the algorithms can be generalized to other applications, such as text retrieval and video retrieval. Moreover, when the obtained speedup is used to process extra video frames in a video retrieval benchmark, the accuracy of visual categorization is improved by 29%.
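The quantization bottleneck reduces to a nearest-neighbour search of descriptors against the visual codebook. The sketch below shows the matrix-multiply formulation of that search, a CPU/NumPy stand-in for the CUDA kernels the paper describes; function names are ours.

```python
import numpy as np

def quantize(descriptors, codebook, chunk=1024):
    """Assign each descriptor to its nearest visual word (the quantization
    bottleneck). Distances use ||d - c||^2 = ||d||^2 - 2 d.c + ||c||^2, so
    the dominant cost is one matrix multiply per chunk, the data-parallel
    pattern that maps naturally onto a GPU."""
    c_norm = (codebook ** 2).sum(axis=1)              # ||c||^2 per word
    labels = np.empty(len(descriptors), dtype=np.int64)
    for start in range(0, len(descriptors), chunk):
        d = descriptors[start:start + chunk]
        # ||d||^2 is constant per row, so it cannot change the argmin.
        dist = c_norm - 2.0 * (d @ codebook.T)
        labels[start:start + chunk] = dist.argmin(axis=1)
    return labels

def bag_of_words(labels, n_words):
    """Visual-word histogram for one image."""
    return np.bincount(labels, minlength=n_words)
```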
|
|
|
Kaida Xiao, Chenyang Fu, Dimosthenis Karatzas, & Sophie Wuerger. (2011). Visual Gamma Correction for LCD Displays. DIS - Displays, 32(1), 17–23.
Abstract: An improved method for visual gamma correction is developed for LCD displays to increase the accuracy of digital colour reproduction. Rather than utilising a photometric measurement device, we use observers' visual luminance judgements for gamma correction. Eight half-tone patterns were designed to generate relative luminances from 1/9 to 8/9 for each colour channel. A psychophysical experiment was conducted on an LCD display to find the digital signals corresponding to each relative luminance by visually matching the half-tone background to a uniform colour patch. Both inter- and intra-observer variability for the eight luminance matches in each channel were assessed and the luminance matches proved to be consistent across observers (ΔE00 < 3.5) and repeatable (ΔE00 < 2.2). Based on the individual observer judgements, the display opto-electronic transfer function (OETF) was estimated by using either a 3rd-order polynomial regression or linear interpolation for each colour channel. The performance of the proposed method is evaluated by predicting the CIE tristimulus values of a set of coloured patches (using the observer-based OETFs) and comparing them to the expected CIE tristimulus values (using the OETF obtained from spectro-radiometric luminance measurements). The resulting colour differences range from 2 to 4.6 ΔE00. We conclude that this observer-based method of visual gamma correction is useful to estimate the OETF for LCD displays. Its major advantage is that no particular functional relationship between digital inputs and luminance outputs has to be assumed.
Keywords: Display calibration; Psychophysics; Perceptual; Visual gamma correction; Luminance matching; Observer-based calibration
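A minimal sketch of the 3rd-order polynomial OETF fit described above. The matched digital values here are invented placeholders standing in for the psychophysical data; only the 1/9 to 8/9 target luminances come from the abstract.

```python
import numpy as np

# Relative luminances produced by the eight half-tone patterns.
target_luminance = np.arange(1, 9) / 9.0

# Hypothetical digital values an observer matched to each pattern
# (one channel; real values come from the psychophysical experiment).
matched_digital = np.array([72, 103, 126, 146, 163, 180, 195, 210])

# Estimate the OETF with a 3rd-order polynomial, anchoring the
# endpoints (digital 0 -> luminance 0, digital 255 -> luminance 1).
dv = np.concatenate(([0], matched_digital, [255]))
lum = np.concatenate(([0.0], target_luminance, [1.0]))
coeffs = np.polyfit(dv / 255.0, lum, deg=3)
oetf = np.poly1d(coeffs)

print(oetf(128 / 255.0))   # predicted relative luminance at mid-gray
```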
|
|
|
M. Visani, Oriol Ramos Terrades, & Salvatore Tabbone. (2011). A Protocol to Characterize the Descriptive Power and the Complementarity of Shape Descriptors. IJDAR - International Journal on Document Analysis and Recognition, 14(1), 87–100.
Abstract: Most document analysis applications rely on the extraction of shape descriptors, which may be grouped into different categories, each category having its own advantages and drawbacks (O.R. Terrades et al. in Proceedings of ICDAR'07, pp. 227–231, 2007). In order to improve the richness of their description, many authors choose to combine multiple descriptors. Yet, most of the authors who propose a new descriptor content themselves with comparing its performance to that of a set of single state-of-the-art descriptors in a specific applicative context (e.g. symbol recognition, symbol spotting...). This results in a proliferation of shape descriptors in the literature. In this article, we propose an innovative protocol, the originality of which is to be as independent of the final application as possible, and which relies on new quantitative and qualitative measures. We introduce two types of measures: while the measures of the first type are intended to characterize the descriptive power (in terms of uniqueness, distinctiveness and robustness towards noise) of a descriptor, the second type of measures characterizes the complementarity between multiple descriptors. Characterizing the complementarity of shape descriptors upstream is an alternative to the usual approach, where the descriptors to be combined are selected by trial and error on the basis of the performance of the overall system. To illustrate the contribution of this protocol, we performed experimental studies using a set of descriptors and a set of symbols which are widely used by the community, namely the ART and SC descriptors and the GREC 2003 database.
Keywords: Document analysis; Shape descriptors; Symbol description; Performance characterization; Complementarity analysis
|
|
|
Victor Ponce, Mario Gorga, Xavier Baro, Petia Radeva, & Sergio Escalera. (2011). Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial. ReVisión, 4(1).
Abstract: Oral communication and expression is a competence of particular relevance in the European Higher Education Area (EEES). Nevertheless, in many higher-education programmes the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed for extracting objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it by means of machine-learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to successful communication and to the evaluation criteria.
|
|
|
Muhammad Anwer Rao, David Vazquez, & Antonio Lopez. (2011). Opponent Colors for Human Detection. In J. Vitria, J.M. Sanches, & M. Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 363–370). LNCS. Berlin Heidelberg: Springer.
Abstract: Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a challenge of intensive research. Finding good features to build human models for further detection is probably one of the most important issues to face. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the use of the RGB color space has become the default choice. The focus has been put on developing first- and second-order features on top of RGB space (e.g., HOG and co-occurrence matrices, respectively). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and a linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, OPP space has not previously been used for human detection. It suggests that in the future it could be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of OPP space, i.e., as we have done with HOG in this paper.
Keywords: Pedestrian Detection; Color; Part Based Models
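For reference, one common definition of the RGB-to-opponent transform (the paper may normalize differently):

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Convert an H x W x 3 float RGB image to opponent color channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)               # red-green opponency
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)     # yellow-blue opponency
    o3 = (r + g + b) / np.sqrt(3.0)           # intensity
    return np.stack([o1, o2, o3], axis=-1)
```

HOG would then be computed per opponent channel instead of per RGB channel.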
|
|
|
Mariano Vazquez, Ruth Aris, Guillaume Hozeaux, R. Aubry, P. Villar, Jaume Garcia, et al. (2011). A massively parallel computational electrophysiology model of the heart. IJNMBE - International Journal for Numerical Methods in Biomedical Engineering, 27, 1911–1929.
Abstract: This paper presents a patient-sensitive simulation strategy capable of using high-performance computational resources in the most efficient way. The proposed strategy directly involves three different players: Computational Mechanics Scientists (CMS), Image Processing Scientists and Cardiologists, each one mastering its own expertise area within the project. This paper describes the general integrative scheme but, focusing on the CMS side, presents a massively parallel implementation of computational electrophysiology applied to cardiac tissue simulation. The paper covers different angles of the computational problem: equations, numerical issues, the algorithm and the parallel implementation. The proposed methodology is illustrated with numerical simulations testing all the different possibilities, ranging from small domains up to very large ones. A key issue is the almost ideal scalability, not only for large and complex problems but also for medium-size meshes. The explicit formulation is particularly well suited for solving these highly transient problems with very short time scales.
Keywords: computational electrophysiology; parallelization; finite element methods
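The explicit formulation is what makes the near-ideal scalability plausible: each node's update depends only on its neighbours. A toy explicit step of a FitzHugh-Nagumo reaction-diffusion model on a finite-difference grid (the paper solves more realistic cardiac models with finite elements; parameters and names here are illustrative) shows the pattern:

```python
import numpy as np

def fhn_step(u, w, dt=0.01, dx=1.0, D=1.0, a=0.1, eps=0.01,
             beta=0.5, gamma=1.0):
    """One explicit time step of a 2-D FitzHugh-Nagumo model with
    periodic boundaries. Fully explicit updates are embarrassingly
    parallel: every grid node reads only its four neighbours."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    u_new = u + dt * (D * lap + u * (1.0 - u) * (u - a) - w)
    w_new = w + dt * eps * (beta * u - gamma * w)
    return u_new, w_new
```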
|
|
|
Enric Marti, Ferran Poveda, Antoni Gurgui, & Debora Gil. (2011). Aprendizaje Basado en Proyectos en Ingeniería Informática. Resultados y reflexiones de seis años de experiencia.
Abstract: This workshop presents six years of experience with Project-Based Learning (PBL) in the Computer Graphics course of the Computer Engineering degree at the Autonomous University of Barcelona (UAB). We use a Moodle environment suited to managing the documentation generated in PBL. The course is organized into two alternative routes: a classic itinerary of lectures and test-based evaluation, and a PBL itinerary. For the PBL itinerary we explain the organization into team groups, the tutoring and monitoring of homework, and the evaluation guidelines for students. We show some of the work done by students, and the results of the assessment surveys carried out among students over these years. We report the evolution of our PBL itinerary in terms of both organization and student surveys.
The workshop aims at discussing the advantages and disadvantages of using these active methodologies in technical degrees such as computer engineering, in order to debate the most suitable way of organizing PBL and assessing students' learning.
|
|
|
Aura Hernandez-Sabate, Debora Gil, David Roche, Monica M. S. Matsumoto, & Sergio S. Furuie. (2011). Inferring the Performance of Medical Imaging Algorithms. In Pedro Real, Daniel Diaz-Pernil, Helena Molina-Abril, Ainhoa Berciano, & Walter Kropatsch (Eds.), 14th International Conference on Computer Analysis of Images and Patterns (Vol. 6854, pp. 520–528). LNCS. Berlin: Springer-Verlag Berlin Heidelberg.
Abstract: Evaluation of the performance and limitations of medical imaging algorithms is essential to estimate their impact in social, economic or clinical aspects. However, validation of medical imaging techniques is a challenging task due to the variety of imaging and clinical problems involved, as well as the difficulty of systematically extracting a single reliable ground truth. Although specific validation protocols are reported in every medical imaging paper, two major concerns remain: the definition of standardized methodologies transversal to all problems, and the generalization of conclusions to the whole clinical data set.
We claim that both issues would be fully solved if we had a statistical model relating the ground truth and the output of computational imaging techniques. Such a statistical model could conclude to what extent the algorithm behaves like the ground truth from the analysis of a sampling of the validation data set. We present a statistical inference framework that reports the agreement and describes the relationship between two quantities. We show its transversality by applying it to the validation of two different tasks: contour segmentation and landmark correspondence.
Keywords: Validation, Statistical Inference, Medical Imaging Algorithms.
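A minimal sketch of a regression-based agreement test in the spirit described above, assuming simple linear regression between ground truth and algorithm output (the paper's framework is more general; all names are ours):

```python
import numpy as np
from scipy import stats

def agreement_report(ground_truth, algorithm_output, alpha=0.05):
    """Fit algorithm_output ~ ground_truth and report whether perfect
    agreement (slope 1, intercept 0) is plausible at level alpha."""
    res = stats.linregress(ground_truth, algorithm_output)
    n = len(ground_truth)
    t = stats.t.ppf(1.0 - alpha / 2.0, n - 2)
    slope_ci = (res.slope - t * res.stderr,
                res.slope + t * res.stderr)
    icpt_ci = (res.intercept - t * res.intercept_stderr,
               res.intercept + t * res.intercept_stderr)
    return {
        "r": res.rvalue,
        "slope_ci": slope_ci,
        "intercept_ci": icpt_ci,
        "agrees": (slope_ci[0] <= 1.0 <= slope_ci[1]
                   and icpt_ci[0] <= 0.0 <= icpt_ci[1]),
    }
```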
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2011). An inference model for analyzing termination conditions of Evolutionary Algorithms. In 14th Congrès Català en Intel·ligencia Artificial (pp. 216–225).
Abstract: In real-world problems, it is mandatory to design a termination condition for Evolutionary Algorithms (EAs) ensuring stabilization close to the unknown optimum. Distribution-based quantities are good candidates, provided that suitable parameters are used. A main limitation for application to real-world problems is that such parameters strongly depend on the topology of the objective function, as well as on the EA paradigm used.
We claim that the termination problem would be fully solved if we had a model measuring to what extent a distribution-based quantity asymptotically behaves like the solution accuracy. We present a regression-prediction model that relates any two given quantities and reports whether they can be statistically swapped as termination conditions. Our framework is applied to two issues: first, exploring whether the parameters involved in the computation of distribution-based quantities influence their asymptotic behavior; and second, assessing to what extent existing distribution-based quantities can be asymptotically exchanged for the accuracy of the EA solution.
Keywords: Evolutionary Computation Convergence, Termination Conditions, Statistical Inference
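A toy illustration of the setup, assuming a simple truncation-selection evolution strategy on the sphere function: a distribution-based quantity (population fitness spread) and the true solution accuracy are logged per generation, and their correlation over the run is checked. Everything here (algorithm, parameters, names) is ours, not the paper's.

```python
import numpy as np

def run_es(f, dim=10, pop=50, gens=200, sigma=0.3, seed=0):
    """Toy evolution strategy minimizing f; logs the population fitness
    spread (a termination-condition candidate) and the best fitness
    (the solution accuracy, since the optimum of f is 0)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (pop, dim))
    spread, accuracy = [], []
    for _ in range(gens):
        fit = np.apply_along_axis(f, 1, x)
        elite = x[np.argsort(fit)[:pop // 4]]        # keep best quarter
        x = (elite[rng.integers(0, len(elite), pop)]
             + sigma * rng.standard_normal((pop, dim)))
        spread.append(fit.std())
        accuracy.append(fit.min())
    return np.array(spread), np.array(accuracy)

sphere = lambda v: float(np.dot(v, v))
s, a = run_es(sphere)
# Correlation between the two log-quantities over the run:
print(np.corrcoef(np.log(s + 1e-12), np.log(a + 1e-12))[0, 1])
```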
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2011). Using statistical inference for designing termination conditions ensuring convergence of Evolutionary Algorithms. In 11th European Conference on Artificial Life.
Abstract: A main challenge in Evolutionary Algorithms (EAs) is determining a termination condition ensuring stabilization close to the optimum in real-world applications. Although for known test functions distribution-based quantities are good candidates (provided that suitable parameters are used), in real-world problems an open question remains unsolved: how can we estimate an upper bound for the termination condition value ensuring a given accuracy for the (unknown) EA solution?
We claim that the termination problem would be fully solved if we defined a quantity (depending only on the EA output) behaving like the solution accuracy. The open question would then be satisfactorily answered if we had a model relating both quantities, since accuracy could be predicted from the alternative quantity. We present a statistical inference framework addressing two topics: checking the correlation between the two quantities, and defining a regression model for predicting (at a given confidence level) accuracy values from the EA output.
|
|
|
Ferran Poveda, Debora Gil, Albert Andaluz, & Enric Marti. (2011). Multiscale Tractography for Representing Heart Muscular Architecture. In MICCAI 2011 Workshop on Computational Diffusion MRI.
Abstract: A deep understanding of the myocardial structure of the heart would unravel crucial knowledge for clinical and medical procedures. Although the muscular architecture of the heart has been debated by countless researchers, the controversy is still alive. Diffusion Tensor MRI (DT-MRI) is a unique imaging technique for computational validation of the muscular structure of the heart. Because of the complex arrangement of myocytes, existing techniques cannot provide comprehensive descriptions of the global muscular architecture. In this paper we introduce a multiresolution reconstruction technique based on DT-MRI streamlining for the generation of simplified global myocardial models. Our reconstructions can restore the most complex myocardial structures and indicate a global helical organization.
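The streamlining primitive can be sketched as Euler integration along the principal eigenvector of the local diffusion tensor. This is a simplified stand-in for the paper's multiresolution pipeline, assuming a tensor field of shape (X, Y, Z, 3, 3) and a seed inside the volume; names are ours.

```python
import numpy as np

def principal_direction(tensor_field, p):
    """Principal eigenvector of the diffusion tensor nearest to point p."""
    i, j, k = np.round(p).astype(int)
    w, v = np.linalg.eigh(tensor_field[i, j, k])   # ascending eigenvalues
    return v[:, -1]                                # largest-eigenvalue axis

def streamline(tensor_field, seed, step=0.5, n_steps=200):
    """Euler integration of a fiber track along the principal
    diffusion direction."""
    pts = [np.asarray(seed, float)]
    d_prev = principal_direction(tensor_field, pts[0])
    for _ in range(n_steps):
        d = principal_direction(tensor_field, pts[-1])
        if np.dot(d, d_prev) < 0:     # keep a consistent orientation
            d = -d
        nxt = pts[-1] + step * d
        if not all(0 <= nxt[a] < tensor_field.shape[a] - 1 for a in range(3)):
            break                     # left the volume
        pts.append(nxt)
        d_prev = d
    return np.array(pts)
```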
|
|
|
Patricia Marquez, Debora Gil, & Aura Hernandez-Sabate. (2011). A Confidence Measure for Assessing Optical Flow Accuracy in the Absence of Ground Truth. In IEEE International Conference on Computer Vision – Workshops (pp. 2042–2049). Barcelona (Spain): IEEE.
Abstract: Optical flow is a valuable tool for motion analysis in autonomous navigation systems. A reliable application requires determining the accuracy of the computed optical flow, which is a main challenge given the absence of ground truth in real-world sequences. This paper introduces a measure of optical flow accuracy for Lucas-Kanade based flows in terms of the numerical stability of the data term. We call this measure the optical flow condition number. A statistical analysis over ground-truth data shows a good correlation between the condition number and the optical flow error. Experiments on driving sequences illustrate its potential for autonomous navigation systems.
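A sketch of the underlying idea, assuming the classical Gaussian-weighted Lucas-Kanade structure tensor (the paper defines its measure on the LK data term; this illustrative version and its name are ours):

```python
import numpy as np
from scipy import ndimage

def lk_condition_number(image, sigma=2.0):
    """Per-pixel condition number of the Lucas-Kanade structure tensor:
    ratio of its largest to smallest eigenvalue. Large values flag points
    where the data term is numerically unstable and the computed flow
    should not be trusted."""
    iy, ix = np.gradient(image.astype(float))
    # Gaussian-weighted second-moment (structure) tensor entries.
    jxx = ndimage.gaussian_filter(ix * ix, sigma)
    jxy = ndimage.gaussian_filter(ix * iy, sigma)
    jyy = ndimage.gaussian_filter(iy * iy, sigma)
    # Closed-form eigenvalues of the symmetric 2x2 tensor.
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 - 4.0 * det, 0.0))
    lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0
    return lam_max / np.maximum(lam_min, 1e-12)
```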
|
|