|
Albert Gordo, & Florent Perronnin. (2011). Asymmetric Distances for Binary Embeddings. In IEEE Conference on Computer Vision and Pattern Recognition (pp. 729–736).
Abstract: In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes which binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances which are applicable to a wide variety of embedding techniques including Locality Sensitive Hashing (LSH), Locality Sensitive Binary Codes (LSBC), Spectral Hashing (SH) and Semi-Supervised Hashing (SSH). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques. We also propose a novel simple binary embedding technique – PCA Embedding (PCAE) – which is shown to yield competitive results with respect to more complex algorithms such as SH and SSH.
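The asymmetric idea described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's exact estimators: the embedding here is a single random sign projection (LSH-style), and simple per-bit centroids stand in for the paper's expectation-based asymmetric distances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2000 database vectors and one query in R^16.
X = rng.normal(size=(2000, 16))
q = rng.normal(size=16)

# Sign binarization after a random projection (one of the many
# embeddings the paper covers).
R = rng.normal(size=(16, 16))
P = X @ R                  # real-valued database projections
B = P > 0                  # binary database codes (stored)
qp = q @ R                 # real-valued query projection (kept unbinarized)

# Symmetric Hamming distance: binarize the query too.
qb = qp > 0
d_sym = np.count_nonzero(B != qb, axis=1)

# Asymmetric distance: per bit, use the mean projected value of database
# points on each side of the threshold as a centroid, then accumulate
# squared distances from the real query projection to those centroids.
mu0 = np.where(B, 0.0, P).sum(0) / np.maximum((~B).sum(0), 1)  # centroid, bit = 0
mu1 = np.where(B, P, 0.0).sum(0) / np.maximum(B.sum(0), 1)     # centroid, bit = 1
centroids = np.where(B, mu1, mu0)          # (2000, 16) per-bit centroids
d_asym = ((qp - centroids) ** 2).sum(1)
```

The database side stays binary (only `B` and 16 centroid values per bit are stored), so compression is preserved while the query retains its full precision.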
|
|
|
Enric Marti, Ferran Poveda, Antoni Gurgui, & Debora Gil. (2011). Aprendizaje Basado en Proyectos en Ingeniería Informática. Resultados y reflexiones de seis años de experiencia.
Abstract: This workshop presents six years of experience with Project-Based Learning (PBL) in the Computer Graphics subject of the Computer Engineering degree at the Autonomous University of Barcelona (UAB). We use a Moodle environment adapted to manage the documentation generated in PBL. The course is organized into two alternative itineraries: a classic one based on lectures and test-based evaluation, and another based on PBL. For the PBL itinerary we explain the organization of team groups, homework tutoring and monitoring, and the evaluation guidelines for students. We show some of the work done by students, along with the results of assessment surveys administered to students over these years, and report the evolution of our PBL itinerary in terms of both organization and student surveys.
The workshop aims to discuss the advantages and disadvantages of using these active methodologies in technical degrees such as Computer Engineering, in order to debate the most suitable way of organizing PBL and of assessing student learning.
|
|
|
Pierluigi Casale. (2011). Approximate Ensemble Methods for Physical Activity Recognition Applications (Oriol Pujol, & Petia Radeva, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The main interest of this thesis is computational methodologies able to reduce the degree of complexity of learning algorithms, and their application to physical activity recognition.
Random projections are used to reduce the computational complexity of Multiple Classifier Systems. A new boosting algorithm and a new one-class classification methodology have been developed. In both cases, random projections are used to reduce the dimensionality of the problem and to generate diversity, exploiting in this way the benefits that ensembles of classifiers provide in terms of performance and stability. Moreover, the new one-class classification methodology, based on an ensemble strategy able to approximate a multidimensional convex hull, has been shown to outperform state-of-the-art one-class classification methodologies.
The practical focus of the thesis is physical activity recognition. A new hardware platform for wearable computing applications has been developed and used to collect data on activities of daily living, allowing the study of the optimal feature set for successfully classifying activities.
In the last part of the thesis, building on the classification methodologies developed and the study conducted on physical activity classification, a machine learning architecture has been worked out that provides a continuous authentication mechanism for mobile-device users. The system, based on a personalized classifier, relies on the analysis of the characteristic gait patterns typical of each individual, ensuring an unobtrusive and continuous authentication mechanism.
|
|
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2011). Approximate Convex Hulls Family for One-Class Classification. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International Workshop on Multiple Classifier Systems (Vol. 6713, pp. 106–115). LNCS. Springer Berlin Heidelberg.
Abstract: In this work, a new method for one-class classification based on the convex hull geometric structure is proposed. The new method creates a family of convex hulls able to fit the geometrical shape of the training points. The increased computational cost due to the creation of the convex hull in multiple dimensions is circumvented using random projections. This provides an approximation of the original structure with multiple bi-dimensional views. In the projection planes, a mechanism for rejecting noisy points has also been elaborated and evaluated. Results show that the approach performs considerably well with respect to the state of the art in one-class classification.
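A minimal sketch of the approximation idea: project the training set onto random 2-D planes, build one convex hull per plane, and score a test point by how many of the 2-D views contain it. The number of projections and the decision rule are illustrative assumptions; the published method also includes hull expansion and the noise-rejection mechanism, omitted here.

```python
import numpy as np

def cross(o, a, b):
    # 2-D cross product of (a - o) and (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull2d(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(map(tuple, pts))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return np.array(chain(pts) + chain(pts[::-1]))

def inside(hull, p):
    """True if p lies inside (or on) the CCW convex polygon `hull`."""
    e = np.roll(hull, -1, axis=0) - hull   # edge vectors
    v = p - hull                            # vertex-to-point vectors
    return bool(np.all(e[:, 0] * v[:, 1] - e[:, 1] * v[:, 0] >= -1e-9))

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 10))          # one-class training set in R^10
n_proj = 25
projs = [rng.normal(size=(10, 2)) for _ in range(n_proj)]
hulls = [hull2d(train @ P) for P in projs]  # one 2-D hull per random view

def score(x):
    """Fraction of 2-D views whose hull contains the projected point."""
    return sum(inside(h, x @ P) for h, P in zip(hulls, projs)) / n_proj

# A training point projects inside every view's hull; a far outlier does not.
scores = score(train[0]), score(np.full(10, 8.0))
```

Each 2-D hull is cheap to build and test, which is what makes the ensemble of views an affordable stand-in for the intractable high-dimensional hull.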
|
|
|
Lluis Pere de las Heras, & Gemma Sanchez. (2011). And-Or Graph Grammar for Architectural Floorplan Representation, Learning and Recognition. A Semantic, Structural and Hierarchical Model. In 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 17–24).
Abstract: This paper presents a syntactic model for architectural floor plan interpretation. A stochastic image grammar over an And-Or graph is inferred to represent the hierarchical, structural and semantic relations between elements of all possible floor plans. This grammar is augmented with three different probabilistic models, learnt from a training set, to account for the frequency of those relations. Then, a Bottom-Up/Top-Down parser with a pruning strategy is used for floor plan recognition. For a given input, the parser generates the most probable parse graph for that document. This graph contains not only the structural and semantic relations of its elements, but also their hierarchical composition, which allows the floor plan to be interpreted at different levels of abstraction.
|
|
|
Victor Ponce, Mario Gorga, Xavier Baro, Petia Radeva, & Sergio Escalera. (2011). Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial. ReVisión, 4(1).
Abstract: Oral communication and expression is a competence of particular relevance in the European Higher Education Area (EHEA). Nevertheless, in many higher-education programmes the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyses it using machine learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria.
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2011). An inference model for analyzing termination conditions of Evolutionary Algorithms. In 14th Congrès Català en Intel·ligencia Artificial (pp. 216–225).
Abstract: In real-world problems, it is mandatory to design a termination condition for Evolutionary Algorithms (EAs) that ensures stabilization close to the unknown optimum. Distribution-based quantities are good candidates provided suitable parameters are used. A main limitation for application to real-world problems is that such parameters strongly depend on the topology of the objective function as well as on the EA paradigm used.
We claim that the termination problem would be fully solved if we had a model measuring to what extent a distribution-based quantity asymptotically behaves like the solution accuracy. We present a regression-prediction model that relates any two given quantities and reports whether they can be statistically swapped as termination conditions. Our framework is applied to two issues: first, exploring whether the parameters involved in the computation of distribution-based quantities influence their asymptotic behavior; second, determining to what extent existing distribution-based quantities can be asymptotically exchanged for the accuracy of the EA solution.
Keywords: Evolutionary Computation Convergence, Termination Conditions, Statistical Inference
|
|
|
Miguel Reyes, Jose Ramirez Moreno, Juan R Revilla, Petia Radeva, & Sergio Escalera. (2011). ADiBAS: Sistema Multisensor de Adquisicion Automatica de Datos Corporales Objetivos, Robustos y Fiables para el Analisis de la Postura y el Movimiento. In 6th Congreso Iberoamericano de Tecnologia de Apoyo a la Discapacidad (pp. 939–944).
Abstract: The analysis of posture and range of motion is fundamental to understanding the optimization of a gesture and thereby improving performance and the detection of possible injuries. This quantification is especially interesting for athletes or for patients with a neurological or musculoskeletal injury, since it makes it possible to follow the evolution of these patients, evaluate the efficacy of the applied therapy and propose, where necessary, a modification of the treatment protocol.
In this work we present an automatic system that, by means of a non-invasive technology, automatically captures LED markers placed on the patient and analyses them in order to show the specialist objective data for better diagnostic support. We also describe a marker-less system for analysing body posture, whose execution during dynamic sequences gives the patient a high degree of naturalness when performing functional exercises.
|
|
|
Oscar Amoros, Sergio Escalera, & Anna Puig. (2011). Adaboost GPU-based Classifier for Direct Volume Rendering. In International Conference on Computer Graphics Theory and Applications (pp. 215–219).
Abstract: In volume visualization, voxel visibility and materials are assigned through interactive editing of the transfer function. In this paper, we present a two-level GPU-based labeling method that computes, at rendering time, a set of labeled structures using the AdaBoost machine learning classifier. In a pre-processing step, AdaBoost trains a binary classifier from a pre-labeled dataset, taking into account a set of features for each sample. This binary classifier is a weighted combination of weak classifiers, each of which can be expressed as a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. We propose an alternative representation of these classifiers that allows a GPU-parallelized testing stage embedded into the visualization pipeline. The empirical results confirm the OpenCL-based classification of biomedical datasets as a tough problem where an opportunity for further research emerges.
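The representation the paper motivates can be illustrated with a flat array-of-stumps layout. Decision stumps as weak learners are an assumption here, and the random "trained" parameters are placeholders; the sketch shows why the testing stage is data-parallel (one independent row per sample), not the authors' actual OpenCL kernels.

```python
import numpy as np

rng = np.random.default_rng(3)

# A "trained" binary AdaBoost model stored as flat arrays: one decision
# stump per weak learner (feature index, threshold, polarity, weight
# alpha). This array layout is what makes the testing stage data-parallel.
n_weak, n_feat = 32, 8
feat = rng.integers(0, n_feat, n_weak)       # which feature each stump reads
thresh = rng.normal(size=n_weak)             # stump thresholds
polarity = rng.choice([-1.0, 1.0], n_weak)   # direction of each stump
alpha = rng.uniform(0.1, 1.0, n_weak)        # AdaBoost weights

def classify(X):
    """Evaluate every stump on every sample at once; sign of weighted sum."""
    votes = polarity * np.sign(X[:, feat] - thresh)   # (n_samples, n_weak)
    return np.sign(votes @ alpha)                     # strong-classifier label

# One label per voxel/sample; each row is independent of all others,
# so on a GPU each work-item can evaluate one sample.
labels = classify(rng.normal(size=(1000, n_feat)))
```

Because every sample's label depends only on its own feature row and the shared stump arrays, the loop maps directly onto a GPU kernel with one thread per sample.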
|
|
|
Ariel Amato, Mikhail Mozerov, Andrew Bagdanov, & Jordi Gonzalez. (2011). Accurate Moving Cast Shadow Suppression Based on Local Color Constancy detection. TIP - IEEE Transactions on Image Processing, 20(10), 2954–2966.
Abstract: This paper describes a novel framework for detection and suppression of properly shadowed regions for most possible scenarios occurring in real video sequences. Our approach requires no prior knowledge about the scene, nor is it restricted to specific scene structures. Furthermore, the technique can detect both achromatic and chromatic shadows even in the presence of camouflage that occurs when foreground regions are very similar in color to shadowed regions. The method exploits local color constancy properties due to reflectance suppression over shadowed regions. To detect shadowed regions in a scene, the values of the background image are divided by values of the current frame in the RGB color space. We show how this luminance ratio can be used to identify segments with low gradient constancy, which in turn distinguish shadows from foreground. Experimental results on a collection of publicly available datasets illustrate the superior performance of our method compared with the most sophisticated, state-of-the-art shadow detection algorithms. These results show that our approach is robust and accurate over a broad range of shadow types and challenging video conditions.
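The core cue (background divided by current frame, then a gradient-constancy test on the ratio) can be sketched on synthetic data. The thresholds and the synthetic scene are illustrative assumptions, not the paper's tuned pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 64x64 RGB background, plus a current frame where a "shadow"
# darkens a block multiplicatively and an "object" replaces pixels.
bg = rng.uniform(0.3, 0.9, size=(64, 64, 3))
frame = bg.copy()
frame[10:30, 10:30] *= 0.5                              # cast shadow: uniform attenuation
frame[40:60, 40:60] = rng.uniform(0, 1, (20, 20, 3))    # true foreground object

# Luminance ratio: background divided by current frame, per RGB channel.
ratio = bg / np.clip(frame, 1e-6, None)

# A shadow attenuates all channels by a locally constant factor, so the
# spatial gradient of the ratio stays small there; real foreground,
# which replaces the underlying colors, does not have this property.
gy, gx = np.gradient(ratio.mean(axis=2))
grad = np.hypot(gx, gy)

changed = np.abs(frame - bg).sum(axis=2) > 0.05   # foreground + shadow mask
shadow = changed & (grad < 0.05)                  # low ratio-gradient => shadow
```

Inside the shadow block the ratio is a constant 2, so its gradient vanishes and the region is suppressed; over the object the ratio varies pixel to pixel and survives as foreground.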
|
|
|
Antonio Hernandez, Carlo Gatta, Sergio Escalera, Laura Igual, Victoria Martin Yuste, & Petia Radeva. (2011). Accurate and Robust Fully-Automatic QCA: Method and Numerical Validation. In 14th International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 14, pp. 496–503). Springer.
Abstract: Quantitative Coronary Angiography (QCA) is a methodology used to evaluate arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method's performance in a rigorous numerical way on two datasets. The method can detect an artery with precision 92.9 +/- 5% and sensitivity 94.2 +/- 6%. The average absolute distance error between detected and ground truth centerline is 1.13 +/- 0.11 pixels (about 0.27 +/- 0.025 mm) and the absolute relative error in the vessel caliber estimation is 2.93% with almost no bias. Moreover, the method can discriminate between arteries and catheter with an accuracy of 96.4%.
|
|
|
Bhaskar Chakraborty, Michael Holte, Thomas B. Moeslund, Jordi Gonzalez, & Xavier Roca. (2011). A Selective Spatio-Temporal Interest Point Detector for Human Action Recognition in Complex Scenes. In 13th IEEE International Conference on Computer Vision (pp. 1776–1783).
Abstract: Recent progress in the field of human action recognition points towards the use of Spatio-Temporal Interest Points (STIPs) for local descriptor-based recognition strategies. In this paper we present a new approach for STIP detection by applying surround suppression combined with local and temporal constraints. Our method is significantly different from existing STIP detectors and improves the performance by detecting more repeatable, stable and distinctive STIPs for human actors, while suppressing unwanted background STIPs. For action representation we use a bag-of-visual words (BoV) model of local N-jet features to build a vocabulary of visual-words. To this end, we introduce a novel vocabulary building strategy by combining spatial pyramid and vocabulary compression techniques, resulting in improved performance and efficiency. Action class specific Support Vector Machine (SVM) classifiers are trained for categorization of human actions. A comprehensive set of experiments on existing benchmark datasets, and more challenging datasets of complex scenes, validate our approach and show state-of-the-art performance.
|
|
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). A Region Segmentation Method for Colonoscopy Images Using a Model of Polyp Appearance. In J. Vitrià, J. M. Sanches, & M. Hernández (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 134–143). LNCS.
Abstract: This work aims at the segmentation of colonoscopy images into a minimum number of informative regions. Our method performs in such a way that, if a polyp is present in the image, it will be exclusively and totally contained in a single region. This result can be used in later stages to classify regions as polyp-containing candidates. The output of the algorithm also defines which regions can be considered non-informative. The algorithm starts with a high number of initial regions and merges them taking into account a model of polyp appearance obtained from available data. The results show that our segmentations of polyp regions are more accurate than those of state-of-the-art methods.
Keywords: Colonoscopy, Polyp Detection, Region Merging, Region Segmentation.
|
|
|
M. Visani, Oriol Ramos Terrades, & Salvatore Tabbone. (2011). A Protocol to Characterize the Descriptive Power and the Complementarity of Shape Descriptors. IJDAR - International Journal on Document Analysis and Recognition, 14(1), 87–100.
Abstract: Most document analysis applications rely on the extraction of shape descriptors, which may be grouped into different categories, each category having its own advantages and drawbacks (O.R. Terrades et al. in Proceedings of ICDAR’07, pp. 227–231, 2007). In order to improve the richness of their description, many authors choose to combine multiple descriptors. Yet, most of the authors who propose a new descriptor content themselves with comparing its performance to the performance of a set of single state-of-the-art descriptors in a specific applicative context (e.g. symbol recognition, symbol spotting...). This results in a proliferation of the shape descriptors proposed in the literature. In this article, we propose an innovative protocol, the originality of which is to be as independent of the final application as possible and which relies on new quantitative and qualitative measures. We introduce two types of measures: while the measures of the first type are intended to characterize the descriptive power (in terms of uniqueness, distinctiveness and robustness towards noise) of a descriptor, the second type of measures characterizes the complementarity between multiple descriptors. Characterizing upstream the complementarity of shape descriptors is an alternative to the usual approach where the descriptors to be combined are selected by trial and error, considering the performance characteristics of the overall system. To illustrate the contribution of this protocol, we performed experimental studies using a set of descriptors and a set of symbols which are widely used by the community namely ART and SC descriptors and the GREC 2003 database.
Keywords: Document analysis; Shape descriptors; Symbol description; Performance characterization; Complementarity analysis
|
|