|
Marco Pedersoli, Jordi Gonzalez, Andrew Bagdanov, & Juan J. Villanueva. (2010). Recursive Coarse-to-Fine Localization for fast Object Recognition. In 11th European Conference on Computer Vision (Vol. 6313, pp. 280–293). LNCS. Springer Berlin Heidelberg.
Abstract: Cascading techniques are commonly used to speed up the scan of an image for object detection. However, cascades of detectors are slow to train due to the high number of detectors and corresponding thresholds to learn. Furthermore, they do not use any prior knowledge about the scene structure to decide where to focus the search. To handle these problems, we propose a new way to scan an image, coupling a recursive coarse-to-fine refinement with spatial constraints on the object location. To do so, we split an image into a set of uniformly distributed neighborhood regions and, for each of these, apply a local greedy search over feature resolutions. The neighborhood is defined as a scanning region that only one object can occupy. Therefore the best hypothesis is obtained as the location with maximum score, and no thresholds are needed. We present an implementation of our method using a pyramid of HOG features and evaluate it on two standard databases, the VOC2007 and INRIA datasets. Results show that the Recursive Coarse-to-Fine Localization (RCFL) achieves a 12x speed-up compared to standard sliding windows. Compared with a multiple-resolution cascade approach, our method performs slightly better in both speed and Average Precision. Furthermore, in contrast to the cascading approach, the speed-up is independent of image conditions, the number of detected objects, and clutter.
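The recursive coarse-to-fine search the abstract describes can be illustrated with a toy sketch: scan exhaustively only on a coarse grid, then greedily refine around the current best location at progressively finer steps. This is a hypothetical illustration of the idea, not the authors' HOG-pyramid implementation; the region size, step schedule, and `score` interface are all assumptions.

```python
def coarse_to_fine_argmax(score, size=64, coarse=16, levels=4):
    """Greedy coarse-to-fine search for the best-scoring location.

    `score(x, y)` is any detector response (hypothetical interface).
    Instead of evaluating every position, we scan a coarse grid once,
    then repeatedly refine a 3x3 neighborhood of the winner.
    """
    # Level 0: exhaustive scan, but only on the coarse grid.
    grid = [(x, y) for x in range(0, size, coarse)
                   for y in range(0, size, coarse)]
    best = max(grid, key=lambda p: score(*p))
    step = coarse // 2
    for _ in range(levels):
        # Refinement: evaluate only a small neighborhood of the winner.
        cand = [(best[0] + dx, best[1] + dy)
                for dx in (-step, 0, step) for dy in (-step, 0, step)]
        cand = [(x, y) for x, y in cand if 0 <= x < size and 0 <= y < size]
        best = max(cand, key=lambda p: score(*p))
        step = max(1, step // 2)
    return best
```

Because only the maximum-score location per neighborhood is kept, no per-stage thresholds are needed, which is the contrast with cascades drawn in the abstract.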
|
|
|
Mario Rojas, David Masip, A. Todorov, & Jordi Vitria. (2010). Automatic Point-based Facial Trait Judgments Evaluation. In 23rd IEEE Conference on Computer Vision and Pattern Recognition (pp. 2715–2720).
Abstract: Humans constantly evaluate the personalities of other people using their faces. Facial trait judgments have been studied in the psychological field and have been determined to influence important social outcomes of our lives, such as election outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Further, behavioral studies suggest that two orthogonal dimensions, valence and dominance, can describe the basis of human judgments from faces. In this paper, we use a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers and later used in automatic evaluations of new facial images. The experiments performed using local point-based descriptors show promising results in the evaluation of the main traits.
|
|
|
Marta Teres, & Eduard Vazquez. (2010). Museums, spaces and museographical resources. Current state and proposals for a multidisciplinary framework to open new perspectives. In Proceedings of The CREATE 2010 Conference (pp. 319–323).
Abstract: Two of the main aims of a museum are to communicate its heritage and to give enjoyment to its visitors. This communication can take place through the pieces themselves and the museographical resources, but also through the building, the interior design, the light, and the colour. Art museums, in contrast with other museums, lag behind in the application of these additional resources. Such a work necessarily requires a multidisciplinary point of view, in order to obtain a holistic vision of everything a museum implies and to use its full potential as a tool of knowledge and culture for all visitors.
|
|
|
Mathieu Nicolas Delalandre, Ernest Valveny, Tony Pridmore, & Dimosthenis Karatzas. (2010). Generation of Synthetic Documents for Performance Evaluation of Symbol Recognition & Spotting Systems. IJDAR - International Journal on Document Analysis and Recognition, 13(3), 187–207.
Abstract: This paper deals with the topic of performance evaluation of symbol recognition & spotting systems. We propose a new approach to the generation of synthetic graphics documents containing non-isolated symbols in a real context. This approach is based on the definition of a set of constraints that permit us to place the symbols on a pre-defined background according to the properties of a particular domain (architecture, electronics, engineering, etc.). In this way, we can obtain a large number of images resembling real documents by simply defining the set of constraints and providing a few pre-defined backgrounds. As documents are synthetically generated, the groundtruth (the location and the label of every symbol) becomes automatically available. We have applied this approach to the generation of a large database of architectural drawings and electronic diagrams, which shows the flexibility of the system. Performance evaluation experiments of a symbol localization system show that our approach makes it possible to generate documents with different features that are reflected in variations in the localization results.
|
|
|
Mathieu Nicolas Delalandre, Jean-Yves Ramel, Ernest Valveny, & Muhammad Muzzamil Luqman. (2010). A Performance Characterization Algorithm for Symbol Localization. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 260–271). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we present an algorithm for performance characterization of symbol localization systems. It is intended as a more "reliable" and "open" solution for characterizing performance. To achieve this, it exploits only single points as the result of localization and offers the possibility to reconsider the localization results provided by a system. We use the information about context in the groundtruth, together with the overall localization results, to detect ambiguous localization results. A probability score is computed for each matching between a localization point and a groundtruth region, depending on the spatial distribution of the other regions in the groundtruth. The final characterization is given as detection rate/probability score plots, describing the sets of possible interpretations of the localization results according to a given confidence rate. We present experimentation details along with results for the symbol localization system of [1], exploiting a synthetic dataset of architectural floorplans and electrical diagrams (composed of 200 images and 3861 symbols).
|
|
|
Maurizio Mencuccini, Jordi Martinez-Vilalta, Josep Piñol, Lasse Loepfe, Mireia Burnat, Xavier Alvarez, et al. (2010). A quantitative and statistically robust method for the determination of xylem conduit spatial distribution. AJB - American Journal of Botany, 97(8), 1247–1259.
Abstract: Premise of the study: Because of their limited length, xylem conduits need to connect to each other to maintain water transport from roots to leaves. Conduit spatial distribution in a cross section plays an important role in aiding this connectivity. While indices of conduit spatial distribution already exist, they are not well defined statistically. * Methods: We used point pattern analysis to derive new spatial indices. One hundred and five cross-sectional images from different species were transformed into binary images. The resulting point patterns, based on the locations of the conduit centers-of-area, were analyzed to determine whether they departed from randomness. Conduit distribution was then modeled using a spatially explicit stochastic model. * Key results: The presence of conduit randomness, uniformity, or aggregation depended on the spatial scale of the analysis. The large majority of the images showed patterns significantly different from randomness at least at one spatial scale. A strong phylogenetic signal was detected in the spatial variables. * Conclusions: Conduit spatial arrangement has been largely conserved during evolution, especially at small spatial scales. Species in which conduits were aggregated in clusters had a lower conduit density compared to those with uniform distribution. Statistically sound spatial indices must be employed as an aid in the characterization of distributional patterns across species and in models of xylem water transport. Point pattern analysis is a very useful tool in identifying spatial patterns.
Keywords: Geyer; hydraulic conductivity; point pattern analysis; Ripley; Spatstat; vessel clusters; xylem anatomy; xylem network
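A minimal sketch of the kind of spatial-randomness test the abstract relies on is the Clark–Evans aggregation index, a simpler cousin of the Ripley/Spatstat analyses named in the keywords. This is an illustrative stand-in, not the paper's method: it omits edge correction and the spatially explicit stochastic model.

```python
import numpy as np

def clark_evans(points, area):
    """Clark-Evans aggregation index for a 2-D point pattern.

    R is close to 1 under complete spatial randomness, R < 1 indicates
    clustering, and R > 1 indicates uniformity (no edge correction here).
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Pairwise distances; the diagonal is masked so a point is not
    # its own nearest neighbor.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    observed = d.min(axis=1).mean()        # mean nearest-neighbor distance
    expected = 0.5 / np.sqrt(n / area)     # expectation under randomness
    return observed / expected
```

Applied to conduit centers-of-area from a binarized cross section, values of R significantly above or below 1 flag the uniformity or aggregation the study reports.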
|
|
|
Michal Drozdzal, Laura Igual, Jordi Vitria, Petia Radeva, Carolina Malagelada, & Fernando Azpiroz. (2010). SIFT flow-based Sequences Alignment. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 7–8).
|
|
|
Michal Drozdzal, Laura Igual, Petia Radeva, Jordi Vitria, Carolina Malagelada, & Fernando Azpiroz. (2010). Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy. In IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis (pp. 117–124).
Abstract: Intestinal motility analysis is an important examination in the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns, taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and thereby provide valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account motion and structure of the intestine. To describe motility in the sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) histograms of SIFT Flow directions to describe the flow course, (2) SIFT descriptors to represent image intestine structure, and (3) SIFT Flow magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and the proposed method allows the analysis of WCE data.
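The first of the three descriptors, a histogram of flow directions, can be sketched in a few lines. This is a hypothetical toy built on top of any dense flow field; the bin count and normalization are assumptions, not details taken from the paper.

```python
import numpy as np

def flow_direction_histogram(flow, bins=8):
    """Histogram of flow directions over a dense displacement field.

    `flow` is an (H, W, 2) array of per-pixel (dx, dy) displacements.
    Angles are binned over [-pi, pi) and the histogram is L1-normalized,
    so frames of different sizes remain comparable.
    """
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    angles = np.arctan2(dy, dx)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)
```

Per-frame histograms like this one could then be compared across sequences as one component of an alignment cost.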
|
|
|
Miguel Angel Bautista, Xavier Baro, Oriol Pujol, Petia Radeva, Jordi Vitria, & Sergio Escalera. (2010). Compact Evolutive Design of Error-Correcting Output Codes. In Supervised and Unsupervised Ensemble Methods and their Applications in the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (pp. 119–128).
Abstract: The classification of a large number of object categories is a challenging trend in the Machine Learning field. In the literature, this is often addressed using an ensemble of classifiers. In this scope, the Error-Correcting Output Codes framework has demonstrated to be a powerful tool for the combination of classifiers. However, most of the state-of-the-art ECOC approaches use a linear or exponential number of classifiers, making the discrimination of a large number of classes unfeasible. In this paper, we explore and propose a minimal design of ECOC in terms of the number of classifiers. Evolutionary computation is used for tuning the parameters of the classifiers and looking for the best Minimal ECOC code configuration. The results over several public UCI data sets and a challenging multi-class Computer Vision problem show that the proposed methodology obtains comparable and even better results than state-of-the-art ECOC methodologies with a far smaller number of dichotomizers.
Keywords: Ensemble of Dichotomizers; Error-Correcting Output Codes; Evolutionary optimization
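The minimal design the abstract refers to can be made concrete with a small sketch: each class receives the binary expansion of its index as its codeword, so only ceil(log2 N) dichotomizers are needed, and decoding picks the class at minimal Hamming distance. This illustrates the lower bound only; the evolutionary tuning of the classifiers and of the code configuration is omitted.

```python
import numpy as np

def minimal_ecoc_matrix(n_classes):
    """Minimal ECOC coding matrix: class i gets the binary expansion of i,
    so ceil(log2 n_classes) columns (dichotomizers) suffice.
    """
    n_bits = max(1, int(np.ceil(np.log2(n_classes))))
    codes = (np.arange(n_classes)[:, None] >> np.arange(n_bits)) & 1
    return codes * 2 - 1          # map {0, 1} -> {-1, +1}

def decode(code_matrix, predictions):
    """Assign the class whose codeword is closest in Hamming distance
    to the vector of binary classifier outputs."""
    dists = (code_matrix != np.asarray(predictions)).sum(axis=1)
    return int(np.argmin(dists))
```

With 8 classes this yields a 8x3 matrix, versus 8 columns for one-vs-all or 28 for one-vs-one, which is the scaling argument behind the paper.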
|
|
|
Miguel Reyes, Jordi Vitria, Petia Radeva, & Sergio Escalera. (2010). Real-time Activity Monitoring of Inpatients. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 35–36).
Abstract: In this paper, we present the development of an application capable of monitoring a set of patient vital signs in real time. The application has been designed to support the medical staff of a hospital. Preliminary results show the suitability of the system for preventing injuries produced by patient agitation.
|
|
|
Mikhail Mozerov, Ignasi Rius, Xavier Roca, & Jordi Gonzalez. (2010). Nonlinear synchronization for automatic learning of 3D pose variability in human motion sequences. EURASIPJ - EURASIP Journal on Advances in Signal Processing, Article ID 507247.
Abstract: A dense matching algorithm that solves the problem of synchronizing prerecorded human motion sequences, which show different speeds and accelerations, is proposed. The approach is based on minimization of an MRF energy and solves the problem using dynamic programming. Additionally, an optimal sequence is automatically selected from the input dataset to serve as a time-scale pattern for all other sequences. The paper uses an action-specific model that automatically learns the variability of 3D human postures observed in a set of training sequences. The model is trained on the public CMU motion capture dataset for the walking action, and a mean walking performance is learnt automatically. Additionally, statistics about the observed variability of the postures and motion direction are computed at each time step. The synchronized motion sequences are used to learn a model of human motion for action recognition and full-body tracking purposes.
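The dynamic-programming synchronization of two sequences with different speeds can be sketched with plain dynamic time warping, a simplified stand-in for the MRF-energy formulation in the paper (the feature representation and pairwise cost are assumptions here).

```python
import numpy as np

def dtw_align(a, b):
    """Dynamic-programming alignment of two 1-D feature sequences.

    Returns the minimal cumulative cost of warping `a` onto `b`,
    allowing frames to be repeated or skipped (the classic DTW
    recurrence over the three predecessor cells).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # skip a frame of b
                                 D[i, j - 1],      # repeat a frame of a
                                 D[i - 1, j - 1])  # advance both
    return D[n, m]
```

Two walking cycles played at different speeds align with near-zero cost under such a scheme, which is what makes the subsequent per-time-step statistics meaningful.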
|
|
|
Miquel Ferrer, Ernest Valveny, F. Serratosa, K. Riesen, & Horst Bunke. (2010). Generalized Median Graph Computation by Means of Graph Embedding in Vector Spaces. PR - Pattern Recognition, 43(4), 1642–1655.
Abstract: The median graph has been presented as a useful tool to represent a set of graphs. Nevertheless, its computation is very complex and the existing algorithms are restricted to limited amounts of data. In this paper we propose a new approach to the computation of the median graph based on graph embedding. Graphs are embedded into a vector space and the median is computed in the vector domain. We have designed a procedure based on the weighted mean of a pair of graphs to go from the vector domain back to the graph domain in order to obtain a final approximation of the median graph. Experiments on three different databases containing large graphs show that we succeed in computing good approximations of the median graph. We have also applied the median graph to perform some basic classification tasks, achieving reasonably good results. These experiments on real data open the door to the application of the median graph to a number of more complex machine learning algorithms where a representative of a set of graphs is needed.
Keywords: Graph matching; Weighted mean of graphs; Median graph; Graph embedding; Vector spaces
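The embedding step can be sketched as follows: each graph is represented by its vector of (graph edit) distances to the graphs in the set, and a median is sought in that vector space. This toy returns the set element closest to the Euclidean mean; the paper's mapping back from an arbitrary vector to a brand-new graph via weighted means of graph pairs is omitted, and using the mean as the vector-domain median estimate is a simplifying assumption.

```python
import numpy as np

def embedded_median_index(dist_matrix):
    """Pick a median element of a graph set via distance embedding.

    `dist_matrix[i][j]` holds the (graph edit) distance between graphs
    i and j; row i is then the vector embedding of graph i. The result
    is the index of the graph closest to the mean embedding.
    """
    X = np.asarray(dist_matrix, dtype=float)   # row i = embedding of graph i
    mean = X.mean(axis=0)                      # median estimate in vector space
    return int(np.argmin(np.linalg.norm(X - mean, axis=1)))
```

The appeal of this route is that the hard combinatorial median-graph search is replaced by cheap vector arithmetic, at the cost of an approximation.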
|
|
|
Mirko Arnold, Anarta Ghosh, Stephen Ameling, & G Lacey. (2010). Automatic segmentation and inpainting of specular highlights for endoscopic imaging. EURASIP JIVP - EURASIP Journal on Image and Video Processing, 2010(9).
|
|
|
Mohammad Rouhani, & Angel Sappa. (2010). Relaxing the 3L Algorithm for an Accurate Implicit Polynomial Fitting. In 23rd IEEE Conference on Computer Vision and Pattern Recognition (pp. 3066–3072).
Abstract: This paper presents a novel method to increase the accuracy of linear fitting of implicit polynomials. The proposed method is based on the 3L algorithm philosophy. The novelty lies in the relaxation of the additional constraints already imposed by the 3L algorithm. Hence, the accuracy of the final solution is increased due to the proper adjustment of the expected values in the aforementioned additional constraints. Although iterative, the proposed approach solves the fitting problem within a linear framework, which is independent of threshold tuning. Experimental results, both in 2D and 3D, showing improvements in the accuracy of the fitting are presented. Comparisons with both state-of-the-art algorithms and a geometry-based one (non-linear fitting), which is used as ground truth, are provided.
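The "three level sets" idea the paper starts from can be sketched with a linear least-squares conic fit: besides requiring the polynomial to vanish on the data, two offset point sets are constrained to small positive and negative values. This is a toy version of the original 3L setup, not the authors' relaxation; the offset magnitude and the use of centroid directions in place of true normals are simplifying assumptions.

```python
import numpy as np

def fit_conic_3l(points, eps=0.05):
    """3L-style linear fit of an implicit conic f(x, y) = 0 to 2-D points.

    Returns the 6 coefficients of a*x^2 + b*xy + c*y^2 + d*x + e*y + f,
    found by least squares over three level sets: the data (target 0)
    and two sets offset outward/inward (targets +eps / -eps).
    """
    P = np.asarray(points, dtype=float)
    # Approximate outward normals by directions from the centroid
    # (adequate for star-shaped data; a crude stand-in for true normals).
    dirs = P - P.mean(axis=0)
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    def monomials(Q):
        x, y = Q[:, 0], Q[:, 1]
        return np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)

    A = np.vstack([monomials(P),
                   monomials(P + eps * dirs),
                   monomials(P - eps * dirs)])
    b = np.concatenate([np.zeros(len(P)),
                        np.full(len(P), eps),
                        np.full(len(P), -eps)])
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

The whole fit stays linear in the coefficients, which is the property the paper preserves while relaxing the expected values on the offset constraints.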
|
|