Mohammad Rouhani, & Angel Sappa. (2009). A Novel Approach to Geometric Fitting of Implicit Quadrics. In 8th International Conference on Advanced Concepts for Intelligent Vision Systems (Vol. 5807, pp. 121–132). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents a novel approach for estimating the geometric distance from a given point to the corresponding implicit quadric curve/surface. The proposed estimation is based on the height of a tetrahedron, which is used as a coarse but reliable estimation of the real distance. The estimated distance is then used for finding the best set of quadric parameters, by means of the Levenberg-Marquardt algorithm, which is a common framework in other geometric fitting approaches. Comparisons of the proposed approach with previous ones are provided to show both improvements in CPU time as well as in the accuracy of the obtained results.
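The tetrahedron-based estimate itself is specific to the paper and is not reproduced here; as a point of comparison, the sketch below shows the standard first-order approximation of point-to-quadric distance, |f(p)| / ||∇f(p)||, which plays the same coarse-distance role inside a Levenberg-Marquardt loop. The conic parameterization (a, b, c, d, e, g) is an illustrative choice.

```python
import math

def quadric_value(p, q):
    """Evaluate the conic f(x, y) = a x^2 + b xy + c y^2 + d x + e y + g."""
    a, b, c, d, e, g = q
    x, y = p
    return a*x*x + b*x*y + c*y*y + d*x + e*y + g

def quadric_gradient(p, q):
    a, b, c, d, e, g = q
    x, y = p
    return (2*a*x + b*y + d, b*x + 2*c*y + e)

def first_order_distance(p, q):
    """Coarse point-to-conic distance estimate |f(p)| / ||grad f(p)||."""
    gx, gy = quadric_gradient(p, q)
    return abs(quadric_value(p, q)) / math.hypot(gx, gy)

# Unit circle x^2 + y^2 - 1 = 0; the true distance from (2, 0) is 1.0,
# while the first-order estimate is coarser (0.75).
circle = (1.0, 0.0, 1.0, 0.0, 0.0, -1.0)
```

Like the paper's tetrahedron height, such an estimate is cheap and only approximate, which is exactly why it is paired with an iterative optimizer.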
David Aldavert, Ricardo Toledo, Arnau Ramisa, & Ramon Lopez de Mantaras. (2009). Visual Registration Method For A Low Cost Robot. In 7th International Conference on Computer Vision Systems (Vol. 5815, pp. 204–214). LNCS. Springer Berlin Heidelberg.
Abstract: An autonomous mobile robot must face the correspondence or data association problem in order to carry out tasks like place recognition or unknown environment mapping. In order to put two maps into correspondence, most methods estimate the transformation relating the maps from matches established between low-level features extracted from sensor data. However, finding explicit matches between features is a challenging and computationally expensive task. In this paper, we propose a new method to align obstacle maps without searching for explicit matches between features. The maps are obtained from a stereo pair. Then, we use a vocabulary tree approach to identify putative corresponding maps, followed by the Newton minimization algorithm to find the transformation that relates both maps. The proposed method is evaluated in a typical office environment, showing good performance.
Francesco Ciompi, Oriol Pujol, E. Fernandez-Nofrerias, J. Mauri, & Petia Radeva. (2009). ECOC Random Fields for Lumen Segmentation in Radial Artery IVUS Sequences. In 12th International Conference on Medical Image Computing and Computer-Assisted Intervention (Vol. 5762). LNCS. Springer Berlin Heidelberg.
Abstract: The measure of lumen volume on radial arteries can be used to evaluate the vessel response to different vasodilators. In this paper, we present a framework for automatic lumen segmentation in longitudinal cut images of radial arteries from intravascular ultrasound sequences. The segmentation is tackled as a classification problem where the contextual information is exploited by means of Conditional Random Fields (CRFs). A multi-class classification framework is proposed, and inference is achieved by combining binary CRFs according to the Error-Correcting Output Codes technique. The results are validated against manually segmented sequences. Finally, the method is compared with other state-of-the-art classifiers.
L. Tarazon, D. Perez, N. Serrano, V. Alabau, Oriol Ramos Terrades, A. Sanchis, et al. (2009). Confidence Measures for Error Correction in Interactive Transcription of Handwritten Text. In 15th International Conference on Image Analysis and Processing (Vol. 5716, pp. 567–574). LNCS. Springer Berlin Heidelberg.
Abstract: An effective approach to transcribe old text documents is to follow an interactive-predictive paradigm in which the system is guided by the human supervisor and, in turn, the supervisor is assisted by the system to complete the transcription task as efficiently as possible. In this paper, we focus on a particular system prototype called GIDOC, which can be seen as a first attempt to provide user-friendly, integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. More specifically, we focus on the handwriting recognition part of GIDOC, for which we propose the use of confidence measures to guide the human supervisor in locating possible system errors and deciding how to proceed. Empirical results are reported on two datasets, showing that a word error rate no larger than 10% can be achieved by checking only the 32% of words recognised with the lowest confidence.
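A minimal sketch of how such confidence measures drive the supervision trade-off: rank recognised words by confidence, let the supervisor check (and fix) the least confident fraction, and measure the error rate that remains among the unchecked words. The `(confidence, is_correct)` data layout is an assumption for illustration, not GIDOC's actual interface.

```python
def residual_error_rate(words, check_fraction):
    """words: list of (confidence, is_correct) pairs from the recogniser.
    The supervisor corrects the lowest-confidence fraction; only errors
    among the unchecked words remain in the final transcription."""
    ranked = sorted(words, key=lambda w: w[0])      # least confident first
    n_checked = int(round(check_fraction * len(ranked)))
    unchecked = ranked[n_checked:]
    return sum(1 for conf, ok in unchecked if not ok) / len(words)
```

On a toy list where the recogniser's errors coincide with its least confident words, checking a small fraction removes all residual errors; in practice the correlation is imperfect, which is what the paper's empirical curves quantify.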
Sergio Escalera, Alicia Fornes, Oriol Pujol, & Petia Radeva. (2009). Multi-class Binary Symbol Classification with Circular Blurred Shape Models. In 15th International Conference on Image Analysis and Processing (Vol. 5716, pp. 1005–1014). LNCS. Springer Berlin Heidelberg.
Abstract: Multi-class binary symbol classification requires the use of rich descriptors and robust classifiers. Shape representation is a difficult task because of several symbol distortions, such as occlusions, elastic deformations, gaps or noise. In this paper, we present the Circular Blurred Shape Model descriptor. This descriptor encodes the arrangement information of object parts in a correlogram structure. A prior blurring degree defines the level of distortion allowed to the symbol. Moreover, we learn the new feature space using a set of Adaboost classifiers, which are combined in the Error-Correcting Output Codes framework to deal with the multi-class categorization problem. The presented work has been validated over different multi-class data sets, and compared to the state-of-the-art descriptors, showing significant performance improvements.
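A toy version of the idea behind a circular correlogram descriptor: quantise shape points into ring-sector bins around a centre and blur each vote into the neighbouring sectors, so small rotations and deformations change the descriptor smoothly. Bin counts, the blurring scheme, and the normalisation below are simplified assumptions, not the paper's exact CBSM formulation.

```python
import math

def circular_shape_descriptor(points, center, n_rings=4, n_sectors=8, blur=0.5):
    """Toy polar correlogram: each point votes into its (ring, sector)
    bin and, with weight `blur`, into the two neighbouring sectors."""
    cx, cy = center
    rmax = max(math.hypot(x - cx, y - cy) for x, y in points) or 1.0
    hist = [[0.0] * n_sectors for _ in range(n_rings)]
    for x, y in points:
        r = math.hypot(x - cx, y - cy)
        ring = min(int(r / rmax * n_rings), n_rings - 1)
        theta = math.atan2(y - cy, x - cx) % (2 * math.pi)
        sec = int(theta / (2 * math.pi) * n_sectors) % n_sectors
        hist[ring][sec] += 1.0                       # main vote
        hist[ring][(sec - 1) % n_sectors] += blur    # blurred votes
        hist[ring][(sec + 1) % n_sectors] += blur
    total = sum(sum(row) for row in hist)
    return [v / total for row in hist for v in row]  # L1-normalised
```

The resulting fixed-length vector is the kind of feature a bank of AdaBoost classifiers inside an ECOC design can then consume.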
Mehdi Mirza-Mohammadi, Sergio Escalera, & Petia Radeva. (2009). Contextual-Guided Bag-of-Visual-Words Model for Multi-class Object Categorization. In 13th International Conference on Computer Analysis of Images and Patterns (Vol. 5702, pp. 748–756). LNCS. Springer Berlin Heidelberg.
Abstract: The bag-of-words model (BOW) is inspired by the text classification problem, where a document is represented by an unsorted set of contained words. Analogously, in the object categorization problem, an image is represented by an unsorted set of discrete visual words (BOVW). In these models, relations among visual words are considered only after dictionary construction. However, close object regions can have distant descriptions in the feature space and thus be assigned to different visual words. In this paper, we present a method for considering geometrical information of visual words in the dictionary construction step. Object interest regions are obtained by means of the Harris-Affine detector and then described using the SIFT descriptor. Afterward, a contextual-space and a feature-space are defined, and a merging process is used to fuse feature words based on their proximity in the contextual-space. Moreover, we use the Error-Correcting Output Codes framework to learn the new dictionary in order to perform multi-class classification. Results show significant classification improvements when spatial information is taken into account in the dictionary construction step.
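The merging step can be sketched as follows: compute the mean image position of the regions assigned to each visual word and fuse, via union-find, any pair of words whose mean positions fall within a radius. This is a deliberately simplified stand-in for the paper's contextual-space merging, with hypothetical inputs.

```python
import math

def merge_words_by_context(word_ids, positions, radius):
    """word_ids[i]: visual word of region i; positions[i]: its (x, y).
    Words whose mean positions lie within `radius` are fused into one
    dictionary entry; returns the relabelled word id per region."""
    # Mean position (centroid) per visual word.
    sums = {}
    for w, (x, y) in zip(word_ids, positions):
        sx, sy, n = sums.get(w, (0.0, 0.0, 0))
        sums[w] = (sx + x, sy + y, n + 1)
    centroid = {w: (sx / n, sy / n) for w, (sx, sy, n) in sums.items()}

    # Union-find over words, merging spatially close centroids.
    parent = {w: w for w in centroid}
    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w
    words = sorted(centroid)
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            (ax, ay), (bx, by) = centroid[a], centroid[b]
            if math.hypot(ax - bx, ay - by) <= radius:
                parent[find(a)] = find(b)
    return [find(w) for w in word_ids]
```

After merging, BOVW histograms are built over the smaller, spatially coherent dictionary.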
Debora Gil, Aura Hernandez-Sabate, Mireia Burnat, Steven Jansen, & Jordi Martinez-Vilalta. (2009). Structure-Preserving Smoothing of Biomedical Images. In 13th International Conference on Computer Analysis of Images and Patterns (Vol. 5702, pp. 427–434). LNCS. Springer Berlin Heidelberg.
Abstract: Smoothing of biomedical images should preserve gray-level transitions between adjacent tissues, while restoring contours consistent with anatomical structures. Anisotropic diffusion operators are based on image appearance discontinuities (either local or contextual) and might fail at weak inter-tissue transitions. Meanwhile, the output of block-wise and morphological operations is prone to present a block structure due to the shape and size of the considered pixel neighborhood. In this contribution, we use differential geometry concepts to define a diffusion operator that is restricted to image-consistent level sets. In this manner, the final state is a non-uniform intensity image presenting homogeneous inter-tissue transitions along anatomical structures, while smoothing intra-structure texture. Experiments on different types of medical images (magnetic resonance, computerized tomography) illustrate its benefit for further processing of the images (such as segmentation).
Keywords: non-linear smoothing; differential geometry; anatomical structures segmentation; cardiac magnetic resonance; computerized tomography.
Miquel Ferrer, Ernest Valveny, F. Serratosa, I. Bardaji, & Horst Bunke. (2009). Graph-based k-means clustering: A comparison of the set versus the generalized median graph. In 13th International Conference on Computer Analysis of Images and Patterns (Vol. 5702, pp. 342–350). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we propose the application of the generalized median graph in a graph-based k-means clustering algorithm. In the graph-based k-means algorithm, the centers of the clusters have traditionally been represented using the set median graph. We propose an approximate method for the generalized median graph computation that allows it to be used to represent the centers of the clusters. Experiments on three databases show that using the generalized median graph as the cluster representative yields better results than the set median graph.
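The set median (as opposed to the generalized median) is straightforward to sketch: it is the cluster member minimising the sum of distances to all other members, and it can serve as the cluster centre in an otherwise standard k-means loop. The sketch below uses integers with absolute difference as a stand-in for graphs with graph edit distance; the loop structure is generic.

```python
def set_median(items, dist):
    """Set median: the member minimising the total distance to all other
    members (used as cluster centre when no 'mean' element exists)."""
    return min(items, key=lambda a: sum(dist(a, b) for b in items))

def median_kmeans(items, k, dist, n_iter=10):
    """k-means where each centre is the set median of its cluster."""
    centers = items[:k]                      # naive initialisation
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for it in items:                     # assignment step
            j = min(range(k), key=lambda j: dist(it, centers[j]))
            clusters[j].append(it)
        centers = [set_median(c, dist) if c else centers[j]  # update step
                   for j, c in enumerate(clusters)]
    return centers, clusters
```

The paper's contribution replaces `set_median` with an approximate generalized median, which may lie outside the member set and can represent the cluster better.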
Maria Salamo, Sergio Escalera, & Petia Radeva. (2009). Quality Enhancement based on Reinforcement Learning and Feature Weighting for a Critiquing-Based Recommender. In 8th International Conference on Case-Based Reasoning (Vol. 5650, pp. 298–312). LNCS. Springer Berlin Heidelberg.
Abstract: Personalizing the product recommendation task is a major focus of research in the area of conversational recommender systems. Conversational case-based recommender systems help users to navigate through product spaces, alternately making product suggestions and eliciting user feedback. Critiquing is a common form of feedback, and incremental critiquing-based recommender systems have proven effective at personalizing products based primarily on a quality measure. This quality measure influences the recommendation process and is obtained by combining compatibility and similarity scores. In this paper, we describe new compatibility strategies based on reinforcement learning and a new feature weighting technique based on the user's history of critiques. Moreover, we show that our methodology can significantly improve recommendation efficiency in comparison with state-of-the-art approaches.
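A minimal, hypothetical illustration of a reinforcement-style compatibility update: a case's compatibility score moves toward 1 when its recommendation survives the user's critique and toward 0 otherwise. The learning rate `alpha` and the binary reward are illustrative assumptions, not the paper's exact strategies.

```python
def update_compatibility(score, satisfied, alpha=0.2):
    """Exponentially weighted update: reward cases whose recommendation
    satisfied the critique (reward 1), penalise the rest (reward 0)."""
    reward = 1.0 if satisfied else 0.0
    return (1.0 - alpha) * score + alpha * reward

# Repeated positive feedback drives the compatibility score toward 1.
score = 0.5
for _ in range(3):
    score = update_compatibility(score, True)
```

The updated compatibility then combines with similarity to rank candidate products in the next recommendation cycle.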
Oriol Pujol, Eloi Puertas, & Carlo Gatta. (2009). Multi-scale Stacked Sequential Learning. In 8th International Workshop on Multiple Classifier Systems (Vol. 5519, pp. 262–271). LNCS. Springer Berlin Heidelberg.
Abstract: One of the most widely used assumptions in supervised learning is that data is independent and identically distributed. This assumption does not hold true in many real cases. Sequential learning is the discipline of machine learning that deals with dependent data, such that neighboring examples exhibit some kind of relationship. In the literature, there are different approaches that try to capture and exploit this correlation by means of different methodologies. In this paper we focus on meta-learning strategies and, in particular, the stacked sequential learning approach. The main contribution of this work is two-fold: first, we generalize the stacked sequential learning scheme; this generalization reflects the key role of modeling neighboring interactions. Second, we propose an effective and efficient way of capturing and exploiting sequential correlations that takes into account long-range interactions by means of a multi-scale pyramidal decomposition of the predicted labels. Additionally, this new method subsumes the standard stacked sequential learning approach. We tested the proposed method on two different classification tasks: text line classification in a FAQ data set and image classification. Results on these tasks clearly show that our approach outperforms standard stacked sequential learning. Moreover, we show that the proposed method makes it possible to control the trade-off between the detail and the desired range of the interactions.
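The pyramidal decomposition of predicted labels can be sketched in one dimension: for each position, append the mean of the first-stage predictions over windows of increasing width, giving the second-stage classifier access to both short- and long-range label context. The window widths here are illustrative, not the paper's configuration.

```python
def multiscale_label_features(pred, scales=(1, 2, 4)):
    """pred: first-stage predicted labels along a sequence (0/1 values).
    For each position, return the mean label over centred windows of
    half-width s for each scale s - a 1-D pyramid of the label field."""
    n = len(pred)
    feats = []
    for i in range(n):
        row = []
        for s in scales:
            lo, hi = max(0, i - s), min(n, i + s + 1)
            row.append(sum(pred[lo:hi]) / (hi - lo))
        feats.append(row)
    return feats
```

In the stacked scheme, these multi-scale features are concatenated with the original input features and fed to the second-stage classifier.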
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2009). Recoding Error-Correcting Output Codes. In 8th International Workshop on Multiple Classifier Systems (Vol. 5519, pp. 11–21). LNCS. Springer Berlin Heidelberg.
Abstract: One of the most widely applied techniques to deal with multi-class categorization problems is the pairwise voting procedure. Recently, this classical approach has been embedded in the Error-Correcting Output Codes framework (ECOC). This framework is based on a coding step, where a set of binary problems are learnt and coded in a matrix, and a decoding step, where a new sample is tested and classified according to a comparison with the positions of the coded matrix. In this paper, we present a novel approach to redefine, without retraining and in a problem-dependent way, the one-versus-one coding matrix so that the new coded information increases the generalization capability of the system. Moreover, the final classification can be tuned with the inclusion of a weighting matrix in the decoding step. The approach has been validated over several UCI Machine Learning repository data sets and two real multi-class problems: traffic sign and face categorization. The results show that performance improvements are obtained when comparing the new approach to one of the best ECOC designs (one-versus-one). Furthermore, the novel methodology obtains at least the same performance as the one-versus-one ECOC design.
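The one-versus-one coding matrix and a Hamming-style decoding step (the starting point that the paper's recoding then refines) can be sketched as:

```python
from itertools import combinations

def one_vs_one_matrix(n_classes):
    """Ternary one-vs-one coding matrix: one column per class pair,
    +1 / -1 for the two classes involved and 0 for the rest."""
    pairs = list(combinations(range(n_classes), 2))
    M = [[0] * len(pairs) for _ in range(n_classes)]
    for col, (a, b) in enumerate(pairs):
        M[a][col], M[b][col] = 1, -1
    return M, pairs

def decode(code, M):
    """Assign the class whose code-matrix row disagrees least with the
    binary classifier outputs; zero entries do not vote."""
    def loss(row):
        return sum(1 for r, c in zip(row, code) if r != 0 and r != c)
    return min(range(len(M)), key=lambda k: loss(M[k]))
```

The recoding idea operates on exactly this matrix: the zero positions can be re-assigned, after training, using the already-learnt binary classifiers, and a weighting matrix can modulate each column's vote in `decode`.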
Marco Pedersoli, Jordi Gonzalez, & Juan J. Villanueva. (2009). High-Speed Human Detection Using a Multiresolution Cascade of Histograms of Oriented Gradients. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents a new method for human detection based on a multiresolution cascade of Histograms of Oriented Gradients (HOG) that greatly reduces the computational cost of the detection search without affecting accuracy. The method consists of a cascade of sliding-window detectors. Each detector is a Support Vector Machine (SVM) composed of features at a given resolution, from coarse for the first level to fine for the last one.
Considering that the spatial stride of the sliding-window search is tied to the HOG feature size, and unlike previous methods based on AdaBoost cascades, we can adopt a spatial stride inversely proportional to the feature resolution. As a result, the speed-up of the cascade is due not only to the small number of features that need to be computed in the first levels, but also to the smaller number of detection windows that need to be evaluated.
Experimental results show that our method achieves a detection rate comparable to the state of the art while speeding up the detection search by a factor of 10-20, depending on the cascade configuration.
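The effect of tying the spatial stride to the HOG cell size can be illustrated by counting window positions per cascade stage; the image, window, and cell sizes below are illustrative assumptions.

```python
def windows_per_stage(image_w, image_h, win=64, cell_sizes=(32, 16, 8)):
    """One stage per HOG cell size, coarse to fine; the spatial stride
    equals the cell size, so coarse stages scan far fewer positions.
    Returns the window count of a dense scan at each stage; in the
    actual cascade, only windows surviving earlier stages reach the
    fine (expensive) stages."""
    counts = []
    for cell in cell_sizes:
        nx = (image_w - win) // cell + 1
        ny = (image_h - win) // cell + 1
        counts.append(nx * ny)
    return counts
```

On a 640x480 image, the coarse stage evaluates an order of magnitude fewer windows than the fine one, which is where the cascade's second source of speed-up comes from.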
Bhaskar Chakraborty, Andrew Bagdanov, & Jordi Gonzalez. (2009). Towards Real-Time Human Action Recognition. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This work presents a novel approach to real-time, human-detection-based action recognition. To realize this goal, our method first detects humans in different poses using a correlation-based approach. Recognition of actions is done afterward based on the change of the angular values subtended by various body parts. Real-time human detection and action recognition are very challenging, and most state-of-the-art approaches employ complex feature extraction and classification techniques, which ultimately become a handicap for real-time recognition. Our correlation-based method, on the other hand, is computationally efficient and uses very simple gradient-based features. For action recognition, angular features of body parts are extracted using a skeleton technique. Results for action recognition are comparable with the present state of the art.
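The angular features can be sketched as the angle subtended at a joint by two adjacent body parts of the extracted skeleton; the 2-D joint layout is an illustrative assumption, not the paper's exact skeleton model.

```python
import math

def joint_angle(a, b, c):
    """Angle (degrees) at joint b between limb vectors b->a and b->c,
    e.g. the elbow angle from shoulder (a), elbow (b) and wrist (c)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
```

Tracking how such angles change frame to frame yields the lightweight temporal features used for recognition.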
Miquel Ferrer, Ernest Valveny, & F. Serratosa. (2009). Median Graph Computation by means of a Genetic Approach Based on Minimum Common Supergraph and Maximum Common Subgraph. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 346–353). LNCS. Springer Berlin Heidelberg.
Abstract: Given a set of graphs, the median graph has been theoretically presented as a useful concept to infer a representative of the set. However, the computation of the median graph is a highly complex task and its practical application has been very limited up to now. In this work we present a new genetic algorithm for the median graph computation. A set of experiments on real data, where none of the existing algorithms for the median graph computation could previously be applied due to their computational complexity, shows that we obtain good approximations of the median graph. Finally, we use the median graph in a real nearest-neighbour classification task, showing that it is no longer a purely theoretical concept and demonstrating, from a practical point of view, that it can be a useful tool to represent a set of graphs.
Albert Gordo, & Ernest Valveny. (2009). The diagonal split: A pre-segmentation step for page layout analysis & classification. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 290–297). LNCS. Springer Berlin Heidelberg.
Abstract: Document classification is an important task in all the processes related to document storage and retrieval. In the case of complex documents, structural features are needed to achieve a correct classification. Unfortunately, physical layout analysis is error prone. In this paper we present a pre-segmentation step based on a divide & conquer strategy that can be used to improve page segmentation results, independently of the segmentation algorithm used. This pre-segmentation step is evaluated in classification and retrieval using the selective CRLA algorithm for layout segmentation together with a clustering based on the Voronoi area diagram, and tested on two different databases, MARG and Girona Archives.