|
Oscar Camara, Estanislao Oubel, Gemma Piella, Simone Balocco, Mathieu De Craene, & Alejandro F. Frangi. (2009). Multi-sequence Registration of Cine, Tagged and Delay-Enhancement MRI with Shift Correction and Steerable Pyramid-Based Detagging. In 5th International Conference on Functional Imaging and Modeling of the Heart (Vol. 5528, pp. 330–338). LNCS. Springer Berlin Heidelberg.
Abstract: In this work, we present a registration framework for cardiac cine MRI (cMRI), tagged MRI (tMRI) and delay-enhancement MRI (deMRI) in which the two main obstacles to an accurate alignment of these images are taken into account: the presence of tags in tMRI and respiration artifacts in all sequences. A steerable pyramid image decomposition is used for detagging, since its directional adaptive filtering is well suited to extracting high-order oriented structures. Shift correction of cMRI is achieved by first maximizing the similarity between the Long Axis and Short Axis cMRI. These shift-corrected images are then used as target images in a rigid registration procedure with their corresponding tMRI/deMRI in order to correct the shift of the latter. The proposed framework has been evaluated with 840 registration tests, considerably improving the alignment of the MR images (mean RMS error of 2.04 mm vs. 5.44 mm).
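The core of the shift-correction step — finding the translation that maximizes an image-similarity measure — can be sketched as a toy integer-shift search with normalized cross-correlation. This is only an illustration of the idea, not the paper's pipeline (which registers Long/Short Axis MRI slices and also performs rigid registration); all function names here are made up for the sketch.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a.ravel() @ b.ravel() / denom) if denom else 0.0

def best_shift(moving, target, max_shift=5):
    """Exhaustive search for the integer in-plane shift maximizing NCC."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(shifted, target)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Toy example: displace an image by (-2, +1) and recover the correction.
rng = np.random.default_rng(0)
target = rng.random((32, 32))
moving = np.roll(np.roll(target, -2, axis=0), 1, axis=1)
recovered = best_shift(moving, target)
```

A real implementation would use sub-pixel optimization rather than brute force, but the objective (similarity maximization over translations) is the same.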
|
|
|
D. Perez, L. Tarazon, N. Serrano, F.M. Castro, Oriol Ramos Terrades, & A. Juan. (2009). The GERMANA Database. In 10th International Conference on Document Analysis and Recognition (pp. 301–305).
Abstract: A new handwritten text database, GERMANA, is presented to facilitate empirical comparison of different approaches to text line extraction and off-line handwriting recognition. GERMANA is the result of digitising and annotating a 764-page Spanish manuscript from 1891, in which most pages contain only neatly calligraphed text written on ruled sheets of well-separated lines. To our knowledge, it is the first publicly available database for handwriting research that is mostly written in Spanish and comparable in size to standard databases. Due to its sequential book structure, it is also well suited for realistic assessment of interactive handwriting recognition systems. To provide baseline results for reference in future studies, empirical results are also reported, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling.
|
|
|
Maria Salamo, Sergio Escalera, & Petia Radeva. (2009). Quality Enhancement based on Reinforcement Learning and Feature Weighting for a Critiquing-Based Recommender. In 8th International Conference on Case-Based Reasoning (Vol. 5650, pp. 298–312). LNCS. Springer Berlin Heidelberg.
Abstract: Personalizing the product recommendation task is a major focus of research in the area of conversational recommender systems. Conversational case-based recommender systems help users navigate through product spaces, alternately making product suggestions and eliciting user feedback. Critiquing is a common form of feedback, and the incremental critiquing-based recommender system has proven effective at personalizing products based primarily on a quality measure. This quality measure influences the recommendation process and is obtained by combining compatibility and similarity scores. In this paper, we describe new compatibility strategies based on reinforcement learning and a new feature weighting technique based on the user's history of critiques. Moreover, we show that our methodology significantly improves recommendation efficiency in comparison with state-of-the-art approaches.
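The flavor of a reinforcement-style compatibility score can be sketched as a simple exponential update that rewards cases satisfying the user's latest critique. This is purely illustrative — the paper's actual compatibility strategies and the `alpha` parameter here are not taken from the source:

```python
def update_compatibility(score, satisfied, alpha=0.25):
    """One reinforcement-style update of a case's compatibility score:
    move toward 1.0 if the case satisfied the user's critique, toward 0.0
    otherwise. (Illustrative only; not the paper's exact strategies.)"""
    target = 1.0 if satisfied else 0.0
    return (1 - alpha) * score + alpha * target

# A case that repeatedly satisfies critiques drifts toward 1.0.
s = 0.5
for _ in range(10):
    s = update_compatibility(s, satisfied=True)
```

Such a score could then be combined with a similarity score to rank candidate cases, mirroring the quality measure described in the abstract.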
|
|
|
Albert Gordo, & Ernest Valveny. (2009). The diagonal split: A pre-segmentation step for page layout analysis & classification. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 290–297). LNCS. Springer Berlin Heidelberg.
Abstract: Document classification is an important task in all processes related to document storage and retrieval. In the case of complex documents, structural features are needed to achieve a correct classification. Unfortunately, physical layout analysis is error prone. In this paper we present a pre-segmentation step based on a divide & conquer strategy that can be used to improve page segmentation results, independently of the segmentation algorithm used. This pre-segmentation step is evaluated in classification and retrieval using the selective CRLA algorithm for layout segmentation together with a clustering based on the Voronoi area diagram, and tested on two different databases, MARG and the Girona Archives.
|
|
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2009). Separability of Ternary Codes for Sparse Designs of Error-Correcting Output Codes. PRL - Pattern Recognition Letters, 30(3), 285–297.
Abstract: Error-Correcting Output Codes (ECOC) are a successful framework for dealing with multi-class categorization problems by combining binary classifiers. In this paper, we present a new formulation of the ternary ECOC distance and of the error-correcting capabilities of the ternary ECOC framework. Based on the new measure, we show how to design coding matrices that prevent codification ambiguity, and propose a new Sparse Random coding matrix with ternary distance maximization. Results on the UCI Repository and on a real speed traffic categorization problem show that when the coding design satisfies the new ternary measures, a significant performance improvement is obtained independently of the decoding strategy applied.
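The ternary setting can be illustrated with a toy decoder: class codewords take values in {+1, 0, -1}, and a common ternary-aware decoding counts disagreements only over the non-zero (active) positions of each codeword. This sketch is not the paper's exact distance formulation, just the standard idea it builds on:

```python
import numpy as np

def ternary_decode(code_row, predictions):
    """Distance between a ternary class codeword (+1/0/-1) and the binary
    classifier outputs, counted only over non-zero (active) positions and
    normalized by their number."""
    active = code_row != 0
    if not active.any():
        return np.inf
    return float(np.sum(code_row[active] != predictions[active]) / active.sum())

# Toy 3-class sparse coding matrix over 4 binary problems.
M = np.array([[ 1,  0, -1,  1],
              [-1,  1,  0, -1],
              [ 0, -1,  1,  0]])
preds = np.array([1, -1, 1, 1])   # outputs of the 4 binary classifiers
label = int(np.argmin([ternary_decode(row, preds) for row in M]))
```

The zero entries are exactly what creates the codification-ambiguity issue the paper addresses: classes with many zeros are compared over fewer positions, so their distances are not directly commensurate without a measure like the one the paper proposes.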
|
|
|
Oriol Pujol, Eloi Puertas, & Carlo Gatta. (2009). Multi-scale Stacked Sequential Learning. In 8th International Workshop on Multiple Classifier Systems (Vol. 5519, pp. 262–271). LNCS. Springer Berlin Heidelberg.
Abstract: One of the most widely used assumptions in supervised learning is that data is independent and identically distributed. This assumption does not hold true in many real cases. Sequential learning is the discipline of machine learning that deals with dependent data, in which neighboring examples exhibit some kind of relationship. In the literature, different approaches try to capture and exploit this correlation by means of different methodologies. In this paper we focus on meta-learning strategies and, in particular, the stacked sequential learning approach. The contribution of this work is two-fold: first, we generalize stacked sequential learning, a generalization that reflects the key role of modeling neighboring interactions. Second, we propose an effective and efficient way of capturing and exploiting sequential correlations that accounts for long-range interactions by means of a multi-scale pyramidal decomposition of the predicted labels. Additionally, this new method subsumes the standard stacked sequential learning approach. We tested the proposed method on two different classification tasks: text line classification in a FAQ data set and image classification. Results on these tasks clearly show that our approach outperforms standard stacked sequential learning. Moreover, we show that the proposed method allows the trade-off between the level of detail and the desired range of the interactions to be controlled.
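The multi-scale idea — turning the first-stage label predictions into features that summarize progressively longer-range neighborhoods — can be sketched for a 1-D sequence. This is a minimal illustration (moving averages over growing windows), not the paper's pyramidal decomposition; the function name is invented for the sketch:

```python
import numpy as np

def multiscale_features(pred, n_scales=3):
    """Build a multi-scale representation of a 1-D sequence of predicted
    labels: each scale averages over an exponentially growing window,
    capturing progressively longer-range neighbour interactions."""
    feats = [pred.astype(float)]
    for s in range(1, n_scales):
        w = 2 ** s + 1                      # window grows with the scale
        kernel = np.ones(w) / w
        feats.append(np.convolve(pred, kernel, mode="same"))
    return np.stack(feats, axis=1)          # shape: (len(pred), n_scales)

pred = np.array([0, 0, 1, 1, 1, 0, 0, 1])   # first-stage label predictions
X_ext = multiscale_features(pred)
```

In a stacked sequential learner, `X_ext` would be concatenated with the original features and fed to the second-stage classifier; choosing `n_scales` controls the detail/range trade-off mentioned in the abstract.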
|
|
|
Carlo Gatta, Juan Diego Gomez, Francesco Ciompi, Oriol Rodriguez-Leor, & Petia Radeva. (2009). Toward robust myocardial blush grade estimation in contrast angiography. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 249–256). LNCS. Springer Berlin Heidelberg.
Abstract: The assessment of Myocardial Blush Grade (MBG) after primary angioplasty is a valuable diagnostic tool for deciding whether the patient needs further medication or specific drugs. Unfortunately, assessing the MBG is difficult for staff who are not highly specialized: experimental data show poor correlation between the MBG assessments of less and highly specialized staff, which reduces its applicability. This paper proposes a method to obtain an objective measure of the MBG, or a set of parameters that correlate with it. The method tracks the blush area starting from a single frame tagged by the physician; as a consequence, the blush area is kept isolated from contaminating phenomena such as diaphragm and artery movements. We also present a method to extract four parameters that are expected to correlate with the MBG. Preliminary results show that the method is capable of extracting useful information about the behavior of the myocardial perfusion.
|
|
|
Francesco Ciompi, Oriol Pujol, Oriol Rodriguez-Leor, Carlo Gatta, Angel Serrano, & Petia Radeva. (2009). Enhancing In-Vitro IVUS Data for Tissue Characterization. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 241–248). LNCS. Springer Berlin Heidelberg.
Abstract: Intravascular Ultrasound (IVUS) data validation is usually performed by comparing post-mortem (in-vitro) IVUS data with the corresponding histological analysis of the tissue, obtaining a reliable ground truth. The main drawback of this method is the small number of available study cases, due to the complex procedure of histological analysis. In this work we propose a novel semi-supervised approach to enhance the in-vitro training set by including examples from an in-vivo coronary plaque data set. For this purpose, a Sequential Floating Forward Selection method is applied to the in-vivo data, and plaque characterization performance is evaluated with a Leave-One-Patient-Out cross-validation. Supervised data inclusion improves global classification accuracy from 89.39% to 91.82%.
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2009). An iterative multiresolution scheme for SFM with missing data. JMIV - Journal of Mathematical Imaging and Vision, 34(3), 240–258.
Abstract: Several techniques have been proposed for tackling the Structure from Motion problem through factorization in the case of missing data. However, when the percentage of unknown data is high, most of them do not perform as well as expected. Focusing on this problem, we propose an iterative multiresolution scheme that aims at recovering the missing entries of the originally given input matrix. Information recovered following a coarse-to-fine strategy is used to fill in the missing entries. The objective is to recover as much of the missing data as possible, so that when a factorization technique is applied to the partially or totally filled-in matrix, instead of to the original input, better results are obtained. An evaluation study of the robustness to missing and noisy data is reported, and experimental results obtained with synthetic and real video sequences show the viability of the proposed approach.
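The underlying principle — filling missing entries of the measurement matrix so that a low-rank factorization can proceed — can be illustrated with a generic alternating scheme: project onto a low-rank approximation, then re-impose the known entries. This is not the paper's multiresolution algorithm, only a minimal sketch of matrix completion by truncated SVD:

```python
import numpy as np

def lowrank_complete(W, mask, rank=2, iters=200):
    """Fill missing entries of W (where mask is False) by alternating a
    truncated-SVD low-rank projection with re-imposing the known entries."""
    X = np.where(mask, W, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r approximation
        X[mask] = W[mask]                          # keep observed data fixed
    return X

# Toy rank-2 "measurement matrix" with roughly 30% missing entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 6))
mask = rng.random(A.shape) > 0.3
filled = lowrank_complete(A, mask, rank=2)
```

In the Structure-from-Motion setting the rank constraint comes from the camera model (e.g. rank 4 for the affine factorization of trajectories), and the paper's coarse-to-fine strategy decides which entries to recover first.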
|
|
|
Agnes Borras, & Josep Llados. (2009). Corest: A measure of color and space stability to detect salient regions according to human criteria. In 5th International Conference on Computer Vision Theory and Applications (pp. 204–209).
|
|
|
David Aldavert, Ricardo Toledo, Arnau Ramisa, & Ramon Lopez de Mantaras. (2009). Visual Registration Method for a Low Cost Robot. In 7th International Conference on Computer Vision Systems (Vol. 5815, pp. 204–214). LNCS. Springer Berlin Heidelberg.
Abstract: An autonomous mobile robot must face the correspondence, or data association, problem in order to carry out tasks like place recognition or unknown environment mapping. To put two maps into correspondence, most methods estimate the transformation relating the maps from matches established between low-level features extracted from sensor data. However, finding explicit matches between features is a challenging and computationally expensive task. In this paper, we propose a new method to align obstacle maps without searching for explicit matches between features. The maps are obtained from a stereo pair. We then use a vocabulary tree approach to identify putative corresponding maps, followed by Newton minimization to find the transformation that relates both maps. The proposed method is evaluated in a typical office environment, showing good performance.
|
|
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2009). Symbol-independent writer identification in old handwritten music scores. In 8th IAPR International Workshop on Graphics Recognition (pp. 186–197). Springer Berlin Heidelberg.
|
|
|
Antonio Clavelli, & Dimosthenis Karatzas. (2009). Text Segmentation in Colour Posters from the Spanish Civil War Era. In 10th International Conference on Document Analysis and Recognition (pp. 181–185).
Abstract: The extraction of textual content from colour documents of a graphical nature is a complicated task. The text can be rendered in any colour, size and orientation, while the existence of complex background graphics with repetitive patterns can make its localization and segmentation extremely difficult. Here, we propose a new method for extracting textual content from such colour images that makes no assumption as to the size, orientation or colour of the characters, while being tolerant to characters that do not follow a straight baseline. We evaluate this method on a collection of documents with historical connotations: the Posters from the Spanish Civil War.
|
|
|
Javier Vazquez, C. Alejandro Parraga, & Maria Vanrell. (2009). Ordinal pairwise method for natural images comparison. PER - Perception, 38, 180.
Abstract: We developed a new psychophysical method to compare different colour appearance models applied to natural scenes. Two images (processed by different algorithms) were displayed one on top of the other on a calibrated CRT monitor, and observers were asked to select the more natural of the two. The original images were gathered by means of a calibrated trichromatic digital camera. Selections were made by pressing a 6-button IR box, which allowed observers not only to choose the more natural image but also to rate their selection (e.g. much more, definitely more, slightly more natural), giving us valuable extra information on the selection process. The results were analysed both by treating the selection as a binary choice (using Thurstone's law of comparative judgement) and by using the Bradley-Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used for comparing colour constancy algorithms, its uses are much wider, e.g. for comparing algorithms for image compression, rendering, recolouring, etc.
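The Bradley-Terry model mentioned in the abstract assigns each stimulus a latent "strength" so that the probability of preferring i over j is p_i / (p_i + p_j); strengths can be fit from pairwise preference counts with a simple fixed-point (Zermelo/MM) iteration. A minimal sketch, with made-up toy data rather than the study's observations:

```python
import numpy as np

def bradley_terry(wins, iters=100):
    """Zermelo/MM fixed-point iteration for Bradley-Terry strengths.
    wins[i, j] = number of times item i was preferred over item j."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            num = wins[i].sum()
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            p[i] = num / den
        p /= p.sum()                # fix the arbitrary scale
    return p

# Toy data: image A preferred over B 8/10 times, over C 9/10; B over C 7/10.
wins = np.array([[0, 8, 9],
                 [2, 0, 7],
                 [1, 3, 0]])
strengths = bradley_terry(wins)
```

The resulting scale is what makes pairwise judgements comparable across algorithms; the study's rating extension additionally weights how strongly each preference was expressed.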
|
|
|
C. Alejandro Parraga, Javier Vazquez, & Maria Vanrell. (2009). A new cone activation-based natural images dataset. PER - Perception, 36, 180.
Abstract: We generated a new dataset of digital natural images in which each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (e.g. forest, seaside, mountain snow, urban, motorway) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which allows the illuminant's colour and intensity to be computed (and removed, if necessary). The camera (Sigma Foveon SD10) was calibrated by measuring its sensors' spectral responses using a set of 31 spectrally narrowband interference filters. This allowed the final camera-dependent RGB colour space to be converted into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to accommodate other, non-human visual systems. The dataset is available to download from our website.
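A polynomial colour-space transformation of the kind described can be sketched as a least-squares fit over polynomial terms of the RGB values. The particular second-order expansion below is an assumption for illustration (the paper does not specify its polynomial order here), and the calibration data are synthetic:

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of RGB triplets:
    [R, G, B, R^2, G^2, B^2, RG, RB, GB, 1]."""
    R, G, B = rgb.T
    return np.stack([R, G, B, R * R, G * G, B * B, R * G, R * B, G * B,
                     np.ones_like(R)], axis=1)

def fit_rgb_to_lms(rgb, lms):
    """Least-squares polynomial mapping from camera RGB to LMS responses."""
    coeffs, *_ = np.linalg.lstsq(poly_features(rgb), lms, rcond=None)
    return coeffs

# Toy calibration: synthesize LMS from a known quadratic map, then recover it.
rng = np.random.default_rng(2)
rgb = rng.random((200, 3))          # stand-in for Munsell chip responses
true_map = rng.standard_normal((10, 3))
lms = poly_features(rgb) @ true_map
C = fit_rgb_to_lms(rgb, lms)
pred = poly_features(rgb) @ C
```

The cross terms are what a plain 3 × 3 matrix cannot express, which is why the polynomial fit handles broadband reflectances better, as the abstract notes.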
|
|