Carlo Gatta, Juan Diego Gomez, Francesco Ciompi, Oriol Rodriguez-Leor, & Petia Radeva. (2009). Toward robust myocardial blush grade estimation in contrast angiography. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524, pp. 249–256). LNCS. Springer Berlin Heidelberg.
Abstract: The assessment of the Myocardial Blush Grade (MBG) after primary angioplasty is a valuable diagnostic tool for deciding whether the patient needs further medication or specific drugs. Unfortunately, assessing the MBG is difficult for staff who are not highly specialized. Experimental data show poor correlation between the MBG assessments of less and highly specialized staff, which reduces its applicability. This paper proposes a method that achieves an objective measure of the MBG, i.e. a set of parameters that correlate with it. The method tracks the blush area starting from a single frame tagged by the physician. As a consequence, the blush area is kept isolated from contaminating phenomena such as diaphragm and artery movements. We also present a method to extract four parameters that are expected to correlate with the MBG. Preliminary results show that the method is capable of extracting interesting information regarding the behavior of the myocardial perfusion.
|
Simone Balocco, Carlo Gatta, Francesco Ciompi, Oriol Pujol, Xavier Carrillo, J. Mauri, et al. (2011). Combining Growcut and Temporal Correlation for IVUS Lumen Segmentation. In Jordi Vitria, Joao Miguel Raposo, & Mario Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 556–563). LNCS. Berlin: Springer Berlin Heidelberg.
Abstract: The assessment of arterial luminal area, performed by IVUS analysis, is a clinical index used to evaluate the degree of coronary artery disease. In this paper we propose a novel approach to automatically segment the vessel lumen, which combines model-based temporal information extracted from successive frames of the sequence with spatial classification using the Growcut algorithm. The performance of the method is evaluated by an in vivo experiment on 300 IVUS frames. The automatic and manual segmentation performances in general vessel and stent frames are comparable. The average segmentation errors in vessel, stent and bifurcation frames are 0.17±0.08 mm, 0.18±0.07 mm and 0.31±0.12 mm, respectively.
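As a rough illustration of the Growcut component only (not the paper's implementation, which also incorporates the model-based temporal prior), here is a 1-D cellular-automaton sketch: labelled seed cells attack their neighbours with a strength attenuated by the intensity difference, so labels grow until they meet at a strong intensity transition. The toy intensities and seeds are illustrative.

```python
def growcut_step(intensity, labels, strength):
    """One synchronous Growcut update on a 1-D signal: each cell is attacked
    by its neighbours; an attacker wins if its strength, attenuated by the
    intensity difference, exceeds the defender's current strength."""
    g_max = max(intensity) - min(intensity) or 1.0
    new_labels, new_strength = labels[:], strength[:]
    for i in range(len(intensity)):
        for j in (i - 1, i + 1):
            if 0 <= j < len(intensity):
                g = 1.0 - abs(intensity[i] - intensity[j]) / g_max
                if g * strength[j] > new_strength[i]:
                    new_labels[i] = labels[j]
                    new_strength[i] = g * strength[j]
    return new_labels, new_strength

# Seeds: lumen (1) on the left, vessel wall (2) on the right; middle unlabeled (0).
intensity = [0.0, 0.1, 0.2, 0.9, 1.0]
labels    = [1,   0,   0,   0,   2]
strength  = [1.0, 0.0, 0.0, 0.0, 1.0]
for _ in range(3):
    labels, strength = growcut_step(intensity, labels, strength)
print(labels)  # → [1, 1, 1, 2, 2]: labels meet at the intensity jump
```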
|
Francisco Blanco, Felipe Lumbreras, Joan Serrat, Roswitha Siener, Silvia Serranti, Giuseppe Bonifazi, et al. (2014). Taking advantage of Hyperspectral Imaging classification of urinary stones against conventional IR Spectroscopy. JBiO - Journal of Biomedical Optics, 19(12), 126004-1–126004-9.
Abstract: The analysis of urinary stones is mandatory for the best management of the disease after the stone passage, in order to prevent further stone episodes. Thus, the use of an appropriate methodology for an individualized stone analysis becomes a key factor in giving the patient the most suitable treatment. A recently developed hyperspectral imaging methodology, based on pixel-to-pixel analysis of near-infrared spectral images, is compared to the reference technique in stone analysis, infrared (IR) spectroscopy. The developed classification model yields a >90% correct classification rate when compared to IR and is able to precisely locate stone components within the structure of the stone at 15 µm resolution. Owing to the minimal sample pretreatment, short analysis time, good performance of the model, and the automation of the measurements, which makes them analyst independent, this methodology is a candidate for routine analysis in clinical laboratories.
|
Alicia Fornes, & Gemma Sanchez. (2014). Analysis and Recognition of Music Scores. In D. Doermann, & K. Tombre (Eds.), Handbook of Document Image Processing and Recognition (Vol. E, pp. 749–774). Springer London.
Abstract: The analysis and recognition of music scores has attracted the interest of researchers for decades. Optical Music Recognition (OMR) is a classical research field of Document Image Analysis and Recognition (DIAR), whose aim is to extract information from music scores. Music scores contain both graphical and textual information, and for this reason, techniques are closely related to graphics recognition and text recognition. Since music scores use a particular diagrammatic notation that follows the rules of music theory, many approaches make use of context information to guide the recognition and solve ambiguities. This chapter overviews the main Optical Music Recognition (OMR) approaches. Firstly, the different methods are grouped according to the OMR stages, namely, staff removal, music symbol recognition, and syntactical analysis. Secondly, specific approaches for old and handwritten music scores are reviewed. Finally, online approaches and commercial systems are also discussed.
|
Alicia Fornes. (2009). Writer Identification by a Combination of Graphical Features in the Framework of Old Handwritten Music Scores (Josep Llados, & Gemma Sanchez, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The analysis and recognition of historical document images has attracted growing interest in recent years. Mass digitization and document image understanding allow the preservation, access and indexation of this artistic, cultural and technical heritage. The analysis of handwritten documents is an outstanding subfield. The main interest is not only the transcription of the document into a standard format, but also the identification of the author of a document from a set of writers (namely, writer identification).
Writer identification in handwritten text documents is an active area of study; however, the identification of the writer of graphical documents is still a challenge. The main objective of this thesis is the identification of the writer of old music scores, as an example of graphical documents. Many historical archives contain a huge number of sheets of musical compositions without information about the composer, and research in this field could be helpful for musicologists.
The writer identification framework proposed in this thesis combines three different writer identification approaches, which are the main scientific contributions. The first one is based on symbol recognition methods. For this purpose, two novel symbol recognition methods are proposed to cope with the typical distortions in hand-drawn symbols. The second approach preprocesses the music score to obtain music lines, and extracts information about the slant, the width of the writing, connected components, contours and fractals. Finally, the third approach extracts global information by generating texture images from the music scores and extracting textural features (such as Gabor filters and co-occurrence matrices).
The high identification rates obtained in the experimental results demonstrate the suitability of the proposed ensemble architecture. To the best of our knowledge, this work is the first contribution on writer identification from images containing graphical languages.
|
Mario Rojas, David Masip, & Jordi Vitria. (2011). Predicting Dominance Judgements Automatically: A Machine Learning Approach. In IEEE International Workshop on Social Behavior Analysis (pp. 939–944).
Abstract: The number of multimodal devices that surround us is growing every day. In this context, human interaction and communication have become a focus of attention and a hot topic of research. A crucial element in human relations is the evaluation of individuals with respect to facial traits, the so-called first impression. Studies based on appearance have suggested that personality can be expressed by appearance and that the observer may use such information to form judgments. In the context of rapid facial evaluation, certain personality traits seem to have a more pronounced effect on the relations and perceptions inside groups. The perception of dominance has been shown to be an active part of social roles at different stages of life, and even to play a part in mate selection. The aim of this paper is to study to what extent this information is learnable from the point of view of computer science. Specifically, we intend to determine whether judgments of dominance can be learned by machine learning techniques. We implement two different descriptors to this end. The first is the histogram of oriented gradients (HOG), and the second is a probabilistic appearance descriptor based on the frequencies of grouped binary tests. State-of-the-art classification rules validate the performance of both descriptors with respect to the prediction task. Experimental results show that machine learning techniques can predict judgments of dominance rather accurately (accuracies up to 90%) and that the HOG descriptor appropriately characterizes the information necessary for such a task.
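A minimal sketch of the first descriptor mentioned above, the histogram of oriented gradients (HOG), for a single cell of a grayscale patch; the bin count, the toy image and the lack of block normalization across cells are simplifications, not the paper's configuration.

```python
import math

def hog_cell(patch, n_bins=9):
    """Histogram of gradient orientations for one cell of a grayscale patch.
    Gradients are taken by central differences on interior pixels; each pixel
    votes its gradient magnitude into an unsigned orientation bin (0..180 deg).
    The histogram is L2-normalized."""
    hist = [0.0] * n_bins
    h, w = len(patch), len(patch[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / (180.0 / n_bins)), n_bins - 1)] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

# A vertical step edge: all gradient energy falls into the first (0 degree) bin.
patch = [[0, 0, 10, 10]] * 4
print(hog_cell(patch))
```

A full HOG descriptor concatenates such cell histograms over a grid and normalizes them per block; this sketch shows only the per-cell voting that gives the descriptor its robustness to small shifts.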
|
Svebor Karaman, Andrew Bagdanov, Lea Landucci, Gianpaolo D'Amico, Andrea Ferracani, Daniele Pezzatini, et al. (2016). Personalized multimedia content delivery on an interactive table by passive observation of museum visitors. MTAP - Multimedia Tools and Applications, 75(7), 3787–3811.
Abstract: The amount of multimedia data collected in museum databases is growing fast, while the capacity of museums to display information to visitors is acutely limited by physical space. Museums must seek the perfect balance of information given on individual pieces in order to provide sufficient information to aid visitor understanding while maintaining sparse usage of the walls and guaranteeing high appreciation of the exhibit. Moreover, museums often target the interests of average visitors instead of the entire spectrum of different interests each individual visitor might have. Finally, visiting a museum should not be an experience contained in the physical space of the museum but a door opened onto a broader context of related artworks, authors, artistic trends, etc. In this paper we describe the MNEMOSYNE system that attempts to address these issues through a new multimedia museum experience. Based on passive observation, the system builds a profile of the artworks of interest for each visitor. These profiles of interest are then used to drive an interactive table that personalizes multimedia content delivery. The natural user interface on the interactive table uses the visitor’s profile, an ontology of museum content and a recommendation system to personalize exploration of multimedia content. At the end of their visit, the visitor can take home a personalized summary of their visit on a custom mobile application. In this article we describe in detail each component of our approach as well as the first field trials of our prototype system built and deployed at our permanent exhibition space at LeMurate (http://www.lemurate.comune.fi.it/lemurate/) in Florence together with the first results of the evaluation process during the official installation in the National Museum of Bargello (http://www.uffizi.firenze.it/musei/?m=bargello).
Keywords: Computer vision; Video surveillance; Cultural heritage; Multimedia museum; Personalization; Natural interaction; Passive profiling
|
Oriol Ramos Terrades, Alejandro Hector Toselli, Nicolas Serrano, Veronica Romero, Enrique Vidal, & Alfons Juan. (2010). Interactive layout analysis and transcription systems for historic handwritten documents. In 10th ACM Symposium on Document Engineering (pp. 219–222).
Abstract: The amount of digitized legacy documents has been rising dramatically over recent years, mainly due to the increasing number of on-line digital libraries publishing this kind of document, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct the results of such systems. In contrast, multimodal interactive-predictive approaches may allow users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process.
Keywords: Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis
|
Alicia Fornes, Josep Llados, Gemma Sanchez, Xavier Otazu, & Horst Bunke. (2010). A Combination of Features for Symbol-Independent Writer Identification in Old Music Scores. IJDAR - International Journal on Document Analysis and Recognition, 13(4), 243–259.
Abstract: The aim of writer identification is to determine the writer of a piece of handwriting from a set of writers. In this paper, we present an architecture for writer identification in old handwritten music scores. Even though a substantial number of music compositions contain handwritten text, the aim of our work is to use only the music notation to determine the author. The main contribution is therefore the use of features extracted from graphical alphabets. Our proposal consists in combining the identification results of two different approaches, based on line and textural features. The steps of the ensemble architecture are the following. First of all, the music sheet is preprocessed to remove the staff lines. Then, music lines and texture images are generated for computing line and textural features. Finally, the classification results are combined to identify the writer. The proposed method has been tested on a database of old music scores from the seventeenth to nineteenth centuries, achieving a recognition rate of about 92% with 20 writers.
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Horst Bunke. (2012). Writer Identification in Old Handwritten Music Scores. In Constantin Papaodysseus (Ed.), Pattern Recognition and Signal Processing in Archaeometry: Mathematical and Computational Solutions for Archaeology (pp. 27–63). IGI-Global.
Abstract: The aim of writer identification is to determine the writer of a piece of handwriting from a set of writers. In this paper we present a system for writer identification in old handwritten music scores. Even though a substantial number of compositions contain handwritten text in the music scores, the aim of our work is to use only the music notation to determine the author. The steps of the proposed system are the following. First of all, the music sheet is preprocessed and normalized to obtain a single binarized music line, without the staff lines. Afterwards, 100 features are extracted for every music line, which are subsequently used in a k-NN classifier that compares every feature vector with prototypes stored in a database. By applying feature selection and extraction methods on the original feature set, the performance is increased. The proposed method has been tested on a database of old music scores from the 17th to 19th centuries, achieving a recognition rate of about 95%.
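The k-NN classification step described above can be sketched as follows; the toy 2-D feature vectors and writer labels are hypothetical stand-ins for the 100-dimensional music-line features of the paper.

```python
from collections import Counter
import math

def knn_writer(query, prototypes, k=3):
    """Assign the query feature vector to a writer by majority vote among the
    k nearest stored prototypes (Euclidean distance).
    prototypes: list of (feature_vector, writer_id) pairs."""
    dists = sorted(
        (math.dist(query, vec), writer) for vec, writer in prototypes
    )
    votes = Counter(writer for _, writer in dists[:k])
    return votes.most_common(1)[0][0]

# Toy database: two music lines per hypothetical writer "A" and "B".
db = [([0.1, 0.2], "A"), ([0.2, 0.1], "A"),
      ([0.9, 0.8], "B"), ([0.8, 0.9], "B")]
print(knn_writer([0.15, 0.15], db))  # → A (both nearest prototypes are A's)
```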
|
Jaime Moreno, Xavier Otazu, & Maria Vanrell. (2010). Local Perceptual Weighting in JPEG2000 for Color Images. In 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science (pp. 255–260).
Abstract: The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists in quantizing wavelet transform coefficients using some properties of human visual system behavior. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces unperceivable details and thus improves both visual impression and transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is not favorable in PSNR, but the recovered image is more compressed at the same or even better visual quality measured with a weighted PSNR. Perceptual criteria were taken from the CIWaM (Chromatic Induction Wavelet Model).
|
Jaime Moreno, Xavier Otazu, & Maria Vanrell. (2010). Contribution of CIWaM in JPEG2000 Quantization for Color Images. In Proceedings of The CREATE 2010 Conference (pp. 132–136).
Abstract: The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists in quantizing wavelet transform coefficients using some properties of human visual system behavior. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization reduces unperceivable details and thus improves both visual impression and transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is not favorable in PSNR, but the recovered image is more compressed at the same or even better visual quality measured with a weighted PSNR. Perceptual criteria were taken from the CIWaM (Chromatic Induction Wavelet Model).
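A toy sketch of the perceptual pre-quantization idea described in the two abstracts above: coefficients are scaled by a perceptual weight before a dead-zone quantizer, so visually unimportant detail collapses to zero. The weights standing in for CIWaM sensitivities and the quantizer step are illustrative placeholders, not the model's actual values.

```python
def perceptual_prequantize(coeffs, weights, step=1.0):
    """Dead-zone quantize wavelet coefficients before standard coding:
    each coefficient is scaled by a perceptual weight (higher weight = more
    visually important), quantized by truncation toward zero, then rescaled.
    Coefficients the perceptual model deems unperceivable become exactly zero,
    which costs nothing to encode."""
    out = []
    for c, w in zip(coeffs, weights):
        q = int(c * w / step)  # truncation toward zero gives the dead zone
        out.append(q * step / w)
    return out

# The same coefficient survives in a high-weight (important) subband
# but is discarded in a low-weight one.
print(perceptual_prequantize([0.6, 0.6], [0.5, 4.0]))  # → [0.0, 0.5]
```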
|
Susana Alvarez, & Maria Vanrell. (2012). Texton theory revisited: a bag-of-words approach to combine textons. PR - Pattern Recognition, 45(12), 4312–4325.
Abstract: The aim of this paper is to revisit an old theory of texture perception and update its computational implementation by extending it to colour. With this in mind, we try to capture the optimality of perceptual systems. This is achieved in the proposed approach by sharing well-known early stages of the visual processes and extracting low-dimensional features that perfectly encode adequate properties for a large variety of textures without needing further learning stages. We propose several descriptors in a bag-of-words framework that are derived from different quantisation models on the feature spaces. Our perceptual features are directly given by the shape and colour attributes of image blobs, which are the textons. In this way we avoid learning visual words and directly build the vocabularies on these low-dimensional texton spaces. The main differences between the proposed descriptors lie in how the co-occurrence of blob attributes is represented in the vocabularies. Our approach outperforms the current state of the art in colour texture description, as demonstrated in several experiments on large texture datasets.
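The vocabulary-free bag-of-words idea above can be sketched as follows: blob attributes are quantised on a fixed grid, so the grid itself plays the role of the learned vocabulary. The two scalar attributes per blob and the grid sizes are illustrative simplifications of the paper's shape and colour texton spaces.

```python
def texton_bow(blobs, n_shape_bins=4, n_colour_bins=4):
    """Bag-of-words over texton attributes: each blob's (shape, colour) pair,
    assumed pre-scaled to [0, 1), is quantised on a regular grid and the joint
    occurrences are histogrammed. No visual-word learning is needed: the grid
    is the vocabulary, and the joint indexing captures the co-occurrence of
    shape and colour attributes."""
    hist = [0] * (n_shape_bins * n_colour_bins)
    for shape, colour in blobs:
        s = min(int(shape * n_shape_bins), n_shape_bins - 1)
        c = min(int(colour * n_colour_bins), n_colour_bins - 1)
        hist[s * n_colour_bins + c] += 1
    total = sum(hist) or 1
    return [v / total for v in hist]

# Hypothetical attributes: two similar elongated dark blobs, one round bright blob.
blobs = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8)]
print(texton_bow(blobs))
```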
|
Noha Elfiky, Theo Gevers, Arjan Gijsenij, & Jordi Gonzalez. (2014). Color Constancy using 3D Scene Geometry derived from a Single Image. TIP - IEEE Transactions on Image Processing, 23(9), 3855–3868.
Abstract: The aim of color constancy is to remove the effect of the color of the light source. As color constancy is inherently an ill-posed problem, most existing color constancy algorithms are based on specific imaging assumptions (e.g. the grey-world and white-patch assumptions). In this paper, 3D geometry models are used to determine which color constancy method to use for the different geometrical regions (depths/layers) found in images. The aim is to classify images into stages (rough 3D geometry models). According to the stage models, images are divided into stage regions using hard and soft segmentation. After that, the best color constancy method is selected for each geometry depth. To this end, we propose a method to combine color constancy algorithms by investigating the relation between depth, local image statistics and color constancy. Image statistics are then exploited per depth to select the proper color constancy method. Our approach opens the possibility of estimating multiple illuminants by distinguishing nearby light sources from distant ones. Experiments on state-of-the-art data sets show that the proposed algorithm outperforms state-of-the-art single color constancy algorithms, with an improvement of almost 50% in median angular error. When using a perfect classifier (i.e., all of the test images are correctly classified into stages), the proposed method achieves an improvement of 52% in median angular error compared to the best-performing single color constancy algorithm.
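For context, the grey-world assumption cited above, one of the single algorithms the proposed method selects among, can be sketched in a few lines: the scene's average colour is assumed achromatic, so each channel is rescaled by its mean. The toy pixel values are illustrative.

```python
def grey_world(pixels):
    """Grey-world colour constancy: estimate the illuminant as the per-channel
    mean and divide it out, so the corrected image averages to grey.
    pixels: list of (r, g, b) tuples with non-zero channel means."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    grey = sum(means) / 3.0
    return [tuple(p[c] * grey / means[c] for c in range(3)) for p in pixels]

# A reddish cast: the red channel mean is twice the green and blue means.
pixels = [(0.8, 0.4, 0.4), (0.4, 0.2, 0.2)]
corrected = grey_world(pixels)
print(corrected)  # per-channel means are now equal (achromatic average)
```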
|
Aura Hernandez-Sabate, Debora Gil, Petia Radeva, & E. N. Nofrerias. (2004). Anisotropic processing of image structures for adventitia detection in intravascular ultrasound images. In Proc. Computers in Cardiology (Vol. 31, pp. 229–232). Chicago (USA).
Abstract: The adventitia layer appears as a weak edge in IVUS images with a non-uniform grey level, which hinders its detection. In order to enhance edges, we apply an anisotropic filter that homogenizes the grey level along the image's significant structures (ridges, valleys and edges). A standard edge detector applied to the filtered image yields a set of candidate points that are prone to be unconnected. The final model is obtained by interpolating the former line segments along the tangent direction to the level curves of the filtered image with an anisotropic contour closing technique based on functional extension principles.
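To illustrate the spirit of the edge-enhancing step, here is classical Perona-Malik diffusion on a 1-D signal; this is a standard edge-preserving smoother, not the paper's structure-oriented anisotropic filter, and the parameters and toy signal are illustrative.

```python
def perona_malik_1d(signal, n_iter=20, kappa=0.2, dt=0.2):
    """Edge-preserving (Perona-Malik) diffusion on a 1-D signal: smoothing is
    strong where gradients are small and suppressed across large jumps, so
    homogeneous regions flatten while the edge between them stays sharp."""
    u = list(signal)
    for _ in range(n_iter):
        new = u[:]
        for i in range(1, len(u) - 1):
            de = u[i + 1] - u[i]                   # forward difference
            dw = u[i - 1] - u[i]                   # backward difference
            ce = 1.0 / (1.0 + (de / kappa) ** 2)   # conductance drops at edges
            cw = 1.0 / (1.0 + (dw / kappa) ** 2)
            new[i] = u[i] + dt * (ce * de + cw * dw)
        u = new
    return u

# A noisy step edge: the two plateaus are smoothed, the jump is preserved.
noisy_step = [0.0, 0.05, 0.0, 1.0, 0.95, 1.0]
print(perona_malik_1d(noisy_step))
```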
|