Alicia Fornes, & Josep Llados. (2010). A Symbol-dependent Writer Identification Approach in Old Handwritten Music Scores. In 12th International Conference on Frontiers in Handwriting Recognition (pp. 634–639).
Abstract: Writer identification consists in determining the writer of a piece of handwriting from a set of writers. In this paper we introduce a symbol-dependent approach for identifying the writer of old music scores, which is based on two symbol recognition methods. The main idea is to use the Blurred Shape Model descriptor and a DTW-based method for detecting, recognizing and describing the music clefs and notes. The proposed approach has been evaluated in a database of old music scores, achieving very high writer identification rates.
|
R. Bertrand, P. Gomez-Krämer, Oriol Ramos Terrades, P. Franco, & Jean-Marc Ogier. (2013). A System Based On Intrinsic Features for Fraudulent Document Detection. In 12th International Conference on Document Analysis and Recognition (pp. 106–110).
Abstract: Paper documents still represent a large share of the information supports used nowadays and may contain critical data. Even though official documents are secured with techniques such as printed patterns or artwork, paper documents suffer from a lack of security.
However, the high availability of cheap scanning and printing hardware allows non-experts to easily create fake documents. As the use of a watermarking system added during the document production step is hardly possible, solutions have to be proposed to distinguish a genuine document from a forged one.
In this paper, we present an automatic forgery detection method based on a document's intrinsic features at character level. The method relies, on the one hand, on outlier character detection in a discriminant feature space and, on the other hand, on the detection of strictly similar characters. To this end, a feature set is computed for all characters; duplicated characters are then detected based on a distance between characters of the same class.
Keywords: paper document; document analysis; fraudulent document; forgery; fake
|
Enric Marti, Jordi Regincos, Jaime Lopez-Krahe, & Juan J. Villanueva. (1991). A system for interpretation of hand line drawings as three-dimensional scene for CAD input. In Proceedings of the First International Conference on Document Analysis and Recognition (pp. 472–480).
|
Gemma Sanchez, Ernest Valveny, Josep Llados, Enric Marti, Oriol Ramos Terrades, N. Lozano, et al. (2003). A system for virtual prototyping of architectural projects. In Proceedings of Fifth IAPR International Workshop on Pattern Recognition (pp. 65–74).
|
Sebastien Mace, Herve Locteau, Ernest Valveny, & Salvatore Tabbone. (2010). A system to detect rooms in architectural floor plan images. In 9th IAPR International Workshop on Document Analysis Systems (pp. 167–174).
Abstract: In this article, a system to detect rooms in architectural floor plan images is described. We first present a primitive extraction algorithm for line detection. It is based on an original coupling of classical Hough transform with image vectorization in order to perform robust and efficient line detection. We show how the lines that satisfy some graphical arrangements are combined into walls. We also present the way we detect some door hypothesis thanks to the extraction of arcs. Walls and door hypothesis are then used by our room segmentation strategy; it consists in recursively decomposing the image until getting nearly convex regions. The notion of convexity is difficult to quantify, and the selection of separation lines between regions can also be rough. We take advantage of knowledge associated to architectural floor plans in order to obtain mostly rectangular rooms. Qualitative and quantitative evaluations performed on a corpus of real documents show promising results.
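To illustrate the classical Hough voting step the abstract builds its line detector on, here is a minimal sketch of the (theta, rho) accumulator; the function name and simplifications are ours, not from the paper, which additionally couples the transform with image vectorization.

```python
import numpy as np

def hough_lines(points, img_shape, n_theta=180):
    # classical Hough transform over the (theta, rho) parameter space:
    # each edge point (x, y) votes for every line rho = x*cos(theta) + y*sin(theta)
    # passing through it; peaks in the accumulator are candidate lines
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))          # maximum possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in points:
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).round().astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # offset rho so indices are non-negative
    return acc, thetas, diag
```

For ten collinear points on the horizontal line y = 5, the accumulator peaks with ten votes at theta = pi/2, rho = 5, which is the kind of peak a wall-detection stage would then combine into wall segments.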
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J.M. Ogier, W. Liu, & J. Llados (Eds.), Seventh IAPR International Workshop on Graphics Recognition (pp. 79–78).
|
Pau Torras, Mohamed Ali Souibgui, Jialuo Chen, & Alicia Fornes. (2021). A Transcription Is All You Need: Learning to Align through Attention. In 14th IAPR International Workshop on Graphics Recognition (Vol. 12916, pp. 141–146). LNCS.
Abstract: Historical ciphered manuscripts are a type of document where graphical symbols are used to encrypt their content instead of regular text. Nowadays, expert transcriptions can be found in libraries alongside the corresponding manuscript images. However, those transcriptions are not aligned, so they are barely usable for training deep learning-based recognition methods. To solve this issue, we propose a method to align each symbol in the transcript of an image with its visual representation by using an attention-based Sequence to Sequence (Seq2Seq) model. The core idea is that, by learning to recognise the symbol sequence within a cipher line image, the model also identifies each symbol's position implicitly through an attention mechanism. Thus, the resulting symbol segmentation can be later used for training algorithms. The experimental evaluation shows that this method is promising, especially taking into account the small size of the cipher dataset.
|
Debora Gil, Agnes Borras, Sergio Vera, & Miguel Angel Gonzalez Ballester. (2013). A Validation Benchmark for Assessment of Medial Surface Quality for Medical Applications. In 9th International Conference on Computer Vision Systems (Vol. 7963, pp. 334–343). LNCS. Springer Berlin Heidelberg.
Abstract: Confident use of medial surfaces in medical decision support systems requires evaluating their quality for detecting pathological deformations and describing anatomical volumes. Validation in the medical imaging field is a challenging task mainly due to the difficulties for getting consensual ground truth. In this paper we propose a validation benchmark for assessing medial surfaces in the context of medical applications. Our benchmark includes a home-made database of synthetic medial surfaces and volumes and specific scores for evaluating surface accuracy, its stability against volume deformations and its capabilities for accurate reconstruction of anatomical volumes.
Keywords: Medial Surfaces; Shape Representation; Medical Applications; Performance Evaluation
|
Aura Hernandez-Sabate, Monica Mitiko, Sergio Shiguemi, & Debora Gil. (2010). A validation protocol for assessing cardiac phase retrieval in IntraVascular UltraSound. In Computing in Cardiology (Vol. 37, pp. 899–902). IEEE.
Abstract: A good, reliable approach to cardiac triggering is of utmost importance for obtaining accurate quantitative results of atherosclerotic plaque burden from the analysis of IntraVascular UltraSound. Although research on retrospective gating methods has increased in recent years, there is no general consensus on a validation protocol. Many methods are based on quality assessment of the appearance of longitudinal cuts, and those reporting quantitative numbers do not follow a standard protocol. Such heterogeneity in validation protocols makes faithful comparison across methods a difficult task. We propose a validation protocol based on the variability of the retrieved cardiac phase and explore the capability of several quality measures for quantifying such variability. An ideal detector, suitable for application in clinical practice, should produce stable phases; that is, it should always sample the same fraction of the cardiac cycle. In this context, one should measure the variability (variance) of a candidate sampling with respect to a ground-truth (reference) sampling, since the variance indicates how spread the shots at a target are. In order to quantify the deviation between the sampling and the ground truth, we have considered two quality scores reported in the literature: the signed distance to the closest reference sample and the distance to the right of each reference sample. We have also considered the residuals of the regression line of the reference against the candidate sampling. The performance of the measures has been explored on a set of synthetic samplings covering different cardiac cycle fractions and variabilities. From our simulations, we conclude that the distance-based metrics are sensitive to the shift considered, while the residuals are robust against fraction and variability as long as a pair-wise correspondence between candidate and reference can be established.
We will further investigate the impact of false positive and false negative detections in experimental data.
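The residual-based score the abstract favours can be sketched in a few lines: fit the regression line of the reference against the candidate sampling and report the variance of the residuals. The function name and this simplified setup are ours, not from the paper.

```python
import numpy as np

def phase_residual_variance(reference, candidate):
    # fit the regression line of the candidate sampling against the
    # reference sampling and report the variance of the residuals;
    # unlike raw distance scores, this is insensitive to a constant
    # phase shift between the two samplings
    ref = np.asarray(reference, dtype=float)
    cand = np.asarray(candidate, dtype=float)
    slope, intercept = np.polyfit(ref, cand, 1)   # least-squares line
    residuals = cand - (slope * ref + intercept)
    return residuals.var()
```

A candidate sampling that is merely shifted by a constant offset from the reference yields near-zero residual variance, matching the paper's point that residuals are robust to the shift while distance-based scores are not.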
|
Jialuo Chen, Mohamed Ali Souibgui, Alicia Fornes, & Beata Megyesi. (2020). A Web-based Interactive Transcription Tool for Encrypted Manuscripts. In 3rd International Conference on Historical Cryptology (pp. 52–59).
Abstract: Manual transcription of handwritten text is a time-consuming task. In the case of encrypted manuscripts, the recognition is even more complex due to the huge variety of alphabets and symbol sets. To speed up and ease this process, we present a web-based tool aimed to (semi-)automatically transcribe the encrypted sources. The user uploads one or several images of the desired encrypted document(s) as input, and the system returns the transcription(s). This process is carried out in an interactive fashion with the user to obtain more accurate results. For discovering and testing, the developed web tool is freely available.
|
Sergi Garcia Bordils, Dimosthenis Karatzas, & Marçal Rusiñol. (2023). Accelerating Transformer-Based Scene Text Detection and Recognition via Token Pruning. In 17th International Conference on Document Analysis and Recognition (Vol. 14192, pp. 106–121). LNCS.
Abstract: Scene text detection and recognition is a crucial task in computer vision with numerous real-world applications. Transformer-based approaches are behind all current state-of-the-art models and have achieved excellent performance. However, the computational requirements of the transformer architecture make training these methods slow and resource heavy. In this paper, we introduce a new token pruning strategy that significantly decreases training and inference times without sacrificing performance, striking a balance between accuracy and speed. We have applied this pruning technique to our own end-to-end transformer-based scene text understanding architecture. Our method uses a separate detection branch to guide the pruning of uninformative image features, which significantly reduces the number of tokens at the input of the transformer. Experimental results show how our network is able to obtain competitive results on multiple public benchmarks while running at significantly higher speeds.
Keywords: Scene Text Detection; Scene Text Recognition; Transformer Acceleration
|
Antonio Hernandez, Carlo Gatta, Sergio Escalera, Laura Igual, Victoria Martin Yuste, & Petia Radeva. (2011). Accurate and Robust Fully-Automatic QCA: Method and Numerical Validation. In 14th International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 14, pp. 496–503). Springer.
Abstract: Quantitative Coronary Angiography (QCA) is a methodology used to evaluate arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method's performance in a rigorous numerical way on two datasets. The method can detect an artery with a precision of 92.9 ± 5% and a sensitivity of 94.2 ± 6%. The average absolute distance error between the detected and ground-truth centerlines is 1.13 ± 0.11 pixels (about 0.27 ± 0.025 mm), and the absolute relative error in the vessel caliber estimation is 2.93% with almost no bias. Moreover, the method can discriminate between arteries and the catheter with an accuracy of 96.4%.
|
C. Alejandro Parraga, Ramon Baldrich, & Maria Vanrell. (2010). Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset. In 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science (pp. 50–57).
Abstract: The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measures and a study on how the choice of training reflectances for the polynomial may alter the results.
|
Ariel Amato, Mikhail Mozerov, Ivan Huerta, Jordi Gonzalez, & Juan J. Villanueva. (2008). Background Subtraction Technique Based on Chromaticity and Intensity Patterns. In 19th International Conference on Pattern Recognition (pp. 1–4).
|
Mohammad Ali Bagheri, Qigang Gao, & Sergio Escalera. (2016). Action Recognition by Pairwise Proximity Function Support Vector Machines with Dynamic Time Warping Kernels. In 29th Canadian Conference on Artificial Intelligence (Vol. 9673, pp. 3–14). Springer International Publishing.
Abstract: In the context of human action recognition using skeleton data, the 3D trajectories of joint points may be considered as multi-dimensional time series. The traditional recognition technique in the literature is based on time-series (dis)similarity measures (such as Dynamic Time Warping). For these general (dis)similarity measures, k-nearest neighbor algorithms are a natural choice. However, k-NN classifiers are known to be sensitive to noise and outliers. In this paper, a new class of Support Vector Machine that is applicable to trajectory classification, such as action recognition, is developed by incorporating an efficient time-series distance measure into the kernel function. More specifically, the derivative of the Dynamic Time Warping (DTW) distance measure is employed as the SVM kernel. In addition, the pairwise proximity learning strategy is utilized in order to make use of non-positive semi-definite (PSD) kernels in the SVM formulation. The recognition results of the proposed technique on two action recognition datasets demonstrate that our methodology outperforms the state-of-the-art methods. Remarkably, we obtained 89% accuracy on the well-known MSRAction3D dataset using only the 3D trajectories of body joints obtained by Kinect.
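The two ingredients the abstract combines — derivative DTW as the dissimilarity and a pairwise-proximity embedding to sidestep the non-PSD kernel — can be sketched as follows for the 1-D case; function names and simplifications are ours, not from the paper.

```python
import numpy as np

def derivative(ts):
    # first-order differences approximate the derivative used by derivative-DTW
    return np.diff(ts)

def dtw(a, b):
    # classic O(len(a)*len(b)) dynamic-programming DTW with absolute-value cost
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def proximity_features(series, references):
    # pairwise-proximity embedding: represent each series by its vector of
    # derivative-DTW distances to a reference set; a standard SVM can then be
    # trained on these vectors even though the raw DTW "kernel" is not PSD
    return np.array([[dtw(derivative(s), derivative(r)) for r in references]
                     for s in series])
```

For 3D joint trajectories, the same scheme would apply per coordinate (or with a multi-dimensional local cost), and the embedded vectors would be fed to an off-the-shelf SVM.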
|