|
Enric Marti, Antoni Gurgui, Debora Gil, Aura Hernandez-Sabate, Jaume Rocarias, & Ferran Poveda. (2014). ABP on line: Seguimiento, entregas y evaluación en aprendizaje basado en proyectos [PBL online: monitoring, submissions and assessment in project-based learning].
|
|
|
Jialuo Chen, Mohamed Ali Souibgui, Alicia Fornes, & Beata Megyesi. (2020). A Web-based Interactive Transcription Tool for Encrypted Manuscripts. In 3rd International Conference on Historical Cryptology (pp. 52–59).
Abstract: Manual transcription of handwritten text is a time-consuming task. In the case of encrypted manuscripts, recognition is even more complex due to the huge variety of alphabets and symbol sets. To speed up and ease this process, we present a web-based tool aimed at (semi-)automatically transcribing encrypted sources. The user uploads one or several images of the desired encrypted document(s) as input, and the system returns the transcription(s). This process is carried out interactively with the user to obtain more accurate results. The developed web tool is freely available for exploration and testing.
|
|
|
J. Nuñez, O. Fors, Xavier Otazu, Vicenç Pala, Roman Arbiol, & M. T. Merino. (2006). A Wavelet-Based Method for the Determination of the Relative Resolution Between Remotely Sensed Images. IEEE Transactions on Geoscience and Remote Sensing, 44(9), 2539–2548.
|
|
|
Jaume Garcia, Debora Gil, Sandra Pujades, & Francesc Carreras. (2008). A Variational Framework for Assessment of the Left Ventricle Motion. Mathematical Modelling of Natural Phenomena, 3(6), 76–100.
Abstract: Impairment of left ventricular contractility due to cardiovascular diseases is reflected in left ventricle (LV) motion patterns. An abnormal change in LV torsion or long-axis shortening values can help with the diagnosis and follow-up of LV dysfunction. Tagged Magnetic Resonance (TMR) is a widely used medical imaging modality that allows estimation of the local deformation of myocardial tissue. In this work, we introduce a novel variational framework for extracting left ventricle dynamics from TMR sequences. A bi-dimensional representation space of TMR images given by Gabor filter banks is defined. Tracking of the phases of the Gabor response is combined within a variational framework which regularizes the deformation field only in areas where the Gabor amplitude drops, while restoring the underlying motion elsewhere. The clinical applicability of the proposed method is illustrated by extracting normality models of ventricular torsion from 19 healthy subjects.
Keywords: Left Ventricle Dynamics; Ventricular Torsion; Tagged Magnetic Resonance; Motion Tracking; Variational Framework; Gabor Transform
|
|
|
Aura Hernandez-Sabate, Monica Mitiko, Sergio Shiguemi, & Debora Gil. (2010). A validation protocol for assessing cardiac phase retrieval in IntraVascular UltraSound. In Computing in Cardiology (Vol. 37, pp. 899–902). IEEE.
Abstract: A reliable approach to cardiac triggering is of utmost importance for obtaining accurate quantitative results of atherosclerotic plaque burden from the analysis of IntraVascular UltraSound. Although research on retrospective gating methods has increased in recent years, there is no general consensus on a validation protocol. Many methods are based on quality assessment of the appearance of longitudinal cuts, and those reporting quantitative numbers do not follow a standard protocol. Such heterogeneity in validation protocols makes faithful comparison across methods a difficult task. We propose a validation protocol based on the variability of the retrieved cardiac phase and explore the capability of several quality measures for quantifying such variability. An ideal detector, suitable for application in clinical practice, should produce stable phases; that is, it should always sample the same fraction of the cardiac cycle. In this context, one should measure the variability (variance) of a candidate sampling with respect to a ground truth (reference) sampling, since the variance indicates how spread our shots at a target are. In order to quantify the deviation between the sampling and the ground truth, we have considered two quality scores reported in the literature: the signed distance to the closest reference sample and the distance to the right of each reference sample. We have also considered the residuals of the regression line of reference against candidate sampling. The performance of these measures has been explored on a set of synthetic samplings covering different cardiac cycle fractions and variabilities. From our simulations, we conclude that the distance-based metrics are sensitive to the shift considered, while the residuals are robust against fractions and variabilities as long as a pair-wise correspondence between candidate and reference can be established. We will further investigate the impact of false positive and false negative detections in experimental data.
|
|
|
Debora Gil, Agnes Borras, Sergio Vera, & Miguel Angel Gonzalez Ballester. (2013). A Validation Benchmark for Assessment of Medial Surface Quality for Medical Applications. In 9th International Conference on Computer Vision Systems (Vol. 7963, pp. 334–343). LNCS. Springer Berlin Heidelberg.
Abstract: Confident use of medial surfaces in medical decision support systems requires evaluating their quality for detecting pathological deformations and describing anatomical volumes. Validation in the medical imaging field is a challenging task mainly due to the difficulties for getting consensual ground truth. In this paper we propose a validation benchmark for assessing medial surfaces in the context of medical applications. Our benchmark includes a home-made database of synthetic medial surfaces and volumes and specific scores for evaluating surface accuracy, its stability against volume deformations and its capabilities for accurate reconstruction of anatomical volumes.
Keywords: Medial Surfaces; Shape Representation; Medical Applications; Performance Evaluation
|
|
|
Mohamed Ali Souibgui, Asma Bensalah, Jialuo Chen, Alicia Fornes, & Michelle Waldispühl. (2023). A User Perspective on HTR methods for the Automatic Transcription of Rare Scripts: The Case of Codex Runicus. JOCCH - ACM Journal on Computing and Cultural Heritage, 15(4), 1–18.
Abstract: Recent breakthroughs in Artificial Intelligence, Deep Learning and Document Image Analysis and Recognition have significantly eased the creation of digital libraries and the transcription of historical documents. However, for documents in rare scripts with little labelled training data available, current Handwritten Text Recognition (HTR) systems are too constrained. Moreover, research on HTR often focuses on technical aspects only, and rarely puts emphasis on implementing software tools for scholars in the Humanities. In this article, we describe, compare and analyse different transcription methods for rare scripts. We evaluate their performance in a real use case of a medieval manuscript written in the runic script (Codex Runicus) and discuss the advantages and disadvantages of each method from the user perspective. From this exhaustive analysis and comparison with a fully manual transcription, we draw conclusions and provide recommendations to scholars interested in using automatic transcription tools.
|
|
|
George A. Triantafyllidis, Nikolaos Thomos, Cristina Cañero, P. Vieyres, & Michael G. Strintzis. (2005). A User Interface for Mobile Robotized Tele-Echography.
|
|
|
L. Rothacker, Marçal Rusiñol, Josep Llados, & G. A. Fink. (2014). A Two-stage Approach to Segmentation-Free Query-by-example Word Spotting. Manuscript Cultures, 47–58.
Abstract: With the ongoing progress in digitization, huge document collections and archives have become available to a broad audience. Scanned document images can be transmitted electronically and studied simultaneously throughout the world. While this is very beneficial, it is often impossible to perform automated searches on these document collections. Optical character recognition usually fails when it comes to handwritten or historic documents. In order to address the need for exploring document collections rapidly, researchers are working on word spotting. In query-by-example word spotting scenarios, the user selects an exemplary occurrence of the query word in a document image. The word spotting system then retrieves all regions in the collection that are visually similar to the given example of the query word. The best matching regions are presented to the user and no actual transcription is required.
An important property of a word spotting system is the computational speed with which queries can be executed. In our previous work, we presented a relatively slow but high-precision method. In the present work, we extend this baseline system to an integrated two-stage approach. In a coarse-grained first stage, we filter document images efficiently in order to identify regions that are likely to contain the query word. In the fine-grained second stage, these regions are analyzed with our previously presented high-precision method. Finally, we report recognition results and query times for the well-known George Washington benchmark in our evaluation. We achieve state-of-the-art recognition results while the query times are reduced to 50% in comparison with our baseline.
|
|
|
Razieh Rastgoo, Kourosh Kiani, & Sergio Escalera. (2024). A transformer model for boundary detection in continuous sign language. MTAP - Multimedia Tools and Applications.
Abstract: Sign Language Recognition (SLR) has garnered significant attention from researchers in recent years, particularly the intricate domain of Continuous Sign Language Recognition (CSLR), which presents heightened complexity compared to Isolated Sign Language Recognition (ISLR). One of the prominent challenges in CSLR pertains to accurately detecting the boundaries of isolated signs within a continuous video stream. Additionally, the reliance on handcrafted features in existing models poses a challenge to achieving optimal accuracy. To surmount these challenges, we propose a novel approach utilizing a Transformer-based model. Unlike traditional models, our approach focuses on enhancing accuracy while eliminating the need for handcrafted features. The Transformer model is employed for both ISLR and CSLR. The training process involves using isolated sign videos, where hand keypoint features extracted from the input video are enriched using the Transformer model. Subsequently, these enriched features are forwarded to the final classification layer. The trained model, coupled with a post-processing method, is then applied to detect isolated sign boundaries within continuous sign videos. The evaluation of our model, conducted on two distinct datasets including both continuous signs and their corresponding isolated signs, demonstrates promising results.
|
|
|
Pau Torras, Mohamed Ali Souibgui, Jialuo Chen, & Alicia Fornes. (2021). A Transcription Is All You Need: Learning to Align through Attention. In 14th IAPR International Workshop on Graphics Recognition (Vol. 12916, pp. 141–146). LNCS.
Abstract: Historical ciphered manuscripts are a type of document where graphical symbols are used to encrypt their content instead of regular text. Nowadays, expert transcriptions can be found in libraries alongside the corresponding manuscript images. However, those transcriptions are not aligned, so they are barely usable for training deep learning-based recognition methods. To solve this issue, we propose a method to align each symbol in the transcript of an image with its visual representation by using an attention-based Sequence to Sequence (Seq2Seq) model. The core idea is that, by learning to recognise the symbol sequence within a cipher line image, the model also implicitly identifies each symbol's position through the attention mechanism. Thus, the resulting symbol segmentation can later be used for training algorithms. The experimental evaluation shows that this method is promising, especially taking into account the small size of the cipher dataset.
|
|
|
G. Thorvaldsen, Joana Maria Pujadas-Mora, T. Andersen, L. Eikvil, Josep Llados, Alicia Fornes, et al. (2015). A Tale of Two Transcriptions. Historical Life Course Studies, 1–19.
Abstract: This article explains how two projects implement semi-automated transcription routines: for census sheets in Norway and marriage protocols from Barcelona. The Spanish system was created to transcribe the marriage license books from 1451 to 1905 for the Barcelona area, one of the world’s longest series of preserved vital records. Thus, in the project “Five Centuries of Marriages” (5CofM) at the Autonomous University of Barcelona’s Center for Demographic Studies, the Barcelona Historical Marriage Database has been built. More than 600,000 records were transcribed by 150 transcribers working online. The Norwegian material is cross-sectional, as it is the 1891 census, recorded on one sheet per person. This format, and the underlining of keywords for several variables, made it more feasible to semi-automate data entry than when many persons are listed on the same page. While Optical Character Recognition (OCR) for printed text is scientifically mature, computer vision research is now focused on more difficult problems such as handwriting recognition. In the marriage project, document analysis methods have been proposed to automatically recognize the marriage licenses. Fully automatic recognition is still a challenge, but some promising results have been obtained. In Spain, Norway and elsewhere the source material is available as scanned pictures on the Internet, opening up the possibility for further international cooperation on automating the transcription of historical source materials. As in projects to digitize printed materials, the optimal solution is likely to be a combination of manual transcription and machine-assisted recognition for handwritten sources as well.
Keywords: Nominative Sources; Census; Vital Records; Computer Vision; Optical Character Recognition; Word Spotting
|
|
|
Josep Llados, Jaime Lopez-Krahe, & Enric Marti. (1997). A system to understand hand-drawn floor plans using subgraph isomorphism and Hough transform. In Machine Vision and Applications (Vol. 10, pp. 150–158).
Abstract: Presently, man-machine interface development is a widespread research activity. A system to understand hand-drawn architectural drawings in a CAD environment is presented in this paper. To understand a document, we have to identify its building elements and their structural properties. An attributed graph structure is chosen as a symbolic representation of both the input document and the patterns to recognize in it. An inexact subgraph isomorphism procedure using relaxation labeling techniques is performed. In this paper we focus on how to speed up the matching. One building element, the walls, is characterized by a hatching pattern. Using a straight line Hough transform (SLHT)-based method, we recognize this pattern, characterized by parallel straight lines, and remove from the input graph the edges belonging to it. The isomorphism is then applied to the remainder of the input graph. When all the building elements have been recognized, the document is redrawn, correcting the inaccurate strokes obtained from the hand-drawn input.
Keywords: Line Drawings; Hough Transform; Graph Matching; CAD Systems; Graphics Recognition
|
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2008). A System to Segment Text and Symbols from Color Maps. In Graphics Recognition. Recent Advances and New Opportunities (Vol. 5046, pp. 245–256). LNCS.
|
|
|
Partha Pratim Roy, Eduard Vazquez, Josep Llados, Ramon Baldrich, & Umapada Pal. (2007). A System to Retrieve Text/Symbols from Color Maps using Connected Component and Skeleton Analysis. In J. M. Ogier, W. Liu, & J. Llados (Eds.), Seventh IAPR International Workshop on Graphics Recognition (79–78).
|
|