L. Rothacker, Marçal Rusiñol, Josep Llados, & G.A. Fink. (2014). A Two-stage Approach to Segmentation-Free Query-by-example Word Spotting. Manuscript Cultures, 47–58.
Abstract: With the ongoing progress in digitization, huge document collections and archives have become available to a broad audience. Scanned document images can be transmitted electronically and studied simultaneously throughout the world. While this is very beneficial, it is often impossible to perform automated searches on these document collections. Optical character recognition usually fails when it comes to handwritten or historic documents. In order to address the need for exploring document collections rapidly, researchers are working on word spotting. In query-by-example word spotting scenarios, the user selects an exemplary occurrence of the query word in a document image. The word spotting system then retrieves all regions in the collection that are visually similar to the given example of the query word. The best matching regions are presented to the user and no actual transcription is required.
An important property of a word spotting system is the computational speed with which queries can be executed. In our previous work, we presented a relatively slow but high-precision method. In the present work, we will extend this baseline system to an integrated two-stage approach. In a coarse-grained first stage, we will filter document images efficiently in order to identify regions that are likely to contain the query word. In the fine-grained second stage, these regions will be analyzed with our previously presented high-precision method. Finally, we will report recognition results and query times for the well-known George Washington
benchmark in our evaluation. We achieve state-of-the-art recognition results while the query times can be reduced to 50% in comparison with our baseline.
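The coarse-then-fine retrieval scheme summarized in this abstract can be sketched generically. This is a hypothetical illustration, not the authors' implementation: the scoring functions, the toy string "regions", and the `keep` fraction are all assumptions standing in for the paper's image descriptors and high-precision matcher.

```python
# Illustrative two-stage query-by-example retrieval pipeline:
# a cheap coarse filter prunes candidate regions, then a more
# expensive fine-grained matcher re-ranks the survivors.

def coarse_score(region, query):
    # Stage-1 proxy: a cheap similarity (here just length difference).
    return -abs(len(region) - len(query))

def fine_score(region, query):
    # Stage-2 proxy: a costlier position-wise comparison, standing in
    # for the high-precision matching of the baseline system.
    return sum(a == b for a, b in zip(region, query))

def two_stage_spotting(regions, query, keep=0.5):
    # Stage 1: keep only the top fraction of regions by the coarse score.
    ranked = sorted(regions, key=lambda r: coarse_score(r, query), reverse=True)
    survivors = ranked[:max(1, int(len(ranked) * keep))]
    # Stage 2: re-rank the survivors with the precise (slow) matcher.
    return sorted(survivors, key=lambda r: fine_score(r, query), reverse=True)

results = two_stage_spotting(["order", "orders", "other", "battle"], "order")
```

Because stage 2 runs only on the filtered candidates, its cost scales with the size of the shortlist rather than the whole collection, which is the source of the reported query-time reduction.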
|
Lasse Martensson, Ekta Vats, Anders Hast, & Alicia Fornes. (2019). In Search of the Scribe: Letter Spotting as a Tool for Identifying Scribes in Large Handwritten Text Corpora. HUMAN IT - Journal for Information Technology Studies as a Human Science, 95–120.
Abstract: In this article, a form of the so-called word spotting method is used on a large set of handwritten documents in order to identify those that contain script of similar execution. The point of departure for the investigation is the mediaeval Swedish manuscript Cod. Holm. D 3. The main scribe of this manuscript has not yet been identified in other documents. The current attempt aims at localising other documents that display a large degree of similarity in the characteristics of the script, these being possible candidates for having been executed by the same hand. For this purpose, the method of word spotting has been employed, focusing on individual letters, and the process is therefore referred to as letter spotting in the article. In this process, a set of ‘g’s, ‘h’s and ‘k’s has been selected as templates, and a search has then been made for close matches among the mediaeval Swedish charters. The search resulted in a number of charters that displayed great similarities with the manuscript D 3. The letter spotting method thus proved to be a very efficient sorting tool for localising similar script samples.
Keywords: Scribal attribution/ writer identification; digital palaeography; word spotting; mediaeval charters; mediaeval manuscripts
|
Lluis Barcelo, & X. Binefa. (2002). Bayesian Video Mosaicing with moving objects. International Journal of Pattern Recognition and Artificial Intelligence, 16(3): 341–348 (IF: 0.359).
|
M. Bressan, David Guillamet, & Jordi Vitria. (2004). Multiclass Object Recognition using Class-Conditional Independent Component Analysis. Cybernetics and Systems, 35/1:35–61 (IF: 0.768).
|
M. Bressan, David Guillamet, & Jordi Vitria. (2003). Using an ICA Representation of Local Color Histograms for Object Recognition. Pattern Recognition, 36(3):691–701 (IF: 1.611).
|
M. Bressan, & Jordi Vitria. (2003). Independent Feature Selection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(10): 1312–1317 (IF: 3.823).
|
M. Gomez, J. Mauri, E. Fernandez-Nofrerias, Oriol Rodriguez-Leor, Carme Julia, David Rotger, et al. (2002). Una nova aplicació informàtica per a la correlació d'imatges angiogràfiques i d'ecografia intracoronària. Revista de la Societat Catalana de Cardiologia, 4(4): 42, XIV Congrés de la Societat Catalana de Cardiologia.
|
M. Gomez, J. Mauri, E. Fernandez-Nofrerias, Oriol Rodriguez-Leor, Carme Julia, Misael Rosales, et al. (2002). Modelo físico para la simulación de ultrasonido intravascular. XXXVIII Congreso Nacional de la Sociedad Española de Cardiología.
|
M. Gomez, J. Mauri, E. Fernandez-Nofrerias, Oriol Rodriguez-Leor, Carme Julia, Oriol Pujol, et al. (2002). Diferenciación de las estructuras del vaso coronario mediante el procesamiento de imágenes y el análisis de las diferentes texturas a partir de la ecografía intracoronaria. XXXVIII Congreso Nacional de la Sociedad Española de Cardiología.
|
M. Gonzalez-Audicana, Xavier Otazu, O. Fors, & A. Seco. (2005). Comparison between Mallat's and the 'à trous' discrete wavelet transform based algorithms for the fusion of multispectral and panchromatic images. International Journal of Remote Sensing, 26(3):595–614 (IF: 0.925).
|
M.A. Garcia, & Angel Sappa. (2004). Efficient Generation of Discontinuity-Preserving Adaptive Triangulations from Range Images. IEEE Trans. on Systems, Man, and Cybernetics (Part B), 34(5):2003–2014 (IF: 1.052).
|
Marçal Rusiñol. (2019). Classificació semàntica i visual de documents digitals. Revista de biblioteconomia i documentació, 75–86.
Abstract: This article analyses automatic processing systems that operate on digitised documents with the aim of describing their contents, thereby facilitating access, enabling automatic indexing, and making documents accessible to search engines. The goal of these technologies is to train computational models capable of classifying, clustering, or searching over digital documents; accordingly, the tasks of classification, clustering, and retrieval are described. When artificial intelligence technologies are used, classification systems are expected to return semantic labels; clustering systems, documents grouped into meaningful clusters; and retrieval systems, a list of documents ranked by relevance to a given query. The article then gives an overview of the methods that allow digital documents to be described both visually (what they look like) and in terms of their semantic content (what they are about). Regarding the visual description of documents, the state of the art in numerical representations of digitised documents is reviewed, covering both classical methods and methods based on deep learning. Regarding the semantic description of content, the techniques analysed include optical character recognition (OCR); basic statistics on the occurrence of words in a text (the bag-of-words model); and deep learning methods such as word2vec, based on a neural network that, given a few words of a text, must predict the next word. Knowledge is being transferred from the engineering field into products and services in archival science, library science, documentation, and mass-market platforms; however, the algorithms must be efficient enough not only for recognition and literal transcription but also for interpreting the contents.
|
Marçal Rusiñol, & Lluis Gomez. (2018). Avances en clasificación de imágenes en los últimos diez años. Perspectivas y limitaciones en el ámbito de archivos fotográficos históricos. Revista anual de la Asociación de Archiveros de Castilla y León, 161–174.
|
Marçal Rusiñol, R. Roset, Josep Llados, & C. Montaner. (2011). Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. e-Perimetron, 219–229.
Abstract: By means of computer vision algorithms, scanned images of maps are processed in order to extract relevant geographic information from printed coordinate pairs. The meaningful information is then transformed into georeferencing information for each single map sheet, and the complete set is compiled to produce a graphical index sheet for the map series along with relevant metadata. The whole process is fully automated and trained to attain maximum effectiveness and throughput.
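The georeferencing step described in this abstract can be illustrated with a minimal sketch: once printed coordinate pairs have been read from two opposite corners of a sheet, a linear pixel-to-geographic mapping follows. This is not the authors' code; the function, the corner values, and the axis-aligned assumption are all illustrative.

```python
# Sketch: derive an axis-aligned pixel-to-geographic mapping from two
# corner tie points of a scanned map sheet.

def georeference(px0, py0, lon0, lat0, px1, py1, lon1, lat1):
    # Scale factors in degrees per pixel, from the two corner ties.
    sx = (lon1 - lon0) / (px1 - px0)
    sy = (lat1 - lat0) / (py1 - py0)   # usually negative: y grows downward
    # Return a function mapping any pixel (px, py) to (lon, lat).
    return lambda px, py: (lon0 + sx * (px - px0), lat0 + sy * (py - py0))

# Example: a 1000x800-pixel sheet spanning 2.0-2.5 deg E and 41.5-41.0 deg N.
to_geo = georeference(0, 0, 2.0, 41.5, 1000, 800, 2.5, 41.0)
lon, lat = to_geo(500, 400)   # centre of the sheet
```

With each sheet georeferenced this way, its footprint polygon can be drawn onto a common map to produce the graphical index sheet the abstract describes.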
|
Marc Sunset Perez, Marc Comino Trinidad, Dimosthenis Karatzas, Antonio Chica Calaf, & Pere Pau Vazquez Alcocer. (2016). Development of general-purpose projection-based augmented reality systems. IADIS International Journal on Computer Science and Information Systems, 1–18.
Abstract: Despite the large amount of methods and applications of augmented reality, there is little homogenization of the software platforms that support them. An exception may be the low-level control software that is provided by some high-profile vendors such as Qualcomm and Metaio. However, these provide fine-grain modules for e.g. element tracking. We are more concerned with the application framework, which includes the control of the devices working together for the development of the AR experience. In this paper we describe the development of a software framework for AR setups. We concentrate on the modular design of the framework, but also on some hard problems such as the calibration stage, crucial for projection-based AR. The developed framework is suitable for, and has been tested in, AR applications using camera-projector pairs, for both fixed and nomadic setups.
|