|
Christophe Rigaud, Dimosthenis Karatzas, Jean-Christophe Burie and Jean-Marc Ogier. 2013. Speech balloon contour classification in comics. 10th IAPR International Workshop on Graphics Recognition.
Abstract: Comic book digitization, combined with subsequent comic book understanding, creates a variety of new applications, including mobile reading and data mining. Document understanding in this domain is challenging as comics are semi-structured documents, combining semantically important graphical and textual parts. In this work we detail a novel approach for classifying speech balloons in scanned comic book pages based on their contour time series.
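A minimal illustrative sketch (not the authors' implementation): one common way to turn a closed balloon contour into a 1-D "time series" is to sample the distance from the boundary to the contour centroid, assuming a binary balloon mask is already available.

# Sketch only: centroid-distance time series of a balloon contour.
import cv2
import numpy as np

def contour_time_series(mask, n_samples=128):
    """Scale-normalized centroid-distance series of the largest contour in `mask`."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    pts = max(contours, key=cv2.contourArea).reshape(-1, 2).astype(np.float64)
    centroid = pts.mean(axis=0)
    dist = np.linalg.norm(pts - centroid, axis=1)       # distance to centroid
    # Resample to a fixed length so series from different balloons are comparable
    idx = np.linspace(0, len(dist) - 1, n_samples)
    series = np.interp(idx, np.arange(len(dist)), dist)
    return series / series.max()                        # rough scale invariance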
|
|
|
Miquel Ferrer, Ernest Valveny and Francesc Serratosa. 2006. Spectral Median Graphs Applied to Graphical Symbol Recognition. 11th Iberoamerican Congress on Pattern Recognition (CIARP'06), J. F. Martínez-Trinidad et al. (Eds.), LNCS 4225: 774–783.
|
|
|
Josep Llados, J. Lopez-Krahe and D. Archambault. 2007. Special Issue on Information Technologies for Visually Impaired People. Guest Editors.
|
|
|
Josep Llados and Dorothea Blostein. 2007. Special Issue on Graphics Recognition. Guest Editors.
|
|
|
Thanh Ha Do, Salvatore Tabbone and Oriol Ramos Terrades. 2016. Sparse representation over learned dictionary for symbol recognition. Signal Processing, 125, 36–47.
Abstract: In this paper we propose an original sparse vector model for the symbol retrieval task. More specifically, we apply the K-SVD algorithm for learning a visual dictionary based on symbol descriptors locally computed around interest points. Results on benchmark datasets show that the obtained sparse representation is competitive with state-of-the-art methods. Moreover, our sparse representation is invariant to rotation and scale transforms and also robust to degraded images and distorted symbols. Thereby, the learned visual dictionary is able to represent instances of unseen classes of symbols.
Keywords: Symbol Recognition; Sparse Representation; Learned Dictionary; Shape Context; Interest Points
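An illustrative sketch of the general pipeline, not the paper's code: the paper learns its dictionary with K-SVD, which scikit-learn does not provide, so MiniBatchDictionaryLearning is used here as a stand-in; X is assumed to be a matrix of local shape-context descriptors computed around interest points.

# Sketch only: dictionary learning plus OMP sparse coding of local descriptors.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_and_encode(X, n_atoms=256, sparsity=5):
    """X: (n_descriptors, n_dims). Returns the fitted dictionary and sparse codes."""
    dico = MiniBatchDictionaryLearning(n_components=n_atoms,
                                       transform_algorithm='omp',
                                       transform_n_nonzero_coefs=sparsity)
    codes = dico.fit(X).transform(X)     # sparse codes over the learned atoms
    return dico, codes

def symbol_signature(codes):
    # Pool the sparse codes of all descriptors of one symbol into a single vector
    return np.abs(codes).sum(axis=0)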
|
|
|
Anguelos Nicolaou, Andrew Bagdanov, Marcus Liwicki and Dimosthenis Karatzas. 2015. Sparse Radial Sampling LBP for Writer Identification. 13th International Conference on Document Analysis and Recognition (ICDAR 2015), 716–720.
Abstract: In this paper we present the use of Sparse Radial Sampling Local Binary Patterns, a variant of Local Binary Patterns (LBP), for text-as-texture classification. By adapting and extending the standard LBP operator to the particularities of text we obtain a generic text-as-texture classification scheme and apply it to writer identification. In experiments on the CVL and ICDAR 2013 datasets, the proposed feature set demonstrates state-of-the-art (SOA) performance. Among the SOA, the proposed method is the only one that is based on dense extraction of a single local feature descriptor. This makes it fast and applicable at the earliest stages in a DIA pipeline without the need for segmentation, binarization, or extraction of multiple features.
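For illustration only, the standard dense LBP operator from scikit-image can stand in for the paper's Sparse Radial Sampling variant to show the text-as-texture idea: describe a page by an LBP histogram and compare pages with a nearest-neighbour rule.

# Sketch only: plain LBP histogram features for page-level writer matching.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_page, P=8, R=3):
    lbp = local_binary_pattern(gray_page, P, R, method='uniform')
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def nearest_writer(query_hist, reference_hists):
    # Chi-square distance between normalized histograms
    d = [0.5 * np.sum((query_hist - h) ** 2 / (query_hist + h + 1e-12))
         for h in reference_hists]
    return int(np.argmin(d))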
|
|
|
Dimosthenis Karatzas and Ch. Lioutas. 1998. Software Package Development for Electron Diffraction Image Analysis. Proceedings of the XIV Solid State Physics National Conference.
|
|
|
Dena Bazazian, Dimosthenis Karatzas and Andrew Bagdanov. 2018. Soft-PHOC Descriptor for End-to-End Word Spotting in Egocentric Scene Images. International Workshop on Egocentric Perception, Interaction and Computing at ECCV.
Abstract: Word spotting in natural scene images has many applications in scene understanding and visual assistance. We propose Soft-PHOC, an intermediate representation of images based on character probability maps. Our representation extends the concept of the Pyramidal Histogram Of Characters (PHOC) by exploiting Fully Convolutional Networks to derive a pixel-wise mapping of the character distribution within candidate word regions. We show how to use our descriptors for word spotting tasks in egocentric camera streams through an efficient text line proposal algorithm. This is based on the Hough Transform over character attribute maps followed by scoring using Dynamic Time Warping (DTW). We evaluate our results on the ICDAR 2015 Challenge 4 dataset of incidental scene text captured by an egocentric camera.
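As a minimal sketch of the DTW scoring step only (not the authors' code), the following compares two sequences of per-column character-probability vectors, e.g. a rendered query word against a text-line proposal, and returns a length-normalized alignment cost.

# Sketch only: dynamic time warping over character-probability sequences.
import numpy as np

def dtw_score(query, candidate):
    """query, candidate: arrays of shape (length, n_chars); lower score = better match."""
    n, m = len(query), len(candidate)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(query[i - 1] - candidate[j - 1])   # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m] / (n + m)   # length-normalized alignment cost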
|
|
|
J. Chazalon and 9 others. 2017. SmartDoc 2017 Video Capture: Mobile Document Acquisition in Video Mode. 1st International Workshop on Open Services and Tools for Document Analysis.
Abstract: As mobile document acquisition using smartphones is becoming more and more common, along with the continuous improvement of mobile devices (both in terms of computing power and image quality), one may wonder to what extent mobile phones can replace desktop scanners. Modern applications can cope with perspective distortion and normalize the contrast of a document page captured with a smartphone, and in some cases, such as bottle labels or posters, smartphones even have the advantage of allowing the acquisition of non-flat or large documents. However, several cases remain hard to handle, such as reflective documents (identity cards, badges, glossy magazine covers, etc.) or large documents for which some regions require a great amount of detail. This paper introduces the SmartDoc 2017 benchmark (named "SmartDoc Video Capture"), which aims at assessing whether capturing documents using the video mode of a smartphone could solve those issues. The task under evaluation is both a stitching and a reconstruction problem, as the user can move the device over different parts of the document to capture details or try to erase highlights. The material released consists of a dataset, an evaluation method and the associated tool, a sample method, and the tools required to extend the dataset. All the components are released publicly under very permissive licenses, and we took particular care to maximize their ease of understanding, usage and improvement.
|
|
|
Lluis Gomez, Andres Mafla, Marçal Rusiñol and Dimosthenis Karatzas. 2018. Single Shot Scene Text Retrieval. 15th European Conference on Computer Vision (ECCV), LNCS, 728–744.
Abstract: Textual information found in scene images provides high-level semantic information about the image and its context, and it can be leveraged for better scene understanding. In this paper we address the problem of scene text retrieval: given a text query, the system must return all images containing the queried text. The novelty of the proposed model consists in the usage of a single shot CNN architecture that predicts at the same time bounding boxes and a compact text representation of the words in them. In this way, the text-based image retrieval task can be cast as a simple nearest neighbor search of the query text representation over the outputs of the CNN over the entire image database. Our experiments demonstrate that the proposed architecture outperforms the previous state-of-the-art while offering a significant increase in processing speed.
Keywords: Image retrieval; Scene text; Word spotting; Convolutional Neural Networks; Region Proposal Networks; PHOC
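An illustrative sketch of the retrieval step only, not the paper's exact configuration: a simplified PHOC embedding of the query string (reduced alphabet and pyramid levels) is matched by cosine similarity against the descriptors the network is assumed to have predicted for every candidate box in the image database.

# Sketch only: simplified PHOC query embedding and nearest-neighbour retrieval.
import numpy as np

ALPHABET = 'abcdefghijklmnopqrstuvwxyz0123456789'

def phoc(word, levels=(1, 2, 3)):
    word = word.lower()
    vec = []
    for L in levels:
        for region in range(L):
            lo, hi = region / L, (region + 1) / L
            hist = np.zeros(len(ALPHABET))
            for k, ch in enumerate(word):
                # character k occupies the interval [k/len, (k+1)/len) of the word
                c0, c1 = k / len(word), (k + 1) / len(word)
                if ch in ALPHABET and min(hi, c1) - max(lo, c0) > 0:
                    hist[ALPHABET.index(ch)] = 1.0
            vec.append(hist)
    return np.concatenate(vec)

def retrieve(query, predicted_descs, top_k=10):
    """predicted_descs: (n_boxes, phoc_dim) descriptors output by the detector."""
    q = phoc(query)
    sims = predicted_descs @ q / (np.linalg.norm(predicted_descs, axis=1)
                                  * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top_k]   # indices of the best-matching boxes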
|
|