Marçal Rusiñol, Dimosthenis Karatzas, & Josep Llados. (2014). Spotting Graphical Symbols in Camera-Acquired Documents in Real Time. In Bart Lamiroy, & Jean-Marc Ogier (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 8746, pp. 3–10). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we present a system for spotting graphical symbols in camera-acquired document images. The system is based on the extraction and matching of compact ORB local features computed over interest key-points. The FLANN indexing framework, based on approximate nearest-neighbor search, then efficiently matches local descriptors between the captured scene and the graphical models. Finally, the RANSAC algorithm is used to compute the homography between the spotted symbol and its appearance in the document image. The proposed approach is efficient and able to work in real time.
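As an annotation on the matching stage this abstract describes, the following is a minimal numpy sketch of descriptor matching with Lowe's ratio test. It is a brute-force stand-in for FLANN's approximate nearest-neighbour search over binary ORB descriptors, not the authors' implementation; function names are illustrative.

```python
import numpy as np

def hamming(a, b):
    # Hamming distance between two binary descriptors (uint8 byte
    # arrays), counting differing bits across all bytes.
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(query, train, ratio=0.8):
    """Match binary descriptors with Lowe's ratio test.

    Brute-force stand-in for FLANN's approximate nearest-neighbour
    search: for each query descriptor the two closest train descriptors
    are found, and the match is kept only when the best distance is
    clearly smaller than the second best. Returns a list of
    (query_index, train_index) pairs.
    """
    matches = []
    for qi, q in enumerate(query):
        dists = [hamming(q, t) for t in train]
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((qi, int(best)))
    return matches
```

In the paper's pipeline, the surviving matches would then feed RANSAC to estimate the symbol-to-document homography.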
|
Jose Antonio Rodriguez, & Florent Perronnin. (2008). Local Gradient Histogram Features for Word Spotting in Unconstrained Handwritten Documents. In W. Liu, J. Lladós, & J.-M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 188–198). LNCS. Springer Berlin Heidelberg.
|
Marçal Rusiñol, V. Poulain d'Andecy, Dimosthenis Karatzas, & Josep Llados. (2014). Classification of Administrative Document Images by Logo Identification. In Bart Lamiroy, & Jean-Marc Ogier (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 8746, pp. 49–58). Springer Berlin Heidelberg.
Abstract: This paper focuses on the categorization of administrative document images (such as invoices) based on the recognition of the supplier’s graphical logo. Two different methods are proposed: the first uses a bag-of-visual-words model, whereas the second tries to locate logo images, described by the blurred shape model descriptor, within documents by a sliding-window technique. Preliminary results are reported on a dataset of real administrative documents.
Keywords: Administrative Document Classification; Logo Recognition; Logo Spotting
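As an annotation on the first method in this abstract, here is a minimal sketch of bag-of-visual-words encoding. It is an illustration of the general technique, not the paper's implementation; the codebook would normally be learned offline with k-means over training descriptors.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words encoding (minimal sketch).

    Each local descriptor (row of `descriptors`) is assigned to its
    nearest codebook centroid (visual word), and the image is
    represented by the L1-normalized histogram of word counts.
    """
    # Squared Euclidean distance from every descriptor to every centroid.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

Document images would then be classified by comparing such histograms, e.g. with a nearest-neighbour or SVM classifier.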
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2010). Touching Text Character Localization in Graphical Documents using SIFT. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 199–211). LNCS. Springer Berlin Heidelberg.
Abstract: Interpretation of graphical document images is a challenging task, as it requires proper understanding of the text/graphics symbols present in such documents. Difficulties arise in graphical document recognition when text and symbols overlap or touch. Intersections of text and symbols with graphical lines and curves occur frequently in graphical documents, and hence separation of such symbols is very difficult.
Several pattern recognition and classification techniques exist to recognize isolated text/symbols, but touching/overlapping text and symbol recognition has not yet been dealt with successfully. An interesting technique, the Scale Invariant Feature Transform (SIFT), originally devised for object recognition, can handle such overlapping problems. Even though SIFT features have emerged as very powerful object descriptors, their use in the context of graphical documents has not been investigated much. In this paper we present the adaptation of the SIFT approach to text character localization (spotting) in graphical documents. We evaluate the applicability of this technique to such documents and discuss the scope for improvement by combining some state-of-the-art approaches.
Keywords: Support Vector Machine; Text Component; Graphical Line; Document Image; Scale Invariant Feature Transform
|
Marçal Rusiñol, & Josep Llados. (2017). Flowchart Recognition in Patent Information Retrieval. In M. Lupu, K. Mayer, N. Kando, & A.J. Trippe (Eds.), Current Challenges in Patent Information Retrieval (Vol. 37, pp. 351–368). Springer Berlin Heidelberg.
|
Nataliya Shapovalova, Carles Fernandez, Xavier Roca, & Jordi Gonzalez. (2011). Semantics of Human Behavior in Image Sequences. In Albert Ali Salah, & Theo Gevers (Eds.), Computer Analysis of Human Behavior (pp. 151–182). Springer London.
Abstract: Human behavior is contextualized, and understanding the scene of an action is crucial for assigning proper semantics to behavior. In this chapter we present a novel approach for scene understanding. The emphasis of this work is on the particular case of Human Event Understanding. We introduce a new taxonomy to organize the different semantic levels of the proposed Human Event Understanding framework. Such a framework particularly contributes to the scene understanding domain by (i) extracting behavioral patterns from the integrative analysis of spatial, temporal, and contextual evidence and (ii) integrating bottom-up and top-down approaches in Human Event Understanding. We explore how information about interactions between humans and their environment influences the performance of activity recognition, and how this can be extrapolated to the temporal domain in order to extract higher inferences from human events observed in sequences of images.
|
Angel Sappa, David Geronimo, Fadi Dornaika, Mohammad Rouhani, & Antonio Lopez. (2012). Moving object detection from mobile platforms using stereo data registration. In Marek R. Ogiela, & Lakhmi C. Jain (Eds.), Computational Intelligence paradigms in advanced pattern classification (Vol. 386, pp. 25–37). Springer Berlin Heidelberg.
Abstract: This chapter describes a robust approach for detecting moving objects from on-board stereo vision systems. It relies on a feature point quaternion-based registration, which avoids common problems that appear when computationally expensive iterative algorithms are used in dynamic environments. The proposed approach consists of three main stages. Initially, feature points are extracted and tracked through consecutive 2D frames. Then, a RANSAC-based approach is used for registering two point sets with known correspondences in 3D space. The computed 3D rigid displacement is used to map two consecutive 3D point clouds into the same coordinate system by means of the quaternion method. Finally, moving objects correspond to those areas with large 3D registration errors. Experimental results show the viability of the proposed approach for detecting moving objects such as vehicles or pedestrians in different urban scenarios.
Keywords: pedestrian detection
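As an annotation on the registration stage this abstract describes, here is a closed-form quaternion registration sketch (Horn's method) for two corresponded 3D point sets. It illustrates the general quaternion technique, not the authors' exact pipeline; inside a RANSAC loop it would be run on minimal samples and the consensus measured by residual error.

```python
import numpy as np

def quaternion_registration(P, Q):
    """Closed-form rigid registration via the quaternion method.

    Given two N x 3 point sets with known correspondences, returns the
    rotation matrix R and translation t minimising the least-squares
    error of mapping P onto Q.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    X, Y = P - cP, Q - cQ
    S = X.T @ Y  # 3x3 cross-covariance of the centered point sets
    # Symmetric 4x4 matrix whose largest eigenvector is the optimal
    # unit quaternion (w, x, y, z).
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, V = np.linalg.eigh(N)
    qw, qx, qy, qz = V[:, np.argmax(w)]
    R = np.array([
        [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
        [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
        [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
    ])
    t = cQ - R @ cP
    return R, t
```

Being closed-form, this avoids the iterative optimization the abstract flags as problematic in dynamic environments.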
|
Quan-sen Sun, Pheng-ann Heng, Zhong Jin, & De-shen Xia. (2005). Face recognition based on generalized canonical correlation analysis. In Advances in Intelligent Computing (Vol. 3645, pp. 958–967). LNCS. Springer Berlin Heidelberg.
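As an annotation on this entry, here is a sketch of classical two-view canonical correlation analysis in numpy; the cited paper's generalized variant extends this idea, so this is background illustration only. Correlations are read off the singular values of the whitened cross-covariance.

```python
import numpy as np

def inv_sqrt(M):
    # Inverse matrix square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def cca(X, Y, reg=1e-8):
    """Two-view canonical correlation analysis (minimal sketch).

    Centers both views, whitens each with its inverse-square-root
    covariance, and returns the canonical correlations (singular values
    of the whitened cross-covariance) in descending order. A small
    ridge `reg` keeps the covariances invertible.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    corr = np.linalg.svd(K, compute_uv=False)
    return np.clip(corr, 0.0, 1.0)
```

For feature fusion, the corresponding projection directions (left/right singular vectors mapped back through the whitening transforms) would be used to combine the two feature sets.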
|
Quan-sen Sun, Zhong Jin, Pheng-ann Heng, & De-shen Xia. (2005). A novel feature fusion method based on partial least squares regression. In Pattern Recognition and Data Mining (Vol. 3686, pp. 268–277). LNCS. Springer Berlin Heidelberg.
|
Hans Stadthagen-Gonzalez, Luis Lopez, M. Carmen Parafita, & C. Alejandro Parraga. (2018). Using two-alternative forced choice tasks and Thurstone's law of comparative judgments for code-switching research. In Linguistic Approaches to Bilingualism (pp. 67–97).
Abstract: This article argues that 2-alternative forced choice tasks and Thurstone’s law of comparative judgments (Thurstone, 1927) are well suited to investigate code-switching competence by means of acceptability judgments. We compare this method with commonly used Likert scale judgments and find that the 2-alternative forced choice task provides granular details that remain invisible in a Likert scale experiment. In order to compare and contrast both methods, we examined the syntactic phenomenon usually referred to as the Adjacency Condition (AC) (apud Stowell, 1981), which imposes a condition of adjacency between verb and object. Our interest in the AC comes from the fact that it is a subtle feature of English grammar which is absent in Spanish, and this provides an excellent springboard to create minimal code-switched pairs that allow us to formulate a clear research question that can be tested using both methods.
Keywords: two-alternative forced choice and Thurstone's law; acceptability judgment; code-switching
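As an annotation on the scaling method this entry relies on, here is a minimal Case V implementation of Thurstone's law of comparative judgments using only the standard library and numpy. It illustrates the standard technique, not the authors' analysis code.

```python
import numpy as np
from statistics import NormalDist

def thurstone_case_v(wins, eps=0.01):
    """Thurstone Case V scale values from a 2AFC win-count matrix.

    wins[i, j] = number of trials in which item i was preferred over
    item j. Preference proportions are converted to z-scores with the
    inverse normal CDF, and each item's scale value is its mean z-score
    against all items. Proportions are clipped to [eps, 1 - eps] to
    guard against infinite z-scores; the lowest item is anchored at 0.
    """
    wins = np.asarray(wins, dtype=float)
    n = wins + wins.T  # trials per pair
    P = np.divide(wins, n, out=np.full_like(wins, 0.5), where=n > 0)
    np.fill_diagonal(P, 0.5)  # an item is never compared with itself
    P = np.clip(P, eps, 1.0 - eps)
    Z = np.vectorize(NormalDist().inv_cdf)(P)
    scale = Z.mean(axis=1)
    return scale - scale.min()
```

The resulting interval-scale values are what make the 2AFC method more granular than raw Likert means, as the abstract argues.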
|
Patricia Suarez, Angel Sappa, & Boris X. Vintimilla. (2021). Deep learning-based vegetation index estimation. In A. Solanki, A. Nayyar, & M. Naved (Eds.), Generative Adversarial Networks for Image-to-Image Translation (pp. 205–234). Elsevier.
|
Salvatore Tabbone, & Oriol Ramos Terrades. (2014). An Overview of Symbol Recognition. In D. Doermann, & K. Tombre (Eds.), Handbook of Document Image Processing and Recognition (Vol. D, pp. 523–551). Springer London.
Abstract: According to the Cambridge Dictionaries Online, a symbol is a sign, shape, or object that is used to represent something else. Symbol recognition is a subfield of general pattern recognition problems that focuses on identifying, detecting, and recognizing symbols in technical drawings, maps, or miscellaneous documents such as logos and musical scores. This chapter aims at providing the reader an overview of the different existing ways of describing and recognizing symbols and how the field has evolved to attain a certain degree of maturity.
Keywords: Pattern recognition; Shape descriptors; Structural descriptors; Symbol recognition; Symbol spotting
|
Estefania Talavera, Alexandre Cola, Nicolai Petkov, & Petia Radeva. (2019). Towards Egocentric Person Re-identification and Social Pattern Analysis. In Frontiers in Artificial Intelligence and Applications (Vol. 310, pp. 203–211).
CoRR abs/1905.04073
Abstract: Wearable cameras capture a first-person view of the daily activities of the camera wearer, offering a visual diary of the user's behaviour. Detecting the appearance of the people the camera wearer interacts with is of high interest for the analysis of social interactions. Generally speaking, social events, lifestyle, and health are highly correlated, but there is a lack of tools to monitor and analyse them. We consider that egocentric vision provides a tool to obtain information about and understand users' social interactions. We propose a model that enables us to evaluate and visualize social traits obtained by analysing the appearance of social interactions within egocentric photostreams. Given sets of egocentric images, we detect the appearance of faces within the days of the camera wearer and rely on clustering algorithms to group their feature descriptors in order to re-identify persons. The recurrence of detected faces within photostreams allows us to shape an idea of the user's social pattern of behaviour. We validated our model over several weeks of data recorded by different camera wearers. Our findings indicate that social profiles are potentially useful for social behaviour interpretation.
|
Michael Teutsch, Angel Sappa, & Riad I. Hammoud. (2022). Image and Video Enhancement. In Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision (pp. 9–21). SLCV. Springer.
Abstract: Image and video enhancement aims at improving signal quality relative to imaging artifacts such as noise and blur, or atmospheric perturbations such as turbulence and haze. It is usually performed to assist humans in analyzing image and video content, or simply to present visually appealing images and videos to humans. However, image and video enhancement can also be used as a preprocessing technique to ease the task, and thus improve the performance, of subsequent automatic image content analysis algorithms: preceding dehazing can improve object detection as shown by [23], or explicit turbulence modeling can improve moving object detection as discussed by [24]. It remains an open question, however, whether image and video enhancement should be performed explicitly as a preprocessing step or implicitly, for example by feeding affected images directly to a neural network for image content analysis such as object detection [25]. Especially for real-time video processing at low latency, it can be better to handle image perturbations implicitly in order to minimize the processing time of an algorithm. This can be achieved by making algorithms for image content analysis robust, or even invariant, to perturbations such as noise or blur. Additionally, mistakes of an individual preprocessing module can obviously affect the quality of the entire processing pipeline.
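As an annotation on this entry, here is a minimal classical enhancement example, unsharp masking with a separable Gaussian blur, in pure numpy. It is a generic sharpening sketch chosen for illustration; the chapter covers a much broader range of enhancement methods.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    # 1D Gaussian kernel, normalized to sum to 1.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def unsharp_mask(img, sigma=1.0, amount=1.0):
    """Classic unsharp masking on a float image in [0, 1].

    Blurs the image with a separable Gaussian (edges handled by
    reflection padding), then adds back the high-frequency residual
    scaled by `amount`. Output is clipped to [0, 1].
    """
    radius = max(1, int(3 * sigma))
    k = gaussian_kernel(sigma, radius)
    pad = np.pad(img, radius, mode="reflect")
    # Separable convolution: filter rows, then columns.
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, blurred)
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```

Sharpening a step edge produces the characteristic overshoot/undershoot that increases local contrast, which is exactly the kind of explicit preprocessing whose value versus implicit robustness the abstract questions.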
|
Michael Teutsch, Angel Sappa, & Riad I. Hammoud. (2022). Cross-Spectral Image Processing. In Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision (pp. 23–34). SLCV. Springer.
Abstract: Although this book is on IR computer vision, and its main focus lies on IR image and video processing and analysis, special attention is dedicated to cross-spectral image processing due to the increasing number of publications and applications in this domain. In these cross-spectral frameworks, IR information is used together with information from other spectral bands to tackle specific problems by developing more robust solutions. Tasks considered for cross-spectral processing include, for instance, dehazing, segmentation, vegetation index estimation, and face recognition. This increasing number of applications is motivated by the cross- and multi-spectral camera setups already available on the market, such as smartphones, remote sensing multispectral cameras, and multi-spectral cameras for automotive systems or drones. In this chapter, different cross-spectral image processing techniques are reviewed together with possible applications. Initially, image registration approaches for the cross-spectral case are reviewed: the registration stage is the first image processing task, needed to align images acquired by different sensors within the same reference coordinate system. Then, recent cross-spectral image colorization approaches, which are intended to colorize infrared images for different applications, are presented. Finally, the cross-spectral image enhancement problem is tackled, including guided super-resolution techniques, image dehazing approaches, cross-spectral filtering, and edge detection. Figure 3.1 illustrates the cross-spectral image processing stages as well as their possible connections. Table 3.1 presents some of the available public cross-spectral datasets generally used as reference data to evaluate cross-spectral image registration, colorization, enhancement, or exploitation results.
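As an annotation on the registration stage this abstract highlights, here is a toy sketch of cross-spectral translational alignment. Because pixel intensities are not comparable across spectral bands, it matches gradient-magnitude maps, a crude stand-in for the multimodal similarity measures the chapter reviews, with normalized cross-correlation over integer shifts; it is illustrative only.

```python
import numpy as np

def grad_mag(img):
    # Gradient magnitude: intensity-polarity-invariant structure map.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def register_translation(ref, mov, max_shift=5):
    """Brute-force translational alignment across spectral bands.

    Searches integer shifts up to `max_shift` pixels and returns the
    (dy, dx) that maximizes the normalized cross-correlation between
    the gradient-magnitude maps of `ref` and the shifted `mov`.
    """
    a = grad_mag(ref)
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            b = grad_mag(np.roll(np.roll(mov, dy, axis=0), dx, axis=1))
            a0, b0 = a - a.mean(), b - b.mean()
            ncc = (a0 * b0).sum() / (np.linalg.norm(a0) * np.linalg.norm(b0) + 1e-12)
            if ncc > best:
                best, best_shift = ncc, (dy, dx)
    return best_shift
```

Real cross-spectral registration handles full homographies or non-rigid deformations, but the key idea, aligning band-invariant structure rather than raw intensity, is the same.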
|