Ana Maria Ares, Jorge Bernal, Maria Jesus Nozal, F. Javier Sanchez, & Jose Bernal. (2018). Results of the use of Kahoot! gamification tool in a course of Chemistry. In 4th International Conference on Higher Education Advances (pp. 1215–1222).
Abstract: The present study examines the use of Kahoot! as a gamification tool to explore mixed learning strategies. We analyze its use in two different groups of a theoretical subject in the third year of the Degree in Chemistry. An empirical-analytical methodology was followed, using Kahoot! in the two groups of students with different frequencies. The academic results of these two groups of students were compared with each other and with those obtained in the previous course, in which Kahoot! was not employed, with the aim of measuring the evolution of the students' knowledge. The results showed, in all cases, that the use of Kahoot! led to a significant increase in the overall marks and in the number of students who passed the subject. Moreover, some differences were also observed in students' academic performance according to the group. Finally, it can be concluded that the use of a gamification tool (Kahoot!) in a university classroom generally improved students' learning and marks, and that this improvement is more prevalent in those students who achieved a better Kahoot! performance.
|
Anders Hast, & Alicia Fornes. (2016). A Segmentation-free Handwritten Word Spotting Approach by Relaxed Feature Matching. In 12th IAPR Workshop on Document Analysis Systems (pp. 150–155).
Abstract: The automatic recognition of historical handwritten documents is still considered a challenging task. For this reason, word spotting emerges as a good alternative for making the information contained in these documents available to the user. Word spotting is defined as the task of retrieving all instances of the query word in a document collection, making it a useful tool for information retrieval. In this paper we propose a segmentation-free word spotting approach able to deal with large document collections. Our method is inspired by feature matching algorithms that have been applied to image matching and retrieval. Since handwritten words vary in shape, there is no exact transformation to be obtained. However, a sufficient degree of relaxation is achieved by using a Fourier-based descriptor and an alternative approach to RANSAC called PUMA. The proposed approach is evaluated on historical marriage records, achieving promising results.
|
Andrea Gemelli, Sanket Biswas, Enrico Civitelli, Josep Llados, & Simone Marinai. (2022). Doc2Graph: A Task Agnostic Document Understanding Framework Based on Graph Neural Networks. In 17th European Conference on Computer Vision Workshops (Vol. 13804, pp. 329–344). LNCS.
Abstract: Geometric Deep Learning has recently attracted significant interest in a wide range of machine learning fields, including document analysis. The application of Graph Neural Networks (GNNs) has become crucial in various document-related tasks since they can unravel important structural patterns, fundamental in key information extraction processes. Previous works in the literature propose task-driven models and do not take into account the full power of graphs. We propose Doc2Graph, a task-agnostic document understanding framework based on a GNN model, to solve different tasks given different types of documents. We evaluated our approach on two challenging datasets for key information extraction in form understanding, invoice layout analysis, and table detection.
|
Andreas Fischer, Ching Y. Suen, Volkmar Frinken, Kaspar Riesen, & Horst Bunke. (2013). A Fast Matching Algorithm for Graph-Based Handwriting Recognition. In 9th IAPR – TC15 Workshop on Graph-based Representation in Pattern Recognition (Vol. 7877, pp. 194–203). LNCS. Springer Berlin Heidelberg.
Abstract: The recognition of unconstrained handwriting images is usually based on vectorial representation and statistical classification. Despite their high representational power, graphs are rarely used in this field due to a lack of efficient graph-based recognition methods. Recently, graph similarity features have been proposed to bridge the gap between structural representation and statistical classification by means of vector space embedding. This approach has shown a high performance in terms of accuracy but had shortcomings in terms of computational speed. The time complexity of the Hungarian algorithm that is used to approximate the edit distance between two handwriting graphs is demanding for a real-world scenario. In this paper, we propose a faster graph matching algorithm which is derived from the Hausdorff distance. On the historical Parzival database it is demonstrated that the proposed method achieves a speedup factor of 12.9 without significant loss in recognition accuracy.
|
Andreas Fischer, Volkmar Frinken, Alicia Fornes, & Horst Bunke. (2011). Transcription Alignment of Latin Manuscripts Using Hidden Markov Models. In Proceedings of the 2011 Workshop on Historical Document Imaging and Processing (pp. 29–36). ACM.
Abstract: Transcriptions of historical documents are a valuable source for extracting labeled handwriting images that can be used for training recognition systems. In this paper, we introduce the Saint Gall database that includes images as well as the transcription of a Latin manuscript from the 9th century written in Carolingian script. Although the available transcription is of high quality for a human reader, the spelling of the words is not accurate when compared with the handwriting image. Hence, the transcription poses several challenges for alignment regarding, e.g., line breaks, abbreviations, and capitalization. We propose an alignment system based on character Hidden Markov Models that can cope with these challenges and efficiently aligns complete document pages. On the Saint Gall database, we demonstrate that a considerable alignment accuracy can be achieved, even with weakly trained character models.
|
Andreas Fischer, Volkmar Frinken, Horst Bunke, & Ching Y. Suen. (2013). Improving HMM-Based Keyword Spotting with Character Language Models. In 12th International Conference on Document Analysis and Recognition (pp. 506–510).
Abstract: Given the high error rates and slow recognition speed of full-text transcription of unconstrained handwriting images, keyword spotting is a promising alternative for locating specific search terms within scanned document images. We have previously proposed a learning-based method for keyword spotting using character hidden Markov models that showed a high performance when compared with traditional template image matching. In the lexicon-free approach pursued, only the text appearance was taken into account for recognition. In this paper, we integrate character n-gram language models into the spotting system in order to provide an additional language context. On the modern IAM database as well as the historical George Washington database, we demonstrate that character language models significantly improve the spotting performance.
|
Andreas Møgelmose, Chris Bahnsen, Thomas B. Moeslund, Albert Clapes, & Sergio Escalera. (2013). Tri-modal Person Re-identification with RGB, Depth and Thermal Features. In 9th IEEE Workshop on Perception Beyond the Visible Spectrum, Computer Vision and Pattern Recognition (pp. 301–307).
Abstract: Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data, but in this work we present, for the first time, a system that combines RGB, depth, and thermal data for re-identification purposes. First, from each of the three modalities we obtain particular features: from RGB data, we model color information from different regions of the body; from depth data, we compute different soft body biometrics; and from thermal data, we extract local structural information. Then, the three information types are combined in a joint classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios.
|
Andrei Polzounov, Artsiom Ablavatski, Sergio Escalera, Shijian Lu, & Jianfei Cai. (2017). WordFences: Text Localization and Recognition. In 24th International Conference on Image Processing.
|
Andres Mafla, Rafael S. Rezende, Lluis Gomez, Diana Larlus, & Dimosthenis Karatzas. (2021). StacMR: Scene-Text Aware Cross-Modal Retrieval. In IEEE Winter Conference on Applications of Computer Vision (pp. 2219–2229).
|
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez, & Dimosthenis Karatzas. (2021). Multi-modal reasoning graph for scene-text based fine-grained image classification and retrieval. In IEEE Winter Conference on Applications of Computer Vision (pp. 4022–4032).
|
Andres Mafla, Sounak Dey, Ali Furkan Biten, Lluis Gomez, & Dimosthenis Karatzas. (2020). Fine-grained Image Classification and Retrieval by Combining Visual and Locally Pooled Textual Features. In IEEE Winter Conference on Applications of Computer Vision.
Abstract: Text contained in an image carries high-level semantics that can be exploited to achieve richer image understanding. In particular, the mere presence of text provides strong guiding content that should be employed to tackle a diversity of computer vision tasks such as image retrieval, fine-grained classification, and visual question answering. In this paper, we address the problem of fine-grained classification and image retrieval by leveraging textual information along with visual cues to comprehend the existing intrinsic relation between the two modalities. The novelty of the proposed model lies in the use of a PHOC descriptor to construct a bag of textual words, along with a Fisher Vector encoding that captures the morphology of text. This approach provides a stronger multimodal representation for this task and, as our experiments demonstrate, achieves state-of-the-art results on two different tasks, fine-grained classification and image retrieval.
|
Andres Traumann, Sergio Escalera, & Gholamreza Anbarjafari. (2015). A New Retexturing Method for Virtual Fitting Room Using Kinect 2 Camera. In 2015 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 75–79).
|
Andrew Nolan, Daniel Serrano, Aura Hernandez-Sabate, Daniel Ponsa, & Antonio Lopez. (2013). Obstacle mapping module for quadrotors on outdoor Search and Rescue operations. In International Micro Air Vehicle Conference and Flight Competition.
Abstract: Obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAV), due to their limited payload capacity to carry advanced sensors. Unlike larger vehicles, MAV can only carry lightweight sensors, for instance a camera, which is our main assumption in this work. We explore passive monocular depth estimation and propose a novel method, Position Aided Depth Estimation (PADE). We analyse PADE performance and compare it against the extensively used Time To Collision (TTC). We evaluate the accuracy, robustness to noise, and speed of three Optical Flow (OF) techniques, combined with both depth estimation methods. Our results show PADE is more accurate than TTC at depths between 0 and 12 meters and is less sensitive to noise. Our findings highlight the potential application of PADE for MAV to perform safe autonomous navigation in unknown and unstructured environments.
Keywords: UAV
|
Aneesh Rangnekar, Zachary Mulhollan, Anthony Vodacek, Matthew Hoffman, Angel Sappa, Erik Blasch, et al. (2022). Semi-Supervised Hyperspectral Object Detection Challenge Results – PBVS 2022. In IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) (pp. 390–398).
Abstract: This paper summarizes the top contributions to the first semi-supervised hyperspectral object detection (SSHOD) challenge, which was organized as part of the Perception Beyond the Visible Spectrum (PBVS) 2022 workshop at the Computer Vision and Pattern Recognition (CVPR) conference. The SSHOD challenge is built on a first-of-its-kind hyperspectral dataset with temporally contiguous frames collected from a university rooftop observing a 4-way vehicle intersection over a period of three days. The dataset contains a total of 2890 frames, captured at an average resolution of 1600 × 192 pixels, with 51 hyperspectral bands from 400nm to 900nm. The SSHOD challenge uses 989 images as the training set, 605 images as the validation set, and 1296 images as the evaluation (test) set. Each set was acquired on a different day to maximize the variance in weather conditions. Labels are provided for only 10% of the data, hence formulating a semi-supervised learning task for the participants, which is evaluated in terms of average precision over the entire set of classes as well as over individual moving object classes, namely vehicle, bus, and bike. The challenge received participation registration from 38 individuals, with 8 participating in the validation phase and 3 participating in the test phase. This paper describes the dataset acquisition, the challenge formulation, the proposed methods, and qualitative and quantitative results.
Keywords: Training; Computer vision; Conferences; Training data; Object detection; Semi-supervised learning; Transformers
|
Angel Morera, Angel Sanchez, Angel Sappa, & Jose F. Velez. (2019). Robust Detection of Outdoor Urban Advertising Panels in Static Images. In 18th International Conference on Practical Applications of Agents and Multi-Agent Systems (pp. 246–256).
Abstract: One interesting publicity application for Smart City environments is recognizing the brand information contained in urban advertising panels. For such a purpose, a prior stage is to accurately detect and locate the position of these panels in images. This work presents an effective solution to this problem using a Single Shot Detector (SSD) based on a deep neural network architecture that minimizes the number of false detections under multiple variable conditions regarding the panels and the scene. The experimental results achieved, measured with the Intersection over Union (IoU) accuracy metric, make this proposal applicable to real complex urban images.
Keywords: Object detection; Urban ads panels; Deep learning; Single Shot Detector (SSD) architecture; Intersection over Union (IoU) metric; Augmented Reality
|