Sounak Dey, Anjan Dutta, Josep Llados, Alicia Fornes, & Umapada Pal. (2017). Shallow Neural Network Model for Hand-drawn Symbol Recognition in Multi-Writer Scenario. In 14th International Conference on Document Analysis and Recognition (pp. 31–32).
Abstract: One of the main challenges in hand-drawn symbol recognition is the variability among symbols due to different writer styles. In this paper, we present and discuss some results on recognizing hand-drawn symbols with a shallow neural network. A neural network model inspired by the LeNet architecture has been used to achieve state-of-the-art results with very little training data, in contrast to data-hungry deep neural networks. From the results, it has become evident that neural network architectures can efficiently describe and recognize hand-drawn symbols from different writers and can model the inter-author variability.
|
Pau Riba, Anjan Dutta, Josep Llados, & Alicia Fornes. (2017). Graph-based deep learning for graphics classification. In 14th International Conference on Document Analysis and Recognition (pp. 29–30).
Abstract: Graph-based representations are a common way to deal with graphics recognition problems. However, previous works mainly focused on developing learning-free techniques. The success of deep learning frameworks has proved that learning is a powerful tool for solving many problems; however, it is not straightforward to extend these methodologies to non-Euclidean data such as graphs. On the other hand, graphs are a good representational structure for graphical entities. In this work, we present some deep learning techniques that have been proposed in the literature for graph-based representations, and we show how they can be used in graphics recognition problems.
|
Adria Rico, & Alicia Fornes. (2017). Camera-based Optical Music Recognition using a Convolutional Neural Network. In 12th IAPR International Workshop on Graphics Recognition (pp. 27–28).
Abstract: Optical Music Recognition (OMR) consists of recognizing images of music scores. Contrary to expectation, current OMR systems usually fail when recognizing images of scores captured by digital cameras and smartphones. In this work, we propose a camera-based OMR system based on Convolutional Neural Networks, showing promising preliminary results.
Keywords: optical music recognition; document analysis; convolutional neural network; deep learning
|
Oriol Vicente, Alicia Fornes, & Ramon Valdes. (2017). La Xarxa d'Humanitats Digitals de la UABCie: una estructura inteligente para la investigación y la transferencia en Humanidades. In 3rd Congreso Internacional de Humanidades Digitales Hispánicas. Sociedad Internacional (pp. 281–383).
|
Alicia Fornes, Beata Megyesi, & Joan Mas. (2017). Transcription of Encoded Manuscripts with Image Processing Techniques. In Digital Humanities Conference (pp. 441–443).
|
Dimosthenis Karatzas, Lluis Gomez, & Marçal Rusiñol. (2017). The Robust Reading Competition Annotation and Evaluation Platform. In 1st International Workshop on Open Services and Tools for Document Analysis.
Abstract: The ICDAR Robust Reading Competition (RRC), initiated in 2003 and re-established in 2011, has become the de facto evaluation standard for the international community. Concurrent with its second incarnation in 2011, a continuous effort started to develop an online framework to facilitate the hosting and management of competitions. This short paper briefly outlines the Robust Reading Competition Annotation and Evaluation Platform, the backbone of the Robust Reading Competition, comprising a collection of tools and processes that aim to simplify the management and annotation of data, and to provide online and offline performance evaluation and analysis services.
|
Sergio Alloza, Flavio Escribano, Sergi Delgado, Ciprian Corneanu, & Sergio Escalera. (2017). XBadges: Identifying and training soft skills with commercial video games. Improving persistence, risk taking & spatial reasoning with commercial video games and facial and emotional recognition system. In 4th Congreso de la Sociedad Española para las Ciencias del Videojuego (Vol. 1957, pp. 13–28).
Abstract: XBadges is a research project based on the hypothesis that commercial video games (non-serious games) can train soft skills. We measure persistence, spatial reasoning and risk taking before and after subjects participate in controlled game-playing sessions. In addition, we have developed an automatic facial expression recognition system capable of inferring their emotions while playing, allowing us to study the role of emotions in soft skills acquisition. We have used Flappy Bird, Pacman and Tetris to assess changes in persistence, risk taking and spatial reasoning, respectively.
Results show that playing Tetris significantly improves spatial reasoning and that playing Pacman significantly improves prudence in certain areas of behavior. As for emotions, the results reveal that concentration helps to improve performance and skills acquisition. Frustration also emerges as a key element. With the results obtained, we can glimpse multiple applications in areas that need soft skills development.
Keywords: Video Games; Soft Skills; Training; Skilling Development; Emotions; Cognitive Abilities; Flappy Bird; Pacman; Tetris
|
Jun Wan, Sergio Escalera, Gholamreza Anbarjafari, Hugo Jair Escalante, Xavier Baro, Isabelle Guyon, et al. (2017). Results and Analysis of ChaLearn LAP Multi-modal Isolated and Continuous Gesture Recognition, and Real versus Fake Expressed Emotions Challenges. In Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV.
Abstract: We analyze the results of the 2017 ChaLearn Looking at People Challenge at ICCV. The challenge comprised three tracks: (1) large-scale isolated gesture recognition, (2) continuous gesture recognition, and (3) real versus fake expressed emotions. It is the second round for both gesture recognition challenges, which were first held in the context of the ICPR 2016 workshop on “multimedia challenges beyond visual analysis”. In this second round, more participants joined the competitions, and the performances considerably improved compared to the first round. In particular, the best recognition accuracy of isolated gesture recognition improved from 56.90% to 67.71% on the IsoGD test set, and the Mean Jaccard Index (MJI) of continuous gesture recognition improved from 0.2869 to 0.6103 on the ConGD test set. The third track is the first challenge on real versus fake expressed emotion classification, including six emotion categories, for which a novel database was introduced. The first place was shared between two teams who achieved a 67.70% averaged recognition rate on the test set. The data of the three tracks, the participants' code and method descriptions are publicly available to allow researchers to keep making progress in the field.
|
Yagmur Gucluturk, Umut Guclu, Marc Perez, Hugo Jair Escalante, Xavier Baro, Isabelle Guyon, et al. (2017). Visualizing Apparent Personality Analysis with Deep Residual Networks. In Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV (pp. 3101–3109).
Abstract: Automatic prediction of personality traits is a subjective task that has recently received much attention. Specifically, automatic apparent personality trait prediction from multimodal data has emerged as a hot topic within the field of computer vision and, more particularly, the so-called “looking at people” sub-field. Considering “apparent” personality traits, as opposed to real ones, considerably reduces the subjectivity of the task. Real-world applications are encountered in a wide range of domains, including entertainment, health, human-computer interaction, recruitment and security. Predictive models of personality traits are useful for individuals in many scenarios (e.g., preparing for job interviews, preparing for public speaking). However, these predictions in and of themselves might be deemed untrustworthy without human-understandable supportive evidence. Through a series of experiments on a recently released benchmark dataset for automatic apparent personality trait prediction, this paper characterizes the audio and visual information that is used by a state-of-the-art model while making its predictions, so as to provide such supportive evidence by explaining the predictions made. Additionally, the paper describes a new web application, which gives feedback on apparent personality traits of its users by combining model predictions with their explanations.
|
Maryam Asadi-Aghbolaghi, Hugo Bertiche, Vicent Roig, Shohreh Kasaei, & Sergio Escalera. (2017). Action Recognition from RGB-D Data: Comparison and Fusion of Spatio-temporal Handcrafted Features and Deep Strategies. In Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV.
|
Albert Clapes, Tinne Tuytelaars, & Sergio Escalera. (2017). Darwintrees for action recognition. In Chalearn Workshop on Action, Gesture, and Emotion Recognition: Large Scale Multimodal Gesture Recognition and Real versus Fake expressed emotions at ICCV.
|
Raul Gomez, Baoguang Shi, Lluis Gomez, Lukas Neumann, Andreas Veit, Jiri Matas, et al. (2017). ICDAR2017 Robust Reading Challenge on COCO-Text. In 14th International Conference on Document Analysis and Recognition.
|
Masakazu Iwamura, Naoyuki Morimoto, Keishi Tainaka, Dena Bazazian, Lluis Gomez, & Dimosthenis Karatzas. (2017). ICDAR2017 Robust Reading Challenge on Omnidirectional Video. In 14th International Conference on Document Analysis and Recognition.
Abstract: Results of the ICDAR 2017 Robust Reading Challenge on Omnidirectional Video are presented. This competition uses the Downtown Osaka Scene Text (DOST) Dataset, which was captured in Osaka, Japan with an omnidirectional camera. Hence, it consists of sequential images (videos) of different view angles. Regarding the sequential images as videos (video mode), two tasks of localisation and end-to-end recognition are prepared. Regarding them as a set of still images (still image mode), three tasks of localisation, cropped word recognition and end-to-end recognition are prepared. As the dataset was captured in Japan, it contains Japanese text but also includes text consisting of alphanumeric characters (Latin text). Hence, a submitted result for each task is evaluated in three ways: using Japanese-only ground truth (GT), using Latin-only GT, and using the combined GT of both. By the submission deadline, we had received two submissions in the text localisation task of the still image mode. We intend to continue the competition in the open mode. Expecting further submissions, in this report we provide baseline results for all the tasks in addition to the submissions from the community.
|
Laura Lopez-Fuentes, Claudio Rossi, & Harald Skinnemoen. (2017). River segmentation for flood monitoring. In Data Science for Emergency Management at Big Data 2017.
Abstract: Floods are major natural disasters which cause deaths and material damage every year. Monitoring these events is crucial in order to reduce both the number of affected people and the economic losses. In this work we train and test three different deep learning segmentation algorithms to estimate the water area from river images, and compare their performances. We discuss the implementation of a novel data chain aimed at monitoring river water levels by automatically processing data collected from surveillance cameras, and at giving alerts in case of sharp increases in the water level or flooding. We also create and openly publish the first image dataset for river water segmentation.
|
Suman Ghosh, & Ernest Valveny. (2017). R-PHOC: Segmentation-Free Word Spotting using CNN. In 14th International Conference on Document Analysis and Recognition.
Abstract: (arXiv:1707.01294) This paper proposes a region-based convolutional neural network for segmentation-free word spotting. Our network takes as input an image and a set of word candidate bounding boxes and embeds all bounding boxes into an embedding space, where word spotting can be cast as a simple nearest-neighbour search between the query representation and each of the candidate bounding boxes. We make use of the PHOC embedding, as it has previously achieved significant success in segmentation-based word spotting. Word candidates are generated using a simple procedure based on grouping connected components under some spatial constraints. Experiments show that R-PHOC, which operates on images directly, can improve the current state-of-the-art on the standard GW dataset and in some cases performs as well as PHOCNET, which was designed for segmentation-based word spotting.
Keywords: Convolutional neural network; Image segmentation; Artificial neural network; Nearest neighbor search
|