|
Hana Jarraya, Muhammad Muzzamil Luqman, & Jean-Yves Ramel. (2017). Improving Fuzzy Multilevel Graph Embedding Technique by Employing Topological Node Features: An Application to Graphics Recognition. In B. Lamiroy, & R. Dueire Lins (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 9657). LNCS. Springer.
|
|
|
H. Martin Kjer, Jens Fagertun, Sergio Vera, & Debora Gil. (2017). Medial structure generation for registration of anatomical structures. In Skeletonization, Theory, Methods and Applications (Vol. 11).
|
|
|
Pau Riba, Alicia Fornes, & Josep Llados. (2017). Towards the Alignment of Handwritten Music Scores. In Bart Lamiroy, & R. Dueire Lins (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 9657, pp. 103–116). LNCS. Springer.
Abstract: It is very common to find different versions of the same music work in archives of Opera Theaters. These differences correspond to modifications and annotations from the musicians. From the musicologist's point of view, these variations are very interesting and deserve study.
This paper explores the alignment of music scores as a tool for automatically detecting the passages that contain such differences. Given the difficulties in the recognition of handwritten music scores, our goal is to align the music scores and, at the same time, avoid the recognition of music elements as much as possible. After removing the staff lines, braces and ties, the bar lines are detected. Then, the bar units are described as a whole using the Blurred Shape Model. The bar units alignment is performed by using Dynamic Time Warping. The analysis of the alignment path is used to detect the variations in the music scores. The method has been evaluated on a subset of the CVC-MUSCIMA dataset, showing encouraging results.
Keywords: Optical Music Recognition; Handwritten Music Scores; Dynamic Time Warping alignment
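A minimal Python sketch of the alignment step described in the abstract above, under the assumption that the staff lines have already been removed and each bar unit has been summarized as a fixed-length Blurred Shape Model (BSM) descriptor; the descriptor extraction is not reproduced here, and score_a / score_b are hypothetical lists of per-bar feature vectors, not the authors' actual data or code:

import numpy as np

def dtw_align(score_a, score_b):
    """Align two sequences of bar-unit descriptors with Dynamic Time Warping."""
    n, m = len(score_a), len(score_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(score_a[i - 1] - score_b[j - 1])  # distance between bar descriptors
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack the optimal warping path; bar pairs with a large local cost along
    # this path are candidates for passages that differ between the two versions.
    path, i, j = [], n, m
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Hypothetical usage: two versions of a score, each as a list of 64-d BSM vectors.
score_a = [np.random.rand(64) for _ in range(30)]
score_b = [np.random.rand(64) for _ in range(32)]
total_cost, path = dtw_align(score_a, score_b)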
|
|
|
Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2016). Spotting Symbol over Graphical Documents Via Sparsity in Visual Vocabulary. In Recent Trends in Image Processing and Pattern Recognition (Vol. 709).
|
|
|
Maryam Asadi-Aghbolaghi, Albert Clapes, Marco Bellantonio, Hugo Jair Escalante, Victor Ponce, Xavier Baro, et al. (2017). Deep Learning for Action and Gesture Recognition in Image Sequences: A Survey. In Gesture Recognition (pp. 539–578).
Abstract: Interest in automatic action and gesture recognition has grown considerably in the last few years. This is due in part to the large number of application domains for this type of technology. As in many other computer vision areas, deep learning based methods have quickly become a reference methodology for obtaining state-of-the-art performance in both tasks. This chapter is a survey of current deep learning based methodologies for action and gesture recognition in sequences of images. The survey reviews both fundamental and cutting-edge methodologies reported in the last few years. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. Details of the proposed architectures, fusion strategies, main datasets, and competitions are reviewed. Also, we summarize and discuss the main works proposed so far, with particular interest in how they treat the temporal dimension of data, their highlighting features, and opportunities and challenges for future research. To the best of our knowledge, this is the first survey on the topic. We foresee this survey will become a reference in this ever dynamic field of research.
Keywords: Action recognition; Gesture recognition; Deep learning architectures; Fusion strategies
|
|
|
Hans Stadthagen-Gonzalez, Luis Lopez, M. Carmen Parafita, & C. Alejandro Parraga. (2018). Using two-alternative forced choice tasks and Thurstone's law of comparative judgments for code-switching research. In Linguistic Approaches to Bilingualism (pp. 67–97).
Abstract: This article argues that 2-alternative forced choice tasks and Thurstone’s law of comparative judgments (Thurstone, 1927) are well suited to investigate code-switching competence by means of acceptability judgments. We compare this method with commonly used Likert scale judgments and find that the 2-alternative forced choice task provides granular details that remain invisible in a Likert scale experiment. In order to compare and contrast both methods, we examined the syntactic phenomenon usually referred to as the Adjacency Condition (AC) (apud Stowell, 1981), which imposes a condition of adjacency between verb and object. Our interest in the AC comes from the fact that it is a subtle feature of English grammar which is absent in Spanish, and this provides an excellent springboard to create minimal code-switched pairs that allow us to formulate a clear research question that can be tested using both methods.
Keywords: two-alternative forced choice and Thurstone's law; acceptability judgment; code-switching
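A small Python sketch of how 2-alternative forced choice counts can be turned into an interval scale with Thurstone's Case V model, in the spirit of the method discussed above; the sentence labels and preference proportions are invented purely for illustration and do not come from the study:

import numpy as np
from scipy.stats import norm

# p[i, j] = proportion of participants who preferred sentence j over sentence i
# (hypothetical values; rows and columns follow the same item order).
items = ["switch after auxiliary", "switch between verb and object", "no switch"]
p = np.array([
    [0.50, 0.35, 0.80],
    [0.65, 0.50, 0.90],
    [0.20, 0.10, 0.50],
])

# Clip to avoid infinite z-scores for unanimous preferences, convert the
# proportions to standard-normal deviates, and average per column (Case V).
z = norm.ppf(np.clip(p, 0.01, 0.99))
scale = z.mean(axis=0)
scale -= scale.min()  # anchor the least acceptable item at zero for readability

for item, value in sorted(zip(items, scale), key=lambda t: t[1]):
    print(f"{item}: {value:.2f}")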
|
|
|
Sergio Escalera, Vassilis Athitsos, & Isabelle Guyon. (2017). Challenges in Multi-modal Gesture Recognition. (pp. 1–60).
Abstract: This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011–2015. We began right at the start of the Kinect™ revolution, when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also review recent state-of-the-art work on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.
Keywords: Gesture recognition; Time series analysis; Multimodal data analysis; Computer vision; Pattern recognition; Wearable sensors; Infrared cameras; Kinect™
|
|
|
Jose M. Armingol, Jorge Alfonso, Nourdine Aliane, Miguel Clavijo, Sergio Campos-Cordobes, Arturo de la Escalera, et al. (2018). Environmental Perception for Intelligent Vehicles. In Intelligent Vehicles. Enabling Technologies and Future Developments (pp. 23–101).
Abstract: Environmental perception represents a challenge for Intelligent Transport Systems due to the great variety of situations and elements that can arise in road environments and that must be handled by these systems. A variety of solutions exists regarding sensors and methods, which differ in the precision, complexity, cost, and computational load they achieve. In this chapter some systems based on computer vision and laser techniques are presented. Fusion methods are also introduced in order to provide advanced and reliable perception systems.
Keywords: Computer vision; laser techniques; data fusion; advanced driver assistance systems; traffic monitoring systems; intelligent vehicles
|
|
|
Antonio Lopez, David Vazquez, & Gabriel Villalonga. (2018). Data for Training Models, Domain Adaptation. In Intelligent Vehicles. Enabling Technologies and Future Developments (pp. 395–436).
Abstract: Simulation can enable several developments in the field of intelligent vehicles. This chapter is divided into three main subsections. The first one deals with driving simulators. The continuous improvement of hardware performance is a well-known fact that is allowing the development of more complex driving simulators. The immersion in the simulation scene is increased by high fidelity feedback to the driver. In the second subsection, traffic simulation is explained as well as how it can be used for intelligent transport systems. Finally, it is rather clear that sensor-based perception and action must be based on data-driven algorithms. Simulation could provide data to train and test algorithms that are afterwards implemented in vehicles. These tools are explained in the third subsection.
Keywords: Driving simulator; hardware; software; interface; traffic simulation; macroscopic simulation; microscopic simulation; virtual data; training data
|
|
|
Lluis Pere de las Heras, Oriol Ramos Terrades, & Josep Llados. (2017). Ontology-Based Understanding of Architectural Drawings. In Graphics Recognition. Current Trends and Challenges (Vol. 9657, pp. 75–85). LNCS. Springer.
Abstract: In this paper we present a knowledge base of architectural documents aiming at improving existing methods of floor plan classification and understanding. It consists of an ontological definition of the domain and the inclusion of real instances coming from both automatically interpreted and manually labeled documents. The knowledge base has proven to be an effective tool to structure our knowledge and to easily maintain and upgrade it. Moreover, it is an appropriate means to automatically check the consistency of relational data and a convenient complement to hard-coded knowledge interpretation systems.
Keywords: Graphics recognition; Floor plan analysis; Domain ontology
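A minimal sketch, using rdflib in Python, of how such a floor-plan ontology and its instances could be encoded and queried; the namespace, class names and instances below are hypothetical illustrations and do not reproduce the authors' actual knowledge base:

from rdflib import Graph, Namespace, RDF, RDFS, Literal

FP = Namespace("http://example.org/floorplan#")
g = Graph()

# Terminological part: a small class hierarchy and one relation between concepts.
g.add((FP.BuildingElement, RDF.type, RDFS.Class))
g.add((FP.Wall, RDFS.subClassOf, FP.BuildingElement))
g.add((FP.Door, RDFS.subClassOf, FP.BuildingElement))
g.add((FP.Room, RDF.type, RDFS.Class))
g.add((FP.hasElement, RDF.type, RDF.Property))
g.add((FP.hasElement, RDFS.domain, FP.Room))
g.add((FP.hasElement, RDFS.range, FP.BuildingElement))

# Assertional part: instances coming from an automatically interpreted
# (or manually labeled) floor plan.
g.add((FP.room_12, RDF.type, FP.Room))
g.add((FP.wall_3, RDF.type, FP.Wall))
g.add((FP.room_12, FP.hasElement, FP.wall_3))
g.add((FP.room_12, RDFS.label, Literal("kitchen")))

# The relational data can then be queried (and checked) with SPARQL.
query = "SELECT ?room ?element WHERE { ?room fp:hasElement ?element }"
for row in g.query(query, initNs={"fp": FP}):
    print(row)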
|
|
|
Sergio Escalera, Markus Weimer, Mikhail Burtsev, Valentin Malykh, Varvara Logacheva, Ryan Lowe, et al. (2018). Introduction to NIPS 2017 Competition Track. In Sergio Escalera, & Markus Weimer (Eds.), The NIPS ’17 Competition: Building Intelligent Systems (pp. 1–23). Springer.
Abstract: Competitions have become a popular tool in the data science community to solve hard problems, assess the state of the art and spur new research directions. Companies like Kaggle and open source platforms like Codalab connect people with data and a data science problem to those with the skills and means to solve it. Hence, the question arises: What, if anything, could NIPS add to this rich ecosystem?
In 2017, we embarked to find out. We attracted 23 potential competitions, of which we selected five to be NIPS 2017 competitions. Our final selection features competitions advancing the state of the art in other sciences, such as “Classifying Clinically Actionable Genetic Mutations” and “Learning to Run”. Others, like “The Conversational Intelligence Challenge” and “Adversarial Attacks and Defences”, generated new datasets that we expect to impact the progress in their respective communities for years to come. And the “Human-Computer Question Answering Competition” showed us just how far we as a field have come in ability and efficiency since the breakthrough performance of Watson in Jeopardy. Two additional competitions, DeepArt and AI XPRIZE Milestones, were also associated with the NIPS 2017 competition track, and their results are also presented within this chapter.
|
|
|
Rain Eric Haamer, Eka Rusadze, Iiris Lusi, Tauseef Ahmed, Sergio Escalera, & Gholamreza Anbarjafari. (2018). Review on Emotion Recognition Databases. In Human-Robot Interaction: Theory and Application.
Abstract: Over the past few decades human-computer interaction has become more important in our daily lives, and research has developed in many directions: memory research, depression detection, behavioural deficiency detection, lie detection, (hidden) emotion recognition, etc. Because of that, the number of generic emotion and face databases, as well as those tailored to specific needs, has grown immensely. Thus, a comprehensive yet compact guide is needed to help researchers find the most suitable database and understand what types of databases already exist. In this paper, different elicitation methods are discussed and the databases are primarily organized into neat and informative tables based on the format.
Keywords: emotion; computer vision; databases
|
|
|
Arnau Baro, Pau Riba, Jorge Calvo-Zaragoza, & Alicia Fornes. (2018). Optical Music Recognition by Long Short-Term Memory Networks. In A. Fornes, & B. Lamiroy (Eds.), Graphics Recognition. Current Trends and Evolutions (Vol. 11009, pp. 81–95). LNCS. Springer.
Abstract: Optical Music Recognition refers to the task of transcribing the image of a music score into a machine-readable format. Many music scores are written in a single staff and can therefore be treated as a sequence. This work thus explores the use of Long Short-Term Memory (LSTM) Recurrent Neural Networks for reading the music score sequentially, where the LSTM helps in keeping the context. For training, we have used a synthetic dataset of more than 40,000 images, labeled at the primitive level. The experimental results are promising, showing the benefits of our approach.
Keywords: Optical Music Recognition; Recurrent Neural Network; Long Short-Term Memory
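A minimal PyTorch sketch of the sequential reading idea summarized above: a single-staff score image is treated as a sequence of pixel columns fed to a bidirectional LSTM that emits one primitive class per column; the layer sizes, number of classes and input dimensions are placeholders rather than the configuration reported in the paper:

import torch
import torch.nn as nn

class ColumnLSTMReader(nn.Module):
    def __init__(self, img_height=100, hidden=256, num_classes=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=img_height, hidden_size=hidden,
                            num_layers=2, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, images):
        # images: (batch, height, width) staff images, one pixel column per time step.
        columns = images.permute(0, 2, 1)   # (batch, width, height)
        features, _ = self.lstm(columns)    # contextual features per column
        return self.classifier(features)    # (batch, width, num_classes) framewise logits

model = ColumnLSTMReader()
scores = torch.rand(8, 100, 400)            # hypothetical batch of 400-column staves
logits = model(scores)
print(logits.shape)                          # torch.Size([8, 400, 64])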
|
|
|
Antonio Lopez. (2018). Pedestrian Detection Systems. In Wiley Encyclopedia of Electrical and Electronics Engineering.
Abstract: Pedestrian detection is a highly relevant topic for both advanced driver assistance systems (ADAS) and autonomous driving. In this entry, we review the ideas behind pedestrian detection systems from the point of view of perception based on computer vision and machine learning.
|
|
|
Raul Gomez, Lluis Gomez, Jaume Gibert, & Dimosthenis Karatzas. (2019). Self-Supervised Learning from Web Data for Multimodal Retrieval. In Multi-Modal Scene Understanding (pp. 279–306).
Abstract: Self-supervised learning from multimodal image and text data allows deep neural networks to learn powerful features with no need of human annotated data. Web and social media platforms provide a virtually unlimited amount of this multimodal data. In this work we propose to exploit this freely available data to learn a multimodal image and text embedding, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. We demonstrate that the proposed pipeline can learn from images with associated text without supervision and analyze the semantic structure of the learnt joint image and text embedding space. We perform a thorough analysis and performance comparison of five different state-of-the-art text embeddings in three different benchmarks. We show that the embeddings learnt with Web and social media data have competitive performances over supervised methods in the text-based image retrieval task, and we clearly outperform the state of the art in the MIRFlickr dataset when training in the target data. Further, we demonstrate how semantic multimodal image retrieval can be performed using the learnt embeddings, going beyond classical instance-level retrieval problems. Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts that can be used for fair comparison of image-text embeddings.
Keywords: self-supervised learning; webly supervised learning; text embeddings; multimodal retrieval; multimodal embedding
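A minimal PyTorch sketch of the kind of pipeline described above, in which an image encoder is trained, without manual labels, to map images into the embedding space of their accompanying Web text; the text embeddings here are random placeholders standing in for any of the compared text representations, and the cosine loss is an illustrative choice rather than the authors' exact training objective:

import torch
import torch.nn as nn
from torchvision import models

text_dim = 300
image_encoder = models.resnet50(weights=None)
image_encoder.fc = nn.Linear(image_encoder.fc.in_features, text_dim)  # project into the text space

def embedding_loss(img_emb, txt_emb):
    # Pull each image embedding towards the embedding of its own associated text.
    return 1.0 - nn.functional.cosine_similarity(img_emb, txt_emb).mean()

optimizer = torch.optim.Adam(image_encoder.parameters(), lr=1e-4)

# One hypothetical training step on a batch of (image, text-embedding) pairs
# harvested from a platform such as Instagram.
images = torch.rand(16, 3, 224, 224)
text_embeddings = torch.rand(16, text_dim)
loss = embedding_loss(image_encoder(images), text_embeddings)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(float(loss))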
|
|