Y. Patel, Lluis Gomez, Marçal Rusiñol, Dimosthenis Karatzas, & C.V. Jawahar. (2019). Self-Supervised Visual Representations for Cross-Modal Retrieval. In ACM International Conference on Multimedia Retrieval (pp. 182–186).
Abstract: Cross-modal retrieval methods have improved significantly in recent years through the use of deep neural networks and large-scale annotated datasets such as ImageNet and Places. However, collecting and annotating such datasets requires a tremendous amount of human effort and, moreover, their annotations are limited to discrete sets of popular visual classes that may not be representative of the richer semantics found in large-scale cross-modal retrieval datasets. In this paper, we present a self-supervised cross-modal retrieval framework that leverages as training data the correlations between images and text on the entire set of Wikipedia articles. Our method consists of training a CNN to predict: (1) the semantic context of the article in which an image is most likely to appear as an illustration, and (2) the semantic context of its caption. Our experiments demonstrate that the proposed method is not only capable of learning discriminative visual representations for solving vision tasks like classification, but also that the learned representations are better for cross-modal retrieval than supervised pre-training of the network on the ImageNet dataset.
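As a rough illustration of the training objective described above, the following numpy sketch scores a two-head model against soft topic targets. All names, dimensions, and the random stand-in features are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def soft_cross_entropy(pred_logits, target_dist):
    # Cross-entropy against a soft topic distribution rather than one-hot labels.
    return -(target_dist * np.log(softmax(pred_logits) + 1e-12)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
n_topics = 40
feat = rng.normal(size=(8, 128))              # stand-in for CNN image features
W_article = rng.normal(size=(128, n_topics))  # head 1: article-context topics
W_caption = rng.normal(size=(128, n_topics))  # head 2: caption-context topics

# Soft targets, e.g. topic distributions computed from article and caption text.
t_article = softmax(rng.normal(size=(8, n_topics)))
t_caption = softmax(rng.normal(size=(8, n_topics)))

# Total self-supervised loss: both heads must match their text-derived targets.
loss = (soft_cross_entropy(feat @ W_article, t_article)
        + soft_cross_entropy(feat @ W_caption, t_caption))
```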
|
Mohamed Ali Souibgui, Asma Bensalah, Jialuo Chen, Alicia Fornes, & Michelle Waldispühl. (2023). A User Perspective on HTR methods for the Automatic Transcription of Rare Scripts: The Case of Codex Runicus. ACM Journal on Computing and Cultural Heritage, 15(4), 1–18.
Abstract: Recent breakthroughs in Artificial Intelligence, Deep Learning and Document Image Analysis and Recognition have significantly eased the creation of digital libraries and the transcription of historical documents. However, for documents in rare scripts with little labelled training data available, current Handwritten Text Recognition (HTR) systems are too constrained. Moreover, research on HTR often focuses on technical aspects only and rarely puts emphasis on implementing software tools for scholars in the Humanities. In this article, we describe, compare and analyse different transcription methods for rare scripts. We evaluate their performance in a real use case of a medieval manuscript written in the runic script (Codex Runicus) and discuss the advantages and disadvantages of each method from the user perspective. From this exhaustive analysis and a comparison with a fully manual transcription, we draw conclusions and provide recommendations to scholars interested in using automatic transcription tools.
|
S. Chanda, Umapada Pal, & Oriol Ramos Terrades. (2009). Word-Wise Thai and Roman Script Identification. ACM Transactions on Asian Language Information Processing, 1–21.
Abstract: In some Thai documents, a single text line of a printed page may contain words in both the Thai and Roman scripts. For Optical Character Recognition (OCR) of such a page, it is better to first identify the Thai and Roman script portions and then apply the individual OCR system of the respective script to each identified portion. In this article, an SVM-based method is proposed for word-wise identification of printed Roman and Thai scripts within a single line of a document page. First, the document is segmented into lines, and the lines are then segmented into character groups (words). In the proposed scheme, we identify the script of a character group by combining different character features obtained from structural shape, profile behavior, component overlapping information, topological properties, and the water reservoir concept. In experiments on 10,000 words, the proposed scheme achieved a script identification accuracy of 99.62%.
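The word-wise pipeline can be caricatured as zone-based feature extraction followed by classification. In this sketch, a single hand-set threshold stands in for the trained SVM, and the upper-zone ink feature (Thai words often carry vowel and tone marks above the main line) is only one illustrative proxy for the feature set listed above:

```python
import numpy as np

def zone_profile(word_img):
    # word_img: binary array, 1 = ink. Split rows into upper/middle/lower
    # zones and return the fraction of ink falling in each zone.
    h = word_img.shape[0]
    ink = word_img.sum() + 1e-9
    upper = word_img[: h // 3].sum() / ink
    middle = word_img[h // 3 : 2 * h // 3].sum() / ink
    lower = word_img[2 * h // 3 :].sum() / ink
    return np.array([upper, middle, lower])

def classify_script(word_img, upper_thresh=0.15):
    # Hypothetical stand-in for the paper's SVM: a large share of ink above
    # the main writing line votes "Thai" here.
    return "Thai" if zone_profile(word_img)[0] > upper_thresh else "Roman"

# Toy word images: one with marks above the main body, one without.
thai_word = np.zeros((9, 6)); thai_word[:3, 1:5] = 1; thai_word[3:8, :] = 1
roman_word = np.zeros((9, 6)); roman_word[3:8, :] = 1
```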
|
Xiangyang Li, Luis Herranz, & Shuqiang Jiang. (2020). Multifaceted Analysis of Fine-Tuning in Deep Model for Visual Recognition. ACM Transactions on Data Science.
Abstract: In recent years, convolutional neural networks (CNNs) have achieved impressive performance in various visual recognition scenarios. CNNs trained on large labeled datasets not only obtain significant performance on the most challenging benchmarks but also provide powerful representations that can be applied to a wide range of other tasks. However, the requirement of massive amounts of data to train deep neural networks is a major drawback of these models, as the available data are usually limited or imbalanced. Fine-tuning (FT) is an effective way to transfer knowledge learned on a source dataset to a target task. In this paper, we introduce and systematically investigate several factors that influence the performance of fine-tuning for visual recognition, including parameters of the retraining procedure (e.g., the initial learning rate of fine-tuning) and the distribution of the source and target data (e.g., the number of categories in the source dataset, the distance between the source and target datasets). We quantitatively and qualitatively analyze these factors, evaluate their influence, and present many empirical observations. The results reveal insights into how fine-tuning changes CNN parameters and provide useful, evidence-backed intuitions about how to implement fine-tuning for computer vision tasks.
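A minimal sketch of one factor studied here, the initial learning rate: "fine-tuning" near-optimal pretrained weights on a toy least-squares task with a gentle versus an overly aggressive rate. The setup and numbers are assumptions for illustration only, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 10))
w_true = rng.normal(size=10)
y = X @ w_true

# "Pretrained" weights: close to the optimum, as after source-task training.
w_pre = w_true + 0.1 * rng.normal(size=10)

def fine_tune(w0, lr, steps=20):
    # Plain gradient descent on mean squared error, standing in for the
    # retraining procedure whose initial learning rate the paper studies.
    w = w0.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return float(np.mean((X @ w - y) ** 2))

loss_init = float(np.mean((X @ w_pre - y) ** 2))
loss_small = fine_tune(w_pre, lr=0.01)  # gentle fine-tuning improves the fit
loss_big = fine_tune(w_pre, lr=1.0)     # a too-large initial rate diverges
```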
|
Hugo Bertiche, Meysam Madadi, & Sergio Escalera. (2021). PBNS: Physically Based Neural Simulation for Unsupervised Garment Pose Space Deformation. ACM Transactions on Graphics, 40(6), 1–14.
Abstract: We present a methodology to automatically obtain a Pose Space Deformation (PSD) basis for rigged garments through deep learning. Classical approaches rely on Physically Based Simulation (PBS) to animate clothes. These are general solutions that, given a sufficiently fine-grained discretization of space and time, can achieve highly realistic results; however, they are computationally expensive, and any scene modification requires re-simulation. Linear Blend Skinning (LBS) with PSD offers a lightweight alternative to PBS, though it needs huge volumes of data to learn a proper PSD. We propose using deep learning, formulated as an implicit PBS, to learn realistic cloth Pose Space Deformations without supervision in a constrained scenario: dressed humans. Furthermore, we show it is possible to train these models in an amount of time comparable to a PBS of a few sequences. To the best of our knowledge, we are the first to propose a neural simulator for cloth.
While deep-learning approaches in this domain are becoming a trend, they are data-hungry models. Moreover, authors often propose complex formulations to better learn wrinkles from PBS data. Supervised learning leads to physically inconsistent predictions that require collision solving before use, and the dependency on PBS data limits the scalability of these solutions, while their formulation hinders their applicability and compatibility. By proposing an unsupervised methodology to learn PSD for LBS models (the 3D animation standard), we overcome both of these drawbacks. The results obtained show cloth consistency in the animated garments and meaningful pose-dependent folds and wrinkles. Our solution is extremely efficient, handles multiple layers of cloth, allows unsupervised outfit resizing, and can easily be applied to any custom 3D avatar.
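The unsupervised idea, training against physics terms instead of regressing simulated ground truth, can be sketched with a toy edge-strain energy. A real PBNS-style loss also includes bending, collision, and gravity terms; this fragment is illustrative only:

```python
import numpy as np

def strain_loss(verts, edges, rest_len):
    # Edge-elongation energy: zero when every edge keeps its rest length.
    # Minimizing physics terms like this (rather than matching PBS data)
    # is what makes the training unsupervised.
    d = verts[edges[:, 0]] - verts[edges[:, 1]]
    length = np.linalg.norm(d, axis=1)
    return float(np.mean((length - rest_len) ** 2))

# Toy "garment": a unit square of 4 vertices and 4 edges.
verts = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0]])
rest = np.ones(4)

at_rest = strain_loss(verts, edges, rest)           # no penalty at rest shape
stretched = strain_loss(verts * 1.5, edges, rest)   # stretching is penalized
```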
|
Hugo Bertiche, Meysam Madadi, & Sergio Escalera. (2022). Neural Cloth Simulation. ACM Transactions on Graphics, 41(6), 1–14.
Abstract: We present a general framework for the garment animation problem through unsupervised deep learning inspired by physically based simulation. Existing trends in the literature already explore this possibility; nonetheless, these approaches do not handle cloth dynamics. Here, we propose the first methodology able to learn realistic cloth dynamics without supervision, and hence a general formulation for neural cloth simulation. The key to achieving this is adapting an existing optimization scheme for motion from simulation-based methodologies to deep learning. Then, analyzing the nature of the problem, we devise an architecture able to automatically disentangle static and dynamic cloth subspaces by design, and we show how this improves model performance. Additionally, this opens the possibility of a novel motion augmentation technique that greatly improves generalization. Finally, we show that it also allows controlling the level of motion in the predictions, a useful tool for artists that has not been available before. We provide a detailed analysis of the problem to establish the bases of neural cloth simulation and to guide future research into the specifics of this domain.
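One way to read the "optimization scheme for motion" mentioned above is as penalizing deviation from an inertial target at each time step. The sketch below is an assumed simplification for illustration, not the paper's formulation:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def inertia_target(x_t, x_prev, dt):
    # Where the cloth would be at t+1 if it kept its current velocity
    # (backward-difference estimate) and only gravity acted on it.
    return 2 * x_t - x_prev + (dt ** 2) * GRAVITY

def dynamics_loss(x_next, x_t, x_prev, dt):
    # Penalizing deviation from the inertial trajectory, together with
    # elastic terms, turns one simulation time step into an optimization
    # problem that can be used as an unsupervised training loss.
    return float(np.mean((x_next - inertia_target(x_t, x_prev, dt)) ** 2))

x_prev = np.zeros(3)
x_t = np.array([0.0, 0.0, 1.0])
free_fall = inertia_target(x_t, x_prev, dt=0.1)  # zero-loss prediction
```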
|
Wenjuan Gong, Yue Zhang, Wei Wang, Peng Cheng, & Jordi Gonzalez. (2023). Meta-MMFNet: Meta-learning-based Multi-model Fusion Network for Micro-expression Recognition. ACM Transactions on Multimedia Computing, Communications, and Applications, 20(2), 1–20.
Abstract: Despite its wide applications in criminal investigations and in clinical communication with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class imbalance problem. In this study, we propose a meta-learning-based multi-model fusion network (Meta-MMFNet) to address these problems. The proposed method is based on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. Frame difference and optical flow features are fused, deep features are extracted from the fused feature, and finally, within the meta-learning framework, a weighted-sum model fusion method is applied for micro-expression classification. Meta-MMFNet achieved better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
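The weighted-sum model fusion step can be sketched as a convex combination of two models' class probabilities. The class count, logits, and fusion weight below are hypothetical, chosen only to illustrate the mechanism:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse(logits_a, logits_b, w):
    # Weighted-sum model-level fusion: combine the class probabilities of
    # two models (e.g. one fed frame differences, one fed optical flow).
    return w * softmax(logits_a) + (1 - w) * softmax(logits_b)

# Two hypothetical 3-class micro-expression models disagreeing on a sample.
la = np.array([[2.0, 0.1, 0.1]])
lb = np.array([[0.1, 0.1, 2.0]])
p = fuse(la, lb, w=0.7)       # still a valid probability distribution
pred = int(p.argmax())        # the higher-weighted model dominates
```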
|
Anastasios Doulamis, Nikolaos Doulamis, Marco Bertini, Jordi Gonzalez, & Thomas B. Moeslund. (2013). Analysis and Retrieval of Tracked Events and Motion in Imagery Streams.
|
Enric Marti, Ferran Poveda, Antoni Gurgui, & Debora Gil. (2011). Aprendizaje Basado en Proyectos en Ingeniería Informática. Resultados y reflexiones de seis años de experiencia [Project-Based Learning in Computer Engineering: Results and Reflections from Six Years of Experience].
Abstract: This workshop presents six years of experience with Project-Based Learning (PBL) in the Computer Graphics course of the Computer Engineering degree at the Autonomous University of Barcelona (UAB). We use a Moodle environment suited to managing the documentation generated in PBL. The course is organized into two alternative itineraries: a classic one with lectures and test-based evaluation, and another with PBL. For the PBL itinerary, we explain the organization into team groups, the homework tutoring and monitoring, and the evaluation guidelines for students. We provide some of the work done by students and the results of assessment surveys carried out among students during these years, and we report the evolution of our PBL itinerary in terms of both organization and student surveys.
The workshop aims to discuss the advantages and disadvantages of using these active methodologies in technical degrees such as computer engineering, in order to debate the most suitable way of organizing PBL and assessing student learning.
|
Enric Marti, Debora Gil, Marc Vivet, & Carme Julia. (2008). Balance de cuatro años de experiencia en la implantación de la metodología de Aprendizaje Basado en Proyectos en la asignatura de Gráficos por Computador en ingeniería Informática [Assessment of Four Years of Experience Implementing the Project-Based Learning Methodology in the Computer Graphics Course in Computer Engineering].
Abstract: This paper presents the results of applying the cooperative learning methodology to the teaching of two programming courses in Computer Engineering. 'Algoritmos y programación' and 'Lenguajes de programación' are two complementary courses organized around a common project that covers the contents of both. In a very substantial part of these courses, the cooperative learning methodology has been adapted to their specific characteristics. As an illustration of this adaptation, we present two examples of the activities developed within the teaching of these courses. After three years of application, qualitative and quantitative analysis of the results shows that they are very satisfactory and that applying the cooperative method has considerably improved student performance in both courses.
Keywords: Cooperative learning; project-based learning; teaching experiences.
|
Monica Piñol, Angel Sappa, Angeles Lopez, & Ricardo Toledo. (2012). Feature Selection Based on Reinforcement Learning for Object Recognition. In Adaptive Learning Agents Workshop (pp. 33–39).
|
Antonio Lopez, J. Hilgenstock, A. Busse, Ramon Baldrich, Felipe Lumbreras, & Joan Serrat. (2008). Nighttime Vehicle Detection for Intelligent Headlight Control. In Advanced Concepts for Intelligent Vision Systems, 10th International Conference, Proceedings (Vol. 5259, pp. 113–124). LNCS.
Keywords: Intelligent Headlights; vehicle detection
|
Daniel Ponsa, & Antonio Lopez. (2007). Cascade of Classifiers for Vehicle Detection. In Advanced Concepts for Intelligent Vision Systems (Vol. 4678, pp. 980–989). LNCS.
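A cascade of classifiers rejects most candidate windows with cheap early stages, so only promising windows reach the costlier later stages. The toy stages and thresholds below are hypothetical, not those of the paper:

```python
def cascade_detect(window_feature, stages):
    # Evaluate stages in order; a window that falls below any stage's
    # threshold is rejected immediately (early exit saves computation).
    for score_fn, threshold in stages:
        if score_fn(window_feature) < threshold:
            return False  # rejected early
    return True           # passed every stage: vehicle candidate

# Toy two-stage cascade over a single scalar "feature".
stages = [
    (lambda f: f, 0.2),      # cheap first stage: weak evidence suffices
    (lambda f: f * f, 0.5),  # stricter second stage for survivors
]
```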
Keywords: vehicle detection
|
Dennis G. Romero, Anselmo Frizera, Angel Sappa, Boris X. Vintimilla, & Teodiano F. Bastos. (2015). A predictive model for human activity recognition by observing actions and context. In Advanced Concepts for Intelligent Vision Systems, Proceedings of the 16th International Conference, ACIVS 2015 (Vol. 9386, pp. 323–333). LNCS. Springer International Publishing.
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on Recurrent Neural Networks (RNNs) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work, human activities are inferred considering not only visual analysis but also additional resources: external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or any other information relevant to describing the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
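The Bayesian-inference part of such a model can be sketched as a posterior update over activities from a stream of observed actions. The activities, actions, and probabilities below are invented for illustration; the paper additionally uses RNNs and context information:

```python
def posterior(prior, likelihood, observed_actions):
    # Bayes update over activities given observed actions:
    #   P(activity | actions)  proportional to  P(activity) * prod P(action | activity)
    scores = dict(prior)
    for act in observed_actions:
        for activity in scores:
            # Unseen actions get a small floor probability.
            scores[activity] *= likelihood[activity].get(act, 1e-6)
    z = sum(scores.values())
    return {a: s / z for a, s in scores.items()}

# Hypothetical activities with per-activity action likelihoods.
prior = {"cooking": 0.5, "cleaning": 0.5}
likelihood = {
    "cooking":  {"open_fridge": 0.6, "stir": 0.3, "sweep": 0.01},
    "cleaning": {"open_fridge": 0.05, "stir": 0.01, "sweep": 0.7},
}
post = posterior(prior, likelihood, ["open_fridge", "stir"])
```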
|