Author: Ana Maria Ares; Jorge Bernal; Maria Jesus Nozal; F. Javier Sanchez; Jose Bernal
Title: Results of the use of Kahoot! gamification tool in a course of Chemistry
Type: Conference Article
Year: 2018
Publication: 4th International Conference on Higher Education Advances
Pages: 1215-1222
Abstract: The present study examines the use of Kahoot! as a gamification tool to explore mixed learning strategies. We analyse its use in two different groups of a theoretical subject in the third year of the Degree in Chemistry. An empirical-analytical methodology was followed, using Kahoot! in two different groups of students with different frequencies. The academic results of these two groups of students were compared with each other and with those obtained in the previous course, in which Kahoot! was not employed, with the aim of measuring the evolution of the students' knowledge. The results showed, in all cases, that the use of Kahoot! led to a significant increase in the overall marks and in the number of students who passed the subject. Moreover, some differences were also observed in the students' academic performance according to the group. Finally, it can be concluded that the use of a gamification tool (Kahoot!) in a university classroom generally improved students' learning and marks, and that this improvement is more prevalent in those students who achieved a better Kahoot! performance.
Address: Valencia; June 2018
Conference: HEAD
Notes: MV; no proj
Approved: no
Call Number: Admin @ si @ ABN2018
Serial: 3246
 

 
Author: Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund
Title: Introduction to the Special Issue on the Analysis and Retrieval of Events/Actions and Workflows in Video Streams
Type: Journal Article
Year: 2016
Publication: Multimedia Tools and Applications
Abbreviated Journal: MTAP
Volume: 75  Issue: 22  Pages: 14985-14990
Notes: ISE; HUPBA
Approved: no
Call Number: Admin @ si @ DDB2016
Serial: 2934
 

 
Author: Anastasios Doulamis; Nikolaos Doulamis; Marco Bertini; Jordi Gonzalez; Thomas B. Moeslund
Title: Analysis and Retrieval of Tracked Events and Motion in Imagery Streams
Type: Miscellaneous
Year: 2013
Publication: ACM/IEEE International Workshop on Analysis and Retrieval of Tracked Events and Motion in Imagery Streams
Address: Barcelona; October 2013
Notes: ISE
Approved: no
Call Number: Admin @ si @ DDB2013
Serial: 2372
 

 
Author: Anders Hast; Alicia Fornes
Title: A Segmentation-free Handwritten Word Spotting Approach by Relaxed Feature Matching
Type: Conference Article
Year: 2016
Publication: 12th IAPR Workshop on Document Analysis Systems
Pages: 150-155
Abstract: The automatic recognition of historical handwritten documents is still considered a challenging task. For this reason, word spotting emerges as a good alternative for making the information contained in these documents available to the user. Word spotting is defined as the task of retrieving all instances of the query word in a document collection, which makes it a useful tool for information retrieval. In this paper we propose a segmentation-free word spotting approach able to deal with large document collections. Our method is inspired by feature matching algorithms that have been applied to image matching and retrieval. Since handwritten words vary in shape, there is no exact transformation to be obtained. However, a sufficient degree of relaxation is achieved by using a Fourier-based descriptor and an alternative approach to RANSAC called PUMA. The proposed approach is evaluated on historical marriage records, achieving promising results.
Address: Santorini; Greece; April 2016
Conference: DAS
Notes: DAG; 602.006; 600.061; 600.077; 600.097
Approved: no
Call Number: HaF2016
Serial: 2753
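The relaxed matching described in the abstract above rests on a Fourier-based shape descriptor. As a rough illustration only (this is not the authors' exact formulation; the sampling and normalisation choices below are assumptions), such a descriptor can be built from the Fourier coefficients of a sampled contour:

```python
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=16):
    """Sketch of a Fourier shape descriptor for a sampled contour.

    contour_xy: (N, 2) array of (x, y) points along a word contour.
    Returns n_coeffs magnitudes, insensitive to translation and scale
    (magnitudes also discard phase, i.e. the starting point).
    """
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex representation of points
    coeffs = np.fft.fft(z)
    coeffs[0] = 0.0                                # drop DC term -> translation invariance
    mags = np.abs(coeffs)
    if mags[1] > 0:                                # normalise by first harmonic -> scale invariance
        mags = mags / mags[1]
    return mags[1:n_coeffs + 1]

def descriptor_distance(d1, d2):
    """Descriptors are then compared with a simple distance, and matches are
    accepted under a relaxed threshold rather than an exact transformation."""
    return float(np.linalg.norm(d1 - d2))
```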
 

 
Author: Anders Skaarup Johansen; Kamal Nasrollahi; Sergio Escalera; Thomas B. Moeslund
Title: Who Cares about the Weather? Inferring Weather Conditions for Weather-Aware Object Detection in Thermal Images
Type: Journal Article
Year: 2023
Publication: Applied Sciences
Abbreviated Journal: AS
Volume: 13  Issue: 18
Keywords: thermal; object detection; concept drift; conditioning; weather recognition
Abstract: Deployments of real-world object detection systems often experience a degradation in performance over time due to concept drift. Systems that leverage thermal cameras are especially susceptible because the respective thermal signatures of objects and their surroundings are highly sensitive to environmental changes. In this study, two types of weather-aware latent conditioning methods are investigated. The proposed method aims to guide two object detectors (YOLOv5 and Deformable DETR) to become weather-aware. This is achieved by leveraging an auxiliary branch that predicts weather-related information while conditioning intermediate layers of the object detector. While the proposed conditioning methods do not directly improve the accuracy of the baseline detectors, it can be observed that conditioned networks manage to extract a weather-related signal from the thermal images, resulting in a decreased miss rate at the cost of increased false positives. The extracted signal appears noisy and is thus challenging to regress accurately. This is most likely a result of the qualitative nature of the thermal sensor; thus, further work is needed to identify an ideal method for optimizing the conditioning branch, as well as to further improve the accuracy of the system.
Notes: HUPBA
Approved: no
Call Number: Admin @ si @ SNE2023
Serial: 3983
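The abstract above conditions intermediate detector layers on an auxiliary weather prediction. A minimal sketch of what such latent conditioning can look like, using a FiLM-style per-channel scale and shift (the use of FiLM, the pooling, and the layer sizes here are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class WeatherConditioner(nn.Module):
    """Auxiliary branch that predicts weather from a feature map and then
    conditions that feature map with a per-channel scale and shift."""

    def __init__(self, channels: int, n_weather_classes: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.weather_head = nn.Linear(channels, n_weather_classes)  # auxiliary weather prediction
        self.film = nn.Linear(n_weather_classes, 2 * channels)      # produces scale and shift

    def forward(self, feat):                       # feat: (B, C, H, W) intermediate features
        pooled = self.pool(feat).flatten(1)        # (B, C)
        weather_logits = self.weather_head(pooled)
        gamma, beta = self.film(weather_logits.softmax(dim=1)).chunk(2, dim=1)
        conditioned = feat * (1 + gamma[..., None, None]) + beta[..., None, None]
        return conditioned, weather_logits
```

During training, the returned weather logits would receive a supervised weather-classification loss alongside the detection loss, so the detector is nudged toward weather awareness.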
 

 
Author: Andre Litvin; Kamal Nasrollahi; Sergio Escalera; Cagri Ozcinar; Thomas B. Moeslund; Gholamreza Anbarjafari
Title: A Novel Deep Network Architecture for Reconstructing RGB Facial Images from Thermal for Face Recognition
Type: Journal Article
Year: 2019
Publication: Multimedia Tools and Applications
Abbreviated Journal: MTAP
Volume: 78  Issue: 18  Pages: 25259-25271
Keywords: Fully convolutional networks; FusionNet; Thermal imaging; Face recognition
Abstract: This work proposes a fully convolutional network architecture for RGB face image generation from a given input thermal face image, to be applied in face recognition scenarios. The proposed method is based on the FusionNet architecture and increases robustness against overfitting using dropout after bridge connections, randomised leaky ReLUs (RReLUs), and orthogonal regularization. Furthermore, we propose to use a decoding block with resize convolution instead of transposed convolution to improve the final RGB face image generation. To validate our proposed network architecture, we train a face classifier and compare its face recognition rate on images reconstructed with the proposed architecture, on images reconstructed with the original FusionNet, and on the original RGB images. As a result, we introduce a new architecture which leads to a more accurate network.
Notes: HuPBA; not mentioned
Approved: no
Call Number: Admin @ si @ LNE2019
Serial: 3318
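The decoding block with resize convolution mentioned above replaces transposed convolution with upsampling followed by an ordinary convolution, which avoids the checkerboard artifacts transposed convolutions can introduce. A hedged sketch of such a block (kernel size, upsampling mode, RReLU placement, and dropout rate are assumptions, not the paper's exact configuration):

```python
import torch.nn as nn

class ResizeConvBlock(nn.Module):
    """Upsample-then-convolve decoder block (resize convolution) used in
    place of nn.ConvTranspose2d, with RReLU and dropout as regularisers."""

    def __init__(self, in_ch: int, out_ch: int, dropout: float = 0.2):
        super().__init__()
        self.block = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),        # resize step
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # ordinary convolution
            nn.RReLU(),                                          # randomised leaky ReLU
            nn.Dropout2d(dropout),
        )

    def forward(self, x):
        return self.block(x)
```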
 

 
Author: Andrea Gemelli; Sanket Biswas; Enrico Civitelli; Josep Llados; Simone Marinai
Title: Doc2Graph: A Task Agnostic Document Understanding Framework Based on Graph Neural Networks
Type: Conference Article
Year: 2022
Publication: 17th European Conference on Computer Vision Workshops
Volume: 13804  Pages: 329-344
Abstract: Geometric Deep Learning has recently attracted significant interest in a wide range of machine learning fields, including document analysis. The application of Graph Neural Networks (GNNs) has become crucial in various document-related tasks since they can unravel important structural patterns, fundamental in key information extraction processes. Previous works in the literature propose task-driven models and do not take into account the full power of graphs. We propose Doc2Graph, a task-agnostic document understanding framework based on a GNN model, to solve different tasks given different types of documents. We evaluated our approach on two challenging datasets for key information extraction in form understanding, invoice layout analysis and table detection.
Abbreviated Series Title: LNCS
ISBN: 978-3-031-25068-2
Conference: ECCV-TiE
Notes: DAG; 600.162; 600.140; 110.312
Approved: no
Call Number: Admin @ si @ GBC2022
Serial: 3795
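Doc2Graph builds on message passing over a graph of document entities. As a framework-agnostic illustration only (the aggregation rule, feature sizes, and adjacency construction below are assumptions, not the paper's model), one graph-convolution step over text-region nodes can be sketched as:

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One mean-aggregation message-passing step over a document graph whose
    nodes are text regions and whose edges encode layout relations."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # node_feats: (N, in_dim); adj: (N, N) 0/1 adjacency with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        messages = adj @ node_feats / deg          # mean over neighbouring nodes
        return torch.relu(self.linear(messages))
```

Stacking a few such layers and attaching task heads on nodes (entity labels) or node pairs (links, table cells) gives the general shape of a task-agnostic document GNN.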
 

 
Author: Andreas Fischer; Ching Y. Suen; Volkmar Frinken; Kaspar Riesen; Horst Bunke
Title: A Fast Matching Algorithm for Graph-Based Handwriting Recognition
Type: Conference Article
Year: 2013
Publication: 9th IAPR-TC15 Workshop on Graph-based Representation in Pattern Recognition
Volume: 7877  Pages: 194-203
Abstract: The recognition of unconstrained handwriting images is usually based on vectorial representation and statistical classification. Despite their high representational power, graphs are rarely used in this field due to a lack of efficient graph-based recognition methods. Recently, graph similarity features have been proposed to bridge the gap between structural representation and statistical classification by means of vector space embedding. This approach has shown a high performance in terms of accuracy but had shortcomings in terms of computational speed. The time complexity of the Hungarian algorithm that is used to approximate the edit distance between two handwriting graphs is demanding for a real-world scenario. In this paper, we propose a faster graph matching algorithm which is derived from the Hausdorff distance. On the historical Parzival database it is demonstrated that the proposed method achieves a speedup factor of 12.9 without significant loss in recognition accuracy.
Address: Vienna; Austria; May 2013
Publisher: Springer Berlin Heidelberg
Abbreviated Series Title: LNCS
ISSN: 0302-9743  ISBN: 978-3-642-38220-8
Conference: GBR
Notes: DAG; 600.045; 605.203
Approved: no
Call Number: Admin @ si @ FSF2013
Serial: 2294
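The speedup discussed above comes from replacing the cubic-time Hungarian assignment with a Hausdorff-style bound in which every node independently picks its cheapest counterpart. A simplified sketch over node label vectors (edge costs are ignored and the deletion cost is an assumption, so this is only the flavour of the idea, not the paper's algorithm):

```python
import numpy as np

def hausdorff_graph_distance(nodes_a, nodes_b, node_del_cost=1.0):
    """Quadratic-time approximation of graph edit distance between two graphs
    given as (N, D) arrays of node label vectors: each node is matched to its
    cheapest counterpart in the other graph, or deleted if that is cheaper."""
    if len(nodes_a) == 0 or len(nodes_b) == 0:
        return (len(nodes_a) + len(nodes_b)) * node_del_cost

    # pairwise substitution costs between node label vectors
    costs = np.linalg.norm(nodes_a[:, None, :] - nodes_b[None, :, :], axis=2)

    # each node takes the cheaper of its best substitution and a deletion
    a_side = np.minimum(costs.min(axis=1), node_del_cost).sum()
    b_side = np.minimum(costs.min(axis=0), node_del_cost).sum()
    return a_side + b_side
```

Because nodes are matched independently rather than one-to-one, this runs in quadratic time at the price of a looser approximation than the Hungarian-based edit distance.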
 

 
Author: Andreas Fischer; Volkmar Frinken; Alicia Fornes; Horst Bunke
Title: Transcription Alignment of Latin Manuscripts Using Hidden Markov Models
Type: Conference Article
Year: 2011
Publication: Proceedings of the 2011 Workshop on Historical Document Imaging and Processing
Pages: 29-36
Abstract: Transcriptions of historical documents are a valuable source for extracting labeled handwriting images that can be used for training recognition systems. In this paper, we introduce the Saint Gall database that includes images as well as the transcription of a Latin manuscript from the 9th century written in Carolingian script. Although the available transcription is of high quality for a human reader, the spelling of the words is not accurate when compared with the handwriting image. Hence, the transcription poses several challenges for alignment regarding, e.g., line breaks, abbreviations, and capitalization. We propose an alignment system based on character Hidden Markov Models that can cope with these challenges and efficiently aligns complete document pages. On the Saint Gall database, we demonstrate that a considerable alignment accuracy can be achieved, even with weakly trained character models.
Publisher: ACM
Conference: HIP
Notes: DAG
Approved: no
Call Number: Admin @ si @ FFF2011b
Serial: 1824
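Alignment with character HMMs amounts to a forced Viterbi pass: the transcription fixes the character sequence, and the decoder only decides where each character starts and ends along the image frames. A compact dynamic-programming sketch over per-frame character log-likelihoods, with one state per transcription character and a simple stay-or-advance topology (the topology and the single-state-per-character simplification are assumptions, not the paper's full HMM setup):

```python
import numpy as np

def forced_alignment(log_likes, char_seq):
    """log_likes: (T, C) per-frame log-likelihoods over C character classes.
    char_seq: list of character indices taken from the transcription.
    Returns, for every frame, the index into char_seq it is aligned to."""
    T, S = log_likes.shape[0], len(char_seq)
    NEG = -np.inf
    dp = np.full((T, S), NEG)
    back = np.zeros((T, S), dtype=int)

    dp[0, 0] = log_likes[0, char_seq[0]]
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]                       # remain in the same character
            move = dp[t - 1, s - 1] if s > 0 else NEG  # advance to the next character
            best_prev = s if stay >= move else s - 1
            dp[t, s] = max(stay, move) + log_likes[t, char_seq[s]]
            back[t, s] = best_prev

    # trace back from the final character at the last frame
    path = [S - 1]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]
```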
 

 
Author: Andreas Fischer; Volkmar Frinken; Horst Bunke; Ching Y. Suen
Title: Improving HMM-Based Keyword Spotting with Character Language Models
Type: Conference Article
Year: 2013
Publication: 12th International Conference on Document Analysis and Recognition
Pages: 506-510
Abstract: Facing high error rates and slow recognition speed for full text transcription of unconstrained handwriting images, keyword spotting is a promising alternative to locate specific search terms within scanned document images. We have previously proposed a learning-based method for keyword spotting using character hidden Markov models that showed a high performance when compared with traditional template image matching. In the lexicon-free approach pursued, only the text appearance was taken into account for recognition. In this paper, we integrate character n-gram language models into the spotting system in order to provide an additional language context. On the modern IAM database as well as the historical George Washington database, we demonstrate that character language models significantly improve the spotting performance.
Address: Washington; USA; August 2013
ISSN: 1520-5363
Conference: ICDAR
Notes: DAG; 600.045; 605.203
Approved: no
Call Number: Admin @ si @ FFB2013
Serial: 2295
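The additional language context above comes from a character n-gram model. A tiny sketch of a character bigram model with add-one smoothing that could re-score spotting hypotheses (the training corpus, the boundary symbols, and the interpolation with the HMM score are assumptions for illustration):

```python
from collections import Counter
import math

class CharBigramLM:
    """Character bigram language model with add-one (Laplace) smoothing."""

    def __init__(self, corpus_words):
        self.bigrams = Counter()
        self.unigrams = Counter()
        for w in corpus_words:
            padded = "^" + w + "$"               # word boundary symbols
            for a, b in zip(padded, padded[1:]):
                self.bigrams[(a, b)] += 1
                self.unigrams[a] += 1
        self.vocab = {c for w in corpus_words for c in "^" + w + "$"}

    def log_prob(self, word):
        padded = "^" + word + "$"
        lp = 0.0
        for a, b in zip(padded, padded[1:]):
            num = self.bigrams[(a, b)] + 1
            den = self.unigrams[a] + len(self.vocab)
            lp += math.log(num / den)
        return lp

# A spotting score could then combine the HMM likelihood with the LM score,
# e.g. score = hmm_log_like + lm_weight * lm.log_prob(keyword).
```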
 

 
Author: Andreas Møgelmose; Chris Bahnsen; Thomas B. Moeslund; Albert Clapes; Sergio Escalera
Title: Tri-modal Person Re-identification with RGB, Depth and Thermal Features
Type: Conference Article
Year: 2013
Publication: 9th IEEE Workshop on Perception Beyond the Visible Spectrum, Computer Vision and Pattern Recognition
Pages: 301-307
Abstract: Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data, but in this work we present, for the first time, a system that combines RGB, depth, and thermal data for re-identification purposes. First, from each of the three modalities, we obtain some particular features: from RGB data, we model color information from different regions of the body; from depth data, we compute different soft body biometrics; and from thermal data, we extract local structural information. Then, the three information types are combined in a joined classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios.
Address: Portland; Oregon; June 2013
ISBN: 978-0-7695-4990-3
Conference: CVPRW
Notes: HUPBA; MILAB
Approved: no
Call Number: Admin @ si @ MBM2013
Serial: 2253
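The "joined classifier" above combines per-modality descriptors before matching. A bare-bones sketch of late fusion by feature concatenation followed by nearest-neighbour re-identification (the normalisation, the equal weights, and the nearest-neighbour matcher are assumptions, not the paper's exact classifier):

```python
import numpy as np

def trimodal_descriptor(rgb_feat, depth_feat, thermal_feat, weights=(1.0, 1.0, 1.0)):
    """Concatenate L2-normalised per-modality descriptors into one vector."""
    parts = []
    for feat, w in zip((rgb_feat, depth_feat, thermal_feat), weights):
        norm = np.linalg.norm(feat)
        parts.append(w * feat / norm if norm > 0 else feat)
    return np.concatenate(parts)

def reidentify(query, gallery):
    """Return the index of the gallery descriptor closest to the query."""
    dists = [np.linalg.norm(query - g) for g in gallery]
    return int(np.argmin(dists))
```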
 

 
Author: Andreea Glavan; Alina Matei; Petia Radeva; Estefania Talavera
Title: Does our social life influence our nutritional behaviour? Understanding nutritional habits from egocentric photo-streams
Type: Journal Article
Year: 2021
Publication: Expert Systems with Applications
Abbreviated Journal: ESWA
Volume: 171  Pages: 114506
Abstract: Nutrition and social interactions are both key aspects of the daily lives of humans. In this work, we propose a system to evaluate the influence of social interaction on the nutritional habits of a person from a first-person perspective. In order to detect the routine of an individual, we construct a nutritional behaviour pattern discovery model, which outputs routines over a number of days. Our method evaluates the similarity of routines with respect to visited food-related scenes over the collected days, making use of Dynamic Time Warping, as well as considering social engagement and its correlation with food-related activities. The nutritional and social descriptors of the collected days are evaluated and encoded using an LSTM Autoencoder. Later, the obtained latent space is clustered to find similar days unaffected by outliers using the Isolation Forest method. Moreover, we introduce a new score metric to evaluate the performance of the proposed algorithm. We validate our method on 104 days and more than 100k egocentric images gathered by 7 users. Several different visualizations are evaluated for the understanding of the findings. Our results demonstrate good performance and applicability of our proposed model for social-related nutritional behaviour understanding. Finally, relevant applications of the model are discussed by analysing the discovered routine of particular individuals.
Notes: MILAB; no proj
Approved: no
Call Number: Admin @ si @ GMR2021
Serial: 3634
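Routine similarity across days is measured above with Dynamic Time Warping over sequences of food-related scene descriptors. A minimal DTW sketch (the per-step Euclidean cost between day-slot descriptors is an assumption for illustration):

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic Time Warping distance between two sequences of feature vectors,
    e.g. per-time-slot scene descriptors from two recorded days."""
    n, m = len(seq_a), len(seq_b)
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(seq_a[i - 1]) - np.asarray(seq_b[j - 1]))
            dp[i, j] = cost + min(dp[i - 1, j],      # insertion
                                  dp[i, j - 1],      # deletion
                                  dp[i - 1, j - 1])  # match
    return dp[n, m]
```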
 

 
Author: Andrei Polzounov; Artsiom Ablavatski; Sergio Escalera; Shijian Lu; Jianfei Cai
Title: WordFences: Text Localization and Recognition
Type: Conference Article
Year: 2017
Publication: 24th International Conference on Image Processing
Address: Beijing; China; September 2017
Conference: ICIP
Notes: HUPBA; not mentioned
Approved: no
Call Number: Admin @ si @ PAE2017
Serial: 3007
 

 
Author: Andres Mafla
Title: Leveraging Scene Text Information for Image Interpretation
Type: Book Whole
Year: 2022
Publication: PhD Thesis, Universitat Autonoma de Barcelona-CVC
Abstract: Until recently, most computer vision models remained illiterate, largely ignoring the semantically rich and explicit information contained in scene text. Recent progress in scene text detection and recognition has allowed exploring its role in a diverse set of open computer vision problems, e.g. image classification, image-text retrieval, image captioning, and visual question answering, to name a few. The explicit semantics of scene text require specific modeling, similar to language. However, scene text is a particular signal that has to be interpreted according to a comprehensive perspective that encapsulates all the visual cues in an image. Incorporating this information is a straightforward task for humans, but if we are unfamiliar with a language or script, achieving a complete world understanding is impossible (e.g. when visiting a foreign country with a different alphabet). Despite the importance of scene text, modeling it requires considering the several ways in which scene text interacts with an image, processing and fusing an additional modality. In this thesis, we mainly focus on two tasks: scene text-based fine-grained image classification, and cross-modal retrieval. In both studied tasks we identify existing limitations in current approaches and propose plausible solutions. Concretely, in each chapter: i) we define a compact way to embed scene text that generalizes to unseen words at training time while performing in real time; ii) we incorporate the previously learned scene text embedding to create an image-level descriptor that overcomes optical character recognition (OCR) errors and is well suited to the fine-grained image classification task; iii) we design a region-level reasoning network that learns the interaction through semantics among salient visual regions and scene text instances; iv) we employ scene text information in image-text matching and introduce the Scene Text Aware Cross-Modal retrieval (StacMR) task, gathering a dataset that incorporates scene text and designing a model suited for the newly studied modality; v) we identify the drawbacks of current retrieval metrics in cross-modal retrieval and propose an image captioning metric as a way of better evaluating semantics in retrieved results. Ample experimentation shows that incorporating such semantics into a model yields better semantic results while requiring significantly less data to converge.
Thesis: Ph.D. thesis
Publisher: IMPRIMA
Editor: Dimosthenis Karatzas; Lluis Gomez
ISBN: 978-84-124793-6-2
Notes: DAG
Approved: no
Call Number: Admin @ si @ Maf2022
Serial: 3756
 

 
Author: Andres Mafla; Rafael S. Rezende; Lluis Gomez; Diana Larlus; Dimosthenis Karatzas
Title: StacMR: Scene-Text Aware Cross-Modal Retrieval
Type: Conference Article
Year: 2021
Publication: IEEE Winter Conference on Applications of Computer Vision
Pages: 2219-2229
Address: Virtual; January 2021
Conference: WACV
Notes: DAG; 600.121
Approved: no
Call Number: Admin @ si @ MRG2021a
Serial: 3492