Records
Author Aura Hernandez-Sabate; Lluis Albarracin; F. Javier Sanchez
Title Graph-Based Problem Explorer: A Software Tool to Support Algorithm Design Learning While Solving the Salesperson Problem Type Journal Article
Year 2020 Publication Mathematics Abbreviated Journal MATH
Volume 20 Issue 8(9) Pages 1595
Keywords STEM education; Project-based learning; Coding; software tool
Abstract In this article, we present a sequence of activities in the form of a project in order to promote
learning on the design and analysis of algorithms. The project is based on the resolution of a real problem, the salesperson problem, and is theoretically grounded on the fundamentals of mathematical modelling. To support the students’ work, a multimedia tool, called Graph-based Problem Explorer (GbPExplorer), has been designed and refined to promote the development of computer literacy in engineering and science university students. This tool incorporates several modules that allow students to code different algorithmic techniques for solving the salesperson problem. Based on educational design research carried out over five years, we observe that working with GbPExplorer during the project provides students with the possibility of representing the situation under study in the form of graphs and analyzing them from a computational point of view.
Address September 2020
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; ISE Approved no
Call Number Admin @ si @ Serial 3722
Permanent link to this record
 

 
Author Razieh Rastgoo; Kourosh Kiani; Sergio Escalera
Title Multi-Modal Deep Hand Sign Language Recognition in Still Images Using Restricted Boltzmann Machine Type Journal Article
Year 2018 Publication Entropy Abbreviated Journal ENTROPY
Volume 20 Issue 11 Pages 809
Keywords hand sign language; deep learning; restricted Boltzmann machine (RBM); multi-modal; profoundly deaf; noisy image
Abstract In this paper, a deep learning approach, the Restricted Boltzmann Machine (RBM), is used to perform automatic hand sign language recognition from visual data. We evaluate how the RBM, as a deep generative model, is capable of generating the distribution of the input data for an enhanced recognition of unseen data. Two modalities, RGB and Depth, are considered in the model input in three forms: original image, cropped image, and noisy cropped image. Five crops of the input image are used, and the hands in these cropped images are detected using a Convolutional Neural Network (CNN). After that, three types of detected hand images are generated for each modality and input to the RBMs. The outputs of the RBMs for the two modalities are fused in another RBM in order to recognize the sign label of the input image. The proposed multi-modal model is trained on all or part of the American alphabet and digits of four publicly available datasets. We also evaluate the robustness of the proposal against noise. Experimental results show that the proposed multi-modal model, using crops and the RBM fusing methodology, achieves state-of-the-art results on the Massey University Gesture Dataset 2012, the American Sign Language (ASL) Fingerspelling Dataset from the University of Surrey’s Center for Vision, Speech and Signal Processing, the NYU dataset, and the ASL Fingerspelling A dataset.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HUPBA; no proj Approved no
Call Number Admin @ si @ RKE2018 Serial 3198
Permanent link to this record
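The fusion scheme described in the abstract above — one RBM per modality, whose hidden representations feed a third, fusing RBM — can be sketched in miniature. This is an illustrative sketch, not the authors' implementation: the tiny Bernoulli RBMs, random binary "features", layer sizes, and CD-1 training loop are all hypothetical stand-ins for the real RGB/Depth hand crops and models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyRBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        ph0 = self.hidden_probs(v0)                    # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        v1 = self.visible_probs(h0)                    # one-step reconstruction
        ph1 = self.hidden_probs(v1)                    # negative phase
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (ph0 - ph1).mean(axis=0)

# Fake binarized feature vectors standing in for RGB and Depth hand crops.
rgb = (rng.random((64, 32)) < 0.5).astype(float)
depth = (rng.random((64, 32)) < 0.5).astype(float)

rbm_rgb, rbm_depth = TinyRBM(32, 16), TinyRBM(32, 16)
for _ in range(20):
    rbm_rgb.cd1_step(rgb)
    rbm_depth.cd1_step(depth)

# Fuse the two hidden representations in a third RBM, as in the abstract.
fused_in = (np.hstack([rbm_rgb.hidden_probs(rgb),
                       rbm_depth.hidden_probs(depth)]) > 0.5).astype(float)
rbm_fuse = TinyRBM(32, 8)
for _ in range(20):
    rbm_fuse.cd1_step(fused_in)

fused = rbm_fuse.hidden_probs(fused_in)  # one fused representation per sample
print(fused.shape)
```

In the paper this fused representation carries the sign label; here it is just an 8-dimensional code per sample.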
 

 
Author Aura Hernandez-Sabate; Jose Elias Yauri; Pau Folch; Miquel Angel Piera; Debora Gil
Title Recognition of the Mental Workloads of Pilots in the Cockpit Using EEG Signals Type Journal Article
Year 2022 Publication Applied Sciences Abbreviated Journal APPLSCI
Volume 12 Issue 5 Pages 2298
Keywords Cognitive states; Mental workload; EEG analysis; Neural networks; Multimodal data fusion
Abstract The commercial flightdeck is a naturally multi-tasking work environment, one in which interruptions are frequent and come in various forms, contributing in many cases to aviation incident reports. Automatic characterization of pilots’ workload is essential to preventing these kinds of incidents. In addition, minimizing the physiological sensor network as much as possible remains both a challenge and a requirement. Electroencephalogram (EEG) signals have shown high correlations with specific cognitive and mental states, such as workload. However, there is not enough evidence in the literature to validate how well models generalize when new subjects perform tasks with workloads similar to those included during the model’s training. In this paper, we propose a convolutional neural network to classify EEG features across different mental workloads in a continuous performance task test that partly measures working memory and working memory capacity. Our model is valid at the general population level and is able to transfer task learning to pilot mental workload recognition in a simulated operational environment.
Address February 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; ADAS; 600.139; 600.145; 600.118 Approved no
Call Number Admin @ si @ HYF2022 Serial 3720
Permanent link to this record
 

 
Author Guillermo Torres; Sonia Baeza; Carles Sanchez; Ignasi Guasch; Antoni Rosell; Debora Gil
Title An Intelligent Radiomic Approach for Lung Cancer Screening Type Journal Article
Year 2022 Publication Applied Sciences Abbreviated Journal APPLSCI
Volume 12 Issue 3 Pages 1568
Keywords Lung cancer; Early diagnosis; Screening; Neural networks; Image embedding; Architecture optimization
Abstract The efficiency of lung cancer screening for reducing mortality is hindered by the high rate of false positives. Artificial intelligence applied to radiomics could help to discard benign cases early from the analysis of CT scans. The limited amount of available data and the fact that benign cases are a minority constitute the main challenges for the successful use of state-of-the-art methods (like deep learning), which can be biased, over-fitted, and lack clinical reproducibility. We present a hybrid approach combining the potential of radiomic features to characterize nodules in CT scans with the generalization capability of feed-forward networks. In order to obtain maximal reproducibility with minimal training data, we propose an embedding of nodules based on the statistical significance of radiomic features for malignancy detection. This representation space of lesions is the input to a feed-forward network, whose architecture and hyperparameters are optimized using purpose-defined metrics of the diagnostic power of the whole system. Results of the best model on an independent set of patients achieve 100% sensitivity and 83% specificity (AUC = 0.94) for malignancy detection.
Address Jan 2022
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes IAM; 600.139; 600.145 Approved no
Call Number Admin @ si @ TBS2022 Serial 3699
Permanent link to this record
 

 
Author Carles Onielfa; Carles Casacuberta; Sergio Escalera
Title Influence in Social Networks Through Visual Analysis of Image Memes Type Conference Article
Year 2022 Publication Artificial Intelligence Research and Development Abbreviated Journal
Volume 356 Issue Pages 71-80
Keywords
Abstract Memes evolve and mutate through their diffusion in social media. They have the potential to propagate ideas and, by extension, products. Many studies have focused on memes, but none so far, to our knowledge, on the users that post them, their relationships, and the reach of their influence. In this article, we define a meme influence graph together with suitable metrics to visualize and quantify influence between users who post memes, and we also describe a process to implement our definitions using a new approach to meme detection based on text-to-image area ratio and contrast. After applying our method to a set of users of the social media platform Instagram, we conclude that our metrics add information to already existing user characteristics.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes HuPBA; no menciona Approved no
Call Number Admin @ si @ OCE2022 Serial 3799
Permanent link to this record
 

 
Author Jose Elias Yauri; Aura Hernandez-Sabate; Pau Folch; Debora Gil
Title Mental Workload Detection Based on EEG Analysis Type Conference Article
Year 2021 Publication Artificial Intelligent Research and Development. Proceedings 23rd International Conference of the Catalan Association for Artificial Intelligence. Abbreviated Journal
Volume 339 Issue Pages 268-277
Keywords Cognitive states; Mental workload; EEG analysis; Neural Networks.
Abstract The study of mental workload is essential for work efficiency, health, and accident prevention, since workload compromises both performance and awareness. Although workload has been widely studied using several physiological measures, minimising the sensor network as much as possible remains both a challenge and a requirement.
Electroencephalogram (EEG) signals have shown a high correlation with specific cognitive and mental states such as workload. However, there is not enough evidence in the literature to validate how well models generalize when new subjects perform tasks with a workload similar to the ones included during the model’s training.
In this paper we propose a binary neural network to classify EEG features across different mental workloads. Two workloads, low and medium, are induced using two variants of the N-Back Test. The proposed model was validated on a dataset collected from 16 subjects and showed a high level of generalization capability: it reported an average recall of 81.81% in a leave-one-subject-out evaluation.
Address Virtual; October 20-22 2021
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes IAM; 600.139; 600.118; 600.145 Approved no
Call Number Admin @ si @ Serial 3723
Permanent link to this record
 

 
Author Md. Mostafa Kamal Sarker; Syeda Furruka Banu; Hatem A. Rashwan; Mohamed Abdel-Nasser; Vivek Kumar Singh; Sylvie Chambon; Petia Radeva; Domenec Puig
Title Food Places Classification in Egocentric Images Using Siamese Neural Networks Type Conference Article
Year 2019 Publication 22nd International Conference of the Catalan Association of Artificial Intelligence Abbreviated Journal
Volume Issue Pages 145-151
Keywords
Abstract Wearable cameras have become more popular in recent years for capturing the unscripted moments of the first person, which helps to analyze the user's lifestyle. In this work, we aim to recognize food-related places in egocentric images taken throughout the day in order to identify the daily food patterns of the first person. Such a system can help users improve their eating behavior and protect them against food-related diseases. In this paper, we use Siamese Neural Networks to learn the similarity between images from corresponding inputs for one-shot food places classification. We tested the proposed method on the ‘MiniEgoFoodPlaces’ dataset, which contains 15 food-related places. The proposed Siamese Neural Network model with MobileNet achieved an overall classification accuracy of 76.74% and 77.53% on the validation and test sets of the “MiniEgoFoodPlaces” dataset, respectively, outperforming base models such as ResNet50, InceptionV3, and InceptionResNetV2.
Address Illes Balears; October 2019
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes MILAB; no proj Approved no
Call Number Admin @ si @ SBR2019 Serial 3368
Permanent link to this record
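The one-shot similarity learning behind the abstract above rests on two ingredients: a pair of towers with shared weights and a contrastive loss on embedding distances. A minimal sketch, not the paper's MobileNet-based model — the linear-plus-tanh "tower", margin value, and synthetic "image features" are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def embed(x, W):
    # Shared-weight "tower": both inputs of a pair pass through the same
    # projection, which is the defining property of a Siamese network.
    return np.tanh(x @ W)

def contrastive_loss(e1, e2, y, margin=1.0):
    # y = 1 for same-place pairs (pull together), 0 for different-place
    # pairs (push apart until they exceed the margin).
    d = np.linalg.norm(e1 - e2, axis=1)
    return np.mean(y * d**2 + (1 - y) * np.maximum(margin - d, 0) ** 2)

W = rng.normal(0, 0.1, (128, 16))              # shared weights of the towers
a = rng.normal(size=(8, 128))                  # features of image A per pair
b = a + rng.normal(scale=0.01, size=a.shape)   # near-duplicates: "same place"
c = rng.normal(size=(8, 128))                  # unrelated: "different place"

ea, eb, ec = embed(a, W), embed(b, W), embed(c, W)
# The loss with correct pair labels should be lower than with flipped labels.
correct = contrastive_loss(ea, eb, np.ones(8)) + contrastive_loss(ea, ec, np.zeros(8))
flipped = contrastive_loss(ea, eb, np.zeros(8)) + contrastive_loss(ea, ec, np.ones(8))
print(correct, flipped)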
 

 
Author Antonio Hernandez; Nadezhda Zlateva; Alexander Marinov; Miguel Reyes; Petia Radeva; Dimo Dimov; Sergio Escalera
Title Human Limb Segmentation in Depth Maps based on Spatio-Temporal Graph Cuts Optimization Type Journal Article
Year 2012 Publication Journal of Ambient Intelligence and Smart Environments Abbreviated Journal JAISE
Volume 4 Issue 6 Pages 535-546
Keywords Multi-modal vision processing; Random Forest; Graph-cuts; multi-label segmentation; human body segmentation
Abstract We present a framework for object segmentation using depth maps, based on Random Forest and Graph-cuts theory, and apply it to the segmentation of human limbs. First, from a set of random depth features, a Random Forest is used to infer a set of label probabilities for each data sample. This vector of probabilities is used as the unary term in the α−β swap Graph-cuts algorithm. Moreover, depth values of spatio-temporal neighboring data points are used as boundary potentials. Results on a new multi-label human depth data set show that the novel methodology attains high segmentation overlap compared to classical approaches.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 1876-1364 ISBN Medium
Area Expedition Conference
Notes MILAB;HuPBA Approved no
Call Number Admin @ si @ HZM2012a Serial 2006
Permanent link to this record
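The energy described in the abstract above — Random-Forest label probabilities as the unary term plus depth-based boundary potentials as the pairwise term — can be written down on a toy grid. A minimal sketch under assumed numbers (made-up probabilities and depths); the real method minimizes this energy with α−β swap Graph-cuts rather than just evaluating fixed labelings:

```python
import numpy as np

# Toy scene: left half is limb 0, right half is limb 1, with a depth step
# between them (hypothetical values, just to exercise the energy).
labels_true = np.zeros((4, 6), dtype=int)
labels_true[:, 3:] = 1
probs = np.where(labels_true[..., None] == np.arange(2), 0.9, 0.1)  # RF stand-in
depth = labels_true * 0.5

def energy(labels, lam=1.0, sigma=0.1):
    # Unary term: -log of the Random-Forest probability of the chosen label.
    chosen = np.take_along_axis(probs, labels[..., None], axis=-1)[..., 0]
    u = -np.log(chosen).sum()
    # Pairwise boundary term: label changes are cheap across depth edges,
    # expensive between neighbors at similar depth.
    p = 0.0
    for a, b in [(np.s_[:, :-1], np.s_[:, 1:]), (np.s_[:-1, :], np.s_[1:, :])]:
        w = np.exp(-((depth[a] - depth[b]) ** 2) / (2 * sigma ** 2))
        p += (lam * w * (labels[a] != labels[b])).sum()
    return u + p

noisy = labels_true.copy()
noisy[1, 1] = 1  # flip one interior pixel against the evidence
print(energy(labels_true), energy(noisy))
```

Flipping a pixel against both the forest's evidence and its same-depth neighbors raises the energy, which is what drives the optimizer toward coherent limb regions.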
 

 
Author Alvaro Cepero; Albert Clapes; Sergio Escalera
Title Automatic non-verbal communication skills analysis: a quantitative evaluation Type Journal Article
Year 2015 Publication AI Communications Abbreviated Journal AIC
Volume 28 Issue 1 Pages 87-101
Keywords Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning
Abstract Oral communication competence is among the most relevant skills for one's professional and personal life. Because of the importance of communication in our activities of daily living, it is crucial to study methods to evaluate and provide the necessary feedback that can be used to improve these communication capabilities and, therefore, learn how to express ourselves better. In this work, we propose a system capable of quantitatively evaluating the quality of oral presentations in an automatic fashion. The system is based on a multi-modal RGB, depth, and audio data description and a fusion approach in order to recognize behavioral cues and train classifiers able to predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor's thesis defenses, presentations from 8th-semester Bachelor courses, and Master's course presentations at Universitat de Barcelona. Using as ground truth the marks assigned by actual instructors, our system achieves high performance in categorizing and ranking presentations by their quality, as well as in making real-valued mark predictions.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN 0921-7126 ISBN Medium
Area Expedition Conference
Notes HUPBA;MILAB Approved no
Call Number Admin @ si @ CCE2015 Serial 2549
Permanent link to this record
 

 
Author Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Petia Radeva; Domenec Puig
Title CuisineNet: Food Attributes Classification using Multi-scale Convolution Network Type Conference Article
Year 2018 Publication 21st International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume Issue Pages 365-372
Keywords
Abstract The diversity of food and its attributes represents the culinary habits of peoples from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the features resulting from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probabilities to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields average F1 scores of 65% and 62% on the validation and test sets, respectively, outperforming the state-of-the-art models.
Address Roses; Catalonia; October 2018
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference CCIA
Notes MILAB; no menciona Approved no
Call Number Admin @ si @ SJR2018 Serial 3113
Permanent link to this record
 

 
Author Jordina Torrents-Barrena; Aida Valls; Petia Radeva; Meritxell Arenas; Domenec Puig
Title Automatic Recognition of Molecular Subtypes of Breast Cancer in X-Ray images using Segmentation-based Fractal Texture Analysis Type Book Chapter
Year 2015 Publication Artificial Intelligence Research and Development Abbreviated Journal
Volume 277 Issue Pages 247 - 256
Keywords
Abstract Breast cancer has recently been classified into four subtypes according to the molecular properties of the affected tumor region. For each patient, an accurate diagnosis of the specific subtype is vital to decide the most appropriate therapy in order to enhance life prospects. Nowadays, advanced therapeutic diagnosis research is focused on gene selection methods, which are not robust enough. Hence, we hypothesize that computer vision algorithms can help to discriminate among the subtypes through X-Ray images. In this paper, we propose a novel approach driven by texture feature descriptors and machine learning techniques. First, we segment the tumor region through an active contour technique and then perform a complete fractal analysis to collect qualitative information about the region of interest in the feature extraction stage. Finally, several supervised and unsupervised classifiers are used to perform multiclass classification of the aforementioned data. The experimental results presented in this paper support that it is possible to establish a relation between each tumor subtype and the features extracted from the patterns revealed on mammograms.
Address
Corporate Author Thesis
Publisher IOS Press Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Frontiers in Artificial Intelligence and Applications Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN Medium
Area Expedition Conference
Notes MILAB Approved no
Call Number Admin @ si @TVR2015 Serial 2780
Permanent link to this record
 

 
Author Agata Lapedriza; David Masip; David Sanchez
Title Emotions Classification using Facial Action Units Recognition Type Conference Article
Year 2014 Publication 17th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 269 Issue Pages 55-64
Keywords
Abstract In this work we build a system for automatic emotion classification from image sequences. We analyze subtle changes in facial expressions by detecting a subset of 12 representative facial action units (AUs). Then, we classify emotions based on the output of these AU classifiers, i.e., the presence/absence of AUs. We base the AU classification on a set of spatio-temporal geometric and appearance features for facial representation, fusing them within the emotion classifier. A decision tree is trained for emotion classification, making the resulting model easy to interpret by capturing the combinations of AU activations that lead to a particular emotion. On the Cohn-Kanade database, the proposed system classifies 7 emotions with a mean accuracy of nearly 90%, attaining a recognition accuracy similar to that of non-interpretable models not based on AU detection.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-61499-451-0 Medium
Area Expedition Conference CCIA
Notes OR;MV Approved no
Call Number Admin @ si @ LMS2014 Serial 2622
Permanent link to this record
 

 
Author Maedeh Aghaei; Petia Radeva
Title Bag-of-Tracklets for Person Tracking in Life-Logging Data Type Conference Article
Year 2014 Publication 17th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 269 Issue Pages 35-44
Keywords
Abstract With the increasing popularity of wearable cameras, life-logging data analysis is becoming more important and useful for deriving significant events out of this substantial collection of images. In this study, we introduce a new tracking method applied to visual life-logging, called bag-of-tracklets, which is based on detecting, localizing, and tracking people. Given the low spatial and temporal resolution of the image data, our model generates and groups tracklets in an unsupervised framework and extracts image sequences of person appearances according to a similarity score over the bag-of-tracklets. The model output is a meaningful sequence of events expressing human appearance and tracking in life-logging data. The achieved results demonstrate the robustness of our model in terms of efficiency and accuracy despite the low spatial and temporal resolution of the data.
Address
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-61499-451-0 Medium
Area Expedition Conference CCIA
Notes MILAB Approved no
Call Number Admin @ si @ AgR2015 Serial 2607
Permanent link to this record
 

 
Author Pierluigi Casale; Oriol Pujol; Petia Radeva; Jordi Vitria
Title A First Approach to Activity Recognition Using Topic Models Type Conference Article
Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 202 Issue Pages 74 - 82
Keywords
Abstract In this work, we present a first approach to activity pattern discovery by means of topic models. Using motion data collected with a wearable device we prototyped, TheBadge, we analyse raw accelerometer data using Latent Dirichlet Allocation (LDA), a particular instantiation of topic models. Results show that, for suitable values of the parameters required to apply LDA to a continuous dataset, good accuracies in activity classification can be achieved.
Address Cardona, Spain
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-60750-061-2 Medium
Area Expedition Conference CCIA
Notes OR;MILAB;HuPBA;MV Approved no
Call Number BCNPCL @ bcnpcl @ CPR2009e Serial 1231
Permanent link to this record
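The key adaptation mentioned in the abstract above — applying LDA, a bag-of-words model, to a continuous accelerometer stream — amounts to discretizing the signal into symbolic words and grouping them into window "documents". A minimal sketch under assumed choices (synthetic data, magnitude quantiles as the vocabulary, fixed-length windows; the subsequent LDA fit itself is omitted):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fake tri-axial accelerometer stream: a high-motion segment followed by a
# low-motion one (synthetic stand-in for the TheBadge recordings).
walk = rng.normal(0, 2.0, (500, 3))
rest = rng.normal(0, 0.2, (500, 3))
signal = np.vstack([walk, rest])

# 1) Discretize the continuous signal: bin each sample's magnitude into one
#    of K symbolic "words" so LDA's bag-of-words assumption applies.
K = 8
mag = np.linalg.norm(signal, axis=1)
edges = np.quantile(mag, np.linspace(0, 1, K + 1)[1:-1])  # K-1 interior cuts
words = np.digitize(mag, edges)          # word id in 0..K-1 per sample

# 2) Group words into fixed-length windows: each window is a "document".
win = 100
docs = words[: len(words) // win * win].reshape(-1, win)

# 3) Bag-of-words count matrix, the input an LDA implementation expects.
counts = np.stack([np.bincount(d, minlength=K) for d in docs])
print(counts.shape)  # (n_documents, vocabulary_size)
```

Each row of `counts` is one pseudo-document; fitting LDA to this matrix yields per-window topic mixtures that can then be classified into activities.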
 

 
Author Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria
Title Measuring Interest of Human Dyadic Interactions Type Conference Article
Year 2009 Publication 12th International Conference of the Catalan Association for Artificial Intelligence Abbreviated Journal
Volume 202 Issue Pages 45-54
Keywords
Abstract In this paper, we argue that, using only behavioural motion information, we are able to predict the interest of observers when looking at face-to-face interactions. We propose a set of movement-related features from body, face, and mouth activity in order to define a set of higher-level interaction features, such as stress, activity, speaking engagement, and corporal engagement. An Error-Correcting Output Codes framework with an AdaBoost base classifier is used to learn to rank the perceived observer interest in face-to-face interactions. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers. In particular, the learning system shows that stress features have a high predictive power for ranking observers' interest when looking at face-to-face interactions.
Address Cardona (Spain)
Corporate Author Thesis
Publisher Place of Publication Editor
Language Summary Language Original Title
Series Editor Series Title Abbreviated Series Title
Series Volume Series Issue Edition
ISSN ISBN 978-1-60750-061-2 Medium
Area Expedition Conference CCIA
Notes OR;MILAB;HuPBA;MV Approved no
Call Number BCNPCL @ bcnpcl @ EPR2009b Serial 1182
Permanent link to this record