|
Fernando Vilariño, & Enric Marti. (2008). New didactic techniques in the EHES applying mobile technologies.
|
|
|
Alvaro Cepero, Albert Clapes, & Sergio Escalera. (2015). Automatic non-verbal communication skills analysis: a quantitative evaluation. AIC - AI Communications, 28(1), 87–101.
Abstract: Oral communication competence ranks among the most relevant skills for one's professional and personal life. Because of the importance of communication in our activities of daily living, it is crucial to study methods that evaluate communication and provide the feedback needed to improve these capabilities and, therefore, learn how to express ourselves better. In this work, we propose a system capable of quantitatively evaluating the quality of oral presentations in an automatic fashion. The system is based on a multi-modal RGB, depth, and audio data description and a fusion approach in order to recognize behavioral cues and train classifiers able to predict communication quality levels. The performance of the proposed system is tested on a novel dataset containing real Bachelor thesis defenses, presentations from an 8th-semester Bachelor course, and Master course presentations at Universitat de Barcelona. Using the marks assigned by actual instructors as ground truth, our system achieves high performance in categorizing and ranking presentations by their quality, and also in making real-valued mark predictions.
Keywords: Social signal processing; human behavior analysis; multi-modal data description; multi-modal data fusion; non-verbal communication analysis; e-Learning
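The multi-modal fusion approach described in this entry can be illustrated with a minimal decision-level (late) fusion sketch. The modality names, weights, and two-class probabilities below are illustrative assumptions, not the authors' actual system.

```python
# Hypothetical sketch of late (decision-level) multi-modal fusion, in the
# spirit of the RGB + depth + audio pipeline described above.

def late_fusion(per_modality_probs, weights=None):
    """Combine per-modality class probabilities by weighted averaging.

    per_modality_probs: dict mapping modality name -> list of class probs.
    weights: optional dict mapping modality name -> weight (default: uniform).
    Returns the fused probability list.
    """
    modalities = list(per_modality_probs)
    if weights is None:
        weights = {m: 1.0 for m in modalities}
    total = sum(weights[m] for m in modalities)
    n_classes = len(next(iter(per_modality_probs.values())))
    fused = [0.0] * n_classes
    for m in modalities:
        for i, p in enumerate(per_modality_probs[m]):
            fused[i] += weights[m] * p / total
    return fused

# Toy per-modality outputs for a two-class quality prediction.
probs = {
    "rgb":   [0.2, 0.8],
    "depth": [0.4, 0.6],
    "audio": [0.3, 0.7],
}
fused = late_fusion(probs)  # averages the three probability vectors
```

Weighted averaging of classifier outputs is only one common fusion strategy; feature-level (early) fusion, which the paper also compares, concatenates the per-modality descriptors before training a single classifier.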
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2007). Semantic Annotation of Complex Human Scenes for Multimedia Surveillance. In AI*IA 2007: Artificial Intelligence and Human-Oriented Computing, 10th Congress of the Italian Association for Artificial Intelligence (Vol. 4733, 698–709). LNCS.
|
|
|
Pedro Herruzo, Marc Bolaños, & Petia Radeva. (2016). Can a CNN Recognize Catalan Diet? In AIP Conference Proceedings (Vol. 1773).
Preprint: CoRR abs/1607.08811
Abstract: Nowadays, we can find several diseases related to the unhealthy diet habits of the population, such as diabetes, obesity, anemia, bulimia and anorexia. In many cases, these diseases are related to the food consumption of people. The Mediterranean diet is scientifically known as a healthy diet that helps to prevent many metabolic diseases. In particular, our work focuses on the recognition of Mediterranean food and dishes. The development of this methodology would allow analysing the daily habits of users with wearable cameras, within the topic of lifelogging. By using automatic mechanisms we could build an objective tool for the analysis of the patient's behavior, allowing specialists to discover unhealthy food patterns and understand the user's lifestyle.
With the aim of automatically recognizing a complete diet, we introduce a challenging multi-labeled dataset related to the Mediterranean diet called FoodCAT. The first type of label provided consists of 115 food classes with an average of 400 images per dish, and the second one consists of 12 food categories with an average of 3800 pictures per class. This dataset will serve as a basis for the development of automatic diet recognition. In this context, deep learning and, more specifically, Convolutional Neural Networks (CNNs) are currently the state-of-the-art methods for automatic food recognition. In our work, we compare several architectures for image classification, with the purpose of diet recognition. Applying the best model for recognising food categories, we achieve a top-1 accuracy of 72.29% and a top-5 accuracy of 97.07%. In a complete diet recognition of dishes from the Mediterranean diet, enlarged with the Food-101 dataset for international dish recognition, we achieve a top-1 accuracy of 68.07% and a top-5 accuracy of 89.53%, for a total of 115+101 food classes.
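The top-1 and top-5 accuracies reported in this entry are standard classification metrics; a minimal sketch of how they are computed is shown below. The scores and labels are toy data, not the paper's CNN outputs over the 115+101 food classes.

```python
# Minimal sketch of top-k accuracy: the fraction of samples whose true
# label appears among the k highest-scoring classes.

def topk_accuracy(scores, labels, k):
    """scores: list of per-class score lists; labels: true class indices."""
    hits = 0
    for row, label in zip(scores, labels):
        # Indices of the k largest scores, highest first.
        topk = sorted(range(len(row)), key=lambda i: row[i], reverse=True)[:k]
        hits += label in topk
    return hits / len(labels)

# Toy 3-class example with three samples.
scores = [
    [0.1, 0.7, 0.2],   # argmax is class 1
    [0.5, 0.2, 0.3],   # argmax is class 0
    [0.2, 0.3, 0.5],   # argmax is class 2
]
labels = [1, 2, 2]
top1 = topk_accuracy(scores, labels, 1)   # 2 of 3 argmax predictions correct
top2 = topk_accuracy(scores, labels, 2)   # all true labels within top 2
```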
|
|
|
Bogdan Raducanu, & Fadi Dornaika. (2008). Dynamic vs. Static Recognition of Facial Expressions. In Rabuñal (Ed.), Ambient Intelligence. European Conference (Vol. 5355, 13–25). LNCS.
|
|
|
Maurizio Mencuccini, Jordi Martinez-Vilalta, Josep Piñol, Lasse Loepfe, Mireia Burnat, Xavier Alvarez, et al. (2010). A quantitative and statistically robust method for the determination of xylem conduit spatial distribution. AJB - American Journal of Botany, 97(8), 1247–1259.
Abstract: Premise of the study: Because of their limited length, xylem conduits need to connect to each other to maintain water transport from roots to leaves. Conduit spatial distribution in a cross section plays an important role in aiding this connectivity. While indices of conduit spatial distribution already exist, they are not well defined statistically. Methods: We used point pattern analysis to derive new spatial indices. One hundred and five cross-sectional images from different species were transformed into binary images. The resulting point patterns, based on the locations of the conduit centers-of-area, were analyzed to determine whether they departed from randomness. Conduit distribution was then modeled using a spatially explicit stochastic model. Key results: The presence of conduit randomness, uniformity, or aggregation depended on the spatial scale of the analysis. The large majority of the images showed patterns significantly different from randomness at least at one spatial scale. A strong phylogenetic signal was detected in the spatial variables. Conclusions: Conduit spatial arrangement has been largely conserved during evolution, especially at small spatial scales. Species in which conduits were aggregated in clusters had a lower conduit density compared to those with uniform distribution. Statistically sound spatial indices must be employed as an aid in the characterization of distributional patterns across species and in models of xylem water transport. Point pattern analysis is a very useful tool in identifying spatial patterns.
Keywords: Geyer; hydraulic conductivity; point pattern analysis; Ripley; Spatstat; vessel clusters; xylem anatomy; xylem network
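The point pattern analysis in this entry (note the Ripley and Spatstat keywords) tests whether points depart from complete spatial randomness. A naive sketch of Ripley's K statistic is given below; edge correction is omitted for brevity, whereas real analyses (e.g. with the Spatstat package) apply it, and the toy points are illustrative.

```python
# Naive Ripley's K estimate for a 2D point pattern: the average number of
# neighbours within radius r of a point, scaled by the intensity
# (points per unit area). No edge correction is applied here.
from math import hypot

def ripley_k(points, r, area):
    n = len(points)
    intensity = n / area
    pairs_within_r = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and hypot(xi - xj, yi - yj) <= r:
                pairs_within_r += 1
    return pairs_within_r / (n * intensity)

# Under complete spatial randomness, K(r) is approximately pi * r**2;
# larger values suggest aggregation, smaller values uniformity.
pts = [(0.1, 0.1), (0.1, 0.2), (0.9, 0.9)]  # two close points, one far
k = ripley_k(pts, r=0.15, area=1.0)
```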
|
|
|
Carolina Malagelada, Michal Drozdzal, Santiago Segui, Sara Mendez, Jordi Vitria, Petia Radeva, et al. (2015). Classification of functional bowel disorders by objective physiological criteria based on endoluminal image analysis. AJPGI - American Journal of Physiology-Gastrointestinal and Liver Physiology, 309(6), G413–G419.
Abstract: We have previously developed an original method to evaluate small bowel motor function based on computer vision analysis of endoluminal images obtained by capsule endoscopy. Our aim was to demonstrate intestinal motor abnormalities in patients with functional bowel disorders by endoluminal vision analysis. Patients with functional bowel disorders (n = 205) and healthy subjects (n = 136) ingested the endoscopic capsule (Pillcam-SB2, Given-Imaging) after overnight fast and 45 min after gastric exit of the capsule a liquid meal (300 ml, 1 kcal/ml) was administered. Endoluminal image analysis was performed by computer vision and machine learning techniques to define the normal range and to identify clusters of abnormal function. After training the algorithm, we used 196 patients and 48 healthy subjects, completely naive, as test set. In the test set, 51 patients (26%) were detected outside the normal range (P < 0.001 vs. 3 healthy subjects) and clustered into hypo- and hyperdynamic subgroups compared with healthy subjects. Patients with hypodynamic behavior (n = 38) exhibited less luminal closure sequences (41 ± 2% of the recording time vs. 61 ± 2%; P < 0.001) and more static sequences (38 ± 3 vs. 20 ± 2%; P < 0.001); in contrast, patients with hyperdynamic behavior (n = 13) had an increased proportion of luminal closure sequences (73 ± 4 vs. 61 ± 2%; P = 0.029) and more high-motion sequences (3 ± 1 vs. 0.5 ± 0.1%; P < 0.001). Applying an original methodology, we have developed a novel classification of functional gut disorders based on objective, physiological criteria of small bowel function.
Keywords: capsule endoscopy; computer vision analysis; functional bowel disorders; intestinal motility; machine learning
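The "normal range" idea in this entry can be sketched as flagging patients whose motility descriptor falls outside the healthy mean ± 2 SD, then splitting the flagged cases into hypo- and hyperdynamic subgroups. The single scalar descriptor and the example values are simplifying assumptions; the study used multi-dimensional computer-vision features and machine-learning clustering.

```python
# Hedged sketch: define a normal range from healthy controls and assign
# patients to normal / hypodynamic / hyperdynamic groups.
from statistics import mean, stdev

def classify(healthy, patients, n_sd=2.0):
    mu, sd = mean(healthy), stdev(healthy)
    lo, hi = mu - n_sd * sd, mu + n_sd * sd
    groups = {"normal": [], "hypodynamic": [], "hyperdynamic": []}
    for value in patients:
        if value < lo:
            groups["hypodynamic"].append(value)
        elif value > hi:
            groups["hyperdynamic"].append(value)
        else:
            groups["normal"].append(value)
    return groups

# Toy descriptor, loosely inspired by "% of recording time with luminal
# closure sequences" from the abstract (values are invented).
healthy = [58, 60, 61, 62, 59, 60]
groups = classify(healthy, [40, 60, 75])
```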
|
|
|
Sonia Baeza, R. Domingo, M. Salcedo, G. Moragas, J. Deportos, I. Garcia Olive, et al. (2021). Artificial Intelligence to Optimize Pulmonary Embolism Diagnosis During COVID-19 Pandemic by Perfusion SPECT/CT, a Pilot Study. American Journal of Respiratory and Critical Care Medicine.
|
|
|
Henry Velesaca, Patricia Suarez, Angel Sappa, Dario Carpio, Rafael E. Rivadeneira, & Angel Sanchez. (2022). Review on Common Techniques for Urban Environment Video Analytics. In Anais do III Workshop Brasileiro de Cidades Inteligentes (pp. 107–118).
Abstract: This work compiles the different computer vision-based approaches from the state of the art intended for video analytics in urban environments. The manuscript groups the approaches according to the typical modules present in video analysis, including image preprocessing, object detection, classification, and tracking. The proposed pipeline serves as a basic guide to the most representative approaches addressed in this work. The manuscript is not intended to be an exhaustive review of the most advanced approaches, but rather a list of common techniques proposed to address recurring problems in this field.
Keywords: Video Analytics; Review; Urban Environments; Smart Cities
|
|
|
D. Seron, F. Moreso, C. Gratin, Jordi Vitria, & E. Condom. (1996). Automated classification of renal interstitium and tubules by local texture analysis and a neural network. Analytical and Quantitative Cytology and Histology, 18(5), 410–419. PMID: 8908314.
|
|
|
Petia Radeva. (2003). On the Role of Intravascular Ultrasound Image Analysis.
|
|
|
J. Suri, S. Singh, S. Laxminarayan, R. Cesar, H. Jelinek, Petia Radeva, et al. (2003). A Note on Future Research in Vascular and Plaque Segmentation.
|
|
|
Jose Luis Alba, A. Pujol, & Juan J. Villanueva. (2001). Novel SOM-PCA Network for Face Identification.
|
|
|
Xavier Baro, & Jordi Vitria. (2008). Evolutionary Object Detection by Means of Naive Bayes Models Estimation. In M. Giacobini (Ed.), Applications of Evolutionary Computing. EvoWorkshops (Vol. 4974, 235–244). LNCS.
|
|
|
Pau Rodriguez, Diego Velazquez, Guillem Cucurull, Josep M. Gonfaus, Xavier Roca, Seiichi Ozawa, et al. (2020). Personality Trait Analysis in Social Networks Based on Weakly Supervised Learning of Shared Images. APPLSCI - Applied Sciences, 10(22), 8170.
Abstract: Social networks have attracted the attention of psychologists, as the behavior of users can be used to assess personality traits, and to detect sentiments and critical mental situations such as depression or suicidal tendencies. Recently, the increasing amount of image uploads to social networks has shifted the focus from text to image-based personality assessment. However, obtaining the ground-truth requires giving personality questionnaires to the users, making the process very costly and slow, and hindering research on large populations. In this paper, we demonstrate that it is possible to predict which images are most associated with each personality trait of the OCEAN personality model, without requiring ground-truth personality labels. Namely, we present a weakly supervised framework which shows that the personality scores obtained using specific images textually associated with particular personality traits are highly correlated with scores obtained using standard text-based personality questionnaires. We trained an OCEAN trait model based on Convolutional Neural Networks (CNNs), learned from 120K pictures posted with specific textual hashtags, to infer whether the personality scores from the images uploaded by users are consistent with those scores obtained from text. In order to validate our claims, we performed a personality test on a heterogeneous group of 280 human subjects, showing that our model successfully predicts which kind of image will match a person with a given level of a trait. Looking at the results, we obtained evidence that personality is not only correlated with text, but with image content too. Interestingly, different visual patterns emerged from those images most liked by persons with a particular personality trait: for instance, pictures most associated with high conscientiousness usually contained healthy food, while low conscientiousness pictures contained injuries, guns, and alcohol. 
These findings could pave the way to complement text-based personality questionnaires with image-based questions.
Keywords: sentiment analysis; personality trait analysis; weakly-supervised learning; visual classification; OCEAN model; social networks
|
|