Author Xavier Otazu; C. Alejandro Parraga; Maria Vanrell
  Title Towards a unified chromatic induction model Type Journal Article
  Year 2010 Publication Journal of Vision Abbreviated Journal VSS  
  Volume 10 Issue 12:5 Pages 1-24  
  Keywords Visual system; Color induction; Wavelet transform  
  Abstract In a previous work (X. Otazu, M. Vanrell, & C. A. Párraga, 2008b), we showed how several brightness induction effects can be predicted using a simple multiresolution wavelet model (BIWaM). Here we present a new model for chromatic induction processes (termed Chromatic Induction Wavelet Model or CIWaM), which is also implemented on a multiresolution framework and based on similar assumptions related to the spatial frequency and the contrast surround energy of the stimulus. The CIWaM can be interpreted as a very simple extension of the BIWaM to the chromatic channels, which in our case are defined in the MacLeod-Boynton (lsY) color space. This new model allows us to unify both chromatic assimilation and chromatic contrast effects in a single mathematical formulation. The predictions of the CIWaM were tested by means of several color and brightness induction experiments, which showed an acceptable agreement between model predictions and psychophysical data.  
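To illustrate the multiresolution weighting idea behind BIWaM/CIWaM, a minimal Python sketch follows. It assumes PyWavelets, and the level-dependent weighting function alpha is a stand-in for the published psychophysical weighting, not the actual CIWaM model.

import numpy as np
import pywt

def induction_channel(channel, levels=4, wavelet='db1'):
    """Re-weight wavelet detail levels of one chromatic channel."""
    coeffs = pywt.wavedec2(channel, wavelet, level=levels)
    weighted = [coeffs[0]]  # keep the low-pass residual unchanged
    for lvl, details in enumerate(coeffs[1:], start=1):
        # Hypothetical weight: emphasize mid spatial frequencies,
        # attenuate the finest and coarsest levels.
        alpha = np.exp(-((lvl - levels / 2.0) ** 2) / 2.0)
        weighted.append(tuple(alpha * d for d in details))
    return pywt.waverec2(weighted, wavelet)

channel = np.random.rand(64, 64)          # e.g. the l channel of lsY
print(induction_channel(channel).shape)   # (64, 64)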
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes CIC Approved no  
  Call Number CAT @ cat @ OPV2010 Serial 1450  
 

 
Author Xavier Carrillo; E. Fernandez-Nofrerias; Francesco Ciompi; Oriol Rodriguez-Leor; Petia Radeva; Neus Salvatella; Oriol Pujol; J. Mauri; A. Bayes
  Title Changes in Radial Artery Volume Assessed Using Intravascular Ultrasound: A Comparison of Two Vasodilator Regimens in Transradial Coronary Intervention Type Journal Article
  Year 2011 Publication Journal of Invasive Cardiology Abbreviated Journal JOIC  
  Volume 23 Issue 10 Pages 401-404  
  Keywords radial; vasodilator treatment; percutaneous coronary intervention; IVUS; volumetric IVUS analysis  
  Abstract OBJECTIVES:
This study used intravascular ultrasound (IVUS) to evaluate radial artery volume changes after intra-arterial administration of nitroglycerin and/or verapamil.
BACKGROUND:
Radial artery spasm, which is associated with radial artery size, is the main limitation of the transradial approach in percutaneous coronary interventions (PCI).
METHODS:
This prospective, randomized study compared the effect of two intra-arterial vasodilator regimens on radial artery volume: 0.2 mg of nitroglycerin plus 2.5 mg of verapamil (Group 1; n = 15) versus 2.5 mg of verapamil alone (Group 2; n = 15). Radial artery lumen volume was assessed using IVUS at two time points: at baseline (5 minutes after sheath insertion) and post-vasodilator (1 minute after drug administration). The luminal volume of the radial artery was computed using ECOC Random Fields (ECOC-RF), a technique used for automatic segmentation of luminal borders in longitudinal cut images from IVUS sequences.
RESULTS:
There was a significant increase in arterial lumen volume in both groups, with an increase from 451 ± 177 mm³ to 508 ± 192 mm³ (p = 0.001) in Group 1 and from 456 ± 188 mm³ to 509 ± 170 mm³ (p = 0.001) in Group 2. There were no significant differences between the groups in terms of absolute volume increase (58 mm³ versus 53 mm³, respectively; p = 0.65) or in relative volume increase (14% versus 20%, respectively; p = 0.69).
CONCLUSIONS:
Administration of nitroglycerin plus verapamil or verapamil alone to the radial artery resulted in similar increases in arterial lumen volume according to ECOC-RF IVUS measurements.
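The volumetric IVUS analysis above reports lumen volumes integrated along the pullback. A generic sketch of that integration step, assuming per-frame lumen areas from some segmentation and a constant pullback speed; this is not the paper's ECOC-RF segmentation, only the volume computation it feeds.

import numpy as np

def lumen_volume(areas_mm2, pullback_mm_per_s, frame_rate_hz):
    """Sum per-frame cross-sectional areas times the inter-frame spacing."""
    spacing_mm = pullback_mm_per_s / frame_rate_hz   # distance per frame
    return float(np.sum(areas_mm2) * spacing_mm)

areas = np.full(900, 15.0)   # hypothetical: 900 frames of 15 mm^2 lumen area
print(lumen_volume(areas, pullback_mm_per_s=0.5, frame_rate_hz=30))  # 225.0 mm^3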
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes MILAB;HuPBA Approved no  
  Call Number Admin @ si @ CFC2011 Serial 1797  
 

 
Author Xavier Boix; Josep M. Gonfaus; Joost Van de Weijer; Andrew Bagdanov; Joan Serrat; Jordi Gonzalez
  Title Harmony Potentials: Fusing Global and Local Scale for Semantic Image Segmentation Type Journal Article
  Year 2012 Publication International Journal of Computer Vision Abbreviated Journal IJCV  
  Volume 96 Issue 1 Pages 83-102  
  Keywords  
  Abstract The Hierarchical Conditional Random Field (HCRF) model has been successfully applied to a number of image labeling problems, including image segmentation. However, existing HCRF models of image segmentation do not allow multiple classes to be assigned to a single region, which limits their ability to incorporate contextual information across multiple scales. At higher scales in the image, this representation yields an oversimplified model, since multiple classes can reasonably be expected to appear within large regions. This simplified model particularly limits the impact of information at higher scales. Since class-label information at these scales is usually more reliable than at lower, noisier scales, neglecting this information is undesirable. To address these issues, we propose a new consistency potential for image labeling problems, which we call the harmony potential. It can encode any possible combination of labels, penalizing only unlikely combinations of classes. We also propose an effective sampling strategy over this expanded label set that renders the underlying optimization problem tractable. Our approach obtains state-of-the-art results on two challenging, standard benchmark datasets for semantic image segmentation: PASCAL VOC 2010 and MSRC-21.
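A toy consistency cost in the spirit of the harmony potential: a region-level label set is penalized only when a local label falls outside it. The cost values are illustrative, not the published potential.

def harmony_cost(region_label_set, local_label, penalty=1.0):
    """Zero cost if the local label is allowed by the region's label set."""
    return 0.0 if local_label in region_label_set else penalty

region_labels = {"person", "bicycle"}           # multiple classes per region
print(harmony_cost(region_labels, "person"))    # 0.0, consistent
print(harmony_cost(region_labels, "sofa"))      # 1.0, unlikely combination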
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0920-5691 ISBN Medium  
  Area Expedition Conference  
  Notes ISE;CIC;ADAS Approved no  
  Call Number Admin @ si @ BGW2012 Serial 1718  
 

 
Author Xavier Baro; Sergio Escalera; Jordi Vitria; Oriol Pujol; Petia Radeva
  Title Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification Type Journal Article
  Year 2009 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume 10 Issue 1 Pages 113–126  
  Keywords  
  Abstract The high variability of sign appearance in uncontrolled environments has made the detection and classification of road signs a challenging problem in computer vision. In this paper, we introduce a novel approach for the detection and classification of traffic signs. Detection is based on a cascade of boosted detectors, trained with a novel evolutionary version of Adaboost that allows the use of large feature spaces. Classification is defined as a multiclass categorization problem. A battery of classifiers is trained to split classes in an Error-Correcting Output Code (ECOC) framework. We propose an ECOC design through a forest of optimal tree structures that are embedded in the ECOC matrix. The novel system offers high performance and better accuracy than state-of-the-art strategies, and is potentially more robust to noise, affine deformation, partial occlusions, and reduced illumination.
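As a rough illustration of the ECOC classification stage, a minimal decoding sketch: each class has a codeword, each binary classifier emits one score, and the predicted class minimizes Hamming distance. The codebook below is illustrative; the Forest-ECOC matrix design itself is not reproduced.

import numpy as np

codebook = np.array([            # rows: classes, columns: binary problems
    [+1, +1, -1, -1],
    [+1, -1, +1, -1],
    [-1, +1, +1, +1],
])

def ecoc_decode(scores, codebook):
    """Return the class whose codeword is closest to the classifier outputs."""
    distances = np.sum(codebook != np.sign(scores), axis=1)
    return int(np.argmin(distances))

print(ecoc_decode(np.array([0.8, -0.3, 0.9, -0.6]), codebook))  # class 1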
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1524-9050 ISBN Medium  
  Area Expedition Conference  
  Notes OR;MILAB;HuPBA;MV Approved no  
  Call Number BCNPCL @ bcnpcl @ BEV2008 Serial 1116  
 

 
Author Wenwen Fu; Zhihong An; Wendong Huang; Haoran Sun; Wenjuan Gong; Jordi Gonzalez
  Title A Spatio-Temporal Spotting Network with Sliding Windows for Micro-Expression Detection Type Journal Article
  Year 2023 Publication Electronics Abbreviated Journal ELEC  
  Volume 12 Issue 18 Pages 3947  
  Keywords micro-expression spotting; sliding window; key frame extraction  
  Abstract Micro-expressions reveal underlying emotions and are widely applied in political psychology, lie detection, law enforcement and medical care. Micro-expression spotting aims to detect the temporal locations of facial expressions from video sequences and is a crucial task in micro-expression recognition. In this study, the problem of micro-expression spotting is formulated as micro-expression classification per frame. We propose an effective spotting model with sliding windows called the spatio-temporal spotting network. The method involves a sliding window detection mechanism, combines the spatial features from the local key frames and the global temporal features and performs micro-expression spotting. The experiments are conducted on the CAS(ME)2 database and the SAMM Long Videos database, and the results demonstrate that the proposed method outperforms the state-of-the-art method by 30.58% for the CAS(ME)2 and 23.98% for the SAMM Long Videos according to overall F-scores.  
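A sketch of the sliding-window spotting idea, with micro-expression spotting cast as per-frame classification. The per-frame scores, window size, and threshold are illustrative stand-ins for the network's outputs and settings.

import numpy as np

def spot_intervals(frame_scores, win=5, thresh=0.5):
    """Smooth per-frame scores with a sliding window, return detected spans."""
    smooth = np.convolve(frame_scores, np.ones(win) / win, mode='same')
    active, spans, start = smooth > thresh, [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            spans.append((start, i - 1))
            start = None
    if start is not None:
        spans.append((start, len(active) - 1))
    return spans

scores = np.array([0.1] * 10 + [0.9] * 6 + [0.1] * 10)
print(spot_intervals(scores))   # one span around frames 10-15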
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ FAH2023 Serial 3864  
 

 
Author Wenlong Deng; Yongli Mou; Takahiro Kashiwa; Sergio Escalera; Kohei Nagai; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
  Title Vision based Pixel-level Bridge Structural Damage Detection Using a Link ASPP Network Type Journal Article
  Year 2020 Publication Automation in Construction Abbreviated Journal AC  
  Volume 110 Issue Pages 102973  
  Keywords Semantic image segmentation; Deep learning  
  Abstract Structural Health Monitoring (SHM) has greatly benefited from computer vision. Recently, deep learning approaches have been widely used to accurately estimate the state of deterioration of infrastructure. In this work, we focus on the problem of bridge surface structural damage detection, such as delamination and rebar exposure. It is well known that the quality of a deep learning model is highly dependent on the quality of the training dataset. Bridge damage detection, our application domain, has the following main challenges: (i) labeling the damages requires knowledgeable civil engineering professionals, which makes it difficult to collect a large annotated dataset; (ii) the damage area can be very small while the background area is large, which creates an unbalanced training environment; (iii) due to the difficulty of exactly determining the extent of the damage, there is often variation among different labelers who perform pixel-wise labeling. In this paper, we propose a novel model for bridge structural damage detection to address the first two challenges. The paper builds on the idea of an atrous spatial pyramid pooling (ASPP) module, which is designed into a novel network for bridge damage detection. Further, we introduce a weight-balanced Intersection over Union (IoU) loss function to achieve accurate segmentation on a highly unbalanced small dataset. The experimental results show that (i) the IoU loss function improves the overall performance of damage detection, as compared to cross-entropy loss or focal loss, and (ii) the proposed model has a better ability to detect a minority class than other light segmentation networks.
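A minimal soft-IoU loss sketch for unbalanced binary segmentation, assuming sigmoid probabilities p and a binary target mask t; the positive-class weighting here is illustrative, not the paper's exact formulation.

import numpy as np

def weighted_soft_iou_loss(p, t, pos_weight=10.0, eps=1e-6):
    """1 minus a weighted IoU computed on soft predictions."""
    inter = np.sum(pos_weight * p * t)
    union = np.sum(pos_weight * t) + np.sum(p) - np.sum(p * t)
    return 1.0 - (inter + eps) / (union + eps)

p = np.array([0.9, 0.8, 0.1, 0.2])   # predicted damage probabilities
t = np.array([1.0, 1.0, 0.0, 0.0])   # ground-truth damage mask
print(weighted_soft_iou_loss(p, t))  # about 0.16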
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; no proj Approved no  
  Call Number Admin @ si @ DMK2020 Serial 3314  
 

 
Author Wenjuan Gong; Zhang Yue; Wei Wang; Cheng Peng; Jordi Gonzalez
  Title Meta-MMFNet: Meta-Learning Based Multi-Model Fusion Network for Micro-Expression Recognition Type Journal Article
  Year 2022 Publication ACM Transactions on Multimedia Computing, Communications, and Applications Abbreviated Journal ACMTMC  
  Volume Issue Pages  
  Keywords Feature Fusion; Model Fusion; Meta-Learning; Micro-Expression Recognition  
  Abstract Despite its wide applications in criminal investigations and in clinical communication with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class-imbalance problem. In this study, we propose a meta-learning based multi-model fusion network (Meta-MMFNet) to address these problems. The proposed method is built on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. Frame-difference and optical-flow features are fused, deep features are extracted from the fused feature, and finally, within the meta-learning framework, a weighted-sum model-fusion method is applied for micro-expression classification. Meta-MMFNet achieves better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
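A sketch of the weighted-sum model fusion step, assuming per-model class probabilities; in Meta-MMFNet the fusion weights would come from the meta-learning framework, while here a fixed weight stands in for them.

import numpy as np

def fuse_predictions(prob_frame_diff, prob_optical_flow, w=0.6):
    """Weighted sum of two models' class-probability vectors."""
    return w * prob_frame_diff + (1.0 - w) * prob_optical_flow

p1 = np.array([0.7, 0.2, 0.1])   # model on frame-difference features
p2 = np.array([0.4, 0.5, 0.1])   # model on optical-flow features
fused = fuse_predictions(p1, p2)
print(fused, fused.argmax())     # fused distribution and predicted class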
  Address May 2022  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.157 Approved no  
  Call Number Admin @ si @ GYW2022 Serial 3692  
 

 
Author Wenjuan Gong; Yue Zhang; Wei Wang; Peng Cheng; Jordi Gonzalez
  Title Meta-MMFNet: Meta-learning-based Multi-model Fusion Network for Micro-expression Recognition Type Journal Article
  Year 2023 Publication ACM Transactions on Multimedia Computing, Communications, and Applications Abbreviated Journal TMCCA  
  Volume 20 Issue 2 Pages 1–20  
  Keywords  
  Abstract Despite its wide applications in criminal investigations and in clinical communication with patients suffering from autism, automatic micro-expression recognition remains a challenging problem because of the lack of training data and the class-imbalance problem. In this study, we propose a meta-learning-based multi-model fusion network (Meta-MMFNet) to address these problems. The proposed method is built on the metric-based meta-learning pipeline, which is specifically designed for few-shot learning and is suitable for model-level fusion. Frame-difference and optical-flow features are fused, deep features are extracted from the fused feature, and finally, within the meta-learning framework, a weighted-sum model-fusion method is applied for micro-expression classification. Meta-MMFNet achieves better results than state-of-the-art methods on four datasets. The code is available at https://github.com/wenjgong/meta-fusion-based-method.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ GZW2023 Serial 3862  
 

 
Author Wenjuan Gong; Xuena Zhang; Jordi Gonzalez; Andrews Sobral; Thierry Bouwmans; Changhe Tu; El-hadi Zahzah
  Title Human Pose Estimation from Monocular Images: A Comprehensive Survey Type Journal Article
  Year 2016 Publication Sensors Abbreviated Journal SENS  
  Volume 16 Issue 12 Pages 1966  
  Keywords human pose estimation; human body models; generative methods; discriminative methods; top-down methods; bottom-up methods
  Abstract Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out, including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey: one way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.
 
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.098; 600.119 Approved no  
  Call Number Admin @ si @ GZG2016 Serial 2933  
 

 
Author Wenjuan Gong; W. Zhang; Jordi Gonzalez; Y. Ren; Z. Li
  Title Enhanced Asymmetric Bilinear Model for Face Recognition Type Journal Article
  Year 2015 Publication International Journal of Distributed Sensor Networks Abbreviated Journal IJDSN  
  Volume Issue Pages Article ID 218514  
  Keywords  
  Abstract Bilinear models have been successfully applied to separate two factors, for example pose variation and identity, in face recognition problems. The asymmetric model is a type of bilinear model that describes a system in the most concise way. However, few works have explored the application of asymmetric bilinear models to face recognition under illumination changes. In this work, we propose an enhanced asymmetric model for illumination-robust face recognition. Instead of initializing the factor probabilities randomly, we initialize them with a nearest-neighbor method and optimize them for the test data. Beyond that, we update the factor model to be identified. We validate the proposed method on a designed data sample and on the Extended Yale B dataset. The experimental results show that the enhanced asymmetric model gives promising results and good recognition accuracy.
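A toy asymmetric bilinear model, y = A_s b_c: a style-specific matrix (e.g. one illumination condition) multiplies a content vector (one identity). The nearest-neighbor initialization and factor updates from the paper are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
styles = {s: rng.standard_normal((8, 4)) for s in ("lit", "shadow")}
identity = rng.standard_normal(4)      # content vector for one person

# The same identity rendered under two illumination "styles":
y_lit = styles["lit"] @ identity
y_shadow = styles["shadow"] @ identity
print(y_lit.shape, y_shadow.shape)     # (8,) (8,)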
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE; 600.063; 600.078 Approved no  
  Call Number Admin @ si @ GZG2015 Serial 2592  
 

 
Author Wenjuan Gong; Jordi Gonzalez; Xavier Roca
  Title Human Action Recognition based on Estimated Weak Poses Type Journal Article
  Year 2012 Publication EURASIP Journal on Advances in Signal Processing Abbreviated Journal EURASIPJ  
  Volume Issue Pages  
  Keywords  
  Abstract We present a novel method for human action recognition (HAR) based on estimated poses from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space, while still keeping the most discriminative information for a given pose. With predicted poses from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag-of-words pipeline, building the vocabulary from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion proves more efficient and robust against the inherent challenges of action recognition. Moreover, since for action recognition the ordering of the poses is discriminative, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help improve action recognition accuracy. The proposed method is scene-independent and comparable with state-of-the-art methods.
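A sketch of the Bag-of-Poses assignment step, assuming a vocabulary of representative weak poses; the action-specific vocabulary selection and the temporal grouping of consecutive poses are simplified away here.

import numpy as np

def bag_of_poses(pose_sequence, vocabulary):
    """Histogram of nearest-vocabulary-pose assignments over a sequence."""
    hist = np.zeros(len(vocabulary))
    for pose in pose_sequence:
        hist[np.argmin(np.linalg.norm(vocabulary - pose, axis=1))] += 1
    return hist / max(len(pose_sequence), 1)   # normalized histogram

vocab = np.random.rand(20, 6)     # 20 weak poses in a 6-D pose space
seq = np.random.rand(50, 6)       # 50 estimated poses from one clip
print(bag_of_poses(seq, vocab).sum())   # 1.0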
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes ISE Approved no  
  Call Number Admin @ si @ GGR2012 Serial 2003  
 

 
Author Weiqing Min; Shuqiang Jiang; Jitao Sang; Huayang Wang; Xinda Liu; Luis Herranz
  Title Being a Supercook: Joint Food Attributes and Multimodal Content Modeling for Recipe Retrieval and Exploration Type Journal Article
  Year 2017 Publication IEEE Transactions on Multimedia Abbreviated Journal TMM  
  Volume 19 Issue 5 Pages 1100 - 1113  
  Keywords  
  Abstract This paper considers the problem of recipe-oriented image-ingredient correlation learning with multiple attributes for recipe retrieval and exploration. Existing methods mainly focus on food visual information for recognition, while we model visual information, textual content (e.g., ingredients), and attributes (e.g., cuisine and course) together to solve extended recipe-oriented problems, such as multimodal cuisine classification and attribute-enhanced food image retrieval. As a solution, we propose a multimodal multitask deep belief network (M3TDBN) to learn a joint image-ingredient representation regularized by different attributes. By grouping ingredients into visible ingredients (which are visible in the food image, e.g., “chicken” and “mushroom”) and nonvisible ingredients (e.g., “salt” and “oil”), M3TDBN is capable of learning both a mid-level visual representation between images and visible ingredients and a nonvisual representation. Furthermore, in order to utilize different attributes to improve the intermodality correlation, M3TDBN incorporates multitask learning to make different attributes collaborate with each other. Based on the proposed M3TDBN, we exploit the derived deep features and the discovered correlations for three novel extended applications: 1) multimodal cuisine classification; 2) attribute-augmented cross-modal recipe image retrieval; and 3) ingredient and attribute inference from food images. The proposed approach is evaluated on the constructed Yummly dataset, and the evaluation results validate its effectiveness.
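A sketch of the cross-modal retrieval step, assuming joint image/recipe embeddings from some learned model (the paper's M3TDBN is not reproduced); cosine similarity ranks candidate recipes for a query food image.

import numpy as np

def rank_recipes(image_emb, recipe_embs):
    """Return recipe indices sorted by cosine similarity to the image."""
    img = image_emb / np.linalg.norm(image_emb)
    rec = recipe_embs / np.linalg.norm(recipe_embs, axis=1, keepdims=True)
    return np.argsort(-(rec @ img))

image = np.random.rand(32)            # hypothetical joint-space embedding
recipes = np.random.rand(100, 32)     # 100 candidate recipe embeddings
print(rank_recipes(image, recipes)[:5])   # indices of the top-5 recipes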
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes LAMP; 600.120 Approved no  
  Call Number Admin @ si @ MJS2017 Serial 2964  
 

 
Author Volkmar Frinken; Andreas Fischer; Markus Baumgartner; Horst Bunke
  Title Keyword spotting for self-training of BLSTM NN based handwriting recognition systems Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 3 Pages 1073-1082  
  Keywords Document retrieval; Keyword spotting; Handwriting recognition; Neural networks; Semi-supervised learning  
  Abstract The automatic transcription of unconstrained continuous handwritten text requires well trained recognition systems. The semi-supervised paradigm introduces the concept of using not only labeled data but also unlabeled data in the learning process. Unlabeled data can be gathered at little or no cost. Hence it has the potential to reduce the need for labeling training data, a tedious and costly process. Given a weak initial recognizer trained on labeled data, self-training can be used to recognize unlabeled data and add words that were recognized with high confidence to the training set for re-training. This process is not trivial and requires great care as far as selecting the elements that are to be added to the training set is concerned. In this paper, we propose to use a bidirectional long short-term memory neural network handwriting recognition system for keyword spotting in order to select new elements. A set of experiments shows the high potential of self-training for bootstrapping handwriting recognition systems, both for modern and historical handwriting, and demonstrates the benefits of using keyword spotting over previously published self-training schemes.
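A sketch of the self-training loop, with the BLSTM recognizer and the keyword-spotting confidence replaced by stand-ins; only words the confidence function trusts are moved from the unlabeled pool into the training set.

class ToyRecognizer:
    """Stand-in for the BLSTM NN handwriting recognizer."""
    def __init__(self):
        self.train_set = []
    def fit(self, samples):
        self.train_set = list(samples)   # "training" = remember the data
    def transcribe(self, image):
        return image.lower()             # stand-in recognition output

def spotting_confidence(image, text):
    """Stand-in for the keyword-spotting confidence score."""
    return 0.95 if text.isalpha() else 0.5

def self_train(labeled, unlabeled, thresh=0.9, rounds=3):
    rec, train, pool = ToyRecognizer(), list(labeled), list(unlabeled)
    for _ in range(rounds):
        rec.fit(train)
        picked = [(u, rec.transcribe(u)) for u in pool]
        accepted = [(u, t) for u, t in picked
                    if spotting_confidence(u, t) >= thresh]
        train += accepted                # confident words join the train set
        pool = [u for u in pool if u not in {u2 for u2, _ in accepted}]
    return rec

rec = self_train([("word_img_1", "word")], ["Ink", "Pen-2"])
print(len(rec.train_set))   # 2: one labeled word plus the confident "Ink"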
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes DAG; 600.077; 602.101 Approved no  
  Call Number Admin @ si @ FFB2014 Serial 2297  
 

 
Author Victor Ponce; Sergio Escalera; Marc Perez; Oriol Janes; Xavier Baro
  Title Non-Verbal Communication Analysis in Victim-Offender Mediations Type Journal Article
  Year 2015 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 67 Issue 1 Pages 19-27  
  Keywords Victim–Offender Mediation; Multi-modal human behavior analysis; Face and gesture recognition; Social signal processing; Computer vision; Machine learning  
  Abstract We present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. We propose the use of computer vision and social signal processing technologies in real scenarios of Victim–Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real Victim–Offender Mediation sessions in Catalonia. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86% when predicting satisfaction, and 79% when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1–5] for the computed social signals.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA;MV Approved no  
  Call Number Admin @ si @ PEP2015 Serial 2583  
 

 
Author Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera
  Title Análisis de la expresión oral y gestual en proyectos fin de carrera vía un sistema de visión artificial [Analysis of oral and gestural expression in final-year projects via a computer vision system] Type Journal Article
  Year 2011 Publication ReVisión Abbreviated Journal  
  Volume 4 Issue 1 Pages  
  Keywords  
  Abstract Oral communication and expression is a competence of particular relevance in the EHEA (European Higher Education Area). Nevertheless, in many higher-education degrees the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching-innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it using machine learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results show the viability of the system for suggesting factors that bear both on the success of the communication and on the evaluation criteria.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1989-1199 ISBN Medium  
  Area Expedition Conference  
  Notes HuPBA; MILAB;MV Approved no  
  Call Number Admin @ si @ PGB2011d Serial 2514  