Records

Author: Sergio Escalera; Oriol Pujol; Petia Radeva; Jordi Vitria; Maria Teresa Anguera
Title: Automatic Detection of Dominance and Expected Interest
Type: Journal Article
Year: 2010 | Publication: EURASIP Journal on Advances in Signal Processing | Abbreviated Journal: EURASIPJ
Pages: 12 (Article ID 491819)
Abstract: Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict the interest of observers when looking at face-to-face interactions, as well as to identify the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the perceived observer interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers in both the dominance and interest detection problems.
ISSN: 1110-8657
Notes: OR; MILAB; HUPBA; MV | Approved: no
Call Number: BCNPCL @ bcnpcl @ EPR2010d | Serial: 1283

Author: Agata Lapedriza; David Masip; Jordi Vitria
Title: On the Use of External Face Features for Identity Verification
Type: Journal Article
Year: 2006 | Publication: Journal of Multimedia
Volume: 1 | Issue: 4 | Pages: 11-20
Keywords: Face Verification; Computer Vision; Machine Learning
Abstract: In general, in automatic face classification applications, images are captured in natural environments. In these cases, performance is affected by variations in the facial images related to illumination, pose, occlusion, or expression. Most existing face classification systems use only the internal feature information (eyes, nose, and mouth), since these features are more difficult to imitate. Nevertheless, many applications unrelated to security are now being developed, and in these cases the information located in the head, chin, or ear zones (external features) can be useful for improving current accuracies. However, the lack of a natural alignment in these areas makes it difficult to extract these features with classic bottom-up methods. In this paper, we propose a complete scheme based on a top-down reconstruction algorithm to extract the external features of face images. To test our system, we performed face verification experiments using public databases, given that identity verification is a general task with many real-life applications. We considered uniformly illuminated images, images with occlusions, and images with strong local changes in illumination; the results obtained show that the information contributed by the external features can be useful for verification purposes, especially when faces are partially occluded.
Notes: OR; MV | Approved: no
Call Number: BCNPCL @ bcnpcl @ LMV2006b | Serial: 708

Author: Alicia Fornes; Sergio Escalera; Josep Llados; Gemma Sanchez
Title: Symbol Recognition by Multi-class Blurred Shape Models
Type: Conference Article
Year: 2007 | Publication: Seventh IAPR International Workshop on Graphics Recognition
Pages: 11-13
Address: Curitiba (Brazil)
Conference: GREC
Notes: DAG; MILAB; HUPBA | Approved: no
Call Number: BCNPCL @ bcnpcl @ FEL2007b | Serial: 910

Author: Sergio Escalera; Oriol Pujol; Petia Radeva
Title: Recoding Error-Correcting Output Codes
Type: Conference Article
Year: 2009 | Publication: 8th International Workshop on Multiple Classifier Systems
Volume: 5519 | Pages: 11-21
Abstract: One of the most widely applied techniques for dealing with multi-class categorization problems is the pairwise voting procedure. Recently, this classical approach has been embedded in the Error-Correcting Output Codes (ECOC) framework. This framework is based on a coding step, where a set of binary problems is learned and coded in a matrix, and a decoding step, where a new sample is tested and classified by comparison with the positions of the coded matrix. In this paper, we present a novel approach to redefining, without retraining and in a problem-dependent way, the one-versus-one coding matrix so that the new coded information increases the generalization capability of the system. Moreover, the final classification can be tuned with the inclusion of a weighting matrix in the decoding step. The approach has been validated on several UCI Machine Learning Repository data sets and two real multi-class problems: traffic sign and face categorization. The results show performance improvements when comparing the new approach to one of the best ECOC designs (one-versus-one). Furthermore, the novel methodology obtains at least the same performance as the one-versus-one ECOC design.
Address: Reykjavik (Iceland)
Publisher: Springer Berlin Heidelberg
ISSN: 0302-9743 | ISBN: 978-3-642-02325-5
Conference: MCS
Notes: MILAB; HuPBA | Approved: no
Call Number: BCNPCL @ bcnpcl @ EPR2009d | Serial: 1190

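The one-versus-one coding and decoding steps summarized in this abstract can be illustrated with a toy sketch. This is not the authors' implementation: the classes, prototype values, and distance-based "classifiers" below are invented for illustration, and the decoding rule shown is plain Hamming decoding that ignores zero entries.

```python
# Hedged sketch of one-versus-one ECOC coding/decoding on a toy
# 3-class, 1-D problem. Prototypes and thresholds are assumptions.
from itertools import combinations

classes = [0, 1, 2]
# One-vs-one coding matrix: one column per class pair (i, j).
# Row c has +1 if c == i, -1 if c == j, 0 if c is not involved.
pairs = list(combinations(classes, 2))          # [(0, 1), (0, 2), (1, 2)]
M = [[+1 if c == i else -1 if c == j else 0 for (i, j) in pairs]
     for c in classes]

# Toy binary "classifiers": nearest-prototype decisions in 1-D.
prototypes = {0: 0.0, 1: 10.0, 2: 20.0}

def dichotomizer(i, j, x):
    """Return +1 if x looks like class i, -1 if it looks like class j."""
    return +1 if abs(x - prototypes[i]) < abs(x - prototypes[j]) else -1

def decode(x):
    # Code word produced by running every pairwise classifier on x.
    word = [dichotomizer(i, j, x) for (i, j) in pairs]
    # Hamming-style decoding: zero entries in the coding matrix are ignored.
    def dist(row):
        return sum(1 for r, w in zip(row, word) if r != 0 and r != w)
    return min(classes, key=lambda c: dist(M[c]))

print(decode(1.5), decode(11.0), decode(19.0))   # -> 0 1 2
```

The recoding idea of the paper would then amount to changing entries of `M` (and adding a weighting matrix in `dist`) after the classifiers are trained, which this sketch does not attempt.
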
Author: Partha Pratim Roy; Umapada Pal; Josep Llados; Mathieu Nicolas Delalandre
Title: Multi-Oriented and Multi-Sized Touching Character Segmentation using Dynamic Programming
Type: Conference Article
Year: 2009 | Publication: 10th International Conference on Document Analysis and Recognition
Pages: 11-15
Abstract: In this paper, we present a scheme for segmenting English multi-oriented touching strings into individual characters. When two or more characters touch, they generate a large cavity region in the background portion. Using convex hull information, we use this background information to find initial points for segmenting a touching string into possible primitive segments (a primitive segment consists of a single character or a part of a character). Next, these primitive segments are merged to obtain the optimal segmentation, applying dynamic programming with the total likelihood of the characters as the objective function. An SVM classifier is used to compute the likelihood of a character. To handle multi-oriented touching strings, the features used in the SVM are invariant to character orientation: a circular-ring- and convex-hull-ring-based approach is used, along with angular information of the contour pixels of the character, to make the features rotation invariant. The experiments yielded encouraging results.
Address: Barcelona, Spain
ISSN: 1520-5363 | ISBN: 978-1-4244-4500-4
Conference: ICDAR
Notes: DAG | Approved: no
Call Number: DAG @ dag @ RPL2009a | Serial: 1240

Author: Murad Al Haj; Carles Fernandez; Zhanwu Xiong; Ivan Huerta; Jordi Gonzalez; Xavier Roca
Title: Beyond the Static Camera: Issues and Trends in Active Vision
Type: Book Chapter
Year: 2011 | Publication: Visual Analysis of Humans: Looking at People
Issue: 2 | Pages: 11-30
Abstract: Maximizing both the area coverage and the resolution per target is highly desirable in many applications of computer vision. However, with a limited number of cameras viewing a scene, the two objectives are contradictory. This chapter is dedicated to active vision systems, which try to achieve a trade-off between these two aims, and examines the use of high-level reasoning in such scenarios. The chapter starts by introducing different approaches to active camera configurations. Later, a single active camera system to track a moving object is developed, offering the reader first-hand understanding of the issues involved. Another section discusses practical considerations in building an active vision platform, taking as an example a multi-camera system developed for a European project. The last section of the chapter reflects upon future trends in using semantic factors to drive smartly coordinated active systems.
Publisher: Springer London
Editor: Th.B. Moeslund; A. Hilton; V. Krüger; L. Sigal
ISBN: 978-0-85729-996-3
Notes: ISE | Approved: no
Call Number: Admin @ si @ AFX2011 | Serial: 1814

Author: Cesar Isaza; Joaquin Salas; Bogdan Raducanu
Title: Synthetic ground truth dataset to detect shadow cast by static objects in outdoor
Type: Conference Article
Year: 2012 | Publication: 1st International Workshop on Visual Interfaces for Ground Truth Collection in Computer Vision Applications
Pages: Article 11
Abstract: In this paper, we propose a precise synthetic ground truth dataset for studying the problem of detecting the shadows cast by static objects in outdoor environments over extended periods of time (days). For our dataset, we created a virtual scenario using rendering software. To increase the realism of the simulated environment, we defined the scenario at a precise geographical location. In our dataset, the sun is by far the main illumination source. The sun position during the simulation takes into consideration factors related to the geographical location, such as the latitude, longitude, elevation above sea level, and the precise day and time of image capture. In our simulation, the camera remains fixed. The dataset consists of seven days of simulation, from 10:00 am to 5:00 pm, with images captured every 10 seconds. The shadows' ground truth is computed automatically by the rendering software.
Address: Capri, Italy
Publisher: ACM
ISBN: 978-1-4503-1405-3
Conference: VIGTA
Notes: OR; MV | Approved: no
Call Number: Admin @ si @ ISR2012a | Serial: 2037

Author: Bogdan Raducanu; Alireza Bosaghzadeh; Fadi Dornaika
Title: Multi-observation Face Recognition in Videos based on Label Propagation
Type: Conference Article
Year: 2015 | Publication: 6th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2015)
Pages: 10-17
Abstract: In order to deal with the huge amount of content generated by social media, especially for indexing and retrieval purposes, the focus has shifted from single-observation object recognition to multi-observation object recognition. Of particular interest is the problem of face recognition (used as the primary cue for assessing a person's identity), since it is in high demand by popular social media search engines like Facebook and YouTube. Recently, several approaches for graph-based label propagation were proposed. However, the associated graphs were constructed in an ad hoc manner (e.g., using the kNN graph) that cannot cope properly with the rapid and frequent changes in data appearance, a phenomenon intrinsically related to video sequences. In this paper, we propose a novel approach for efficient and adaptive graph construction based on a two-phase scheme: (i) the first phase adaptively finds the neighbors of a sample, as well as the adequate weights for the minimization function of the second phase; (ii) in the second phase, the selected neighbors, along with their corresponding weights, are used to locally and collaboratively estimate the sparse affinity matrix weights. Experimental results on the Honda Video Database (HVDB) and a subset of video sequences extracted from the popular TV series 'Friends' show a distinct advantage of the proposed method over existing standard graph construction methods.
Address: Boston, USA; June 2015
Conference: CVPRW
Notes: OR; 600.068; 600.072; MV | Approved: no
Call Number: Admin @ si @ RBD2015 | Serial: 2627

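For context, the standard kNN-graph label propagation baseline that this abstract contrasts with can be sketched as a toy. This is not the paper's adaptive method: the 1-D points, the choice of k, the 0/1 affinities, and the iteration count are all illustrative assumptions.

```python
# Hedged sketch of kNN-graph label propagation on two 1-D clusters.
# All data and parameters below are invented for illustration.
points = [0.0, 0.5, 1.0, 9.0, 9.5, 10.0]   # two clusters of three points
labels = {0: 0, 3: 1}                       # one labeled sample per cluster
k = 2                                       # neighbors per node

def knn(i):
    # Indices of the k nearest other points to points[i].
    return sorted((j for j in range(len(points)) if j != i),
                  key=lambda j: abs(points[i] - points[j]))[:k]

# Symmetric 0/1 affinity matrix from the kNN relation.
n = len(points)
W = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in knn(i):
        W[i][j] = W[j][i] = 1.0

# Iterative propagation: each unlabeled node takes the degree-normalized
# average of its neighbors' class scores; labeled nodes stay clamped.
F = [[1.0 if labels.get(i) == c else 0.0 for c in (0, 1)] for i in range(n)]
for _ in range(50):
    for i in range(n):
        if i in labels:
            continue
        deg = sum(W[i])
        F[i] = [sum(W[i][j] * F[j][c] for j in range(n)) / deg
                for c in (0, 1)]

pred = [max((0, 1), key=lambda c: F[i][c]) for i in range(n)]
print(pred)   # cluster membership recovered from the two labeled samples
```

Because the kNN graph here only connects points within the same cluster, the two seed labels spread cleanly; the paper's point is that with rapidly changing video data a fixed kNN construction stops behaving this well.
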
Author: Xim Cerda-Company; Xavier Otazu; Nilai Sallent; C. Alejandro Parraga
Title: The effect of luminance differences on color assimilation
Type: Journal Article
Year: 2018 | Publication: Journal of Vision | Abbreviated Journal: JV
Volume: 18 | Issue: 11 | Pages: 10
Abstract: The color appearance of a surface depends on the color of its surroundings (inducers). When the perceived color shifts towards that of the surroundings, the effect is called "color assimilation," and when it shifts away from the surroundings it is called "color contrast." There is also evidence that the phenomenon depends on the spatial configuration of the inducer, e.g., uniform surrounds tend to induce color contrast and striped surrounds tend to induce color assimilation. However, previous work found that striped surrounds under certain conditions do not induce color assimilation but induce color contrast (or do not induce anything at all), suggesting that luminance differences and high spatial frequencies could be key factors in color assimilation. Here we present a new psychophysical study of color assimilation where we assessed the contribution of luminance differences (between the target and its surround) present in striped stimuli. Our results show that luminance differences are key factors in color assimilation for stimuli varying along the s axis of MacLeod-Boynton color space, but not for stimuli varying along the l axis. This asymmetry suggests that the koniocellular neural mechanisms responsible for color assimilation only contribute when there is a luminance difference, supporting the idea that mutual inhibition has a major role in color induction.
Notes: NEUROBIT; 600.120; 600.128 | Approved: no
Call Number: Admin @ si @ COS2018 | Serial: 3148

Author: Fernando Vilariño; Panagiota Spyridonos; Jordi Vitria; Fernando Azpiroz; Petia Radeva
Title: Cascade analysis for intestinal contraction detection
Type: Conference Article
Year: 2006 | Publication: 20th International Congress and Exhibition on Computer Assisted Radiology and Surgery
Pages: 9-10
Keywords: intestine video analysis; anisotropic features; support vector machine; cascade of classifiers
Abstract: In this work, we address the study of intestinal contractions with a novel approach based on a machine learning framework for processing data from wireless capsule video endoscopy. Wireless endoscopy represents a unique way to visualize intestinal motility, producing long videos of intestine dynamics. In this paper, we argue that, to analyze the huge amount of wireless endoscopy data and define robust methods for contraction detection, we should base our approach on sophisticated machine learning techniques. In particular, we propose a cascade of classifiers that removes different physiological phenomena in order to obtain the motility pattern of the small intestine. Our results show high specificity and sensitivity rates, which highlight the efficiency of the selected approach and support the feasibility of the proposed methodology for the automatic detection and analysis of intestinal contractions.
Address: Osaka (Japan)
Area: 800
Conference: CARS
Notes: MV; OR; MILAB; SIAI | Approved: no
Call Number: BCNPCL @ bcnpcl @ VSV2006a; IAM @ iam @ VSV2006h | Serial: 726

Author: Arnau Ramisa; Shrihari Vasudevan; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras
Title: Evaluation of the SIFT Object Recognition Method in Mobile Robots
Type: Conference Article
Year: 2009 | Publication: 12th International Conference of the Catalan Association for Artificial Intelligence
Volume: 202 | Pages: 9-18
Series Title: Frontiers in Artificial Intelligence and Applications
Abstract: General object recognition in mobile robots is of primary importance for enhancing the representation of the environment that robots use in their reasoning processes. We contribute to reducing this gap by evaluating the SIFT object recognition method on a challenging dataset, focusing on issues relevant to mobile robotics. The method was found to be resistant to the working conditions of mobile robots, but mainly for well-textured objects.
Address: Cardona, Spain
ISSN: 0922-6389 | ISBN: 978-1-60750-061-2
Conference: CCIA
Notes: ADAS | Approved: no
Call Number: Admin @ si @ RVA2009 | Serial: 1248

Author: Jorge Bernal
Title: Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps
Type: Journal Article
Year: 2014 | Publication: Electronic Letters on Computer Vision and Image Analysis | Abbreviated Journal: ELCVIA
Volume: 13 | Issue: 2 | Pages: 9-10
Keywords: Colonoscopy; polyp localization; polyp segmentation; eye tracking
Abstract: Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage at which it is detected, hence the necessity of early colon screening. There are several screening techniques, but colonoscopy is still the gold standard today, although it has some drawbacks, such as the miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing polyp localization and polyp segmentation systems based on a model of appearance for polyps, which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of the image formation, and we also consider the presence of other elements of the endoluminal scene, such as specular highlights and blood vessels, which have an impact on the performance of our methods. To develop our polyp localization method, we accumulate valley information to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. To explore the usability of our methods, we present a comparative analysis between physicians' fixations, obtained via an eye tracking device, and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is still far from expert physicians.
Editor: Alicia Fornes; Volkmar Frinken
Notes: MV | Approved: no
Call Number: Admin @ si @ Ber2014 | Serial: 2487

Author: Pedro Martins; Paulo Carvalho; Carlo Gatta
Title: On the completeness of feature-driven maximally stable extremal regions
Type: Journal Article
Year: 2016 | Publication: Pattern Recognition Letters | Abbreviated Journal: PRL
Volume: 74 | Pages: 9-16
Keywords: Local features; Completeness; Maximally Stable Extremal Regions
Abstract: By definition, local image features provide a compact representation of the image in which most of the image information is preserved. This capability offered by local features has been overlooked, despite being relevant in many application scenarios. In this paper, we analyze and discuss the performance of feature-driven Maximally Stable Extremal Regions (MSER) in terms of the coverage of informative image parts (completeness). This type of features results from an MSER extraction on saliency maps in which features related to object boundaries or even symmetry axes are highlighted. These maps are intended to be suitable domains for MSER detection, allowing this detector to provide a better coverage of informative image parts. Our experimental results, which were based on a large-scale evaluation, show that feature-driven MSER have relatively high completeness values and provide more complete sets than a traditional MSER detection, even when sets of similar cardinality are considered.
Publisher: Elsevier B.V.
ISSN: 0167-8655
Notes: LAMP; MILAB | Approved: no
Call Number: Admin @ si @ MCG2016 | Serial: 2748

Author: Michael Teutsch; Angel Sappa; Riad I. Hammoud
Title: Image and Video Enhancement
Type: Book Chapter
Year: 2022 | Publication: Computer Vision in the Infrared Spectrum (Synthesis Lectures on Computer Vision)
Pages: 9-21
Abstract: Image and video enhancement aims at improving the signal quality relative to imaging artifacts such as noise and blur, or atmospheric perturbations such as turbulence and haze. It is usually performed in order to assist humans in analyzing image and video content, or simply to present visually appealing images and videos to humans. However, image and video enhancement can also be used as a preprocessing technique to ease the task, and thus improve the performance, of subsequent automatic image content analysis algorithms: preceding dehazing can improve object detection, as shown by [23], and explicit turbulence modeling can improve moving object detection, as discussed by [24]. But it remains an open question whether image and video enhancement should be performed explicitly as a preprocessing step or implicitly, for example by feeding affected images directly to a neural network for image content analysis such as object detection [25]. Especially for real-time video processing at low latency, it can be better to handle image perturbations implicitly in order to minimize the processing time of an algorithm. This can be achieved by making algorithms for image content analysis robust, or even invariant, to perturbations such as noise or blur. Additionally, mistakes of an individual preprocessing module can obviously affect the quality of the entire processing pipeline.
Publisher: Springer
Abbreviated Series Title: SLCV
Notes: MSIAU; MACO | Approved: no
Call Number: Admin @ si @ TSH2022a | Serial: 3807

Author: Victor Ponce; Mario Gorga; Xavier Baro; Petia Radeva; Sergio Escalera
Title: Analisis de la Expresion Oral y Gestual en Proyectos Fin de Carrera Via un Sistema de Vision Artificial
Type: Miscellaneous
Year: 2011 | Publication: Revista electronica de la asociacion de enseñantes universitarios de la informatica AENUI | Abbreviated Journal: ReVision
Volume: 4 | Issue: 1 | Pages: 8-18
Abstract (translated from Spanish): Oral communication and expression is a competence of special relevance in the EHEA (European Higher Education Area). However, in many higher-education programs the practice of this competence has been relegated mainly to the presentation of final-year projects. Within a teaching innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it with machine learning techniques. The system has been applied to 15 final-year projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria.
ISSN: 1989-1199
Notes: MILAB; HuPBA; MV | Approved: no
Call Number: Admin @ si @ PGB2011c | Serial: 1783