Records |
Author |
Andreas Fischer; Volkmar Frinken; Horst Bunke; Ching Y. Suen |
Title |
Improving HMM-Based Keyword Spotting with Character Language Models |
Type |
Conference Article |
Year |
2013 |
Publication |
12th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
506-510 |
Keywords |
|
Abstract |
Facing high error rates and slow recognition speed for full-text transcription of unconstrained handwriting images, keyword spotting is a promising alternative for locating specific search terms within scanned document images. We have previously proposed a learning-based method for keyword spotting using character hidden Markov models that showed high performance compared with traditional template image matching. In the lexicon-free approach pursued, only the text appearance was taken into account for recognition. In this paper, we integrate character n-gram language models into the spotting system in order to provide additional language context. On the modern IAM database as well as the historical George Washington database, we demonstrate that character language models significantly improve the spotting performance. |
Address |
Washington; USA; August 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1520-5363 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG; 600.045; 605.203 |
Approved |
no |
Call Number |
Admin @ si @ FFB2013 |
Serial |
2295 |
Permanent link to this record |
|
|
|
Author |
Lluis Gomez; Dimosthenis Karatzas |
Title |
Multi-script Text Extraction from Natural Scenes |
Type |
Conference Article |
Year |
2013 |
Publication |
12th International Conference on Document Analysis and Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
467-471 |
Keywords |
|
Abstract |
Scene text extraction methodologies are usually based on the classification of individual regions or patches, using a priori knowledge of a given script or language. Human perception of text, on the other hand, is based on perceptual organisation, through which text emerges as a perceptually significant group of atomic objects. Therefore, humans are able to detect text even in languages and scripts never seen before. In this paper, we argue that the text extraction problem can be posed as the detection of meaningful groups of regions. We present a method built around a perceptual organisation framework that exploits the collaboration of proximity and similarity laws to create text-group hypotheses. Experiments demonstrate that our algorithm is competitive with state-of-the-art approaches on a standard dataset covering text in variable orientations and two languages. |
Address |
Washington; USA; August 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1520-5363 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICDAR |
Notes |
DAG; 600.056; 601.158; 601.197 |
Approved |
no |
Call Number |
Admin @ si @ GoK2013 |
Serial |
2310 |
Permanent link to this record |
|
|
|
Author |
Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Bastian Leibe |
Title |
Random Forests of Local Experts for Pedestrian Detection |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
2592-2599 |
Keywords |
ADAS; Random Forest; Pedestrian Detection |
Abstract |
Pedestrian detection is one of the most challenging tasks in computer vision and has received a lot of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so that no extra computational cost is incurred with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler, and INRIA. The proposed method consistently ranks among the top performers on all the datasets, either being the best method or differing only slightly from the best one. |
Address |
Sydney; Australia; December 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1550-5499 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ADAS; 600.057; 600.054 |
Approved |
no |
Call Number |
ADAS @ adas @ MVL2013 |
Serial |
2333 |
Permanent link to this record |
|
|
|
Author |
Gemma Roig; Xavier Boix; R. de Nijs; Sebastian Ramos; K. Kühnlenz; Luc Van Gool |
Title |
Active MAP Inference in CRFs for Efficient Semantic Segmentation |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
2312-2319 |
Keywords |
Semantic Segmentation |
Abstract |
Most MAP inference algorithms for CRFs optimize an energy function knowing all the potentials. In this paper, we focus on CRFs where the computational cost of instantiating the potentials is orders of magnitude higher than that of MAP inference. This is often the case in semantic image segmentation, where most potentials are instantiated by slow classifiers fed with costly features. We introduce Active MAP inference 1) to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest of the parameters of the potentials unknown, and 2) to estimate the MAP labeling from such an incomplete energy function. Results for semantic segmentation benchmarks, namely PASCAL VOC 2010 [5] and MSRC-21 [19], show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains. |
Address |
Sydney; Australia; December 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1550-5499 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ADAS; 600.057 |
Approved |
no |
Call Number |
ADAS @ adas @ RBN2013 |
Serial |
2377 |
Permanent link to this record |
|
|
|
Author |
Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab |
Title |
Calibration-free Gaze Estimation using Human Gaze Patterns |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
137-144 |
Keywords |
|
Abstract |
We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering the fact that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators. |
Address |
Sydney; Australia; December 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ALTRES;ISE |
Approved |
no |
Call Number |
Admin @ si @ AGV2013 |
Serial |
2365 |
Permanent link to this record |
|
|
|
Author |
Hamdi Dibeklioglu; Albert Ali Salah; Theo Gevers |
Title |
Like Father, Like Son: Facial Expression Dynamics for Kinship Verification |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1497-1504 |
Keywords |
|
Abstract |
Kinship verification from facial appearance is a difficult problem. This paper explores the possibility of employing facial expression dynamics for this problem. By using features that describe facial dynamics and spatio-temporal appearance over smile expressions, we show that it is possible to improve the state of the art in this problem, and verify that it is indeed possible to recognize kinship by the resemblance of facial expressions. The proposed method is tested on different kin relationships. On average, 72.89% verification accuracy is achieved on spontaneous smiles. |
Address |
Sydney; Australia; December 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
ALTRES;ISE |
Approved |
no |
Call Number |
Admin @ si @ DSG2013 |
Serial |
2366 |
Permanent link to this record |
|
|
|
Author |
Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny |
Title |
Handwritten Word Spotting with Corrected Attributes |
Type |
Conference Article |
Year |
2013 |
Publication |
15th IEEE International Conference on Computer Vision |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
1017-1024 |
Keywords |
|
Abstract |
We propose an approach to multi-writer word spotting, where the goal is to find a query word in a dataset comprised of document images. We propose an attributes-based approach that leads to a low-dimensional, fixed-length representation of the word images that is fast to compute and, especially, fast to compare. This approach naturally leads to a unified representation of word images and strings, which seamlessly allows one to perform both query-by-example, where the query is an image, and query-by-string, where the query is a string. We also propose a calibration scheme, based on Canonical Correlation Analysis, to correct the attribute scores, which greatly improves the results on a challenging dataset. We test our approach on two public datasets, showing state-of-the-art results. |
Address |
Sydney; Australia; December 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1550-5499 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
ICCV |
Notes |
DAG |
Approved |
no |
Call Number |
Admin @ si @ AGF2013 |
Serial |
2327 |
Permanent link to this record |
|
|
|
Author |
Jorge Bernal; F. Javier Sanchez; Fernando Vilariño |
Title |
Impact of Image Preprocessing Methods on Polyp Localization in Colonoscopy Frames |
Type |
Conference Article |
Year |
2013 |
Publication |
35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
7350-7354 |
Keywords |
|
Abstract |
In this paper we present our image preprocessing methods as a key part of our automatic polyp localization scheme. These methods are used to assess the impact of different endoluminal scene elements when characterizing polyps. More precisely, we tackle the influence of specular highlights, blood vessels, and the black mask surrounding the scene. Experimental results show that the appropriate handling of these elements leads to a great improvement in polyp localization results. |
Address |
Osaka; Japan; July 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1557-170X |
ISBN |
|
Medium |
|
Area |
800 |
Expedition |
|
Conference |
EMBC |
Notes |
MV; 600.047; 600.060;SIAI |
Approved |
no |
Call Number |
Admin @ si @ BSV2013 |
Serial |
2286 |
Permanent link to this record |
|
|
|
Author |
Andreas Møgelmose; Chris Bahnsen; Thomas B. Moeslund; Albert Clapes; Sergio Escalera |
Title |
Tri-modal Person Re-identification with RGB, Depth and Thermal Features |
Type |
Conference Article |
Year |
2013 |
Publication |
9th IEEE Workshop on Perception beyond the visible Spectrum, Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
301-307 |
Keywords |
|
Abstract |
Person re-identification is about recognizing people who have passed by a sensor earlier. Previous work is mainly based on RGB data, but in this work we present, for the first time, a system that combines RGB, depth, and thermal data for re-identification purposes. First, from each of the three modalities we obtain some particular features: from RGB data, we model color information from different regions of the body; from depth data, we compute different soft body biometrics; and from thermal data, we extract local structural information. Then, the three information types are combined in a joint classifier. The tri-modal system is evaluated on a new RGB-D-T dataset, showing successful results in re-identification scenarios. |
Address |
Portland; Oregon; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-0-7695-4990-3 |
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPRW |
Notes |
HUPBA;MILAB |
Approved |
no |
Call Number |
Admin @ si @ MBM2013 |
Serial |
2253 |
Permanent link to this record |
|
|
|
Author |
Fadi Dornaika; Bogdan Raducanu |
Title |
Out-of-Sample Embedding for Manifold Learning Applied to Face Recognition |
Type |
Conference Article |
Year |
2013 |
Publication |
IEEE International Workshop on Analysis and Modeling of Faces and Gestures |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
862-868 |
Keywords |
|
Abstract |
Manifold learning techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data---the out-of-sample problem. For the first aspect, the proposed schemes were heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects were heavily parametric in the sense that the optimal performance is only reached for a suitable parameter choice that should be known in advance. In this paper, we demonstrate that sparse coding theory not only serves for automatic graph reconstruction as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Considering for a case study the Laplacian Eigenmaps, we applied our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted using the k-nearest neighbor (KNN) and Kernel Support Vector Machines (KSVM) classifiers on four public face databases. The experimental results show that the proposed model is able to achieve high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch modes. |
Address |
Portland; USA; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPRW |
Notes |
OR; 600.046;MV |
Approved |
no |
Call Number |
Admin @ si @ DoR2013 |
Serial |
2236 |
Permanent link to this record |
|
|
|
Author |
David Vazquez; Jiaolong Xu; Sebastian Ramos; Antonio Lopez; Daniel Ponsa |
Title |
Weakly Supervised Automatic Annotation of Pedestrian Bounding Boxes |
Type |
Conference Article |
Year |
2013 |
Publication |
CVPR Workshop on Ground Truth – What is a good dataset? |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
706-711 |
Keywords |
Pedestrian Detection; Domain Adaptation |
Abstract |
Among the components of a pedestrian detector, its trained pedestrian classifier is crucial for achieving the desired performance. The initial task of the training process consists in collecting samples of pedestrians and background, which involves tiresome manual annotation of pedestrian bounding boxes (BBs). Thus, recent works have assessed the use of automatically collected samples from photo-realistic virtual worlds. However, learning from virtual-world samples and testing on real-world images may suffer from the dataset shift problem. Accordingly, in this paper we assess a strategy to collect samples from the real world and retrain with them, thus avoiding the dataset shift, but in such a way that no BBs of real-world pedestrians have to be provided. In particular, we train a pedestrian classifier based on virtual-world samples (no human annotation required). Then, using such a classifier, we collect pedestrian samples from real-world images by detection. Afterwards, a human oracle rejects the false detections efficiently (weak annotation). Finally, a new classifier is trained with the accepted detections. We show that this classifier is competitive with respect to the counterpart trained with samples collected by manually annotating hundreds of pedestrian BBs. |
Address |
Portland; Oregon; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
IEEE |
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
English |
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPRW |
Notes |
ADAS; 600.054; 600.057; 601.217 |
Approved |
no |
Call Number |
ADAS @ adas @ VXR2013a |
Serial |
2219 |
Permanent link to this record |
|
|
|
Author |
Jiaolong Xu; David Vazquez; Sebastian Ramos; Antonio Lopez; Daniel Ponsa |
Title |
Adapting a Pedestrian Detector by Boosting LDA Exemplar Classifiers |
Type |
Conference Article |
Year |
2013 |
Publication |
CVPR Workshop on Ground Truth – What is a good dataset? |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
688-693 |
Keywords |
Pedestrian Detection; Domain Adaptation |
Abstract |
Training vision-based pedestrian detectors using synthetic datasets (virtual worlds) is a useful technique to automatically collect training examples with their pixel-wise ground truth. However, as is often the case, these detectors must operate on real-world images, experiencing a significant drop in performance. In fact, this effect also occurs among different real-world datasets, i.e., detectors' accuracy drops when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, in order to avoid this problem, it is required to adapt the detector trained with synthetic data to operate in the real-world scenario. In this paper, we propose a domain adaptation approach based on boosting LDA exemplar classifiers from both the virtual and real worlds. We evaluate our proposal on multiple real-world pedestrian detection datasets. The results show that our method can efficiently adapt the exemplar classifiers from the virtual to the real world, avoiding drops in average precision of over 15%. |
Address |
Portland; Oregon; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
English |
Summary Language |
English |
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPRW |
Notes |
ADAS; 600.054; 600.057; 601.217 |
Approved |
yes |
Call Number |
XVR2013; ADAS @ adas @ xvr2013a |
Serial |
2220 |
Permanent link to this record |
|
|
|
Author |
Rahat Khan; Joost Van de Weijer; Fahad Shahbaz Khan; Damien Muselet; christophe Ducottet; Cecile Barat |
Title |
Discriminative Color Descriptors |
Type |
Conference Article |
Year |
2013 |
Publication |
IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
2866-2873 |
Keywords |
|
Abstract |
Color description is a challenging task because of large variations in RGB values which occur due to scene accidental events, such as shadows, shading, specularities, illuminant color changes, and changes in viewing geometry. Traditionally, this challenge has been addressed by capturing the variations in physics-based models, and deriving invariants for the undesired variations. The drawback of this approach is that sets of distinguishable colors in the original color space are mapped to the same value in the photometric invariant space. This results in a drop in the discriminative power of the color description. In this paper we take an information-theoretic approach to color description. We cluster color values together based on their discriminative power in a classification problem. The clustering has the explicit objective of minimizing the drop in mutual information of the final representation. We show that such a color description automatically learns a certain degree of photometric invariance. We also show that a universal color representation, which is based on data sets other than the one at hand, can obtain competitive performance. Experiments show that the proposed descriptor outperforms existing photometric invariants. Furthermore, we show that, combined with shape description, these color descriptors obtain excellent results on four challenging datasets, namely PASCAL VOC 2007, Flowers-102, Stanford Dogs-120 and Birds-200. |
Address |
Portland; Oregon; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
CIC; 600.048 |
Approved |
no |
Call Number |
Admin @ si @ KWK2013a |
Serial |
2262 |
Permanent link to this record |
|
|
|
Author |
Ivo Everts; Jan van Gemert; Theo Gevers |
Title |
Evaluation of Color STIPs for Human Action Recognition |
Type |
Conference Article |
Year |
2013 |
Publication |
IEEE Conference on Computer Vision and Pattern Recognition |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
2850-2857 |
Keywords |
|
Abstract |
This paper is concerned with recognizing realistic human actions in videos based on spatio-temporal interest points (STIPs). Existing STIP-based action recognition approaches operate on intensity representations of the image data. Because of this, these approaches are sensitive to disturbing photometric phenomena such as highlights and shadows. Moreover, valuable information is neglected by discarding chromaticity from the photometric representation. These issues are addressed by Color STIPs. Color STIPs are multi-channel reformulations of existing intensity-based STIP detectors and descriptors, for which we consider a number of chromatic representations derived from the opponent color space. This enhanced modeling of appearance improves the quality of subsequent STIP detection and description. Color STIPs are shown to substantially outperform their intensity-based counterparts on the challenging UCF Sports, UCF11 and UCF50 action recognition benchmarks. Moreover, the results show that Color STIPs are currently the single best low-level feature choice for STIP-based approaches to human action recognition. |
Address |
Portland; Oregon; June 2013 |
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
1063-6919 |
ISBN |
|
Medium |
|
Area |
|
Expedition |
|
Conference |
CVPR |
Notes |
ALTRES;ISE |
Approved |
no |
Call Number |
Admin @ si @ EGG2013 |
Serial |
2364 |
Permanent link to this record |
|
|
|
Author |
Isabel Guitart; Jordi Conesa; Luis Villarejo; Agata Lapedriza; David Masip; Antoni Perez; Elena Planas |
Title |
Opinion Mining on Educational Resources at the Open University of Catalonia |
Type |
Conference Article |
Year |
2013 |
Publication |
3rd International Workshop on Adaptive Learning via Interactive, Collaborative and Emotional approaches. In conjunction with CISIS 2013: The 7th International Conference on Complex, Intelligent, and Software Intensive Systems |
Abbreviated Journal |
|
Volume |
|
Issue |
|
Pages |
385-390 |
Keywords |
|
Abstract |
In order to make improvements to teaching, it is vital to know what students think of the way they are taught. With that purpose in mind, exhaustively analyzing the forums associated with the subjects taught at the Universitat Oberta de Catalunya (UOC) would be extremely helpful, as the university's students often post comments on their learning experiences in them. Exploiting the content of such forums is not a simple undertaking. The volume of data involved is very large, and performing the task manually would require a great deal of effort from lecturers. As a first step to solve this problem, we propose a tool to automatically analyze the posts in forums of communities of UOC students and teachers, with a view to systematically mining the opinions they contain. This article defines the architecture of such a tool and explains how lexical-semantic and language technology resources can be used to that end. For pilot testing purposes, the tool has been used to identify students' opinions on the UOC's Business Intelligence master's degree course during the last two years. The paper discusses the results of this test. The contribution of this paper is twofold. Firstly, it demonstrates the feasibility of using natural language parsing techniques to help teachers make decisions. Secondly, it introduces a simple tool that can be refined and adapted to a virtual environment for the purpose in question. |
Address |
|
Corporate Author |
|
Thesis |
|
Publisher |
|
Place of Publication |
|
Editor |
|
Language |
|
Summary Language |
|
Original Title |
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
Series Volume |
|
Series Issue |
|
Edition |
|
ISSN |
|
ISBN |
978-0-7695-4992-7 |
Medium |
|
Area |
|
Expedition |
|
Conference |
ALICE |
Notes |
OR;MV |
Approved |
no |
Call Number |
GCV2013 |
Serial |
2268 |
Permanent link to this record |