Records | |||||
---|---|---|---|---|---|
Author | Albert Gordo; Ernest Valveny | ||||
Title | A rotation invariant page layout descriptor for document classification and retrieval | Type | Conference Article | ||
Year | 2009 | Publication | 10th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 481–485 | ||
Keywords | |||||
Abstract | Document classification usually requires structural features, such as the physical layout, to obtain good accuracy rates on complex documents. This paper introduces a descriptor of the layout and a distance measure based on cyclic dynamic time warping, which can be computed in O(n²). This descriptor is translation invariant and can easily be modified to be scale and rotation invariant. Experiments with this descriptor and its rotation-invariant modification are performed on the Girona archives database and compared against another common layout distance, the minimum weight edge cover (MWEC). The experiments show that these methods outperform the MWEC in both accuracy and speed, particularly on rotated documents. | ||||
Address | Barcelona, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-5363 | ISBN | 978-1-4244-4500-4 | Medium | |
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ GoV2009a | Serial | 1175 | ||
Permanent link to this record | |||||
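The record above is built around a cyclic dynamic time warping (DTW) distance between layout profiles. As a rough illustration of the idea only (names and the plain-Python style are our own), the sketch below computes classic DTW and then minimizes it over all cyclic rotations of one sequence. This brute-force cyclic variant costs O(n³) overall; the paper's contribution is an O(n²) computation of the cyclic distance, which this sketch does not reproduce.

```python
# Illustrative sketch of a cyclic DTW distance between two 1-D layout
# profiles. The naive rotation loop runs a full O(n^2) DTW per rotation,
# i.e. O(n^3) in total; the paper achieves O(n^2) with a smarter scheme.

def dtw(a, b):
    """Classic dynamic time warping distance between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def cyclic_dtw(a, b):
    """Minimum DTW distance over all cyclic rotations of b."""
    return min(dtw(a, b[k:] + b[:k]) for k in range(len(b)))
```

Minimizing over rotations is what makes the distance insensitive to where the circular layout profile starts, which is the source of the rotation invariance claimed in the abstract.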
Author | Marçal Rusiñol; Josep Llados | ||||
Title | Logo Spotting by a Bag-of-words Approach for Document Categorization | Type | Conference Article | ||
Year | 2009 | Publication | 10th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 111–115 | ||
Keywords | |||||
Abstract | In this paper we present a method for document categorization which processes incoming document images such as invoices or receipts. The categorization of these document images is done in terms of the presence of a certain graphical logo, detected without segmentation. The graphical logos are described by a set of local features, and the categorization of the documents is performed using a bag-of-words model. Spatial coherence rules are added to reinforce the correct category hypothesis, also aiming to spot the logo inside the document image. Experiments demonstrating the effectiveness of this system on a large set of real data are presented. | ||||
Address | Barcelona, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-5363 | ISBN | 978-1-4244-4500-4 | Medium | |
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ RuL2009b | Serial | 1179 | ||
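The logo-spotting record above rests on a bag-of-words model: local descriptors are quantized against a visual codebook and each document becomes a normalized word histogram. Below is a minimal sketch of that representation, with an assumed Euclidean nearest-codeword quantizer; the paper's spatial-coherence rules and segmentation-free spotting are not modelled here.

```python
# Minimal bag-of-words sketch: quantize each local descriptor to its
# nearest codeword and represent the document as a normalized histogram.
# The codebook and the squared-Euclidean quantizer are illustrative
# assumptions, not the paper's exact pipeline.

def nearest(codebook, desc):
    """Index of the codeword closest to desc (squared Euclidean distance)."""
    return min(range(len(codebook)),
               key=lambda i: sum((c - x) ** 2 for c, x in zip(codebook[i], desc)))

def bow_histogram(codebook, descriptors):
    """Normalized histogram of codeword assignments for one document."""
    h = [0.0] * len(codebook)
    for d in descriptors:
        h[nearest(codebook, d)] += 1.0
    total = sum(h)
    return [v / total for v in h] if total else h
```

For categorization, such histograms would then be compared against per-category models; spotting additionally requires locating the supporting features inside the image.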
Author | Sergio Escalera; Alicia Fornes; Oriol Pujol; Alberto Escudero; Petia Radeva | ||||
Title | Circular Blurred Shape Model for Symbol Spotting in Documents | Type | Conference Article | ||
Year | 2009 | Publication | 16th IEEE International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 1985–1988 | ||
Keywords | |||||
Abstract | The symbol spotting problem requires feature extraction strategies able to generalize from training samples and to localize the target object while discarding most of the image. In the case of document analysis, symbol spotting techniques have to deal with a high variability in the symbols' appearance. In this paper, we propose the Circular Blurred Shape Model descriptor. Feature extraction is performed by capturing the spatial arrangement of significant object characteristics in a correlogram structure. Shape information from objects is shared among correlogram regions, making the descriptor tolerant to irregular deformations. Descriptors are learnt using a cascade of classifiers with AdaBoost as the base classifier. Finally, symbol spotting is performed by means of a windowing strategy using the learnt cascade over plans and old musical score documents. Spotting and multi-class categorization results show better performance compared with state-of-the-art descriptors. | ||||
Address | Cairo, Egypt | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4244-5653-6 | Medium | ||
Area | Expedition | Conference | ICIP | ||
Notes | MILAB;HuPBA;DAG | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ EFP2009b | Serial | 1184 | ||
Author | Sergio Escalera; Eloi Puertas; Petia Radeva; Oriol Pujol | ||||
Title | Multimodal laughter recognition in video conversations | Type | Conference Article | ||
Year | 2009 | Publication | 2nd IEEE Workshop on CVPR for Human communicative Behavior analysis | Abbreviated Journal | |
Volume | Issue | Pages | 110–115 | ||
Keywords | |||||
Abstract | Laughter detection is an important area of interest in the Affective Computing and Human-Computer Interaction fields. In this paper, we propose a multi-modal methodology based on the fusion of audio and visual cues to deal with the laughter recognition problem in face-to-face conversations. The audio features are extracted from the spectrogram, and the video features are obtained by estimating the degree of mouth movement and using a smile and laughter classifier. Finally, the multi-modal cues are combined in a sequential classifier. Results on videos from the public discussion blog of the New York Times show that both types of features perform better when considered together by the classifier. Moreover, the sequential methodology is shown to significantly outperform the results obtained by an AdaBoost classifier. | ||||
Address | Miami, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2160-7508 | ISBN | 978-1-4244-3994-2 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ EPR2009c | Serial | 1188 | ||
Author | Xavier Baro; Sergio Escalera; Petia Radeva; Jordi Vitria | ||||
Title | Visual Content Layer for Scalable Recognition in Urban Image Databases, Internet Multimedia Search and Mining | Type | Conference Article | ||
Year | 2009 | Publication | 10th IEEE International Conference on Multimedia and Expo | Abbreviated Journal | |
Volume | Issue | Pages | 1616–1619 | ||
Keywords | |||||
Abstract | Rich online map interaction is a useful tool for obtaining multimedia information related to physical places. With this type of system, users can automatically compute the optimal route for a trip or look for entertainment venues or hotels near their current position. Standard maps are defined as a fusion of layers, where each layer contains specific data such as height, streets, or a particular business location. In this paper we propose the construction of a visual content layer which describes the visual appearance of geographic locations in a city. By means of a Mobile Mapping system, we captured a huge set of georeferenced images (> 500K) covering the whole city of Barcelona. For each image, hundreds of region descriptions are computed off-line and described as a hash code. This allows an efficient and scalable way of accessing maps by visual content. | ||||
Address | New York, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4244-4291-1 | Medium | ||
Area | Expedition | Conference | ICME | ||
Notes | OR;MILAB;HuPBA;MV | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ BER2009 | Serial | 1189 | ||
Author | Fahad Shahbaz Khan; Joost Van de Weijer; Maria Vanrell | ||||
Title | Top-Down Color Attention for Object Recognition | Type | Conference Article | ||
Year | 2009 | Publication | 12th International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 979–986 | ||
Keywords | |||||
Abstract | Generally, the bag-of-words image representation follows a bottom-up paradigm: the subsequent stages of the process (feature detection, feature description, vocabulary construction and image representation) are performed independently of the object classes to be detected. In such a framework, combining multiple cues such as shape and color often yields below-expected results. This paper presents a novel method for recognizing object categories from multiple cues by separating the shape and color cues. Color is used to guide attention by means of a top-down, category-specific attention map. The color attention map is then deployed to modulate the shape features, taking more features from image regions that are likely to contain an object instance. This procedure leads to a category-specific image histogram representation for each category. Furthermore, we argue that the method combines the advantages of both early and late fusion. We compare our approach with existing methods that combine color and shape cues on three data sets in which the two cues have varied importance, namely Soccer (color predominance), Flower (color and shape parity), and the PASCAL VOC Challenge 2007 (shape predominance). The experiments clearly demonstrate that on all three data sets our proposed framework significantly outperforms the state-of-the-art methods for combining color and shape information. | ||||
Address | Kyoto, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1550-5499 | ISBN | 978-1-4244-4420-5 | Medium | |
Area | Expedition | Conference | ICCV | ||
Notes | CIC | Approved | no | ||
Call Number | CAT @ cat @ SWV2009 | Serial | 1196 | ||
Author | Arjan Gijsenij; Theo Gevers; Joost Van de Weijer | ||||
Title | Physics-based Edge Evaluation for Improved Color Constancy | Type | Conference Article | ||
Year | 2009 | Publication | 22nd IEEE Conference on Computer Vision and Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 581–588 | ||
Keywords | |||||
Abstract | Edge-based color constancy makes use of image derivatives to estimate the illuminant. However, different edge types exist in real-world images such as shadow, geometry, material and highlight edges. These different edge types may have a distinctive influence on the performance of the illuminant estimation. | ||||
Address | Miami, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1063-6919 | ISBN | 978-1-4244-3992-8 | Medium | |
Area | Expedition | Conference | CVPR | ||
Notes | CAT;ISE | Approved | no | ||
Call Number | CAT @ cat @ GGW2009 | Serial | 1197 | ||
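The color-constancy record above concerns edge-based illuminant estimation, where the light-source colour is estimated from image derivatives and different edge types (shadow, material, highlight) contribute differently. The heavily simplified pure-Python sketch below assumes an image given as rows of (r, g, b) tuples and uses only an L1 norm over horizontal derivatives; the general edge-based framework uses Minkowski norms over smoothed derivatives, which this sketch does not attempt.

```python
# Simplified edge-based illuminant estimate: accumulate the absolute
# horizontal derivative per colour channel and normalize to unit length.
# The L1 norm and horizontal-only derivatives are illustrative
# assumptions, not the full Minkowski-norm framework.

def grey_edge_illuminant(image):
    """Estimate a unit-length illuminant colour from channel derivatives."""
    sums = [0.0, 0.0, 0.0]
    for row in image:
        for left, right in zip(row, row[1:]):
            for c in range(3):
                sums[c] += abs(right[c] - left[c])
    norm = sum(s * s for s in sums) ** 0.5
    if norm == 0.0:  # flat image: fall back to a neutral estimate
        return [1.0 / 3 ** 0.5] * 3
    return [s / norm for s in sums]
```

Once an illuminant estimate is available, colour constancy is achieved by dividing each channel by the corresponding estimate component (a von Kries-style correction).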
Author | Jose Manuel Alvarez; Ferran Diego; Joan Serrat; Antonio Lopez | ||||
Title | Automatic Ground-truthing using video registration for on-board detection algorithms | Type | Conference Article | ||
Year | 2009 | Publication | 16th IEEE International Conference on Image Processing | Abbreviated Journal | |
Volume | Issue | Pages | 4389–4392 | ||
Keywords | |||||
Abstract | Ground-truth data is essential for the objective evaluation of object detection methods in computer vision. Many works claim their method is robust but support this with experiments that are not quantitatively assessed against any ground-truth; this is one of the main obstacles to properly evaluating and comparing such methods. One of the main reasons is that creating an extensive and representative ground-truth is very time consuming, especially in the case of video sequences, where thousands of frames have to be labelled. Could such a ground-truth be generated, at least in part, automatically? Though it may seem a contradictory question, we show that this is possible for video sequences recorded from a moving camera. The key idea is to transfer existing frame segmentations from a reference sequence to another video sequence recorded at a different time on the same track, possibly under different ambient lighting. We have carried out experiments on several video sequence pairs and quantitatively assessed the precision of the transferred ground-truth, which proves that our approach is not only feasible but also quite accurate. | ||||
Address | Cairo, Egypt | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1522-4880 | ISBN | 978-1-4244-5653-6 | Medium | |
Area | Expedition | Conference | ICIP | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ ADS2009 | Serial | 1201 | ||
Author | Enric Marti; Jaume Rocarias; Ricardo Toledo; Aura Hernandez-Sabate | ||||
Title | Caronte: plataforma Moodle con gestion flexible de grupos. Primeras experiencias en asignaturas de Ingenieria Informatica | Type | Miscellaneous | ||
Year | 2009 | Publication | 15th Jornadas de Enseñanza Universitaria de la Informatica | Abbreviated Journal | |
Volume | Issue | Pages | 461–468 | ||
Keywords | |||||
Abstract | This paper presents Caronte, an LMS (Learning Management System) environment based on Moodle. An important feature of the environment is its flexible management of groups within a course. By group we mean a set of students who carry out a piece of work together, with one of them submitting the proposed activity (practical assignment, survey, etc.) on behalf of the group. We have worked on the formation of these groups, implementing a password-based enrolment system.
Caronte offers a set of activities built on this notion of group: surveys, tasks (submission of coursework or practical assignments), self-assessment surveys and questionnaires, among others. Building on our survey activity, we have defined a Control activity, which gives the lecturer a degree of electronic feedback on the students' work. Finally, we summarize our experience using Caronte in Computer Engineering courses during the 2007-08 academic year. |
||||
Address | Barcelona, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-692-2758-9 | Medium | ||
Area | Expedition | Conference | JENUI | ||
Notes | IAM;RV;ADAS | Approved | no | ||
Call Number | IAM @ iam @ MRT2009 | Serial | 1202 | ||
Author | Nicola Bellotto; Eric Sommerlade; Ben Benfold; Charles Bibby; I. Reid; Daniel Roth; Luc Van Gool; Carles Fernandez; Jordi Gonzalez | ||||
Title | A Distributed Camera System for Multi-Resolution Surveillance | Type | Conference Article | ||
Year | 2009 | Publication | 3rd ACM/IEEE International Conference on Distributed Smart Cameras | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | 10.1109/ICDSC.2009.5289413 | ||||
Abstract | We describe an architecture for a multi-camera, multi-resolution surveillance system. The aim is to support a set of distributed static and pan-tilt-zoom (PTZ) cameras and visual tracking algorithms, together with a central supervisor unit. Each camera (and possibly pan-tilt device) has a dedicated process and processor. Asynchronous interprocess communication and archiving of data are achieved in a simple and effective way via a central repository, implemented as an SQL database. Visual tracking data from static views are stored dynamically into tables in the database via client calls to the SQL server. A supervisor process running on the SQL server determines whether active zoom cameras should be dispatched to observe a particular target, and this message is effected by writing demands into another database table. We show results from a real implementation of the system, comprising one static camera overlooking the environment under consideration and a PTZ camera operating under closed-loop velocity control, which uses a fast and robust level-set-based region tracker. Experiments demonstrate the effectiveness of our approach and its applicability to multi-camera systems for intelligent surveillance. | ||||
Address | Como, Italy | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICDSC | ||
Notes | Approved | no | |||
Call Number | ISE @ ise @ BSB2009 | Serial | 1205 | ||
Author | Mikhail Mozerov; Ariel Amato; Xavier Roca | ||||
Title | Occlusion Handling in Trinocular Stereo using Composite Disparity Space Image | Type | Conference Article | ||
Year | 2009 | Publication | 19th International Conference on Computer Graphics and Vision | Abbreviated Journal | |
Volume | Issue | Pages | 69–73 | ||
Keywords | |||||
Abstract | In this paper we propose a method that improves occlusion handling in stereo matching using trinocular stereo. The main idea is based on the assumption that a region occluded in one matched stereo pair (middle-left images) is, in general, not occluded in the opposite matched pair (middle-right images). The two disparity space images (DSI) can then be merged into one composite DSI. The proposed integration differs from the known approach that uses a cumulative cost. A dense disparity map is obtained with a global optimization algorithm using the proposed composite DSI. The experimental results are evaluated on the Middlebury data set, showing the high performance of the proposed algorithm, especially in occluded regions. One of the top positions in the ranking on the Middlebury website confirms that the performance of our method is competitive with the best stereo matching algorithms. | ||||
Address | Moscow, Russia | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-5-317-02975-3 | Medium | ||
Area | Expedition | Conference | GRAPHICON | ||
Notes | ISE | Approved | no | ||
Call Number | ISE @ ise @ MAR2009b | Serial | 1207 | ||
Author | Pau Baiget | ||||
Title | Modeling Human Behavior for Image Sequence Understanding and Generation | Type | Book Whole | ||
Year | 2009 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The comprehension of animal behavior, and especially human behavior, is one of the oldest and most studied problems of civilization. The large number of factors that interact to determine a person's actions requires the collaboration of different disciplines, such as psychology, biology, and sociology. In recent years the analysis of human behavior has also received great attention from the computer vision community, given the latest advances in the acquisition of human motion data from image sequences.
Despite the increasing availability of such data, there still exists a gap towards obtaining a conceptual representation of the observations. Human behavior analysis is based on a qualitative interpretation of the results, so the assignment of concepts to quantitative data carries a certain ambiguity. This Thesis tackles the problem of obtaining a proper representation of human behavior in the contexts of computer vision and animation. On the one hand, a good behavior model should permit the recognition and explanation of the activity observed in image sequences. On the other hand, such a model must allow the generation of new synthetic instances, which model the behavior of virtual agents. First, we propose methods to automatically learn the models from observations. Given a set of quantitative results output by a vision system, a model of normal behavior is learnt; this provides a tool to determine the normality or abnormality of future observations. However, machine learning methods are unable to provide a richer description of the observations. We confront this problem by means of a new method that incorporates prior knowledge about the environment and about the expected behaviors. This framework, formed by the reasoning engine FMTL and the modeling tool SGT, allows the generation of conceptual descriptions of activity in new image sequences. Finally, we demonstrate the suitability of the proposed framework for simulating the behavior of virtual agents, which are introduced into real image sequences and interact with observed real agents, thereby easing the generation of augmented reality sequences. The set of approaches presented in this Thesis has a growing set of potential applications. The analysis and description of behavior in image sequences has its principal application in the domain of smart video surveillance, in order to detect suspicious or dangerous behaviors.
Other applications include automatic sport commentaries, elderly monitoring, road traffic analysis, and the development of semantic video search engines. Alternatively, behavioral virtual agents make it possible to simulate real situations accurately, such as fires or crowds. Moreover, the inclusion of virtual agents into real image sequences has been widely used in the games and cinema industries. |
||||
Address | Bellaterra, Spain | ||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Jordi Gonzalez;Xavier Roca | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ Bai2009 | Serial | 1210 | ||
Author | Ivan Huerta; Michael Holte; Thomas B. Moeslund; Jordi Gonzalez | ||||
Title | Detection and Removal of Chromatic Moving Shadows in Surveillance Scenarios | Type | Conference Article | ||
Year | 2009 | Publication | 12th International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 1499–1506 | ||
Keywords | |||||
Abstract | Segmentation in the surveillance domain has to deal with shadows to avoid distortions when detecting moving objects. Most segmentation approaches dealing with shadow detection are typically restricted to penumbra shadows; such techniques cannot cope well with umbra shadows, which are consequently detected as part of moving objects. In this paper we present a novel technique based on gradient and colour models for separating chromatic moving cast shadows from detected moving objects. Firstly, both a chromatic invariant colour cone model and an invariant gradient model are built to perform automatic segmentation while detecting potential shadows. In a second step, regions corresponding to potential shadows are grouped by considering a “bluish effect” and an edge partitioning. Lastly, (i) temporal similarities between textures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to finally identify umbra shadows. Unlike other approaches, our method does not make any a priori assumptions about camera location, surface geometries, surface textures, shapes and types of shadows, objects, or background. Experimental results show the performance and accuracy of our approach on different shadowed materials and under different illumination conditions. | ||||
Address | Kyoto, Japan | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1550-5499 | ISBN | 978-1-4244-4420-5 | Medium | |
Area | Expedition | Conference | ICCV | ||
Notes | Approved | no | |||
Call Number | ISE @ ise @ HHM2009 | Serial | 1213 | ||
Author | D. Jayagopi; Bogdan Raducanu; D. Gatica-Perez | ||||
Title | Characterizing conversational group dynamics using nonverbal behaviour | Type | Conference Article | ||
Year | 2009 | Publication | 10th IEEE International Conference on Multimedia and Expo | Abbreviated Journal | |
Volume | Issue | Pages | 370–373 | ||
Keywords | |||||
Abstract | This paper addresses the novel problem of characterizing conversational group dynamics. It is well documented in social psychology that the dynamics of a group differ depending on its objectives; for example, a competitive meeting has a different objective from a collaborative one. We propose a method to characterize group dynamics based on a joint description of the group members' aggregated acoustic nonverbal behaviour, and use it to classify two meeting datasets (one cooperative and the other competitive). We use 4.5 hours of real behavioural multi-party data and show that our methodology can achieve a classification rate of up to 100%. | ||||
Address | New York, USA | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1945-7871 | ISBN | 978-1-4244-4290-4 | Medium | |
Area | Expedition | Conference | ICME | ||
Notes | OR;MV | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ JRG2009 | Serial | 1217 | ||
Author | Ricard Coll; Alicia Fornes; Josep Llados | ||||
Title | Graphological Analysis of Handwritten Text Documents for Human Resources Recruitment | Type | Conference Article | ||
Year | 2009 | Publication | 10th International Conference on Document Analysis and Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1081–1085 | ||
Keywords | |||||
Abstract | The use of graphology in recruitment processes has become a popular tool in many human resources companies. This paper presents a model that links features from handwritten images to a number of personality characteristics used to measure an applicant's aptitude for the job in a particular hiring scenario. In particular, we propose a model for measuring the active personality and leadership of the writer. The graphological features that define such a profile are measured in terms of document and script attributes such as layout configuration, letter size, shape, slant, and the skew angle of lines. After extraction, the data are classified using a neural network. An experimental framework with real samples has been constructed to illustrate the performance of the approach. | ||||
Address | Barcelona, Spain | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-5363 | ISBN | 978-1-4244-4500-4 | Medium | |
Area | Expedition | Conference | ICDAR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ CFL2009 | Serial | 1221 | ||