D. Perez, L. Tarazon, N. Serrano, F.M. Castro, Oriol Ramos Terrades, & A. Juan. (2009). The GERMANA Database. In 10th International Conference on Document Analysis and Recognition (pp. 301–305).
Abstract: A new handwritten text database, GERMANA, is presented to facilitate empirical comparison of different approaches to text line extraction and off-line handwriting recognition. GERMANA is the result of digitising and annotating a 764-page Spanish manuscript from 1891, in which most pages contain only nearly calligraphed text written on ruled sheets with well-separated lines. To our knowledge, it is the first publicly available database for handwriting research that is mostly written in Spanish and comparable in size to standard databases. Due to its sequential book structure, it is also well suited for realistic assessment of interactive handwriting recognition systems. To provide baseline results for reference in future studies, empirical results are also reported, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling.
L. Tarazon, D. Perez, N. Serrano, V. Alabau, Oriol Ramos Terrades, A. Sanchis, et al. (2009). Confidence Measures for Error Correction in Interactive Transcription of Handwritten Text. In 15th International Conference on Image Analysis and Processing (Vol. 5716, pp. 567–574). LNCS. Springer Berlin Heidelberg.
Abstract: An effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the human supervisor and, in turn, the supervisor is assisted by the system, so that the transcription task is completed as efficiently as possible. In this paper, we focus on a particular system prototype called GIDOC, which can be seen as a first attempt to provide user-friendly, integrated support for interactive-predictive page layout analysis, text line detection and handwritten text transcription. More specifically, we focus on the handwriting recognition part of GIDOC, for which we propose the use of confidence measures to guide the human supervisor in locating possible system errors and deciding how to proceed. Empirical results are reported on two datasets, showing that a word error rate no larger than 10% can be achieved by checking only the 32% of words that are recognised with the least confidence.
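The confidence-guided supervision strategy this abstract describes can be sketched in a few lines. This is an illustration only, not GIDOC code; the word list and confidence scores below are invented.

```python
# Illustrative sketch: given recognised words with posterior confidence
# scores, flag the least-confident fraction for human supervision, as in
# the strategy the paper evaluates.

def words_to_supervise(words, confidences, fraction):
    """Return indices of the lowest-confidence words, up to `fraction` of all words."""
    n_check = int(round(fraction * len(words)))
    # Sort word indices by ascending confidence and take the first n_check.
    order = sorted(range(len(words)), key=lambda i: confidences[i])
    return sorted(order[:n_check])

# Hypothetical recogniser output: words with posterior confidences.
words = ["the", "oldd", "manuscript", "was", "transcribde", "here"]
conf  = [0.97, 0.41, 0.93, 0.95, 0.35, 0.88]

print(words_to_supervise(words, conf, 0.32))  # → [1, 4], the two least-confident words
```

With the 32% fraction reported in the abstract, only the least-confident words are routed to the human supervisor, while the rest are accepted as recognised.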
H. Chouaib, Oriol Ramos Terrades, Salvatore Tabbone, F. Cloppet, & N. Vincent. (2008). Feature Selection Combining Genetic Algorithm and Adaboost Classifiers. In 19th International Conference on Pattern Recognition (pp. 1–4).
T.O. Nguyen, Salvatore Tabbone, & Oriol Ramos Terrades. (2008). Symbol Descriptor Based on Shape Context and Vector Model of Information Retrieval. In Proceedings of the 8th IAPR International Workshop on Document Analysis Systems (pp. 191–197).
H. Chouaib, Salvatore Tabbone, Oriol Ramos Terrades, F. Cloppet, N. Vincent, & A.T. Thierry Paquet. (2008). Sélection de Caractéristiques à partir d'un algorithme génétique et d'une combinaison de classifieurs Adaboost. In Colloque International Francophone sur l'Ecrit et le Document (pp. 181–186).
T.O. Nguyen, Salvatore Tabbone, Oriol Ramos Terrades, & A.T. Thierry. (2008). Proposition d'un descripteur de formes et du modèle vectoriel pour la recherche de symboles. In Colloque International Francophone sur l'Ecrit et le Document (pp. 79–84).
Salvatore Tabbone, Oriol Ramos Terrades, & S. Barrat. (2008). Histogram of Radon Transform: a useful descriptor for shape retrieval. In 19th International Conference on Pattern Recognition (pp. 1–4).
Jose Carlos Rubio, Joan Serrat, Antonio Lopez, & Daniel Ponsa. (2012). Multiple target tracking for intelligent headlights control. TITS - IEEE Transactions on Intelligent Transportation Systems, 13(2), 594–605.
Abstract: Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software that is able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we have devised a set of specialized supervised classifiers to make such decisions based on blob features related to its intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved, notably faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, reliable classification decisions can only be taken after observing them for a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decision. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: We have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as a maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, we will also see that the classification performance of the problematic blobs improves due to the proposed MTT algorithm.
Keywords: Intelligent Headlights
Yainuvis Socarras, Sebastian Ramos, David Vazquez, Antonio Lopez, & Theo Gevers. (2013). Adapting Pedestrian Detection from Synthetic to Far Infrared Images. In ICCV Workshop on Visual Domain Adaptation and Dataset Bias. Sydney, Australia.
Abstract: We present different techniques to adapt a pedestrian classifier trained with synthetic images and the corresponding automatically generated annotations to operate with far infrared (FIR) images. The information contained in this kind of image allows us to develop a robust pedestrian detector invariant to extreme illumination changes.
Keywords: Domain Adaptation; Far Infrared; Pedestrian Detection
V.C. Kieu, Alicia Fornes, M. Visani, N. Journet, & Anjan Dutta. (2013). The ICDAR/GREC 2013 Music Scores Competition on Staff Removal. In 10th IAPR International Workshop on Graphics Recognition.
Abstract: The first competition on music scores, organized at ICDAR and GREC in 2011, aroused the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we propose a staff removal competition in which we simulate old music scores. Thus, we have created a new set of images, which contain noise and 3D distortions. This paper describes the distortion methods, the evaluation metrics, the participants' methods and the obtained results.
Keywords: Competition; Music scores; Staff Removal
M. Visani, V.C. Kieu, Alicia Fornes, & N. Journet. (2013). The ICDAR 2013 Music Scores Competition: Staff Removal. In 12th International Conference on Document Analysis and Recognition (pp. 1439–1443).
Abstract: The first competition on music scores, organized at ICDAR in 2011, aroused the interest of researchers, who participated in both the staff removal and writer identification tasks. In this second edition, we focus on the staff removal task and simulate a real-case scenario: old music scores. For this purpose, we have generated a new set of images using two kinds of degradations: local noise and 3D distortions. This paper describes the dataset, the distortion methods, the evaluation metrics, the participants' methods and the obtained results.
Sergio Escalera, Ana Puig, Oscar Amoros, & Maria Salamo. (2011). Intelligent GPGPU Classification in Volume Visualization: a framework based on Error-Correcting Output Codes. CGF - Computer Graphics Forum, 30(7), 2107–2115.
Abstract: In volume visualization, the definition of the regions of interest is inherently an iterative trial-and-error process of finding the best parameters to classify and render the final image. Generally, the user requires a lot of expertise to analyze and edit these parameters through multi-dimensional transfer functions. In this paper, we present a framework of intelligent methods to label on-demand multiple regions of interest. These methods can be split into a two-level GPU-based labelling algorithm that computes, at rendering time, a set of labelled structures using the machine learning Error-Correcting Output Codes (ECOC) framework. In a pre-processing step, ECOC trains a set of Adaboost binary classifiers from a reduced pre-labelled data set. Then, at the testing stage, each classifier is independently applied to the features of a set of unlabelled samples and combined to perform multi-class labelling. We also propose an alternative representation of these classifiers that allows the testing stage to be highly parallelized. To exploit that parallelism, we implemented the testing stage in GPU-OpenCL. The empirical results on different data sets for several volume structures show high computational performance and classification accuracy.
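The ECOC decoding step the abstract describes (binary classifier outputs combined into a multi-class decision) can be sketched as follows. This is an illustration of the general framework, not the authors' GPU-OpenCL implementation; the coding matrix and classifier outputs are invented.

```python
import numpy as np

# Minimal ECOC decoding sketch. Each class is assigned a codeword (a row of
# the coding matrix); each column corresponds to one binary classifier. A
# sample is assigned the class whose codeword is closest to the vector of
# binary classifier outputs.

# One-vs-all coding matrix for 3 classes (rows = classes, columns = binary problems).
coding = np.array([[ 1, -1, -1],
                   [-1,  1, -1],
                   [-1, -1,  1]])

def ecoc_decode(binary_outputs, coding):
    """Hamming-style decoding: pick the class with minimal distance to the outputs."""
    distances = np.sum(binary_outputs != coding, axis=1)
    return int(np.argmin(distances))

# Hypothetical outputs of the 3 binary classifiers for one sample.
sample = np.array([-1, 1, -1])
print(ecoc_decode(sample, coding))  # → 1 (the codeword of class 1 matches exactly)
```

Because each classifier is applied independently before decoding, the testing stage parallelizes naturally, which is the property the paper exploits on the GPU.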
Laura Igual, Joan Carles Soliva, Antonio Hernandez, Sergio Escalera, Xavier Jimenez, Oscar Vilarroya, et al. (2011). A fully-automatic caudate nucleus segmentation of brain MRI: Application in volumetric analysis of pediatric attention-deficit/hyperactivity disorder. BEO - BioMedical Engineering Online, 10(105), 1–23.
Abstract: Background
Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations.
Method
We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new energy-function data and boundary potentials. In particular, we exploit information concerning intensity and geometry, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure.
Results
We apply the novel CaudateCut method to the segmentation of the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion
CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
Keywords: Brain caudate nucleus; segmentation; MRI; atlas-based strategy; Graph Cut framework
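The data-plus-boundary energy that Graph Cut methods such as CaudateCut minimise can be illustrated on a toy 2x2 image, where brute force stands in for max-flow. All intensities, class means and the smoothness weight below are invented for the example; this is not the paper's energy function.

```python
import itertools

# Toy Graph Cut-style energy: a data term per pixel plus a boundary term per
# neighbouring pair, minimised by brute force over all labellings of a tiny image.

pixels = [0.1, 0.2, 0.8, 0.9]              # intensities of a 2x2 image (row-major)
edges  = [(0, 1), (2, 3), (0, 2), (1, 3)]  # 4-connected neighbour pairs

def data_term(label, intensity):
    # Cost of labelling a pixel foreground (1) or background (0), from its
    # squared distance to hypothetical class means 0.85 / 0.15.
    mean = 0.85 if label == 1 else 0.15
    return (intensity - mean) ** 2

def boundary_term(la, lb, ia, ib, lam=0.05):
    # Penalise label changes, less so across strong intensity edges.
    return 0.0 if la == lb else lam / (1.0 + abs(ia - ib))

def energy(labels):
    e = sum(data_term(l, i) for l, i in zip(labels, pixels))
    e += sum(boundary_term(labels[a], labels[b], pixels[a], pixels[b])
             for a, b in edges)
    return e

best = min(itertools.product([0, 1], repeat=4), key=energy)
print(best)  # → (0, 0, 1, 1): dark pixels background, bright pixels foreground
```

In real segmenters the same energy is minimised exactly by a max-flow/min-cut computation rather than enumeration; CaudateCut's contribution lies in the design of the data and boundary potentials themselves.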
Mario Rojas, David Masip, A. Todorov, & Jordi Vitria. (2011). Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models. Plos - PloS one, 6(8), e23323.
Abstract: Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations, and it is the focus of attention for research in fields as diverse as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the naturalness of interaction and to improve the performance of such systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face, or whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information, and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to a) derive a facial trait judgment model from training data and b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict the perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that a) prediction of the perception of facial traits is learnable by both holistic and structural approaches; b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and c) for some traits, such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
Bogdan Raducanu, & Fadi Dornaika. (2012). A Supervised Non-linear Dimensionality Reduction Approach for Manifold Learning. PR - Pattern Recognition, 45(6), 2432–2444.
Abstract: In this paper we introduce a novel supervised manifold learning technique called Supervised Laplacian Eigenmaps (S-LE), which makes use of class label information to guide the procedure of non-linear dimensionality reduction by adopting the large-margin concept. The graph Laplacian is split into two components, a within-class graph and a between-class graph, to better characterize the discriminant property of the data. Our approach has two important characteristics: (i) it adaptively estimates the local neighborhood surrounding each sample based on data density and similarity, and (ii) the objective function simultaneously maximizes the local margin between heterogeneous samples and pushes the homogeneous samples closer to each other.
Our approach has been tested on several challenging face databases and compared with other linear and non-linear techniques, demonstrating its superiority. Although in this paper we have concentrated on the face recognition problem, the proposed approach could also be applied to other categories of objects characterized by large variations in their appearance (such as hand or body pose, for instance).
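The within-class/between-class graph split described in the abstract can be sketched as a toy supervised embedding. This omits the adaptive neighborhood estimation and the margin objective of S-LE, and the data below are synthetic; it only illustrates how the two graph Laplacians yield a discriminant projection.

```python
import numpy as np

# Build within-class and between-class graphs from labels, form their
# Laplacians, and project on the direction that maximises between-class
# spread relative to within-class compactness.

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (10, 5)),    # class 0 samples
               rng.normal(2.0, 0.3, (10, 5))])   # class 1 samples
y = np.array([0] * 10 + [1] * 10)

same = (y[:, None] == y[None, :]).astype(float)  # within-class adjacency
diff = 1.0 - same                                # between-class adjacency
L_w = np.diag(same.sum(1)) - same                # within-class graph Laplacian
L_b = np.diag(diff.sum(1)) - diff                # between-class graph Laplacian

# Top eigenvector of (X^T L_w X)^-1 (X^T L_b X) gives the projection.
A = X.T @ L_b @ X
B = X.T @ L_w @ X + 1e-6 * np.eye(5)             # regularised for invertibility
vals, vecs = np.linalg.eig(np.linalg.solve(B, A))
v = vecs[:, np.argmax(vals.real)].real
embedding = X @ v                                # 1-D supervised embedding

gap = abs(embedding[y == 0].mean() - embedding[y == 1].mean())
print(f"separation between class means in the embedding: {gap:.2f}")
```

The same idea extends to several eigenvectors for higher-dimensional embeddings; S-LE additionally makes the graph weights data-dependent rather than purely label-dependent, as the abstract explains.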