Josep M. Gonfaus. (2009). Semantic Segmentation of Images Using Random Ferns (Vol. 132). Master's thesis, Bellaterra, Barcelona.
|
David Geronimo, Frederic Lerasle, & Antonio Lopez. (2012). State-driven particle filter for multi-person tracking. In J. Blanc-Talon et al. (Eds.), 11th International Conference on Advanced Concepts for Intelligent Vision Systems (Vol. 7517, pp. 467–478). Heidelberg: Springer.
Abstract: Multi-person tracking can be exploited in applications such as driver assistance, surveillance, multimedia, and human-robot interaction. With the help of human detectors, particle filters offer a robust method able to filter noisy detections and provide temporal coherence. However, traditional problems such as occlusions with other targets or the scene, temporal drift, or the detection of lost targets are rarely considered, which degrades system performance. Some authors propose to overcome these problems using heuristics that are neither explained nor formalized in the papers, for instance by defining exceptions to model updating depending on track overlap. In this paper we propose to formalize these events by means of a state graph, explicitly defining the current state of each track (e.g., potential, tracked, occluded, or lost) and the transitions between states. This approach has the advantage of linking track states to track actions, such as the online updating of the underlying models, which gives flexibility to the system. It provides an explicit representation for adapting the multiple parallel trackers to the context, i.e., each track can use a specific filtering strategy, dynamic model, number of particles, etc., depending on its state. We implement this technique in a single-camera multi-person tracker and test it on public video sequences.
Keywords: human tracking
|
Alejandro Gonzalez Alzate. (2011). Evaluation of spatiotemporal descriptors for pedestrian detection in video sequences (Vol. 166). Master's thesis.
|
Yainuvis Socarras. (2011). Image segmentation for improving pedestrian detection (Vol. 167). Master's thesis.
|
Maria del Camp Davesa. (2011). Human action categorization in image sequences (Vol. 169). Master's thesis.
|
Monica Piñol. (2010). Adaptative Vocabulary Tree for Image Classification using Reinforcement Learning (Vol. 162). Master's thesis.
|
Javier Marin, David Geronimo, David Vazquez, & Antonio Lopez. (2012). Pedestrian Detection: Exploring Virtual Worlds. In Handbook of Pattern Recognition: Methods and Application (Vol. 5, pp. 145–162). iConcept Press.
Abstract: The Handbook of Pattern Recognition includes contributions from university educators and active research experts. It is intended to serve as a basic reference on methods and applications of pattern recognition. Its primary aim is to provide the pattern recognition community with a readable, easy-to-understand resource that covers introductory, intermediate, and advanced topics with equal clarity. The Handbook can therefore serve equally well as a reference resource and as a classroom textbook. Contributions cover all methods, techniques, and applications of pattern recognition. A tentative list of relevant topics includes: 1. statistical, structural, and syntactic pattern recognition; 2. neural networks, machine learning, and data mining; 3. discrete geometry and algebraic and graph-based techniques for pattern recognition; 4. face recognition, signal analysis, image coding and processing, and shape and texture analysis; 5. document processing, text and graphics recognition, and digital libraries; 6. speech recognition, music analysis, and multimedia systems; 7. natural language analysis and information retrieval; 8. biometrics, biomedical pattern analysis, and information systems; 9. other scientific, engineering, social, and economic applications of pattern recognition; 10. special hardware architectures and software packages for pattern recognition.
Keywords: Virtual worlds; Pedestrian Detection; Domain Adaptation
|
Sergio Escalera, Josep Moya, Laura Igual, Veronica Violant, & Maria Teresa Anguera. (2012). Automatic Human Behavior Analysis in ADHD. In Eunethydis 2nd International ADHD Conference.
|
Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2013). New Approach for Symbol Recognition Combining Shape Context of Interest Points with Sparse Representation. In 12th International Conference on Document Analysis and Recognition (pp. 265–269).
Abstract: In this paper, we propose a new approach to symbol description. Our method is built on the combination of the shape context of interest points descriptor and sparse representation. More specifically, we first learn a dictionary describing shape context of interest point descriptors. Then, based on information retrieval techniques, we build a vector model for each symbol from its sparse representation in a visual vocabulary whose visual words are columns of the learned dictionary. The retrieval task is performed by ranking symbols according to the similarity between vector models. Evaluation of our method on benchmark datasets demonstrates the validity of our approach and shows that it outperforms related state-of-the-art methods.
|
R. Bertrand, P. Gomez-Krämer, Oriol Ramos Terrades, P. Franco, & Jean-Marc Ogier. (2013). A System Based On Intrinsic Features for Fraudulent Document Detection. In 12th International Conference on Document Analysis and Recognition (pp. 106–110).
Abstract: Paper documents still represent a large share of the information supports used nowadays and may contain critical data. Even though official documents are secured with techniques such as printed patterns or artwork, ordinary paper documents suffer from a lack of security. Moreover, the wide availability of cheap scanning and printing hardware allows non-experts to easily create fake documents. As adding a watermarking system during the document production step is hardly possible, solutions have to be proposed to distinguish a genuine document from a forged one. In this paper, we present an automatic forgery detection method based on a document's intrinsic features at the character level. The method relies on the one hand on outlier character detection in a discriminant feature space, and on the other hand on the detection of strictly similar characters. To this end, a feature set is computed for all characters; outliers and strictly similar characters are then identified based on a distance between characters of the same class.
Keywords: paper document; document analysis; fraudulent document; forgery; fake
|
Javier Marin, David Vazquez, Antonio Lopez, Jaume Amores, & Bastian Leibe. (2013). Random Forests of Local Experts for Pedestrian Detection. In 15th IEEE International Conference on Computer Vision (pp. 2592–2599). IEEE.
Abstract: Pedestrian detection is one of the most challenging tasks in computer vision, and has received a lot of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so that no extra computational cost is incurred with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler, and INRIA. The method proposed in this work consistently ranks among the top performers on all the datasets, being either the best method or within a small margin of the best one.
Keywords: ADAS; Random Forest; Pedestrian Detection
|
Jon Almazan, Albert Gordo, Alicia Fornes, & Ernest Valveny. (2013). Handwritten Word Spotting with Corrected Attributes. In 15th IEEE International Conference on Computer Vision (pp. 1017–1024).
Abstract: We propose an approach to multi-writer word spotting, where the goal is to find a query word in a dataset comprised of document images. We propose an attributes-based approach that leads to a low-dimensional, fixed-length representation of the word images that is fast to compute and, especially, fast to compare. This approach naturally leads to a unified representation of word images and strings, which seamlessly allows one to perform, indistinctly, query-by-example, where the query is an image, and query-by-string, where the query is a string. We also propose a calibration scheme based on Canonical Correlation Analysis to correct the attribute scores, which greatly improves the results on a challenging dataset. We test our approach on two public datasets, showing state-of-the-art results.
|
Francisco Alvaro, Francisco Cruz, Joan Andreu Sanchez, Oriol Ramos Terrades, & Jose Miguel Benedi. (2013). Page Segmentation of Structured Documents Using 2D Stochastic Context-Free Grammars. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 133–140). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we define a two-dimensional extension of Stochastic Context-Free Grammars for page segmentation of structured documents. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the page segmentation is obtained as the most likely hypothesis according to a grammar. This approach is compared to Conditional Random Fields, and results show significant improvements in several cases. Furthermore, the grammars provide a detailed segmentation that allows a semantic evaluation, which further validates the model.
|
Francisco Cruz, & Oriol Ramos Terrades. (2013). Handwritten Line Detection via an EM Algorithm. In 12th International Conference on Document Analysis and Recognition (pp. 718–722).
Abstract: In this paper we present a handwritten line segmentation method devised to work on documents composed of several paragraphs with multiple line orientations. The method is based on a variation of the EM algorithm for estimating a set of regression lines over the connected components that compose the image. We evaluated our method on the ICDAR2009 handwriting segmentation contest dataset, with promising results that outperform most of the participating methods. In addition, we demonstrate the applicability of the presented method by performing line segmentation on the George Washington database, obtaining encouraging results.
|
Thanh Ha Do, Salvatore Tabbone, & Oriol Ramos Terrades. (2013). Document noise removal using sparse representations over learned dictionary. In Symposium on Document engineering (pp. 161–168).
Note: Best paper award.
Abstract: In this paper, we propose an algorithm for denoising document images using sparse representations. Given a training set, this algorithm is able to learn the main document characteristics as well as the kind of noise included in the documents. In this perspective, we propose to model the noise energy based on the normalized cross-correlation between pairs of noisy and non-noisy documents. Experimental results on several datasets demonstrate the robustness of our method compared with the state of the art.
|