Albert Gordo. (2013). Document Image Representation, Classification and Retrieval in Large-Scale Domains (Ernest Valveny, & Florent Perronnin, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Despite the “paperless office” ideal that emerged in the seventies, businesses still struggle with an ever-increasing amount of paper documentation. Companies continue to receive huge volumes of paper documents that must be analyzed and processed, mostly manually. One solution is to automatically scan the incoming documents first; the document images can then be analyzed and information extracted from the data. Documents can also be automatically dispatched to the appropriate workflows, used to retrieve similar documents in the dataset in order to transfer information, and so on.
Due to the nature of this “digital mailroom”, document representation methods need to be general, i.e., able to cope with very different types of documents; sound, i.e., able to cope with unexpected types of documents, noise, etc.; and scalable, i.e., able to cope with the thousands or millions of documents that need to be processed, stored, and consulted. Unfortunately, current techniques for document representation, classification and retrieval are not suited to this digital mailroom framework, since they fail to meet some or all of these requirements.
In this thesis we focus on the problem of document representation aimed at classification and retrieval tasks in this digital mailroom framework. We first propose a novel document representation based on runlength histograms, and extend it to cope with more complex documents such as multiple-page documents, or documents that contain additional sources of information such as extracted OCR text. We then focus on the scalability requirements and propose a novel binarization method, which we dub PCAE, as well as two general asymmetric distances between binary embeddings that can significantly improve retrieval results at minimal extra computational cost. Finally, we note the importance of supervised learning when performing large-scale retrieval, and study several approaches that can significantly boost the results at no extra cost at query time.
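The runlength representation at the heart of the thesis can be made concrete with a short sketch. Assuming a binarised page image where 1 marks ink pixels, the function below (names and binning are illustrative, not the thesis code) histograms the horizontal black run lengths, sharing the last bin among all runs longer than `max_len`:

```python
import numpy as np

def runlength_histogram(img, max_len=8):
    """Histogram of horizontal black-pixel (value 1) run lengths in a
    binarised image; runs longer than max_len share the last bin.
    Illustrative sketch only."""
    hist = np.zeros(max_len, dtype=int)
    for row in img:
        # Pad with background so every run has a detectable start and end.
        padded = np.concatenate(([0], row, [0]))
        changes = np.diff(padded)
        starts = np.flatnonzero(changes == 1)   # background -> ink
        ends = np.flatnonzero(changes == -1)    # ink -> background
        for length in ends - starts:
            hist[min(length, max_len) - 1] += 1
    return hist
```

A vertical variant is obtained by running the same loop over `img.T`; concatenating both directions (optionally per page region) gives a simple fixed-length descriptor in the spirit of the thesis.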
David Vazquez. (2013). Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection (Antonio Lopez, & Daniel Ponsa, Eds.) (Vol. 1). Ph.D. thesis, Ediciones Graficas Rey, Barcelona.
Abstract: Pedestrian detection is of paramount interest for many applications, e.g., Advanced Driver Assistance Systems, intelligent video surveillance and multimedia systems. The most promising pedestrian detectors rely on appearance-based classifiers trained with annotated data. However, the required annotation step is an intensive and subjective task for humans, which makes it worthwhile to minimize human intervention in this process by using computational tools such as realistic virtual worlds. The reason for using this kind of tool is that it allows the automatic generation of precise and rich annotations of visual information. Nevertheless, the use of this kind of data raises the following question: can a pedestrian appearance model learnt with virtual-world data work successfully for pedestrian detection in real-world scenarios? To answer this question, we conduct different experiments that suggest a positive answer. However, pedestrian classifiers trained with virtual-world data can suffer from the so-called dataset shift problem, just as classifiers trained on real-world data do. Accordingly, we have designed different domain adaptation techniques to face this problem, all of them integrated into the same framework (V-AYLA). We have explored different methods to train a domain-adapted pedestrian classifier by collecting a few pedestrian samples from the target domain (real world) and combining them with many samples from the source domain (virtual world). The extensive experiments we present show that pedestrian detectors developed within the V-AYLA framework do achieve domain adaptation. Ideally, we would like to adapt our system without any human intervention. Therefore, as a first proof of concept, we also propose an unsupervised domain adaptation technique that avoids human intervention during the adaptation process. To the best of our knowledge, this is the first thesis to demonstrate adaptation between virtual and real worlds for developing an object detector.
Last but not least, we also assessed a different strategy to avoid the dataset shift, which consists in collecting real-world samples and retraining with them in such a way that no bounding boxes of real-world pedestrians have to be provided. We show that the resulting classifier is competitive with a counterpart trained on samples collected by manually annotating pedestrian bounding boxes. The results presented in this thesis not only culminate in a proposal for adapting a virtual-world pedestrian detector to the real world, but also go further by pointing out a new methodology that would allow the system to adapt to different situations, which we hope will provide the foundations for future research in this unexplored area.
Keywords: Pedestrian Detection; Domain Adaptation
Jean-Marc Ogier, Wenyin Liu, & Josep Llados (Eds.). (2010). Graphics Recognition: Achievements, Challenges, and Evolution (Vol. 6020). LNCS. Springer Link.
Marçal Rusiñol, R. Roset, Josep Llados, & C. Montaner. (2011). Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. In Proceedings of the Sixth International Workshop on Digital Technologies in Cartographic Heritage.
David Vazquez, Antonio Lopez, & Daniel Ponsa. (2012). Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection. In 21st International Conference on Pattern Recognition (pp. 3492–3495). Tsukuba Science City, Japan: IEEE.
Abstract: Vision-based object detectors are crucial for many applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate and let it self-learn how to distinguish the objects of interest, i.e., without human intervention. However, learning each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoiding manual labelling consisted in using samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make the desired self-training of our pedestrian detector possible. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).
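The transductive idea can be illustrated with a minimal self-training loop. The sketch below is a hypothetical stand-in, not the paper's T-SVM: it uses a nearest-centroid classifier in place of an SVM, pseudo-labels the unlabelled target (real-world) samples, keeps only the confident half, and refits on their union with the labelled source (virtual-world) samples:

```python
import numpy as np

def self_train(X_src, y_src, X_tgt, rounds=3):
    """Toy self-training loop standing in for T-SVM (binary labels 0/1).
    Returns pseudo-labels for X_tgt after the final round."""
    X, y = X_src, y_src
    for _ in range(rounds):
        # Fit a nearest-centroid "classifier" on the current training set.
        centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X_tgt[:, None, :] - centroids[None, :, :], axis=2)
        pseudo = d.argmin(axis=1)
        # Confidence = gap between the two class distances.
        margin = np.abs(d[:, 0] - d[:, 1])
        keep = margin >= np.median(margin)  # retain the confident half
        # Retrain on labelled source plus confidently pseudo-labelled target.
        X = np.vstack([X_src, X_tgt[keep]])
        y = np.concatenate([y_src, pseudo[keep]])
    return pseudo
```

A true T-SVM additionally optimizes the margin over the unlabelled points inside the SVM objective, but the iterate-label-retrain structure above captures why no target annotations are needed.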
Keywords: Pedestrian Detection; Domain Adaptation; Virtual worlds
Diego Cheda, Daniel Ponsa, & Antonio Lopez. (2012). Monocular Egomotion Estimation based on Image Matching. In 1st International Conference on Pattern Recognition Applications and Methods (pp. 425–430).
Fernando Barrera, Felipe Lumbreras, Cristhian Aguilera, & Angel Sappa. (2012). Planar-Based Multispectral Stereo. In 11th Quantitative InfraRed Thermography.
Cristhian Aguilera, Fernando Barrera, Angel Sappa, & Ricardo Toledo. (2012). A Novel SIFT-Like-Based Approach for FIR-VS Images Registration. In 11th Quantitative InfraRed Thermography.
Monica Piñol, Angel Sappa, Angeles Lopez, & Ricardo Toledo. (2012). Feature Selection Based on Reinforcement Learning for Object Recognition. In Adaptive Learning Agents Workshop (pp. 33–39).
German Ros, Angel Sappa, Daniel Ponsa, & Antonio Lopez. (2012). Visual SLAM for Driverless Cars: A Brief Survey. In IEEE Workshop on Navigation, Perception, Accurate Positioning and Mapping for Intelligent Vehicles.
Jose Carlos Rubio, Joan Serrat, & Antonio Lopez. (2012). Multiple target tracking and identity linking under split, merge and occlusion of targets and observations. In 1st International Conference on Pattern Recognition Applications and Methods.
Sergio Vera, Debora Gil, Agnes Borras, F. Javier Sanchez, Frederic Perez, & Marius G. Linguraru. (2011). Computation and Evaluation of Medial Surfaces for Shape Representation of Abdominal Organs. In H. Yoshida et al. (Ed.), Workshop on Computational and Clinical Applications in Abdominal Imaging (Vol. 7029, pp. 223–230). Springer Berlin Heidelberg.
Abstract: Medial representations are powerful tools for describing and parameterizing the volumetric shape of anatomical structures. Existing methods show excellent results when applied to 2D objects, but their quality drops across dimensions. This paper contributes to the computation of medial manifolds in two aspects. First, we provide a standard scheme for the computation of medial manifolds that avoids degenerate medial axis segments; second, we introduce an energy-based method which performs independently of the dimension. We quantitatively evaluate the performance of our method with respect to existing approaches by applying them to synthetic shapes of known medial geometry. Finally, we show results on shape representation of multiple abdominal organs, exploring the use of medial manifolds for the representation of multi-organ relations.
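In 2D the core ingredients can be sketched directly: a distance transform of the shape mask followed by ridge detection. The code below is an illustrative toy (brute-force distances and a crude local-maximum ridge test), not the paper's energy-based method, but it shows how the medial axis emerges as the ridge of the distance map:

```python
import numpy as np

def distance_transform(mask):
    """Euclidean distance from each foreground pixel to the nearest
    background pixel (brute force; fine for small illustrative grids)."""
    bg = np.argwhere(~mask)
    dt = np.zeros(mask.shape)
    for i, j in np.argwhere(mask):
        dt[i, j] = np.hypot(bg[:, 0] - i, bg[:, 1] - j).min()
    return dt

def medial_axis_2d(mask):
    """Crude medial-axis estimate: foreground pixels whose distance value
    dominates all four neighbours and strictly exceeds at least one."""
    dt = distance_transform(mask)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i, j in np.argwhere(mask):
        nbrs = [dt[max(i - 1, 0), j], dt[min(i + 1, h - 1), j],
                dt[i, max(j - 1, 0)], dt[i, min(j + 1, w - 1)]]
        out[i, j] = (all(dt[i, j] >= n for n in nbrs)
                     and any(dt[i, j] > n for n in nbrs))
    return out
```

For a vertical bar three pixels wide, the ridge test recovers exactly the centre column. The quality drop across dimensions that the paper addresses shows up here too: naive local-maximum tests like this produce spurious or degenerate segments on 3D masks, which is what motivates the energy-based formulation.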
Mohammad Ali Bagheri, Qigang Gao, & Sergio Escalera. (2012). Three-Dimensional Design of Error Correcting Output Codes. In 8th International Conference on Machine Learning and Data Mining (pp. 29–).
Dimosthenis Karatzas, & Ch. Lioutas. (1998). Software Package Development for Electron Diffraction Image Analysis. In Proceedings of the XIV Solid State Physics National Conference.
Volkmar Frinken, Francisco Zamora, Salvador España, Maria Jose Castro, Andreas Fischer, & Horst Bunke. (2012). Long-Short Term Memory Neural Networks Language Modeling for Handwriting Recognition. In 21st International Conference on Pattern Recognition (pp. 701–704).
Abstract: Unconstrained handwritten text recognition systems maximize the combination of two separate probability scores. The first is the observation probability, which indicates how well the returned word sequence matches the input image. The second is the probability that reflects how likely a word sequence is according to a language model. Current state-of-the-art recognition systems use statistical language models in the form of bigram word probabilities. This paper proposes to model the target language by means of a recurrent neural network with long short-term memory cells. Because the network is recurrent, the considered context is not limited to a fixed size, especially as the memory cells are designed to deal with long-term dependencies. In a set of experiments conducted on the IAM off-line database, we show the superiority of the proposed language model over statistical n-gram models.
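The bigram baseline mentioned in the abstract is easy to make concrete. The sketch below (a toy, not the paper's recognizer) estimates add-alpha-smoothed bigram probabilities from a corpus and scores a word sequence; the LSTM language model replaces this fixed one-word context with an unbounded, learnt context:

```python
import math
from collections import Counter

def bigram_logprob(corpus, sentence, alpha=1.0):
    """Log-probability of a word sequence under an add-alpha-smoothed
    bigram model estimated from a whitespace-tokenised corpus.
    Illustrative baseline only."""
    words = corpus.split()
    vocab = set(words) | set(sentence.split())
    unigrams = Counter(words[:-1])          # bigram-history counts
    bigrams = Counter(zip(words, words[1:]))
    lp = 0.0
    toks = sentence.split()
    for w1, w2 in zip(toks, toks[1:]):
        # P(w2 | w1) with add-alpha smoothing over the vocabulary.
        p = (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * len(vocab))
        lp += math.log(p)
    return lp
```

In a recognizer this score is combined with the optical observation probability; frequent continuations ("the cat") score higher than unseen ones ("cat the"), which is exactly the signal the LSTM model extends beyond a single word of history.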