|
Jose Manuel Alvarez, Theo Gevers, & Antonio Lopez. (2010). Learning photometric invariance for object detection. IJCV - International Journal of Computer Vision, 90(1), 45–61.
Abstract: Impact factor: 3.508 (the latest available from JCR2009SCI). Position 4/103 in the category Computer Science, Artificial Intelligence.
Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions that negatively affect the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the fields of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Keywords: road detection
|
|
|
Sergio Escalera, Oriol Pujol, Eric Laciar, Jordi Vitria, Esther Pueyo, & Petia Radeva. (2010). Classification of Coronary Damage in Chronic Chagasic Patients. In M. H.(eds) V. Sgurev (Ed.), Intelligent Systems – From Theory to Practice. Studies in Computational Intelligence (Vol. 299, pp. 461–478). Springer-Verlag.
Abstract: Post Conference IEEE-IS 2008
Chagas disease is endemic throughout Latin America, affecting millions of people in the continent. In order to diagnose and treat Chagas disease, it is important to detect and measure the coronary damage of the patient. In this paper, we analyze and categorize patients into different groups based on the coronary damage produced by the disease. Based on the features of the heart cycle extracted using high resolution ECG, a multi-class scheme of Error-Correcting Output Codes (ECOC) is formulated and successfully applied. The results show that the proposed scheme obtains significant performance improvements compared to previous works and state-of-the-art ECOC designs.
Keywords: Chagas disease; Error-Correcting Output Codes; High resolution ECG; Decoding
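The general ECOC idea used above can be illustrated with a minimal sketch. The one-vs-all coding matrix, the four classes and the dichotomizer outputs below are hypothetical stand-ins, not the problem-dependent design developed in the paper:

```python
import numpy as np

# Coding matrix M: one row per class, one column per binary dichotomizer.
# Here: 4 classes, one-vs-all design (+1 = positive class, -1 = rest).
M = np.array([
    [ 1, -1, -1, -1],
    [-1,  1, -1, -1],
    [-1, -1,  1, -1],
    [-1, -1, -1,  1],
])

def ecoc_decode(outputs, M):
    """Assign the class whose codeword minimizes the Hamming
    distance to the vector of binary classifier outputs."""
    outputs = np.sign(outputs)
    distances = np.sum(M != outputs, axis=1)
    return int(np.argmin(distances))

# Example: outputs of the 4 dichotomizers for one test sample.
print(ecoc_decode(np.array([-1, 1, -1, -1]), M))  # class 1
```

Any binary decomposition fits this decode step; the paper's contribution lies in the design of the coding/decoding scheme for the coronary-damage groups.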
|
|
|
Francesco Ciompi, Oriol Pujol, E Fernandez-Nofrerias, J. Mauri, & Petia Radeva. (2010). Conditional Random Fields for image segmentation in Intravascular Ultrasound. In Medical Image Computing in Catalunya: Graduate Student Workshop (13–14).
Abstract: We present a Conditional Random Fields based approach for segmenting Intravascular Ultrasound (IVUS) images. The presented method uses a contextual discriminative graphical model to deal with the distortions and artifacts present in IVUS images, which make the segmentation of regions of interest a difficult task. An accurate lumen segmentation on IVUS longitudinal images is achieved.
|
|
|
Jose Manuel Alvarez. (2010). Combining Context and Appearance for Road Detection (Antonio Lopez, & Theo Gevers, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Road traffic crashes have become a major cause of death and injury throughout the world. Hence, in order to improve road safety, automobile manufacturers are moving towards the development of vehicles with autonomous functionalities such as lane keeping, maintaining a safe distance between vehicles, or regulating the speed of the vehicle according to the traffic conditions. A key component of these systems is vision-based road detection, which aims to detect the free road surface ahead of the moving vehicle. Detecting the road using a monocular vision system is very challenging since the road is an outdoor scenario imaged from a mobile platform. Hence, the detection algorithm must be able to deal with continuously changing imaging conditions such as the presence of different objects (vehicles, pedestrians), different environments (urban, highways, off-road), different road types (shape, color), and different imaging conditions (varying illumination, different viewpoints and changing weather conditions). Therefore, in this thesis, we focus on vision-based road detection using a single color camera. More precisely, we first focus on analyzing and grouping pixels according to their low-level properties. In this way, two different approaches are presented to exploit color and photometric invariance. Then, we focus the research of the thesis on exploiting context information. This information provides relevant knowledge about the road, derived not from pixel features of road regions but from semantic analysis of the scene. In this way, we present two different approaches to infer the geometry of the road ahead of the moving vehicle. Finally, we focus on combining these context and appearance (color) approaches to improve the overall performance of road detection algorithms. The qualitative and quantitative results presented in this thesis on real-world driving sequences show that the proposed method is robust to varying imaging conditions, road types and scenarios, going beyond the state of the art.
|
|
|
Partha Pratim Roy. (2010). Multi-Oriented and Multi-Scaled Text Character Analysis and Recognition in Graphical Documents and their Applications to Document Image Retrieval (Josep Llados, & Umapada Pal, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: With the advent of Document Image Analysis and Recognition (DIAR), indexing and retrieval of graphics-rich documents has become an important line of research. It aims at finding relevant documents by relying on the segmentation and recognition of the text and graphics components underlying non-standard layouts, where commercial OCRs cannot be applied due to complexity. This thesis focuses on text information extraction approaches in graphical documents and on the retrieval of such documents using text information.
Automatic text recognition in graphical documents (maps, engineering drawings, etc.) involves many challenges because text characters are usually printed in a multi-oriented and multi-scaled way along with different graphical objects. Text characters are used to annotate graphical curve lines and hence, many times, they follow curvilinear paths too. For OCR of such documents, individual text lines and their corresponding words/characters need to be extracted.
For the recognition of multi-font, multi-scaled and multi-oriented characters, we have proposed a feature descriptor for character shape that uses angular information from contour pixels to achieve invariance. To improve the efficiency of OCR, an approach towards the segmentation of multi-oriented touching strings into individual characters is also discussed. Convex hull based background information is used to segment a touching string into possible primitive segments, and later these primitive segments are merged into an optimum segmentation using dynamic programming. To overcome the touching/overlapping problem of text with graphical lines, a character spotting approach using SIFT and skeleton information is included. Afterwards, we propose a novel method to extract individual curvilinear text lines using the foreground and background information of the characters of the text, where a water reservoir concept is used to exploit the background information.
We have also formulated methodologies for graphical document retrieval applications using query words and seals. The retrieval approaches are performed using the recognition results of the individual components in the document. Given a query text, the system extracts positional knowledge from the query word and uses it to generate hypothetical locations in the document. Indexing of documents is also performed based on automatic detection of seals in documents containing cluttered background. A seal is characterized by scale and rotation invariant spatial feature descriptors computed from labelled text characters, and a concept based on the Generalized Hough Transform is used to locate the seal in documents.
|
|
|
Cesar Isaza, Joaquin Salas, & Bogdan Raducanu. (2010). Toward the Detection of Urban Infrastructures Edge Shadows. In Blanc-Talon et al. (Eds.), 12th International Conference on Advanced Concepts for Intelligent Vision Systems (Vol. 6474, 30–37). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to distinguish between moving and static objects. The edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges. The remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene and the results show that the approach is sound and promising.
|
|
|
Muhammad Muzzamil Luqman, Josep Llados, Jean-Yves Ramel, & Thierry Brouard. (2010). A Fuzzy-Interval Based Approach For Explicit Graph Embedding, Recognizing Patterns in Signals, Speech, Images and Video. In 20th International Conference on Pattern Recognition (Vol. 6388, 93–98). LNCS. Springer, Heidelberg.
Abstract: We present a new method for explicit graph embedding. Our algorithm extracts a feature vector for an undirected attributed graph. The proposed feature vector encodes details about the number of nodes, the number of edges, the node degrees, the attributes of nodes and the attributes of edges in the graph. The first two features are the number of nodes and the number of edges. These are followed by w features for node degrees, m features for k node attributes and n features for l edge attributes, which represent the distribution of node degrees, node attribute values and edge attribute values, and are obtained by defining, in an unsupervised fashion, fuzzy intervals over the lists of node degrees, node attributes and edge attributes. Experimental results are provided for the sample data of the ICPR2010 GEPR contest.
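The layout of the feature vector described above can be sketched as follows. The tiny attributed graph is hypothetical, and crisp intervals stand in for the fuzzy intervals that the method actually learns in an unsupervised fashion, so the sketch only illustrates the vector's structure:

```python
from collections import Counter

# Hypothetical graph: nodes with one attribute, edges with one attribute.
nodes = {0: 1.2, 1: 0.4, 2: 2.5, 3: 1.9}          # node -> attribute value
edges = {(0, 1): 0.3, (1, 2): 0.8, (2, 3): 0.5}   # edge -> attribute value

def histogram(values, intervals):
    """Count how many values fall in each (lo, hi] interval."""
    return [sum(1 for v in values if lo < v <= hi) for lo, hi in intervals]

# Node degrees of the undirected graph.
degrees = Counter()
for a, b in edges:
    degrees[a] += 1
    degrees[b] += 1

embedding = (
    [len(nodes), len(edges)]                                # order and size
    + histogram(degrees.values(), [(0, 1), (1, 3)])         # degree features
    + histogram(nodes.values(), [(0, 1), (1, 2), (2, 3)])   # node-attribute features
    + histogram(edges.values(), [(0, 0.5), (0.5, 1)])       # edge-attribute features
)
print(embedding)
```

In the actual method each crisp count is replaced by a fuzzy membership sum over learned intervals, which makes the features robust to attribute values near interval boundaries.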
|
|
|
Muhammad Muzzamil Luqman, Thierry Brouard, Jean-Yves Ramel, & Josep Llados. (2010). A Content Spotting System For Line Drawing Graphic Document Images. In 20th International Conference on Pattern Recognition (Vol. 20, 3420–3423).
Abstract: We present a content spotting system for line drawing graphic document images. The proposed system is sufficiently domain independent and takes keyword-based information retrieval for graphic documents one step forward, to Query By Example (QBE) and focused retrieval. During the offline learning mode, we vectorize the documents in the repository, represent them by attributed relational graphs, extract regions of interest (ROIs) from them, convert each ROI to a fuzzy structural signature, cluster similar signatures to form ROI classes and build an index for the repository. During the online querying mode, a Bayesian network classifier recognizes the ROIs in the query image and the corresponding documents are fetched by looking them up in the repository index. Experimental results are presented for synthetic images of architectural and electronic documents.
|
|
|
Thierry Brouard, A. Delaplace, Muhammad Muzzamil Luqman, H. Cardot, & Jean-Yves Ramel. (2010). Design of Evolutionary Methods Applied to the Learning of Bayesian Network Structures. In Ahmed Rebai (Ed.), Bayesian Network (pp. 13–37). Sciyo.
|
|
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2010). Graph of Words Embedding for Molecular Structure-Activity Relationship Analysis. In 15th Iberoamerican Congress on Pattern Recognition (Vol. 6419, 30–37). LNCS.
Abstract: Structure-Activity relationship analysis aims at discovering chemical activity of molecular compounds based on their structure. In this article we make use of a particular graph representation of molecules and propose a new graph embedding procedure to solve the problem of structure-activity relationship analysis. The embedding is essentially an arrangement of a molecule in the form of a vector by considering frequencies of appearing atoms and frequencies of covalent bonds between them. Results on two benchmark databases show the effectiveness of the proposed technique in terms of recognition accuracy while avoiding high operational costs in the transformation.
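The embedding described above can be sketched in a few lines. The atom/bond vocabulary and the toy molecule (ethanol with hydrogens omitted) are illustrative assumptions, not the benchmark data used in the paper:

```python
from collections import Counter

# Vocabulary fixed in advance: which atom types and bond pairs get a slot.
atom_vocab = ["C", "O"]
bond_vocab = [("C", "C"), ("C", "O")]

# Toy molecule: nodes are atoms, edges are covalent bonds.
atoms = {0: "C", 1: "C", 2: "O"}        # node -> atom label
bonds = [(0, 1), (1, 2)]                # bonds between nodes

atom_counts = Counter(atoms.values())
# Sort each pair so that (C, O) and (O, C) land in the same slot.
bond_counts = Counter(tuple(sorted((atoms[a], atoms[b]))) for a, b in bonds)

embedding = [atom_counts[a] for a in atom_vocab] + \
            [bond_counts[p] for p in bond_vocab]
print(embedding)  # [2, 1, 1, 1]
```

The vector can then feed any standard classifier, which is what makes the transformation cheap compared to graph-matching approaches.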
|
|
|
Ariel Amato, Mikhail Mozerov, Xavier Roca, & Jordi Gonzalez. (2010). Robust Real-Time Background Subtraction Based on Local Neighborhood Patterns. EURASIPJ - EURASIP Journal on Advances in Signal Processing, 7.
Abstract: Article ID 901205
This paper describes an efficient background subtraction technique for detecting moving objects. The proposed approach is able to overcome difficulties such as illumination changes and moving shadows. Our method introduces two discriminative features based on angular and modular patterns, which are formed by similarity measurement between two sets of RGB color vectors: one belonging to the background image and the other to the current image. We show how these patterns are used to improve foreground detection in the presence of moving shadows and in the case when there are strong similarities in color between background and foreground pixels. Experimental results over a collection of public and in-house datasets of real image sequences demonstrate that the proposed technique achieves a superior performance compared with state-of-the-art methods. Furthermore, both the low computational and space complexities make the presented algorithm feasible for real-time applications.
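The two per-pixel cues behind the angular and modular patterns can be sketched as follows. The pixel values are hypothetical, and the actual method builds patterns over local neighborhoods rather than isolated pixels:

```python
import math

def angular(bg, cur):
    """Angle (radians) between the background and current RGB vectors."""
    dot = sum(b * c for b, c in zip(bg, cur))
    nb = math.sqrt(sum(b * b for b in bg))
    nc = math.sqrt(sum(c * c for c in cur))
    return math.acos(max(-1.0, min(1.0, dot / (nb * nc))))

def modular(bg, cur):
    """Ratio of RGB vector magnitudes (current over background)."""
    nb = math.sqrt(sum(b * b for b in bg))
    nc = math.sqrt(sum(c * c for c in cur))
    return nc / nb

bg_pixel  = (120, 100, 80)   # background color
shadowed  = (60, 50, 40)     # same chromaticity, half the intensity
obj_pixel = (40, 90, 200)    # genuinely different color

# A cast shadow mostly scales intensity: the angle stays near zero while
# the magnitude drops. A true foreground object changes the angle as well.
print(angular(bg_pixel, shadowed), modular(bg_pixel, shadowed))
print(angular(bg_pixel, obj_pixel), modular(bg_pixel, obj_pixel))
```

Thresholding the two measures jointly is what lets shadowed background be kept apart from genuine foreground.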
|
|
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2010). Embedding Random Projections in Regularized Gradient Boosting Machines. In Workshop on Supervised and Unsupervised Ensemble Methods and their Applications at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (44–53).
|
|
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2010). Classifying Agitation in Sedated ICU Patients. In Medical Image Computing in Catalunya: Graduate Student Workshop (19–20).
Abstract: Agitation is a serious problem in sedated intensive care unit (ICU) patients. In this work, standard machine learning techniques working on wearable accelerometer data have been used to classify agitation levels, achieving very good classification performance.
|
|
|
Angel Sappa (Ed.). (2010). Computer Graphics and Imaging.
|
|
|
David Geronimo, & Antonio Lopez. (2010). Sistema de deteccion de peatones.
Abstract: During the next decade, pedestrian protection systems will play a fundamental role in the challenge of improving road safety. The main goal of these systems, detecting pedestrians in urban environments, involves processing images of outdoor scenes from a mobile platform in order to search for objects of variable appearance such as people. Given these difficulties, these systems make use of the latest computer vision techniques. This proposal consists of a three-module system based on both 2D and 3D information. The first module uses 3D information to estimate the road parameters and to select regions of interest that will be analyzed afterwards. The second module uses a 2D window classifier to label those regions as pedestrian or non-pedestrian. The final module uses the 3D information again to verify the classified regions and, with 2D information, to refine the final results. The experimental results are positive in terms of both performance and computation time.
|
|