Jose Seabra, F. Javier Sanchez, Francesco Ciompi, & Petia Radeva. (2010). Ultrasonographic Plaque Characterization using a Rayleigh Mixture Model. In 7th IEEE International Symposium on Biomedical Imaging: From Nano to Macro (1–4).
Abstract: A correct modelling of tissue morphology is determinant for the identification of vulnerable plaques. This paper aims at describing the plaque composition by means of a Rayleigh Mixture Model applied to ultrasonic data. The effectiveness of using a mixture of distributions is established through synthetic and real ultrasonic data samples. Furthermore, the proposed mixture model is used in a plaque classification problem in Intravascular Ultrasound (IVUS) images of coronary plaques. A classifier tested on a set of 67 in-vitro plaques yields an overall accuracy of 86% and sensitivities of 92%, 94% and 82% for fibrotic, calcified and lipidic tissues, respectively. These results strongly suggest that different plaque types can be distinguished by means of the coefficients and Rayleigh parameters of the mixture distribution. |
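The Rayleigh mixture in this entry can be fit with a standard EM scheme, since the Rayleigh scale has a closed-form weighted maximum-likelihood update. The sketch below is our own illustration (the function names, initialisation and iteration count are assumptions, not the authors' code) and fits a K-component mixture to positive 1-D envelope samples:

```python
import math

def rayleigh_pdf(x, sigma2):
    """Rayleigh density with scale parameter sigma^2."""
    return (x / sigma2) * math.exp(-x * x / (2.0 * sigma2))

def fit_rayleigh_mixture(data, k=2, iters=50):
    """EM for a K-component Rayleigh mixture on positive 1-D samples.

    Returns (weights, sigma2s). The M-step is closed form:
    sigma_k^2 = sum_i r_ik * x_i^2 / (2 * sum_i r_ik).
    """
    weights = [1.0 / k] * k
    m2 = sum(x * x for x in data) / len(data)               # E[x^2] = 2*sigma^2
    sigma2s = [(m2 / 2.0) * (j + 1.0) / k for j in range(k)]  # spread-out init
    for _ in range(iters):
        # E-step: per-sample responsibilities
        resp = []
        for x in data:
            ps = [w * rayleigh_pdf(x, s) for w, s in zip(weights, sigma2s)]
            z = sum(ps) or 1e-300
            resp.append([p / z for p in ps])
        # M-step: update mixture weights and Rayleigh parameters
        for j in range(k):
            nj = max(sum(r[j] for r in resp), 1e-12)
            weights[j] = nj / len(data)
            sigma2s[j] = sum(r[j] * x * x for r, x in zip(resp, data)) / (2.0 * nj)
    return weights, sigma2s
```

The M-step follows from the Rayleigh maximum-likelihood estimate sigma^2 = sum(x^2)/(2N), weighted by the responsibilities; the mixture coefficients and scales returned are exactly the kinds of features the paper feeds to its classifier.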
Jose Manuel Alvarez, Theo Gevers, & Antonio Lopez. (2010). 3D Scene Priors for Road Detection. In 23rd IEEE Conference on Computer Vision and Pattern Recognition (57–64).
Abstract: Vision-based road detection is important in different areas of computer vision such as autonomous driving, car collision warning and pedestrian crossing detection. However, current vision-based road detection methods are usually based on low-level features and they assume structured roads, road homogeneity, and uniform lighting conditions. Therefore, in this paper, contextual 3D information is used in addition to low-level cues. Low-level photometric invariant cues are derived from the appearance of roads. Contextual cues used include horizon lines, vanishing points, 3D scene layout and 3D road stages. Moreover, temporal road cues are included. All these cues are sensitive to different imaging conditions and hence are considered as weak cues. Therefore, they are combined to improve the overall performance of the algorithm. To this end, the low-level, contextual and temporal cues are combined in a Bayesian framework to classify road sequences. Large-scale experiments on road sequences show that the road detection method is robust to varying imaging conditions, road types, and scenarios (tunnels, urban and highway). Further, using the combined cues outperforms any individual cue. Finally, the proposed method provides the highest road detection accuracy when compared to state-of-the-art methods.
Keywords: road detection
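The combination of weak cues described in this abstract can be sketched, per pixel, as a naive-Bayes log-odds sum (an illustrative simplification of the paper's Bayesian framework; the function name, the conditional-independence assumption and the clamping constant are ours):

```python
import math

def combine_cues(cue_probs, prior=0.5):
    """Fuse weak per-pixel road probabilities p(road | cue_i) by summing
    log-odds, assuming the cues are conditionally independent."""
    logit = math.log(prior / (1.0 - prior))
    for p in cue_probs:
        p = min(max(p, 1e-6), 1.0 - 1e-6)  # clamp away from 0 and 1
        logit += math.log(p / (1.0 - p))
    return 1.0 / (1.0 + math.exp(-logit))
```

Two mildly confident cues reinforce each other (two 0.6 votes give more than 0.6), while contradictory cues cancel out, which is the intuition behind combining many weak cues.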
|
Jose Manuel Alvarez, Theo Gevers, & Antonio Lopez. (2010). Learning photometric invariance for object detection. IJCV - International Journal of Computer Vision, 90(1), 45–61.
Note: Impact factor 3.508 (the last available from JCR2009SCI); position 4/103 (first quartile) in the category Computer Science, Artificial Intelligence.
Abstract: Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes, in which different reflectance mechanisms can hold simultaneously. Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set, composed of both color variants and invariants, is computed. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time. Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the field of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Keywords: road detection
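As a minimal example of the kind of photometric invariance this paper builds on, normalized rgb is a classic chromaticity model that is invariant to illumination intensity under a Lambertian assumption (this sketch is only one member of the family; the paper learns diversified ensembles of such models rather than relying on a single one):

```python
def normalized_rgb(r, g, b, eps=1e-9):
    """Normalized rgb chromaticity: dividing each channel by the sum
    cancels a common intensity scale factor (Lambertian shading/shadows)."""
    s = r + g + b + eps
    return r / s, g / s, b / s
```

Scaling all three channels by the same factor (e.g. a shadow) leaves the result unchanged, which is exactly the invariance/discriminability trade-off the ensemble has to balance.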
|
Jose Manuel Alvarez, Felipe Lumbreras, Theo Gevers, & Antonio Lopez. (2010). Geographic Information for vision-based Road Detection. In IEEE Intelligent Vehicles Symposium (621–626).
Abstract: Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to foresee a perfect clustering algorithm since roads are outdoor scenarios imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. Then, this segmentation is refined at low level using color information to provide the final result. The results presented show the validity of our approach.
Keywords: road detection
|
Jose Manuel Alvarez. (2010). Combining Context and Appearance for Road Detection (Antonio Lopez, & Theo Gevers, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Road traffic crashes have become a major cause of death and injury throughout the world.
Hence, in order to improve road safety, automobile manufacturers are moving towards the development of vehicles with autonomous functionalities such as keeping in the right lane, keeping a safe distance between vehicles, or regulating the speed of the vehicle according to the traffic conditions. A key component of these systems is vision-based road detection, which aims to detect the free road surface ahead of the moving vehicle. Detecting the road using a monocular vision system is very challenging since the road is an outdoor scenario imaged from a mobile platform. Hence, the detection algorithm must be able to deal with continuously changing imaging conditions such as the presence of different objects (vehicles, pedestrians), different environments (urban, highways, off-road), different road types (shape, color), and different imaging conditions (varying illumination, different viewpoints and changing weather conditions). Therefore, in this thesis, we focus on vision-based road detection using a single color camera. More precisely, we first focus on analyzing and grouping pixels according to their low-level properties. In this way, two different approaches are presented to exploit color and photometric invariance. Then, we focus the research of the thesis on exploiting context information. This information provides relevant knowledge about the road not from pixel features of road regions but from semantic information obtained by analyzing the scene. In this way, we present two different approaches to infer the geometry of the road ahead of the moving vehicle. Finally, we focus on combining these context and appearance (color) approaches to improve the overall performance of road detection algorithms. The qualitative and quantitative results presented in this thesis on real-world driving sequences show that the proposed method is robust to varying imaging conditions, road types and scenarios, going beyond the state-of-the-art. |
Jose Carlos Rubio, Joan Serrat, Antonio Lopez, & Daniel Ponsa. (2010). Multiple-target tracking for the intelligent headlights control. In 13th Annual International Conference on Intelligent Transportation Systems (903–910).
Abstract: Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software that is able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to intensity and shape. Despite the overall good performance, there remain challenging cases that have yet to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. In fact, for such distant blobs, classification decisions can only be taken after observing them for a few frames. Hence, incorporating tracking could improve the overall lighting system performance by enforcing the temporal consistency of the classifier decision. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, we will also see that the classification performance of the problematic blobs improves due to the proposed MTT algorithm.
Keywords: Intelligent Headlights
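The core data-association problem behind this paper's MTT can be sketched, far more crudely than its MRF formulation, as greedy nearest-neighbour matching of blob centroids between consecutive frames (function name, gating distance and the greedy strategy are our assumptions, deliberately simpler than the paper's MAP inference):

```python
def associate(prev_blobs, curr_blobs, max_dist=20.0):
    """Greedily match each previous-frame blob centroid to the nearest
    unused current-frame centroid within a gating distance max_dist.
    Returns a list of (prev_index, curr_index) pairs."""
    pairs, used = [], set()
    for i, (px, py) in enumerate(prev_blobs):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_blobs):
            if j in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            pairs.append((i, best))
            used.add(best)
    return pairs
```

Greedy matching breaks down precisely under the paper's two special conditions (occlusions, splits and merges), which is what motivates a global MAP formulation over the whole set of tracks.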
|
Jose Antonio Rodriguez, Florent Perronnin, Gemma Sanchez, & Josep Llados. (2010). Unsupervised writer adaptation of whole-word HMMs with application to word-spotting. PRL - Pattern Recognition Letters, 31(8), 742–749.
Abstract: In this paper we propose a novel approach for writer adaptation in a handwritten word-spotting task. The method exploits the fact that the semi-continuous hidden Markov model separates the word model parameters into (i) a codebook of shapes and (ii) a set of word-specific parameters.
Our main contribution is to employ this property to derive writer-specific word models by statistically adapting an initial universal codebook to each document. This process is unsupervised and does not even require the appearance of the keyword(s) in the searched document. Experimental results show an increase in performance when this adaptation technique is applied. To the best of our knowledge, this is the first work dealing with adaptation for word-spotting. The preliminary version of this paper obtained an IBM Best Student Paper Award at the 19th International Conference on Pattern Recognition.
Keywords: Word-spotting; Handwriting recognition; Writer adaptation; Hidden Markov model; Document analysis
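The codebook adaptation described here can be illustrated with a MAP-style update that pulls one codebook (Gaussian) mean toward writer-specific data in proportion to how much data softly occupies it. This is a hedged sketch in the spirit of the paper, not its exact statistics; the relevance factor tau and the function name are our assumptions:

```python
def map_adapt_mean(mu0, samples, gammas, tau=10.0):
    """MAP-style adaptation of one codebook mean.

    mu0:     universal (writer-independent) mean, list of floats
    samples: writer-specific observation vectors
    gammas:  soft occupancy counts of this codeword for each sample
    tau:     relevance factor; larger tau trusts the prior mean more
    """
    n = sum(gammas)
    dim = len(mu0)
    acc = [0.0] * dim
    for x, g in zip(samples, gammas):
        for d in range(dim):
            acc[d] += g * x[d]
    # interpolate between the universal mean and the data mean
    return [(tau * mu0[d] + acc[d]) / (tau + n) for d in range(dim)]
```

With little writer data (small n) the adapted mean stays close to the universal codebook; with plenty of data it moves toward the writer's own statistics, which is why the adaptation can run unsupervised per document.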
|
Jorge Bernal, Fernando Vilariño, & F. Javier Sanchez. (2010). Feature Detectors and Feature Descriptors: Where We Are Now (Vol. 154).
Abstract: Feature Detection and Feature Description are clearly active research topics nowadays. Many Computer Vision applications rely on several of these techniques to extract the most significant aspects of an image so that they can support tasks such as image retrieval, image registration, object recognition, object categorization and texture classification, among others. In this paper we define what Feature Detection and Description are, and then present an extensive collection of methods to show the different techniques in current use. The aim of this report is to provide a glimpse of what is being used in these fields and to serve as a starting point for future endeavours.
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2010). Reduction of Pattern Search Area in Colonoscopy Images by Merging Non-Informative Regions. In 28th Congreso Anual de la Sociedad Española de Ingeniería Biomédica.
Abstract: One of the usual first steps in pattern recognition schemes is image segmentation, in order to reduce the dimensionality of the problem and manage a smaller quantity of data. In our case, as we pursue real-time colon cancer polyp detection, this step is crucial. In this paper we present a non-informative region estimation algorithm that lets us discard parts of the image where we do not expect to find colon cancer polyps. The performance of our approach is measured in terms of both the elimination of non-informative areas and the preservation of polyp areas. The results obtained show the importance of a correct non-informative region estimation in order to speed up the whole recognition process.
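A deliberately crude stand-in for the non-informative region estimation described above is to flag very dark pixels (e.g. the lumen or black frame borders, where no polyp can be detected) and report how much of the image can be discarded; the threshold, function name and the use of plain intensity are our assumptions, not the paper's region-merging algorithm:

```python
def noninformative_fraction(gray, dark_thr=40):
    """Flag pixels darker than dark_thr as non-informative and return
    the fraction of the image that could be discarded from the search.
    gray: 2-D list of 0-255 intensity values."""
    total = flagged = 0
    for row in gray:
        for v in row:
            total += 1
            if v < dark_thr:
                flagged += 1
    return flagged / total
```

The returned fraction corresponds to the trade-off the paper evaluates: discard as much non-informative area as possible while preserving polyp regions.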
|
Jon Almazan. (2010). Deforming the Blurred Shape Model for Shape Description and Recognition (Vol. 163). Master's thesis. |
Joan Serrat, & Antonio Lopez. (2010). Deteccion automatica de lineas de carril para la asistencia a la conduccion.
Abstract: Camera-based detection of lane markings on roads can be an affordable solution to the driving risks posed by overtaking manoeuvres and lane departures. This work proposes a system that runs in real time and obtains very good results. The system is designed to identify lane markings under unfavourable visibility conditions, such as night-time driving or the presence of other vehicles that hinder visibility.
|
Joan Mas, Josep Llados, Gemma Sanchez, & J.A. Jorge. (2010). A syntactic approach based on distortion-tolerant Adjacency Grammars and a spatial-directed parser to interpret sketched diagrams. PR - Pattern Recognition, 43(12), 4148–4164.
Abstract: This paper presents a syntactic approach based on Adjacency Grammars (AG) for sketch diagram modeling and understanding. Diagrams are a combination of graphical symbols arranged according to a set of spatial rules defined by a visual language. AG describe visual shapes by productions defined in terms of terminal and non-terminal symbols (graphical primitives and subshapes), and a set of functions describing the spatial arrangements between symbols. Our approach to sketch diagram understanding provides three main contributions. First, since AG are linear grammars, shapes and relations that are inherently bidimensional have to be defined with a sequential formalism. Second, our parsing approach uses an indexing structure based on a spatial tessellation. This serves to reduce the search space when finding candidates to produce a valid reduction, allowing order-free parsing of 2D visual sentences while keeping combinatorial explosion in check. Third, working with sketches requires a distortion model to cope with the natural variations of hand-drawn strokes. To this end we extended the basic grammar with a distortion measure modeled on the allowable variation of the spatial constraints associated with grammar productions. Finally, the paper reports on an experimental framework: an interactive system for sketch analysis. User tests performed on two real scenarios show that our approach is usable in interactive settings.
Keywords: Syntactic Pattern Recognition; Symbol recognition; Diagram understanding; Sketched diagrams; Adjacency Grammars; Incremental parsing; Spatial directed parsing
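The spatial tessellation index from the paper's second contribution can be sketched as a uniform grid that buckets primitives into square cells, so the parser only inspects a local neighbourhood when searching for candidates to reduce (the class name, cell size and 3x3 neighbourhood are our assumptions):

```python
from collections import defaultdict

class GridIndex:
    """Uniform-grid spatial index: primitives are bucketed into square
    cells so a query only inspects the 3x3 block of cells around it,
    instead of every primitive seen so far."""

    def __init__(self, cell=50.0):
        self.cell = cell
        self.cells = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, item):
        self.cells[self._key(x, y)].append(item)

    def neighbours(self, x, y):
        cx, cy = self._key(x, y)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                out.extend(self.cells.get((cx + dx, cy + dy), []))
        return out
```

Restricting candidate search to a neighbourhood is what keeps order-free 2D parsing from exploding combinatorially as the number of drawn primitives grows.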
|
Joan Mas, Gemma Sanchez, & Josep Llados. (2010). SSP: Sketching slide Presentations, a Syntactic Approach. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 118–129). LNCS. Springer Berlin Heidelberg.
Abstract: The design of a slide presentation is a creative process. In this process, humans first visualize in their minds what they want to explain. Then, they have to be able to represent this knowledge in an understandable way. A lot of commercial software exists that allows users to create their own slide presentations, but the creativity of the user is rather limited. In this article we present an application that allows the user to create and visualize a slide presentation from a sketch. A slide may be seen as a graphical document or a diagram where the elements are placed in a particular spatial arrangement. To describe and recognize slides, a syntactic approach is proposed, based on an Adjacency Grammar and a parsing methodology able to cope with this kind of grammar. The experimental evaluation shows the performance of our methodology from a qualitative and a quantitative point of view. Six different slides containing different numbers of symbols, from 4 to 7, were given to the users, who drew them without restrictions on the order of the elements. The quantitative results give an idea of how suitable our methodology is to describe and recognize the different elements in a slide.
|
Joan Mas. (2010). A Syntactic Pattern Recognition Approach based on a Distortion-Tolerant Adjacency Grammar and a Spatial Indexed Parser. Application to Sketched Document Recognition (Gemma Sanchez, & Josep Llados, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Sketch recognition is a discipline which has gained increasing interest in the last
20 years. This is due to the appearance of new devices such as PDAs, Tablet PCs and digital pen & paper protocols. From the wide range of sketched documents we focus on those that represent structured documents such as architectural floor plans, engineering drawings, UML diagrams, etc. To recognize and understand these kinds of documents, we first have to recognize the different constituent symbols and then identify the relations between these elements. According to the way a sketch is captured, there are two categories: on-line and off-line. On-line input refers to drawing directly on a PDA or a Tablet PC, while off-line input refers to scanning a previously drawn sketch. This thesis lies at the intersection of three different areas of Computer Science: Pattern Recognition, Document Analysis and Human-Computer Interaction. The aim of this thesis is to interpret sketched documents independently of whether they are captured on-line or off-line. For this reason, the proposed approach should have the following features. First, as we are working with sketches, the elements present in our input contain distortions. Second, as we work with on-line or off-line input modes, the order of the input primitives must not matter. Finally, since the proposed method should be applicable in real scenarios, its response time must be low. To interpret a sketched document we propose a syntactic approach. A syntactic approach is composed of two correlated components: a grammar and a parser. The grammar allows describing the different elements in the document as well as their relations. The parser, given a document, checks whether it belongs to the language generated by the grammar or not. Thus, the grammar should be able to cope with the distortions appearing in the instances of the elements. Moreover, it is necessary to define a symbol independently of the order of its primitives.
Concerning the parser, when analyzing 2D sentences it does not assume an order in the primitives. Then, at each new primitive in the input, the parser searches among the previously analyzed symbols for candidates to produce a valid reduction. Taking these features into account, we have proposed a grammar based on Adjacency Grammars. This kind of grammar defines its productions as a multiset of symbols rather than a list, which allows describing a symbol without an order in its components. To cope with distortion we have proposed a distortion model. This distortion model is an attribute estimated over the constraints of the grammar and propagated through the productions. This measure gives an idea of how far the symbol is from its ideal model. In addition to the distortion on the constraints, other distortions appear when working with sketches: overtracing, overlapping, gaps and spurious strokes. Some grammatical productions have been defined to cope with these errors. Concerning recognition, we have proposed an incremental parser with an indexation mechanism. Incremental parsers analyze the input symbol by symbol, giving a response to the user as each primitive is analyzed. This makes incremental parsers suitable for on-line as well as off-line input modes. The parser has been extended with an indexation mechanism based on a spatial division. This indexation mechanism allows placing the primitives in space and reducing the search to a neighbourhood. A third contribution is a grammatical inference algorithm. Given a set of instances of a symbol, this method captures the production describing it. In the field of formal languages different approaches have been proposed, but little work has been done in the graphical domain. The proposed method is able to capture the production from a set of instances even when they are drawn in a different order.
A matching step based on the Hausdorff distance and the Hungarian method has been proposed to match the primitives of the different symbols. In addition, the proposed approach is able to capture the variability in the parameters of the constraints. From the experimental results, we may conclude that we have proposed a robust approach to describe and recognize sketches. Moreover, the addition of new symbols to the alphabet is not restricted to an expert. Finally, the proposed approach has been used in two real scenarios, obtaining good performance. |
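The Hausdorff distance used in the primitive-matching step above can be sketched for 2-D point sets as follows (our own minimal illustration; the thesis combines it with the Hungarian assignment method, which is omitted here):

```python
def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty 2-D point sets:
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    def directed(p, q):
        return max(min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
                       for qx, qy in q)
                   for px, py in p)
    return max(directed(a, b), directed(b, a))
```

Because it measures worst-case nearest-neighbour disagreement, the distance is zero exactly when the two sets coincide, making it a natural cost for deciding whether two drawn symbols share the same primitives.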
Joan Arnedo-Moreno, & Agata Lapedriza. (2010). Visualizing key authenticity: turning your face into your public key. In 6th China International Conference on Information Security and Cryptology (pp. 605–618). LNCS.
Abstract: Biometric information has become a technology complementary to cryptography, allowing cryptographic data to be conveniently managed. Two important needs are fulfilled: first of all, making such data always readily available, and additionally, making its legitimate owner easily identifiable. In this work we propose a signature system which integrates face recognition biometrics with an identity-based signature scheme, so the user's face effectively becomes his public key and system ID. Thus, other users may verify messages using photos of the claimed sender, providing a reasonable trade-off between system security and usability, as well as a much more straightforward public key authenticity and distribution process.
|