Jorge Bernal. (2012). Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps (F. Javier Sanchez, & Fernando Vilariño, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage at which it is detected, hence the need for early colon screening. There are several screening techniques, but colonoscopy is still the gold standard today, although it has some drawbacks such as its miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing a polyp localization and a polyp segmentation system based on a model of appearance for polyps. To develop both methods we define a model of appearance for polyps which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of the image formation, and we also consider the presence of other elements of the endoluminal scene, such as specular highlights and blood vessels, which have an impact on the performance of our methods. To develop our polyp localization method we accumulate valley information to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. As we want to explore the usability of our methods, we present a comparative analysis between physicians' fixations, obtained via an eye-tracking device, and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is still far from expert physicians.
|
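The valley-accumulation idea described in the abstract above can be illustrated with a toy sketch (not the thesis algorithm): a pixel darker than its local neighbourhood is treated as valley evidence, and that evidence is accumulated into an energy map whose maximum serves as a polyp-location candidate. The function names and window size are illustrative assumptions.

```python
import numpy as np

def valley_energy_map(img, win=5):
    """Toy valley-accumulation sketch: a pixel scores highly when it is
    darker than its local neighbourhood (an intensity valley); the scores
    are then accumulated over a larger support into an energy map."""
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # local mean via a sliding window
    mean = np.zeros((h, w), dtype=float)
    for dy in range(win):
        for dx in range(win):
            mean += padded[dy:dy + h, dx:dx + w]
    mean /= win * win
    valley = np.clip(mean - img, 0, None)  # dark-in-bright response
    # accumulate valley evidence over the same support
    energy = np.zeros_like(valley)
    acc = np.pad(valley, pad, mode='edge')
    for dy in range(win):
        for dx in range(win):
            energy += acc[dy:dy + h, dx:dx + w]
    return energy

def localize(img):
    """Return the (row, col) of maximum accumulated valley energy."""
    e = valley_energy_map(img)
    return np.unravel_index(np.argmax(e), e.shape)
```

On a bright synthetic image with a small dark blob, the energy maximum lands on the blob, mimicking the intended behaviour on a polyp enclosed by valleys.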
Jordi Vitria, Joao Sanchez, Miguel Raposo, & Mario Hernandez (Eds.). (2011). Pattern Recognition and Image Analysis (Vol. 6669). Berlin: Springer-Verlag. |
Jordi Roca. (2012). Constancy and inconstancy in categorical colour perception (Maria Vanrell, & C. Alejandro Parraga, Eds.). Ph.D. thesis.
Abstract: To recognise objects is perhaps the most important task an autonomous system, either biological or artificial, needs to perform. In the context of human vision, this is partly achieved by recognizing the colour of surfaces despite changes in the wavelength distribution of the illumination, a property called colour constancy. Correct surface colour recognition may be adequately accomplished by colour category matching without the need to match colours precisely; therefore categorical colour constancy is likely to play an important role in successful object identification. The main aim of this work is to study the relationship between colour constancy and categorical colour perception. Previous studies of colour constancy have shown the influence of factors such as the spatio-chromatic properties of the background, individual observers' performance, semantics, etc. However, there has been very little systematic study of these influences. To this end, we developed a new approach to colour constancy which includes individual observers' categorical perception, the categorical structure of the background, and their interrelations, resulting in a more comprehensive characterization of the phenomenon. In our study, we first developed a new method to analyse the categorical structure of 3D colour space, which allowed us to characterize individual categorical colour perception as well as quantify inter-individual variations in terms of the shape and centroid location of 3D categorical regions. Second, we developed a new colour constancy paradigm, termed chromatic setting, which allows measuring the precise location of nine categorically-relevant points in colour space under immersive illumination.
Additionally, we derived from these measurements a new colour constancy index which takes into account the magnitude and orientation of the chromatic shift, memory effects, and the interrelations among colours, as well as a model of colour naming tuned to each observer and adaptation state. Our results lead to the following conclusions: (1) There exist large inter-individual variations in the categorical structure of colour space, and thus colour naming ability varies significantly, but this is not well predicted by low-level chromatic discrimination ability; (2) Analysis of the average colour naming space suggested the need for three additional basic colour terms (turquoise, lilac and lime) for optimal colour communication; (3) Chromatic setting improved the precision of more complex linear colour constancy models and suggested that mechanisms other than cone gain might be better suited to explain colour constancy; (4) The categorical structure of colour space is broadly stable under illuminant changes for categorically balanced backgrounds; (5) Categorical inconstancy exists for categorically unbalanced backgrounds, thus indicating that categorical information perceived in the initial stages of adaptation may constrain further categorical perception.
|
Jordi Gonzalez, & Thomas B. Moeslund. (2008). Tracking Humans for the Evaluation of their Motion in Image Sequences. |
Jordi Gonzalez. (2004). Human Sequence Evaluation: the Key-frame Approach (Xavier Roca, & Javier Varona, Eds.). Ph.D. thesis. |
Jon Almazan. (2014). Learning to Represent Handwritten Shapes and Words for Matching and Recognition (Ernest Valveny, & Alicia Fornes, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Writing is one of the most important forms of communication and for centuries, handwriting had been the most reliable way to preserve knowledge. However, despite the recent development of printing houses and electronic devices, handwriting is still broadly used for taking notes, doing annotations, or sketching ideas.
Transferring the ability to understand handwritten text or recognize handwritten shapes to computers has been the goal of many researchers, due to its huge importance for many different fields. However, designing good representations to deal with handwritten shapes, e.g. symbols or words, is a very challenging problem due to the large variability of these kinds of shapes. One of the consequences of working with handwritten shapes is that we need representations to be robust, i.e., able to adapt to large intra-class variability. We need representations to be discriminative, i.e., able to learn what the differences between classes are. And we need representations to be efficient, i.e., able to be rapidly computed and compared. Unfortunately, current techniques of handwritten shape representation for matching and recognition do not fulfil some or all of these requirements. Throughout this thesis we focus on the problem of learning to represent handwritten shapes for retrieval and recognition tasks. Concretely, in the first part of the thesis, we focus on the general problem of representing any kind of handwritten shape. We first present a novel shape descriptor based on a deformable grid that deals with large deformations by adapting to the shape, and whose grid cells can be used to extract different features. Then, we propose to use this descriptor to learn statistical models, based on the Active Appearance Model, that jointly learn the variability in structure and texture of a given class. In the second part, we focus on a concrete application, the problem of representing handwritten words, for the tasks of word spotting, where the goal is to find all instances of a query word in a dataset of images, and recognition. First, we address the segmentation-free problem and propose an unsupervised, sliding-window-based approach that achieves state-of-the-art results on two public datasets.
Second, we address the more challenging multi-writer problem, where the variability in words increases dramatically. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace, where those that represent the same word are close together. This is achieved by a combination of label embedding, attribute learning, and common subspace regression. It leads to a low-dimensional, unified representation of word images and strings, resulting in a method that allows one to perform both image and text searches, as well as image transcription, in a unified framework. We evaluate our methods on different public datasets of both handwritten documents and natural images, showing results comparable to or better than the state-of-the-art on spotting and recognition tasks. |
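The label embedding underlying the common subspace described above can be sketched with a simplified pyramidal histogram of characters: at each pyramid level the word is split into equal regions and each region records which letters occur there. This toy version is restricted to lowercase a-z and two levels, and the image side and the subspace regression are omitted entirely.

```python
import numpy as np
from string import ascii_lowercase

def phoc(word, levels=(1, 2)):
    """Simplified pyramidal histogram-of-characters string embedding:
    for each level L the word is split into L regions, and each region
    gets a binary occurrence histogram over a-z."""
    word = word.lower()
    n = len(word)
    vec = []
    for L in levels:
        for r in range(L):
            lo, hi = r / L, (r + 1) / L
            hist = np.zeros(26)
            for i, ch in enumerate(word):
                # character i occupies the interval [i/n, (i+1)/n)
                if ch in ascii_lowercase and i / n < hi and (i + 1) / n > lo:
                    hist[ord(ch) - 97] = 1
            vec.append(hist)
    return np.concatenate(vec)

def rank(query, vocab):
    """Rank a vocabulary of strings by cosine similarity to the query
    embedding -- the retrieval step of a spotting pipeline."""
    q = phoc(query)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return sorted(vocab, key=lambda w: -cos(q, phoc(w)))
```

In the full method, a regressor maps image features into this same attribute space, so the identical cosine ranking serves image-to-text, text-to-image, and image-to-image queries.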
Joan Mas, Gemma Sanchez, & Josep Llados. (2006). An Incremental Parser to Recognize Diagram Symbols and Gestures represented by Adjacency Grammars. |
Joan Mas. (2010). A Syntactic Pattern Recognition Approach based on a Distribution Tolerant Adjacency Grammar and a Spatial Indexed Parser. Application to Sketched Document Recognition (Gemma Sanchez, & Josep Llados, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Sketch recognition is a discipline which has gained increasing interest over the last 20 years. This is due to the appearance of new devices such as PDAs, Tablet PCs, and digital pen & paper protocols. From the wide range of sketched documents, we focus on those that represent structured documents such as architectural floor plans, engineering drawings, UML diagrams, etc. To recognize and understand these kinds of documents, we first have to recognize the different compounding symbols and then identify the relations between these elements. Depending on how a sketch is captured, there are two categories: on-line and off-line. On-line input modes refer to drawing directly on a PDA or a Tablet PC, while off-line input modes refer to scanning a previously drawn sketch. This thesis lies at the intersection of three different areas of Computer Science: Pattern Recognition, Document Analysis, and Human-Computer Interaction. The aim of this thesis is to interpret sketched documents independently of whether they are captured on-line or off-line. For this reason, the proposed approach should have the following features. First, as we are working with sketches, the elements present in our input contain distortions. Second, as we may work in on-line or off-line input modes, the order of the primitives in the input is indifferent. Finally, since the proposed method should be applicable in real scenarios, its response time must be fast. To interpret a sketched document we propose a syntactic approach. A syntactic approach is composed of two correlated components: a grammar and a parser. The grammar allows describing the different elements in the document as well as their relations. The parser, given a document, checks whether it belongs to the language generated by the grammar or not. Thus, the grammar should be able to cope with the distortions appearing in the instances of the elements. Moreover, it should be possible to define a symbol independently of the order of its primitives.
Concerning the parser, when analyzing 2D sentences it does not assume an order in the primitives. Then, for each new primitive in the input, the parser searches among the previously analyzed symbols for candidates to produce a valid reduction. Taking into account these features, we have proposed a grammar based on Adjacency Grammars. This kind of grammar defines its productions as a multiset of symbols rather than a list, which allows describing a symbol without an order in its components. To cope with distortion we have proposed a distortion model. This distortion model is an attribute estimated over the constraints of the grammar and propagated through the productions. This measure gives an idea of how far the symbol is from its ideal model. In addition to distortion in the constraints, other distortions appear when working with sketches: overtracing, overlapping, gaps, and spurious strokes. Specific grammatical productions have been defined to cope with these errors. Concerning recognition, we have proposed an incremental parser with an indexation mechanism. Incremental parsers analyze the input symbol by symbol, giving a response to the user as each primitive is analyzed. This makes incremental parsers suitable for both on-line and off-line input modes. The parser has been extended with an indexation mechanism based on a spatial division. This indexation mechanism places the primitives in space and reduces the search to a neighbourhood. A third contribution is a grammatical inference algorithm. This method, given a set of symbols, captures the production describing them. In the field of formal languages, different approaches have been proposed, but in the graphical domain not much work has been done in this field. The proposed method is able to capture the production from a set of symbols even when they are drawn in a different order.
A matching step based on the Hausdorff distance and the Hungarian method has been proposed to match the primitives of the different symbols. In addition, the proposed approach is able to capture the variability in the parameters of the constraints. From the experimental results, we may conclude that we have proposed a robust approach to describe and recognize sketches. Moreover, the addition of new symbols to the alphabet is not restricted to an expert. Finally, the proposed approach has been used in two real scenarios, obtaining good performance. |
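The matching step named above (Hausdorff distance between primitive point sets, followed by an optimal assignment) can be sketched as follows. Brute-force enumeration over permutations stands in for the Hungarian method and is only practical for a handful of primitives; `scipy.optimize.linear_sum_assignment` would be the scalable choice.

```python
import numpy as np
from itertools import permutations

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (n x 2 arrays)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def match_primitives(sets_a, sets_b):
    """Assign each primitive of one symbol to one primitive of another by
    minimising the total Hausdorff cost. Brute force over permutations
    stands in for the Hungarian method on these tiny inputs."""
    cost = np.array([[hausdorff(a, b) for b in sets_b] for a in sets_a])
    n = len(sets_a)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i, p[i]] for i in range(n)))
    return list(best)
```

With two primitives presented in swapped order, the matcher recovers the cross assignment, which is exactly what drawing-order independence requires of the inference algorithm.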
Joan Marti, Jose Miguel Benedi, Ana Maria Mendonça, & Joan Serrat. (2007). Pattern Recognition and Image Analysis (Vol. 6669). LNCS. |
Joan M. Nuñez. (2015). Vascular Pattern Characterization in Colonoscopy Images (Fernando Vilariño, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Colorectal cancer is the third most common cancer worldwide and the second most common malignant tumor in Europe. Screening tests have been shown to be very effective in increasing survival rates, since they allow an early detection of polyps. Among the different screening techniques, colonoscopy is considered the gold standard, although clinical studies mention several problems that have an impact on the quality of the procedure. Navigation through the rectum and colon track can be challenging for physicians, which can increase polyp miss rates. Thorough visualization of the colon track must be ensured so that the chances of missing lesions are minimized. The visual analysis of colonoscopy images can provide important information to the physicians and support their navigation during the procedure. Blood vessels and their branching patterns can provide the descriptive power needed to develop biometric markers. Anatomical markers based on blood vessel patterns could be used to identify a particular scene in colonoscopy videos and to support endoscope navigation by generating a sequence of ordered scenes through the different colon sections. By verifying the presence of vascular content in the endoluminal scene it is also possible to certify a proper inspection of the colon mucosa and to improve polyp localization. Considering the potential uses of blood vessel description, this contribution studies the characterization of the vascular content and the analysis of the descriptive power of its branching patterns. Blood vessel characterization in colonoscopy images is shown to be a challenging task. The endoluminal scene is composed of several elements whose similar characteristics hinder the development of particular models for each of them. To overcome such difficulties we propose the use of blood vessel branching characteristics as key features for pattern description. We present a model to characterize junctions in binary patterns. The implementation of the junction model allows us to develop a junction localization method. We created two data sets including manually labeled vessel information as well as manual ground truths of two types of keypoint landmarks: junctions and endpoints. The proposed method outperforms the available algorithms in the literature in experiments on both our newly created colon vessel data set and the DRIVE retinal fundus image data set. In the latter case, we created a manual ground truth of junction coordinates.
Since we want to explore the descriptive potential of junctions and vessels, we propose a graph-based approach to create anatomical markers. In the context of polyp localization, we present a new method to inhibit the influence of blood vessels in the extraction of valley-profile information. The results show that our methodology decreases vessel influence, increases polyp information, and leads to an improvement in state-of-the-art polyp localization performance. We also propose a polyp-specific segmentation method that outperforms other general and specific approaches. |
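A naive baseline for the junction and endpoint localization discussed in this abstract — not the junction model proposed in the thesis — counts 8-connected neighbours on a one-pixel-wide binary skeleton: one foreground neighbour marks an endpoint, three or more mark a junction.

```python
import numpy as np

def keypoints(skel):
    """Locate junctions and endpoints in a binary, one-pixel-wide vessel
    skeleton by counting the 8-connected foreground neighbours of every
    foreground pixel: 1 neighbour -> endpoint, >= 3 -> junction."""
    skel = skel.astype(bool)
    h, w = skel.shape
    padded = np.pad(skel, 1)
    nbrs = np.zeros((h, w), dtype=int)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy or dx:
                nbrs += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    junctions = list(zip(*np.where(skel & (nbrs >= 3))))
    endpoints = list(zip(*np.where(skel & (nbrs == 1))))
    return junctions, endpoints
```

Note that diagonal adjacency makes this baseline over-fire near a branch (several pixels around a T-crossing reach three neighbours), which is one reason a dedicated junction model outperforms simple neighbour counting.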
Jiaolong Xu. (2015). Domain Adaptation of Deformable Part-based Models (Antonio Lopez, Ed.). Ph.D. thesis.
Abstract: On-board pedestrian detection is crucial for Advanced Driver Assistance Systems
(ADAS). Accurate classification is fundamental for vision-based pedestrian detection. The underlying assumption when learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution with regard to the features used by the classifiers. However, in practice, different factors can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this thesis we focus on the domain adaptation of deformable part-based models (DPMs) for pedestrian detection. As a proof of concept, we use a computer-graphics-based synthetic dataset, i.e. a virtual world, as the source domain, and adapt the virtual-world trained DPM detector to various real-world datasets. We start by exploiting the maximum detection accuracy of the virtual-world trained DPM. Even so, when operating on various real-world datasets, the virtual-world trained detector still suffers from accuracy degradation due to the domain gap between the virtual and real worlds. We then focus on domain adaptation of DPMs. In the first step, we consider single-source, single-target domain adaptation and propose two batch learning methods, namely A-SSVM and SA-SSVM. Later, we further consider leveraging multiple target (sub-)domains for progressive domain adaptation and propose a hierarchical adaptive structured SVM (HA-SSVM) for optimization. Finally, we extend HA-SSVM to the challenging online domain adaptation problem, aiming at making the detector adapt to the target domain online automatically, without any human intervention. None of the methods proposed in this thesis require revisiting source-domain data. The evaluations are done on the Caltech pedestrian detection benchmark.
Results show that SA-SSVM slightly outperforms A-SSVM and avoids accuracy drops as high as 15 points compared with a non-adapted detector. The hierarchical model learned by HA-SSVM further boosts the domain adaptation performance. Finally, the online domain adaptation method has demonstrated that it can achieve accuracy comparable to the batch-learned models while not requiring manually labeled target-domain examples. Domain adaptation for pedestrian detection is of paramount importance and a relatively unexplored area. We humbly hope the work in this thesis can provide foundations for future work in this area. |
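The core idea behind A-SSVM-style adaptation — regularising the target-domain weights toward the source weights rather than toward zero, so that source knowledge is retained without revisiting source data — can be sketched for a plain linear SVM. The structured, part-based case in the thesis is far richer; all names and hyperparameters here are illustrative.

```python
import numpy as np

def adapt_svm(w_src, X, y, lam=1.0, lr=0.01, epochs=200):
    """Adaptive-SVM-style sketch: fit target weights w by hinge loss on
    (few) target samples (labels in {-1, +1}) while pulling w toward the
    source weights w_src instead of toward zero. Plain subgradient
    descent; a toy stand-in for the structured A-SSVM/SA-SSVM."""
    w = w_src.copy()
    for _ in range(epochs):
        margins = y * (X @ w)
        viol = margins < 1                    # hinge-violating samples
        grad = lam * (w - w_src) - (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad
    return w
```

Because the regulariser is `||w - w_src||^2`, setting `lam` high keeps the detector close to its virtual-world behaviour, while a low `lam` lets the few target samples dominate; crucially, the source data itself never reappears in the objective.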
Jean-Marc Ogier, Wenyin Liu, & Josep Llados (Eds.). (2010). Graphics Recognition: Achievements, Challenges, and Evolution (Vol. 6020). LNCS. Springer Link. |
Javier Vazquez. (2011). Colour Constancy in Natural Images Through Colour Naming and Sensor Sharpening (Maria Vanrell, & Graham D. Finlayson, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Colour is derived from three physical properties: incident light, object reflectance, and sensor sensitivities. Incident light varies under natural conditions; hence, recovering the scene illuminant is an important issue in computational colour. One way to deal with this problem under calibrated conditions is to follow three steps: 1) building a narrow-band sensor basis to fulfil the diagonal model, 2) building a feasible set of illuminants, and 3) defining criteria to select the best illuminant. In this work we focus on colour constancy for natural images by introducing perceptual criteria in the first and third stages.
To deal with the illuminant selection step, we hypothesise that basic colour categories can be used as anchor categories to recover the best illuminant. These colour names are related to the way the human visual system has evolved to encode relevant natural colour statistics. Therefore the recovered image provides the best representation of the scene labelled with the basic colour terms. We demonstrate with several experiments how this selection criterion achieves current state-of-the-art results in computational colour constancy. In addition to this result, we psychophysically prove that the angular error usually used in colour constancy does not correlate with human preferences, and we propose a new perceptual colour constancy evaluation. The implementation of this selection criterion relies strongly on the use of a diagonal model for illuminant change. Consequently, the second contribution focuses on building an appropriate narrow-band sensor basis to represent natural images. We propose to use the spectral sharpening technique to compute a unique narrow-band basis optimised to represent a large set of natural reflectances under natural illuminants, given in the basis of human cones. The proposed sensors allow predicting unique hues and the World Color Survey data independently of the illuminant by using a compact singularity function. Additionally, we studied different families of sharp sensors to minimise different perceptual measures. This study led us to extend the spherical sampling procedure from 3D to 6D. Several research lines remain open. One natural extension would be to measure the effects of using the computed sharp sensors on the category hypothesis, while another might be to insert spatial contextual information to improve the category hypothesis. Finally, much work remains to be done to explore how individual sensors can be adjusted to the colours in a scene. |
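The diagonal model that the sharpened sensor basis is meant to support is essentially von Kries scaling in a transformed basis; a minimal sketch follows, where the sharpening matrix `T` (which spectral sharpening would actually compute) is assumed given, and the identity default reduces the correction to plain per-channel scaling.

```python
import numpy as np

def diagonal_correct(rgb, illum_rgb, T=np.eye(3)):
    """Von Kries-style diagonal illuminant correction, optionally applied
    in a 'sharpened' sensor basis T: transform into the basis, divide by
    the illuminant response per channel, and transform back."""
    s = T @ rgb            # sensor response in the sharpened basis
    e = T @ illum_rgb      # illuminant response in the same basis
    return np.linalg.inv(T) @ (s / e)
```

The point of sharpening is that real illuminant changes are closer to diagonal in a well-chosen `T` than in the raw cone basis, so this simple division discounts the illuminant more accurately there.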
Javier Varona. (2001). Seguimiento visual robusto en entornos complejos [Robust visual tracking in complex environments]. |
Javier Marin. (2013). Pedestrian Detection Based on Local Experts (Antonio Lopez, & Jaume Amores, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: During the last decade, vision-based human detection systems have started to play a key role in multiple applications linked to driver assistance, surveillance, robot sensing, and home automation.
Detecting humans is by far one of the most challenging tasks in Computer Vision. This is mainly due to the high degree of variability in human appearance associated with clothing, pose, shape, and size. Besides, other factors such as cluttered scenarios, partial occlusions, or environmental conditions can make the detection task even harder. The most promising methods in the state-of-the-art rely on discriminative learning paradigms which are fed with positive and negative examples. The training data is one of the most relevant elements for building a robust detector, as it has to cope with the large variability of the target. Creating such a dataset requires human supervision, and the drawback at this point is the arduous effort of annotating, as well as of collecting the claimed variability. In this PhD thesis we address two recurrent problems in the literature. In the first stage, we aim to reduce the time-consuming annotation task by using computer graphics. More concretely, we develop a virtual urban scenario for generating a pedestrian dataset. Then, we train a detector using this dataset, and finally we assess whether this detector can be successfully applied in a real scenario. In the second stage, we focus on increasing the robustness of our pedestrian detectors under partial occlusions. In particular, we present a novel occlusion handling approach to increase the performance of block-based holistic methods under partial occlusions. For this purpose, we make use of local experts via a Random Subspace Method (RSM) to handle these cases. If the method infers a possible partial occlusion, then the RSM, based on performance statistics obtained from partially occluded data, is applied. The last objective of this thesis is to propose a robust pedestrian detector based on an ensemble of local experts. To achieve this goal, we use the random forest paradigm, where the trees act as an ensemble and their nodes are the local experts.
In particular, each expert focuses on performing a robust classification of a pedestrian body patch. This approach offers computational efficiency and far less design complexity when compared to other state-of-the-art methods, while reaching better accuracy. |
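The ensemble-of-local-experts idea — independent classifiers per body patch with a majority vote, so that an occluded patch can be outvoted by the visible ones — can be sketched with nearest-centroid experts standing in for the random-forest node experts of the thesis. All names and the contiguous-block patch split are illustrative assumptions.

```python
import numpy as np

def train_patch_experts(X, y, n_patches=4):
    """Split the feature vector into contiguous 'body patch' blocks and
    give each patch its own nearest-class-centroid expert (toy stand-in
    for the per-patch random-forest experts)."""
    blocks = np.array_split(np.arange(X.shape[1]), n_patches)
    experts = []
    for idx in blocks:
        mu_pos = X[y == 1][:, idx].mean(axis=0)
        mu_neg = X[y == 0][:, idx].mean(axis=0)
        experts.append((idx, mu_pos, mu_neg))
    return experts

def predict(experts, x):
    """Majority vote over patch experts: an occluded (corrupted) patch
    casts one bad vote but cannot override the visible patches."""
    votes = sum(
        1 if np.linalg.norm(x[idx] - mp) < np.linalg.norm(x[idx] - mn) else 0
        for idx, mp, mn in experts
    )
    return int(votes > len(experts) / 2)
```

Even with the first patch zeroed out (simulating an occluder), the remaining experts outvote it, which is the behaviour the occlusion-handling contribution relies on.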