Records | |||||
---|---|---|---|---|---|
Author | Ivan Huerta | ||||
Title | Foreground Object Segmentation and Shadow Detection for Video Sequences in Uncontrolled Environments | Type | Book Whole | ||
Year | 2010 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This thesis is divided into two main parts. The first presents a study of motion segmentation problems and, based on that study, a novel algorithm for segmenting mobile objects from a static background scene. The approach is shown to be robust and accurate under most of the common problems in motion segmentation. The second part tackles the problem of shadows in depth: first, a bottom-up approach based on a chromatic shadow detector deals with umbra shadows; second, a top-down approach based on a tracking system enhances the chromatic shadow detection. Our first contribution is a case analysis of motion segmentation problems that takes into account the problems associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which handles the main problems observed in this case analysis by fusing (i) the knowledge from these three cues and (ii) a temporal difference algorithm. On the one hand, we enhance the colour and edge models to solve both global/local illumination changes (shadows and highlights) and camouflage in intensity; in addition, local information is exploited to cope with the very challenging problem of camouflage in chroma. On the other hand, the intensity cue is applied when the colour and edge cues are not available, such as beyond the dynamic range. Temporal difference is additionally included to segment motion when none of the three cues is available, for instance when the background was not visible during the training period. Lastly, the approach is extended to allow ghost detection. As a result, our approach obtains very accurate and robust motion segmentation in both indoor and outdoor scenarios, as demonstrated quantitatively and qualitatively in the experimental results by comparing it with the best-known state-of-the-art approaches. Motion segmentation also has to deal with shadows to avoid distortions when detecting moving objects. 
Most segmentation approaches that deal with shadow detection are restricted to penumbra shadows; they cannot cope well with umbra shadows, which are consequently detected as part of the moving objects. Firstly, a bottom-up approach for the detection and removal of chromatic moving shadows in surveillance scenarios is proposed. Secondly, a top-down approach based on Kalman filters to detect and track shadows has been developed to enhance the chromatic shadow detection. In the bottom-up part, the shadow detection applies a novel technique based on gradient and colour models for separating chromatic moving shadows from moving objects. Well-known colour and gradient models are extended and improved into an invariant colour cone model and an invariant gradient model, respectively, to perform automatic segmentation while detecting potential shadows. The regions corresponding to potential shadows are then grouped by considering a "bluish effect" and an edge partitioning. Lastly, (i) temporal similarities between local gradient structures and (ii) spatial similarities between chrominance angle and brightness distortions are analysed for all potential shadow regions in order to identify umbra shadows. In the top-down process, detected objects and shadows are both tracked using Kalman filters, so as to enhance the chromatic shadow detection when it fails to detect a shadow. This implies, firstly, a data association between the blobs (foreground and shadow) and the Kalman filters. Secondly, an event analysis of the different data-association cases is performed, and occlusion handling is managed by a Probabilistic Appearance Model (PAM). Based on this association, temporal consistency is sought between foregrounds and shadows and their respective Kalman filters; several cases are studied and, as a result, lost chromatic shadows are correctly detected. 
Finally, the tracking results are used as feedback to improve the shadow and object detection. Unlike other approaches, our method makes no a priori assumptions about camera location, surface geometries, surface textures, or the shapes and types of shadows, objects, and background. Experimental results show the performance and accuracy of our approach for different shadowed materials and illumination conditions. |
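The pixel-level cue behind such a chromatic shadow detector can be illustrated with a minimal sketch. It assumes a per-pixel RGB background model and tests the classic pair of distortions the abstract mentions: a brightness drop combined with a near-unchanged chrominance angle. The function name and thresholds are invented for illustration, not the thesis' calibrated values.

```python
import numpy as np

def shadow_candidates(frame, background, alpha_low=0.4, alpha_high=0.95,
                      angle_thresh=0.1):
    """Flag pixels whose brightness drops while chrominance stays close to
    the background's, the classic cue for chromatic cast shadows.
    Thresholds are illustrative only."""
    f = frame.astype(np.float64) + 1e-6
    b = background.astype(np.float64) + 1e-6
    # Brightness distortion: projection of the frame colour onto the
    # background colour vector.
    dot = (f * b).sum(axis=2)
    alpha = dot / (b * b).sum(axis=2)
    # Chrominance-angle distortion between the two RGB vectors.
    cos_ang = dot / (np.linalg.norm(f, axis=2) * np.linalg.norm(b, axis=2))
    angle = np.arccos(np.clip(cos_ang, -1.0, 1.0))
    darker = (alpha > alpha_low) & (alpha < alpha_high)
    same_chroma = angle < angle_thresh
    return darker & same_chroma
```

A uniformly dimmed pixel passes both tests and is flagged, while a genuine colour change fails the chrominance-angle test.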
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Jordi Gonzalez;Xavier Roca | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-937261-3-3 | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | ISE @ ise @ Hue2010 | Serial | 1332 | ||
Permanent link to this record | |||||
Author | Carles Fernandez | ||||
Title | Understanding Image Sequences: the Role of Ontologies in Cognitive Vision | Type | Book Whole | ||
Year | 2010 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | The increasing ubiquity of digital information in our daily lives has positioned video as a favored information vehicle and given rise to an astonishing volume of social media and surveillance footage. This raises a series of technological demands for automatic video understanding and management which, together with the attentional limitations of human operators, have motivated the research community to pursue such capabilities. As a result, current trends in cognitive vision promise to recognize complex events and self-adapt to different environments while managing and integrating several types of knowledge. Future directions suggest reinforcing the multi-modal fusion of information sources and the communication with end-users. In this thesis we tackle the problem of recognizing and describing meaningful events in video sequences from different domains, and of communicating the resulting knowledge to end-users by means of advanced interfaces for human–computer interaction. This problem is addressed by designing the high-level modules of a cognitive vision framework that exploits ontological knowledge. Ontologies allow us to define the relevant concepts in a domain and the relationships among them; we show that the use of ontologies to organize, centralize, link, and reuse different types of knowledge is a key factor in the achievement of our objectives. 
The proposed framework contributes to: (i) automatically learning the characteristics of different scenarios in a domain; (ii) reasoning about uncertain, incomplete, or vague information from visual (camera) or linguistic (end-user) inputs; (iii) deriving plausible interpretations of complex events from basic spatiotemporal developments; (iv) facilitating natural interfaces that adapt to the needs of end-users and allow them to communicate efficiently with the system at different levels of interaction; and finally, (v) finding mechanisms to guide modeling processes, to maintain and extend the resulting models, and to exploit multimodal resources synergistically to enhance the former tasks. We describe a holistic methodology to achieve these goals. First, the use of prior taxonomical knowledge proves useful to guide MAP-MRF inference processes in the automatic identification of semantic regions, independently of the particular scenario. Towards the recognition of complex video events, we combine fuzzy metric-temporal reasoning with Situation Graph Trees (SGTs), thus deriving high-level interpretations from spatiotemporal data. Here, ontological resources like T-Boxes, onomasticons, or factual databases become useful to derive video indexing and retrieval capabilities, and also to forward highlighted content to smart user interfaces. There, we explore the application of ontologies to discourse analysis and cognitive linguistic principles, and scene augmentation techniques, towards advanced communication by means of natural language dialogs and synthetic visualizations. Ontologies become fundamental to coordinate, adapt, and reuse the different modules in the system. The suitability of our ontological framework is demonstrated by a series of applications that especially benefit the field of smart video surveillance, viz. 
the automatic generation of linguistic reports about the content of video sequences in multiple natural languages; content-based filtering and summarization of these reports; dialogue-based interfaces to query and browse video contents; automatic learning of semantic regions in a scenario; and tools to evaluate the performance of components and models in the system via simulation and augmented reality. |
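The organizing role the abstract assigns to ontologies can be pictured with a toy T-Box restricted to is-a links; a subsumption query then simply walks up the taxonomy. The concept names below are invented for illustration; the actual ontology is far richer (roles, onomasticons, factual A-Boxes).

```python
# Toy T-Box: each concept maps to its direct parent (is-a relation).
# Concept names are hypothetical, not the thesis' actual vocabulary.
TBOX = {
    "Chasing": "AgentInteraction",
    "Meeting": "AgentInteraction",
    "AgentInteraction": "Event",
    "Walking": "Motion",
    "Motion": "Event",
    "Event": "Thing",
}

def subsumes(general, specific, tbox=TBOX):
    """True if `specific` is-a `general`, by walking up the taxonomy."""
    node = specific
    while node is not None:
        if node == general:
            return True
        node = tbox.get(node)
    return False
```

Such a subsumption test is what lets a report generator or a query interface treat a recognized "Chasing" event as an answer to a question about generic "Event"s.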
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Jordi Gonzalez;Xavier Roca | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-937261-2-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ Fer2010a | Serial | 1333 | ||
Permanent link to this record | |||||
Author | Joan Mas | ||||
Title | A Syntactic Pattern Recognition Approach based on a Distribution Tolerant Adjacency Grammar and a Spatial Indexed Parser. Application to Sketched Document Recognition | Type | Book Whole | ||
Year | 2010 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Sketch recognition is a discipline which has gained increasing interest in the last 20 years, due to the appearance of new devices such as PDAs, Tablet PCs and digital pen & paper protocols. From the wide range of sketched documents we focus on those representing structured documents, such as architectural floor plans, engineering drawings and UML diagrams. To recognize and understand these kinds of documents, we first have to recognize the different compounding symbols and then identify the relations between them. Depending on how a sketch is captured, there are two categories: on-line, where the user draws directly on a PDA or Tablet PC, and off-line, where a previously drawn sketch is scanned. This thesis lies at the intersection of three areas of computer science: pattern recognition, document analysis and human-computer interaction. Its aim is to interpret sketched documents independently of whether they are captured on-line or off-line. The proposed approach should therefore have the following features. First, since we are working with sketches, the elements present in the input contain distortions. Second, since we work with both on-line and off-line input modes, the order of the primitives in the input is indifferent. Finally, for the method to be applicable in real scenarios, its response time must be low. To interpret a sketched document we propose a syntactic approach, composed of two correlated components: a grammar and a parser. The grammar describes the different elements of the document as well as their relations. The parser, given a document, checks whether it belongs to the language generated by the grammar. Thus, the grammar should be able to cope with the distortions appearing in instances of the elements, and it should define a symbol independently of the order of its primitives. 
When analyzing 2D sentences, the parser does not assume an order in the primitives: at each new primitive in the input, it searches among the previously analyzed symbols for candidates to produce a valid reduction. Taking these features into account, we have proposed a grammar based on Adjacency Grammars. This kind of grammar defines its productions as a multiset of symbols rather than a list, which allows describing a symbol without imposing an order on its components. To cope with distortion we have proposed a distortion model: an attribute estimated over the constraints of the grammar and propagated through the productions. This measure gives an idea of how far a symbol is from its ideal model. In addition to distortion on the constraints, other distortions appear when working with sketches: overtracing, overlapping, gaps and spurious strokes. Specific grammatical productions have been defined to cope with these errors. Concerning recognition, we have proposed an incremental parser with an indexing mechanism. Incremental parsers analyze the input symbol by symbol, giving a response to the user as each primitive is analyzed; this makes them suitable for on-line as well as off-line input modes. The parser has been extended with an indexing mechanism based on a spatial division, which places the primitives in space and reduces the search to a neighbourhood. A third contribution is a grammatical inference algorithm which, given a set of symbols, captures the production describing them. In the field of formal languages different approaches have been proposed, but little work has been done in the graphical domain. The proposed method is able to capture the production from a set of symbols even though they are drawn in different orders. 
A matching step based on the Hausdorff distance and the Hungarian method is used to match the primitives of the different symbols. In addition, the proposed approach is able to capture the variability in the parameters of the constraints. From the experimental results we may conclude that we have proposed a robust approach to describe and recognize sketches. Moreover, adding new symbols to the alphabet is not restricted to an expert. Finally, the proposed approach has been used in two real scenarios, obtaining good performance. |
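The order-free matching enabled by multiset productions can be sketched as follows. The production, the single `adjacent` constraint, and all names are illustrative stand-ins for the grammar's much richer constraint vocabulary.

```python
from collections import Counter
from itertools import permutations

def adjacent(a, b, gap=15.0):
    """Toy spatial predicate: endpoints of two segments nearly touch.
    The gap tolerance is illustrative."""
    return any(abs(p[0] - q[0]) + abs(p[1] - q[1]) <= gap
               for p in a for q in b)

# A production rewrites a multiset of primitive types into a symbol,
# subject to a spatial constraint (hypothetical example production).
PRODUCTIONS = {
    "corner": (Counter({"segment": 2}), adjacent),
}

def reduce_symbol(symbol, primitives):
    """Order-free matching: succeed if the primitives' types form the
    required multiset and some pairing satisfies the constraint."""
    types, constraint = PRODUCTIONS[symbol]
    if Counter(t for t, _ in primitives) != types:
        return False
    geoms = [g for _, g in primitives]
    return any(constraint(a, b) for a, b in permutations(geoms, 2))
```

Because the production is a multiset plus a symmetric constraint, two touching segments reduce to "corner" regardless of the order in which they were drawn, which is exactly what makes the same grammar usable on-line and off-line.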
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Gemma Sanchez;Josep Llados | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-937261-4-0 | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ Mas2010 | Serial | 1334 | ||
Permanent link to this record | |||||
Author | Francisco Javier Orozco | ||||
Title | Human Emotion Evaluation on Facial Image Sequences | Type | Book Whole | ||
Year | 2010 | Publication | PhD Thesis, Universitat Autonoma de Barcelona-CVC | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Psychological evidence has emphasized the importance of understanding affective behaviour, given its high impact on today's interaction between humans and computers. All types of affective and behavioural patterns, such as gestures, emotions and mental states, are largely displayed through the face, head and body. This thesis therefore focuses on analysing affective behaviours of the head and face. To this end, head and facial movements are encoded using appearance-based tracking methods. Specifically, a judicious combination of deformable models captures rigid and non-rigid movements of different kinematics; 3D head pose, eyebrows, mouth, eyelids and irises are taken as the basis for extracting features from databases of video sequences. This approach combines the strengths of adaptive appearance models, optimization methods and backtracking techniques. For about thirty years, computer science has addressed the investigation of human emotions through the automatic recognition of the six prototypic emotions suggested by Darwin and systematized by Paul Ekman in the seventies. The Facial Action Coding System (FACS) uses discrete movements of the face (called Action Units, or AUs) to code the six facial emotions: anger, disgust, fear, happiness/joy, sadness and surprise. However, human emotions are much more complex patterns and have not received the same attention from computer scientists. Simon Baron-Cohen proposed a new taxonomy of emotions and mental states without a coding system for the facial actions; these 426 affective behaviours are more challenging for the understanding of human emotions. Beyond classifying the six basic facial expressions, more subtle gestures, facial actions and spontaneous emotions are considered here. By assessing confidence in the recognition results and exploring spatial and temporal relationships of the features, several methods are combined and enhanced to develop a new taxonomy of expressions and emotions. 
The objective of this dissertation is to develop a computer vision system comprising facial feature extraction, expression recognition and emotion understanding, built as a bottom-up reasoning process. Building a detailed taxonomy of human affective behaviours is an interesting challenge for head- and face-based image analysis methods. We exploit the strengths of Canonical Correlation Analysis (CCA) to enhance an on-line head and face tracker, and study the relationship between head pose and local facial movements according to their cognitive interpretation in affective expressions and emotions. Active Shape Models are synthesized for AAMs based on CCA regression. Head pose and facial actions are fused into a maximally correlated space in order to assess expressiveness, confidence and classification in a CBR system. The CBR solutions are also correlated with the cognitive features, which avoids exhaustive search when recognizing new head and face features. Subsequently, Support Vector Machines (SVMs) and Bayesian Networks are applied to learn the spatial relationships of facial expressions. Similarly, the temporal evolution of facial expressions, emotions and mental states is analysed using Factorized Dynamic Bayesian Networks (FaDBN). As a result, the bottom-up system recognizes six facial expressions, six basic emotions and six mental states, enhancing this categorization with confidence assessment at each level, intensity of expressions and a complete taxonomy. |
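The head-pose/facial-action coupling via CCA can be sketched with a plain whitened-SVD formulation of canonical correlation between two feature blocks. This is generic textbook CCA under the assumption of full-rank, centered data, not the thesis' exact regression setup.

```python
import numpy as np

def cca_first_pair(X, Y):
    """First pair of canonical directions between feature blocks X and Y
    (rows = observations). Whiten each block via its SVD, then take the
    SVD of the cross-product of the whitened coordinates; its singular
    values are the canonical correlations."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Ux, Sx, Vtx = np.linalg.svd(Xc, full_matrices=False)
    Uy, Sy, Vty = np.linalg.svd(Yc, full_matrices=False)
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    a = Vtx.T @ np.diag(1.0 / Sx) @ U[:, 0]   # direction in X-space
    b = Vty.T @ np.diag(1.0 / Sy) @ Vt[0, :]  # direction in Y-space
    return a, b, S[0]                          # S[0]: 1st canonical corr.
```

With head-pose parameters as one block and local facial-action parameters as the other, a first canonical correlation near 1 indicates a strong shared component that a tracker can exploit.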
||||
Address | |||||
Corporate Author | Thesis | Ph.D. thesis | |||
Publisher | Ediciones Graficas Rey | Place of Publication | Editor | Jordi Gonzalez;Xavier Roca | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-936529-3-7 | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ Oro2010 | Serial | 1335 | ||
Permanent link to this record | |||||
Author | Joan Mas; Josep Llados; Gemma Sanchez; J.A. Jorge | ||||
Title | A syntactic approach based on distortion-tolerant Adjacency Grammars and a spatial-directed parser to interpret sketched diagrams | Type | Journal Article | ||
Year | 2010 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 43 | Issue | 12 | Pages | 4148–4164 |
Keywords | Syntactic Pattern Recognition; Symbol recognition; Diagram understanding; Sketched diagrams; Adjacency Grammars; Incremental parsing; Spatial directed parsing | ||||
Abstract | This paper presents a syntactic approach based on Adjacency Grammars (AG) for sketched diagram modeling and understanding. Diagrams are a combination of graphical symbols arranged according to a set of spatial rules defined by a visual language. AGs describe visual shapes by productions defined in terms of terminal and non-terminal symbols (graphical primitives and subshapes), and a set of functions describing the spatial arrangements between symbols. Our approach to sketched diagram understanding provides three main contributions. First, since AGs are linear grammars, shapes and relations that are inherently two-dimensional must be defined using a sequential formalism. Second, our parsing approach uses an indexing structure based on a spatial tessellation, which reduces the search space when finding candidates to produce a valid reduction; this allows order-free parsing of 2D visual sentences while keeping combinatorial explosion in check. Third, working with sketches requires a distortion model to cope with the natural variations of hand-drawn strokes. To this end, we extend the basic grammar with a distortion measure modeled on the allowable variation of the spatial constraints associated with grammar productions. Finally, the paper reports on an experimental framework, an interactive system for sketch analysis. User tests performed on two real scenarios show that our approach is usable in interactive settings. | ||||
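The spatial tessellation index described in the abstract can be sketched as a uniform grid of buckets: each parsed primitive is hashed into a cell, and reduction candidates are fetched only from the surrounding cells instead of scanning every previously parsed primitive. The cell size and class name are illustrative.

```python
from collections import defaultdict

class SpatialIndex:
    """Uniform tessellation of the drawing space (illustrative sketch).
    insert() hashes an item into a grid cell; neighbours() returns only
    the items in the 3x3 block of cells around a query point."""

    def __init__(self, cell=50.0):
        self.cell = cell
        self.grid = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, item):
        self.grid[self._key(x, y)].append(item)

    def neighbours(self, x, y):
        cx, cy = self._key(x, y)
        out = []
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                out.extend(self.grid[(cx + dx, cy + dy)])
        return out
```

Restricting candidate search to a neighbourhood is what keeps order-free parsing of 2D sentences from exploding combinatorially as the sketch grows.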
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ MLS2010 | Serial | 1336 | ||
Permanent link to this record | |||||
Author | Umapada Pal; Partha Pratim Roy; N. Tripathya; Josep Llados | ||||
Title | Multi-oriented Bangla and Devnagari text recognition | Type | Journal Article | ||
Year | 2010 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 43 | Issue | 12 | Pages | 4124–4136 |
Keywords | |||||
Abstract | There are printed complex documents where the text lines of a single page may have different orientations, or the text lines may be curved. As a result, it is difficult to detect the skew of such documents, and hence character segmentation and recognition become complex tasks. In this paper, using background and foreground information, we propose a novel scheme for the recognition of complex Indian documents in Bangla and Devnagari script. In Bangla and Devnagari documents the characters in a word usually touch, forming cavity regions. To take care of these cavity regions, the background information of such documents is used; the convex hull and the water reservoir principle are applied for this purpose. First, the characters are segmented from the documents using the background information of the text. Next, individual characters are recognized using rotation-invariant features obtained from the foreground part of the characters.
For character segmentation, the writing mode of a touching component (word) is first detected using features based on the water reservoir principle. Next, depending on the writing mode and the reservoir base-region of the touching component, a set of candidate envelope points is selected from the contour points of the component. Based on these candidate points, the touching component is finally segmented into individual characters. For the recognition of multi-sized/multi-oriented characters, features are computed from angular information obtained from the external and internal contour pixels of the characters. This angular information is computed in such a way that it does not depend on the size and rotation of the characters. Circular and convex hull rings are used to divide a character into smaller zones, yielding zone-wise features for higher recognition rates. We combine circular and convex hull features to improve the results, and these features are fed to support vector machines (SVM) for recognition. In our experiments we obtained recognition accuracies of 99.18% (98.86%) when testing on 7515 (7874) Devnagari (Bangla) characters. |
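A rotation- and scale-invariant angular feature in the spirit described above can be sketched with a histogram of turning angles along a closed contour: turning angles are unchanged by rotation, and normalizing the histogram removes scale. This is a generic stand-in, not the paper's exact zone-wise descriptor.

```python
import numpy as np

def turning_angle_hist(contour, bins=6):
    """Histogram of turning angles between consecutive segments of a
    closed contour. Rotation leaves turning angles unchanged, and the
    normalized histogram is independent of contour size."""
    pts = np.asarray(contour, dtype=float)
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)     # close the contour
    ang = np.arctan2(d[:, 1], d[:, 0])                 # edge directions
    turn = np.diff(np.concatenate([ang, ang[:1]]))     # direction changes
    turn = (turn + np.pi) % (2 * np.pi) - np.pi        # wrap to (-pi, pi]
    hist, _ = np.histogram(turn, bins=bins, range=(-np.pi, np.pi))
    return hist / hist.sum()
```

A square and the same square rotated by an arbitrary angle produce identical descriptors, which is the property the multi-oriented recognizer needs.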
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ PRT2010 | Serial | 1337 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Oriol Pujol; Petia Radeva | ||||
Title | Re-coding ECOCs without retraining | Type | Journal Article | ||
Year | 2010 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 31 | Issue | 7 | Pages | 555–562 |
Keywords | |||||
Abstract | A standard way to deal with multi-class categorization problems is the combination of binary classifiers in a pairwise voting procedure. Recently, this classical approach has been formalized in the Error-Correcting Output Codes (ECOC) framework. Within this framework, the one-versus-one coding has been shown to achieve higher performance than the other coding designs. The binary problems trained in the one-versus-one strategy are significantly smaller than in the other designs and usually easier to learn, given the smaller overlap between classes. However, the high percentage of positions of the coding matrix coded by zero, which implies a high degree of sparseness, codifies no meta-class membership information. In this paper, we show that, using the training data, we can redefine the one-versus-one coding matrix in a problem-dependent way and without retraining, so that the newly coded information helps the system to increase its generalization capability. Moreover, the new re-coding strategy is generalized to apply to any binary code. Results on several UCI Machine Learning repository data sets and two real multi-class problems show that performance improvements can be obtained by re-coding the classical one-versus-one and sparse random designs, compared to different state-of-the-art ECOC configurations. | ||||
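The re-coding idea can be sketched on the one-versus-one matrix: the trained +1/-1 bits stay fixed, and each zero is filled with the sign the corresponding, already trained, pair classifier tends to output on that class's training samples. Here those measured responses are passed in as a stand-in `vote` table rather than computed from real classifiers.

```python
import numpy as np

def one_vs_one_matrix(n_classes):
    """Classic one-versus-one ECOC coding matrix: one column per class
    pair, +1/-1 for the pair's two classes, 0 everywhere else."""
    pairs = [(i, j) for i in range(n_classes)
             for j in range(i + 1, n_classes)]
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for col, (i, j) in enumerate(pairs):
        M[i, col] = 1
        M[j, col] = -1
    return M, pairs

def recode(M, pairs, vote):
    """Problem-dependent re-coding without retraining: replace each zero
    with vote[c][pair] in {+1, -1}, the dominant response of the trained
    pair classifier on class c's training data (supplied by the caller)."""
    R = M.copy()
    for c in range(M.shape[0]):
        for col, pair in enumerate(pairs):
            if R[c, col] == 0:
                R[c, col] = vote[c][pair]
    return R
```

After re-coding, every codeword position carries membership information, so the decoding step can exploit all classifiers for every class while the trained binary problems remain untouched.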
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB;HUPBA | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ EPR2010e | Serial | 1338 | ||
Permanent link to this record | |||||
Author | Marc Serra | ||||
Title | Estimating Intrinsic Images from Physical and Categorical Color Cues | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 151 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | Master's thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC | Approved | no | ||
Call Number | Admin @ si @ Ser2010 | Serial | 1345 | ||
Permanent link to this record | |||||
Author | Ahmed Mounir Gad | ||||
Title | Object Localization Enhancement by Multiple Segmentation Fusion | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 152 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | Master's thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ Mou2010 | Serial | 1346 | ||
Permanent link to this record | |||||
Author | Antonio Hernandez | ||||
Title | Pose and Face Recovery via Spatio-temporal GrabCut Human Segmentation | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 153 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | Master's thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ Her2010 | Serial | 1347 | ||
Permanent link to this record | |||||
Author | Jorge Bernal; Fernando Vilariño; F. Javier Sanchez | ||||
Title | Feature Detectors and Feature Descriptors: Where We Are Now | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 154 | Issue | Pages | ||
Keywords | |||||
Abstract | Feature Detection and Feature Description are highly active topics nowadays. Many computer vision applications rely on several of these techniques to extract the most significant aspects of an image, helping in tasks such as image retrieval, image registration, object recognition, object categorization and texture classification, among others. In this paper we define what Feature Detection and Description are, and then present an extensive collection of methods illustrating the different techniques currently in use. The aim of this report is to provide a glimpse of what is being used in these fields and to serve as a starting point for future endeavours. | ||||
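As a concrete taste of the detectors such a survey covers, here is a pure-NumPy sketch of the Harris corner response, using finite-difference gradients and a box filter as a cheap stand-in for the usual Gaussian windowing of the structure tensor.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner response map: R = det(S) - k * trace(S)^2, where S
    is the locally summed structure tensor of the image gradients.
    Corners give R > 0, edges R < 0, flat regions R ~ 0."""
    I = img.astype(float)
    Iy, Ix = np.gradient(I)               # gradients along rows, columns
    Sxx, Syy, Sxy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(A, r=1):
        # 3x3 box sum via shifts: a crude substitute for a Gaussian window.
        out = np.zeros_like(A)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += np.roll(np.roll(A, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Sxx), box(Syy), box(Sxy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a white square against a black background, the response peaks at the square's corners, stays near zero on flat regions, and goes negative along the edges, which is the behaviour that makes the detector useful for registration and matching.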
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | 800 | Expedition | Conference | ||
Notes | MV;SIAI | Approved | no | ||
Call Number | Admin @ si @ BVS2010; IAM @ iam @ BVS2010 | Serial | 1348 | ||
Permanent link to this record | |||||
Author | Lluis Pere de las Heras | ||||
Title | Syntactic Model for Semantic Document Analysis | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 158 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ Per2010 | Serial | 1350 | ||
Permanent link to this record | |||||
Author | Anjan Dutta | ||||
Title | Symbol Spotting in Graphical Documents by Serialized Subgraph Matching | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 159 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | Master's thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ Dut2010 | Serial | 1351 | ||
Permanent link to this record | |||||
Author | Ekain Artola | ||||
Title | Human Attention Map Prediction Combining Visual Features | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 160 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | Bachelor's thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | Admin @ si @ Art2010 | Serial | 1352 | ||
Permanent link to this record | |||||
Author | David Fernandez | ||||
Title | Handwritten Word Spotting in Old Manuscript Images using Shape Descriptors | Type | Report | ||
Year | 2010 | Publication | CVC Technical Report | Abbreviated Journal | |
Volume | 161 | Issue | Pages | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | Master's thesis | |||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ Fer2010b | Serial | 1353 | ||
Permanent link to this record |