Author Albert Gordo; Florent Perronnin; Yunchao Gong; Svetlana Lazebnik
  Title Asymmetric Distances for Binary Embeddings Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 1 Pages 33-47  
  Keywords  
  Abstract In large-scale query-by-example retrieval, embedding image signatures in a binary space offers two benefits: data compression and search efficiency. While most embedding algorithms binarize both query and database signatures, it has been noted that this is not strictly a requirement. Indeed, asymmetric schemes which binarize the database signatures but not the query still enjoy the same two benefits but may provide superior accuracy. In this work, we propose two general asymmetric distances which are applicable to a wide variety of embedding techniques including Locality Sensitive Hashing (LSH), Locality Sensitive Binary Codes (LSBC), Spectral Hashing (SH), PCA Embedding (PCAE), PCA Embedding with random rotations (PCAE-RR), and PCA Embedding with iterative quantization (PCAE-ITQ). We experiment on four public benchmarks containing up to 1M images and show that the proposed asymmetric distances consistently lead to large improvements over the symmetric Hamming distance for all binary embedding techniques.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.045; 605.203; 600.077 Approved no  
  Call Number Admin @ si @ GPG2014 Serial 2272  
Permanent link to this record
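The asymmetric-distance idea summarized in the record above can be illustrated with a minimal sketch, assuming an LSH-style sign-of-random-projection embedding and a plain Euclidean comparison in the projected space (an illustrative stand-in, not the paper's exact distances): the database signatures are binarized, while the query stays real-valued.

```python
# Minimal sketch of symmetric vs. asymmetric distances for a sign-based
# (LSH-style) binary embedding. The projection, dimensions and the specific
# asymmetric distance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, bits, n_db = 128, 64, 1000

W = rng.standard_normal((d, bits))          # random projection (LSH)
database = rng.standard_normal((n_db, d))
query = rng.standard_normal(d)

db_codes = np.sign(database @ W)            # binarized database signatures
q_proj = query @ W                          # query kept real-valued
q_code = np.sign(q_proj)                    # binarized query (symmetric case)

# Symmetric Hamming distance: both sides binarized.
hamming = np.count_nonzero(db_codes != q_code, axis=1)

# Asymmetric distance: real-valued query projection vs. binary codes.
asym = np.linalg.norm(db_codes - q_proj, axis=1)

print("Top-5 by Hamming:   ", np.argsort(hamming)[:5])
print("Top-5 by asymmetric:", np.argsort(asym)[:5])
```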
 

 
Author Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez
  Title Domain Adaptation of Deformable Part-Based Models Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 12 Pages 2367-2380  
  Keywords Domain Adaptation; Pedestrian Detection  
  Abstract The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes ADAS; 600.057; 600.054; 601.217; 600.076 Approved no  
  Call Number ADAS @ adas @ XRV2014b Serial 2436  
Permanent link to this record
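The core A-SSVM ingredient, adapting a pre-learned classifier by regularizing toward its source-domain weights rather than revisiting source data, can be sketched for a plain (non-structural) linear SVM; the subgradient solver, constants and toy data below are illustrative assumptions.

```python
# Minimal sketch: adapt a pre-learned linear classifier to a target domain
# without source data by minimizing 0.5*||w - w_src||^2 + C * hinge loss on a
# few target-domain samples. This is not the paper's exact SA-SSVM formulation.
import numpy as np

def adapt_classifier(w_src, X_tgt, y_tgt, C=1.0, lr=0.01, epochs=200):
    """Subgradient descent on the adaptation objective."""
    w = w_src.copy()
    for _ in range(epochs):
        margins = y_tgt * (X_tgt @ w)
        active = margins < 1.0                               # violated hinge constraints
        grad = (w - w_src) - C * (y_tgt[active, None] * X_tgt[active]).sum(axis=0)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
w_src = rng.standard_normal(10)                              # classifier learned on the source domain
X_tgt = rng.standard_normal((50, 10))                        # few target-domain samples
y_tgt = np.sign(X_tgt @ rng.standard_normal(10))             # toy target labels in {-1, +1}
w_adapted = adapt_classifier(w_src, X_tgt, y_tgt)
```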
 

 
Author Santiago Segui; Michal Drozdzal; Ekaterina Zaytseva; Fernando Azpiroz; Petia Radeva; Jordi Vitria
  Title Detection of wrinkle frames in endoluminal videos using betweenness centrality measures for images Type Journal Article
  Year 2014 Publication IEEE Transactions on Information Technology in Biomedicine Abbreviated Journal TITB  
  Volume 18 Issue 6 Pages 1831-1838  
  Keywords Wireless Capsule Endoscopy; Small Bowel Motility Dysfunction; Contraction Detection; Structured Prediction; Betweenness Centrality  
  Abstract Intestinal contractions are one of the most important events to diagnose motility pathologies of the small intestine. When visualized by wireless capsule endoscopy (WCE), the sequence of frames that represents a contraction is characterized by a clear wrinkle structure in the central frames that corresponds to the folding of the intestinal wall. In this paper we present a new method to robustly detect wrinkle frames in full WCE videos by using a new mid-level image descriptor that is based on a centrality measure proposed for graphs. We present an extended validation, carried out in a very large database, that shows that the proposed method achieves state of the art performance for this task.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium
  Area Expedition Conference  
  Notes OR; MILAB; 600.046;MV Approved no  
  Call Number Admin @ si @ SDZ2014 Serial 2385  
Permanent link to this record
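A hedged sketch of the descriptor's main ingredient, betweenness centrality computed on a graph built over the image: here a coarse pixel grid with intensity-difference edge weights stands in for the paper's construction.

```python
# Minimal sketch: betweenness centrality on a coarse grid graph over the image,
# with edge weights derived from local intensity differences. The grid and the
# weighting are illustrative assumptions, not the paper's exact descriptor.
import numpy as np
import networkx as nx

def betweenness_map(img, step=8):
    """Compute node betweenness centrality on a coarse grid graph over the image."""
    h, w = img.shape
    ys, xs = np.arange(0, h, step), np.arange(0, w, step)
    G = nx.grid_2d_graph(len(ys), len(xs))
    for (i0, j0), (i1, j1) in G.edges():
        a = float(img[ys[i0], xs[j0]])
        b = float(img[ys[i1], xs[j1]])
        # Small intensity difference -> "cheap" edge, so shortest paths tend to
        # follow homogeneous elongated structures such as wrinkle folds.
        G.edges[(i0, j0), (i1, j1)]["weight"] = 1.0 + abs(a - b)
    return nx.betweenness_centrality(G, weight="weight")

frame = np.random.default_rng(0).random((128, 128))   # stand-in for a WCE frame
centrality = betweenness_map(frame)
print(max(centrality.values()))
```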
 

 
Author Lluis Pere de las Heras; Ahmed Sheraz; Marcus Liwicki; Ernest Valveny; Gemma Sanchez
  Title Statistical Segmentation and Structural Recognition for Floor Plan Interpretation Type Journal Article
  Year 2014 Publication International Journal on Document Analysis and Recognition Abbreviated Journal IJDAR  
  Volume 17 Issue 3 Pages 221-237  
  Keywords  
  Abstract A generic method for floor plan analysis and interpretation is presented in this article. The method, which is mainly inspired by the way engineers draw and interpret floor plans, applies two recognition steps in a bottom-up manner. First, basic building blocks, i.e., walls, doors, and windows, are detected using a statistical patch-based segmentation approach. Second, a graph is generated, and structural pattern recognition techniques are applied to further locate the main entities, i.e., rooms of the building. The proposed approach is able to analyze any type of floor plan regardless of the notation used. We have evaluated our method on different publicly available datasets of real architectural floor plans with different notations. The overall detection and recognition accuracy is about 95%, which is significantly better than any other state-of-the-art method. Our approach is generic enough that it could easily be adapted to the recognition and interpretation of any other printed, machine-generated structured documents.
  Address  
  Corporate Author Thesis  
  Publisher Springer Berlin Heidelberg Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1433-2833 ISBN Medium
  Area Expedition Conference  
  Notes DAG; ADAS; 600.076; 600.077 Approved no  
  Call Number HSL2014 Serial 2370  
Permanent link to this record
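A minimal sketch of the first, statistical step only (patch-based segmentation), assuming a k-NN patch classifier and toy training data; the second, structural graph step for room detection is omitted.

```python
# Minimal sketch of statistical patch-based segmentation of a floor-plan image:
# classify fixed-size patches into structural classes (wall / background here).
# Patch size, raw-pixel features and the k-NN classifier are illustrative
# assumptions, not the paper's exact segmentation approach.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_patches(img, size=8):
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append(img[y:y + size, x:x + size].ravel())
            coords.append((y, x))
    return np.array(patches), coords

rng = np.random.default_rng(0)
train_patches = rng.random((200, 64))                # toy labeled training patches
train_labels = rng.integers(0, 2, 200)               # 0 = background, 1 = wall
clf = KNeighborsClassifier(n_neighbors=5).fit(train_patches, train_labels)

plan = rng.random((128, 128))                        # stand-in for a floor-plan image
patches, coords = extract_patches(plan)
labels = clf.predict(patches)                        # per-patch structural class
```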
 

 
Author Palaiahnakote Shivakumara; Anjan Dutta; Chew Lim Tan; Umapada Pal
  Title Multi-oriented scene text detection in video based on wavelet and angle projection boundary growing Type Journal Article
  Year 2014 Publication Multimedia Tools and Applications Abbreviated Journal MTAP  
  Volume 72 Issue 1 Pages 515-539  
  Keywords  
  Abstract In this paper, we address two complex issues: 1) Text frame classification and 2) Multi-oriented text detection in video text frames. We first divide a video frame into 16 blocks and propose a combination of wavelet and median-moments with k-means clustering at the block level to identify probable text blocks. For each probable text block, the method applies the same combination of features with k-means clustering over a sliding window running through the blocks to identify potential text candidates. We introduce a new idea of symmetry on text candidates in each block based on the observation that pixel distribution in text exhibits a symmetric pattern. The method integrates all blocks containing text candidates in the frame and then all text candidates are mapped onto a Sobel edge map of the original frame to obtain text representatives. To tackle the multi-orientation problem, we present a new method called Angle Projection Boundary Growing (APBG), which is an iterative algorithm based on a nearest-neighbor concept. APBG is then applied to the text representatives to fix the bounding box for multi-oriented text lines in the video frame. Directional information is used to eliminate false positives. Experimental results on a variety of datasets, such as non-horizontal and horizontal data, publicly available data (Hua's data) and ICDAR-03 competition data (camera images), show that the proposed method outperforms existing methods proposed for video as well as state-of-the-art methods for scene text.
  Address  
  Corporate Author Thesis  
  Publisher Springer US Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1380-7501 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.077 Approved no  
  Call Number Admin @ si @ SDT2014 Serial 2357  
Permanent link to this record
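The block-level step of the pipeline can be sketched as follows, assuming Haar-wavelet sub-band energies as block features and k-means with k = 2 to separate probable text blocks; the median-moments, sliding-window and APBG stages are not shown.

```python
# Minimal sketch of the block-level step: split a frame into 16 blocks,
# describe each block by the energy of its high-frequency wavelet sub-bands,
# and separate probable text blocks from the rest with k-means (k = 2).
# The Haar wavelet and the energy feature are illustrative assumptions.
import numpy as np
import pywt
from sklearn.cluster import KMeans

def block_features(frame, grid=4):
    h, w = frame.shape
    bh, bw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = frame[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            _, (cH, cV, cD) = pywt.dwt2(block, "haar")
            feats.append([np.mean(cH ** 2), np.mean(cV ** 2), np.mean(cD ** 2)])
    return np.array(feats)

frame = np.random.default_rng(0).random((256, 256))    # stand-in for a video frame
feats = block_features(frame)                          # 16 blocks x 3 features
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(feats)
text_cluster = np.argmax([feats[clusters == c].mean() for c in (0, 1)])
probable_text_blocks = np.flatnonzero(clusters == text_cluster)
```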
 

 
Author Antonio Hernandez; Miguel Angel Bautista; Xavier Perez Sala; Victor Ponce; Sergio Escalera; Xavier Baro; Oriol Pujol; Cecilio Angulo
  Title Probability-based Dynamic Time Warping and Bag-of-Visual-and-Depth-Words for Human Gesture Recognition in RGB-D Type Journal Article
  Year 2014 Publication Pattern Recognition Letters Abbreviated Journal PRL  
  Volume 50 Issue 1 Pages 112-121  
  Keywords RGB-D; Bag-of-Words; Dynamic Time Warping; Human Gesture Recognition  
  Abstract We present a methodology to address the problem of human gesture segmentation and recognition in video and depth image sequences. A Bag-of-Visual-and-Depth-Words (BoVDW) model is introduced as an extension of the Bag-of-Visual-Words (BoVW) model. State-of-the-art RGB and depth features, including a newly proposed depth descriptor, are analysed and combined in a late-fusion form. The method is integrated in a Human Gesture Recognition pipeline, together with a novel probability-based Dynamic Time Warping (PDTW) algorithm, which is used to perform prior segmentation of idle gestures. The proposed DTW variant uses samples of the same gesture category to build a Gaussian Mixture Model driven probabilistic model of that gesture class. Results of the whole Human Gesture Recognition pipeline on a public data set show better performance in comparison to both the standard BoVW model and the DTW approach.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium
  Area Expedition Conference  
  Notes HuPBA;MV; 605.203 Approved no  
  Call Number Admin @ si @ HBP2014 Serial 2353  
Permanent link to this record
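The temporal-alignment backbone of the pipeline is dynamic time warping; the sketch below implements classic DTW with a Euclidean frame-to-frame cost, whereas the paper's PDTW variant replaces that cost with a GMM-based probability (not shown).

```python
# Minimal sketch of classic dynamic time warping between two gesture feature
# sequences (rows = frames). A plain Euclidean cost is an illustrative
# assumption standing in for the paper's probability-based cost.
import numpy as np

def dtw(seq_a, seq_b):
    """Return the DTW alignment cost between two (T, d) feature sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
gesture = rng.random((40, 16))        # BoVDW-like descriptors of a query sequence
template = rng.random((35, 16))       # descriptors of a reference gesture
print(dtw(gesture, template))
```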
 

 
Author David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo
  Title Virtual and Real World Adaptation for Pedestrian Detection Type Journal Article
  Year 2014 Publication IEEE Transactions on Pattern Analysis and Machine Intelligence Abbreviated Journal TPAMI  
  Volume 36 Issue 4 Pages 797-809  
  Keywords Domain Adaptation; Pedestrian Detection  
  Abstract Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer the dataset shift problem as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as when training with many human-provided pedestrian annotations and testing with real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0162-8828 ISBN Medium
  Area Expedition Conference  
  Notes ADAS; 600.057; 600.054; 600.076 Approved no  
  Call Number ADAS @ adas @ VML2014 Serial 2275  
Permanent link to this record
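The "few real samples mixed with many virtual ones" ingredient can be sketched with scikit-learn by up-weighting the scarce target-domain samples when training a linear SVM; the weights, features and classifier choice are illustrative assumptions, not the V-AYLA techniques themselves.

```python
# Minimal sketch: train a linear SVM on many virtual-world (source) samples
# plus a handful of re-weighted real-world (target) samples. Toy features and
# labels; this is only an illustration of the mixing idea.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X_virtual = rng.standard_normal((2000, 64))            # many virtual-world windows
y_virtual = rng.integers(0, 2, 2000)
X_real = rng.standard_normal((50, 64))                 # few annotated real-world windows
y_real = rng.integers(0, 2, 50)

X = np.vstack([X_virtual, X_real])
y = np.concatenate([y_virtual, y_real])
# Up-weight the scarce target-domain samples so they influence the boundary.
sample_weight = np.concatenate([np.ones(len(y_virtual)), 10.0 * np.ones(len(y_real))])

clf = LinearSVC(C=1.0).fit(X, y, sample_weight=sample_weight)
```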
 

 
Author Laura Igual; Xavier Perez Sala; Sergio Escalera; Cecilio Angulo; Fernando De la Torre
  Title Continuous Generalized Procrustes Analysis Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 2 Pages 659–671  
  Keywords Procrustes analysis; 2D shape model; Continuous approach  
  Abstract Two-dimensional shape models have been successfully applied to solve many problems in computer vision, such as object tracking, recognition, and segmentation. Typically, 2D shape models are learned from a discrete set of image landmarks (corresponding to projections of 3D points of an object), after applying Generalized Procrustes Analysis (GPA) to remove 2D rigid transformations. However, the standard GPA process suffers from three main limitations. Firstly, the 2D training samples do not necessarily cover a uniform sampling of all the 3D transformations of an object. This can bias the estimate of the shape model. Secondly, it can be computationally expensive to learn the shape model by sampling 3D transformations. Thirdly, standard GPA methods use only one reference shape, which might be insufficient to capture the large structural variability of some objects. To address these drawbacks, this paper proposes continuous generalized Procrustes analysis (CGPA). CGPA uses a continuous formulation that avoids the need to generate 2D projections from all the rigid 3D transformations. It builds an efficient (in space and time) non-biased 2D shape model from a set of 3D models of objects. A major challenge in CGPA is the need to integrate over the space of 3D rotations, especially when the rotations are parameterized with Euler angles. To address this problem, we introduce the use of the Haar measure. Finally, we extend CGPA to incorporate several reference shapes. Experimental results on synthetic and real experiments show the benefits of CGPA over GPA.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium
  Area Expedition Conference  
  Notes OR; HuPBA; 605.203; 600.046;MILAB Approved no  
  Call Number Admin @ si @ IPE2014 Serial 2352  
Permanent link to this record
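For reference, the discrete baseline that CGPA generalizes, standard Generalized Procrustes Analysis, can be sketched in a few lines of NumPy; the continuous integration over 3D rotations via the Haar measure is not attempted here.

```python
# Minimal sketch of standard (discrete) GPA: iteratively align a set of 2D
# shapes to their mean with orthogonal Procrustes, then recompute the mean.
# Toy shapes; the reflection guard on the rotation is omitted for brevity.
import numpy as np

def procrustes_align(shape, reference):
    """Rotate/translate/scale `shape` (n, 2) onto `reference` (n, 2)."""
    A = shape - shape.mean(axis=0)
    B = reference - reference.mean(axis=0)
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt                                      # optimal rotation
    s = np.trace((A @ R).T @ B) / np.trace(A.T @ A) # optimal scale
    return s * (A @ R)

def generalized_procrustes(shapes, iters=10):
    mean = shapes[0] - shapes[0].mean(axis=0)
    for _ in range(iters):
        aligned = np.array([procrustes_align(s, mean) for s in shapes])
        mean = aligned.mean(axis=0)
        mean /= np.linalg.norm(mean)                # fix the scale of the reference
    return aligned, mean

rng = np.random.default_rng(0)
base = rng.random((10, 2))                          # toy 2D landmark shape
shapes = [base @ np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
          + rng.normal(0, 0.01, (10, 2)) for t in rng.uniform(0, np.pi, 20)]
aligned, mean_shape = generalized_procrustes(shapes)
```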
 

 
Author Marco Pedersoli; Jordi Gonzalez; Xu Hu; Xavier Roca
  Title Toward Real-Time Pedestrian Detection Based on a Deformable Template Model Type Journal Article
  Year 2014 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS  
  Volume 15 Issue 1 Pages 355-364  
  Keywords  
  Abstract Most advanced driving assistance systems already include pedestrian detection systems. Unfortunately, there is still a tradeoff between precision and real time. For reliable detection, an excellent precision-recall tradeoff is needed to detect as many pedestrians as possible while, at the same time, avoiding too many false alarms; in addition, a very fast computation is needed for fast reactions to dangerous situations. Recently, novel approaches based on deformable templates have been proposed, since these show a reasonable detection performance although they are computationally too expensive for real-time performance. In this paper, we present a system for pedestrian detection based on a hierarchical multiresolution part-based model. The proposed system is able to achieve state-of-the-art detection accuracy due to the local deformations of the parts while exhibiting a speedup of more than one order of magnitude due to a fast coarse-to-fine inference technique. Moreover, our system explicitly infers the level of resolution available so that the detection of small examples is feasible with a very reduced computational cost. We conclude this contribution by presenting how a graphics processing unit-optimized implementation of our proposed system is suitable for real-time pedestrian detection in terms of both accuracy and speed.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1524-9050 ISBN Medium
  Area Expedition Conference  
  Notes ISE; 601.213; 600.078 Approved no  
  Call Number PGH2014 Serial 2350  
Permanent link to this record
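The coarse-to-fine inference idea can be sketched as a two-stage sliding-window cascade: a cheap low-resolution score prunes candidate windows before the expensive high-resolution model is evaluated. Both "models" below are plain linear templates, an illustrative assumption rather than the paper's part-based model.

```python
# Minimal sketch of coarse-to-fine inference: score all windows with a cheap
# low-resolution template and evaluate the expensive high-resolution template
# only where the coarse score passes a threshold.
import numpy as np

rng = np.random.default_rng(0)
coarse_model = rng.standard_normal(8 * 8)            # low-resolution template
fine_model = rng.standard_normal(32 * 32)            # high-resolution template

def sliding_windows(img, size, stride):
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (y, x), img[y:y + size, x:x + size]

image = rng.random((256, 256))
coarse = image[::4, ::4]                             # coarse version of the image

detections = []
for (y, x), win in sliding_windows(coarse, 8, 2):
    if coarse_model @ win.ravel() > 2.0:             # cheap coarse test
        fy, fx = 4 * y, 4 * x                        # map back to full resolution
        fine_win = image[fy:fy + 32, fx:fx + 32]
        if fine_win.shape == (32, 32) and fine_model @ fine_win.ravel() > 2.0:
            detections.append((fy, fx))
print(len(detections))
```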
 

 
Author Marçal Rusiñol; Josep Llados
  Title Boosting the Handwritten Word Spotting Experience by Including the User in the Loop Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 3 Pages 1063–1072  
  Keywords Handwritten word spotting; Query by example; Relevance feedback; Query fusion; Multidimensional scaling  
  Abstract In this paper, we study the effect of taking the user into account in a query-by-example handwritten word spotting framework. Several off-the-shelf query fusion and relevance feedback strategies have been tested in the handwritten word spotting context. The increase in terms of precision when the user is included in the loop is assessed using two datasets of historical handwritten documents and two baseline word spotting approaches both based on the bag-of-visual-words model. We finally present two alternative ways of presenting the results to the user that might be more attractive and suitable to the user's needs than the classic ranked list.  
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0031-3203 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.045; 600.061; 600.077 Approved no  
  Call Number Admin @ si @ RuL2013 Serial 2343  
Permanent link to this record
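One relevance-feedback round in a query-by-example setting can be sketched with a Rocchio-style query update over BoVW vectors; this is a generic stand-in for the off-the-shelf query-fusion and feedback strategies the paper evaluates.

```python
# Minimal sketch: rank BoVW vectors by cosine similarity, then move the query
# toward user-marked relevant results and away from non-relevant ones
# (Rocchio-style update). Toy data; the weights are illustrative assumptions.
import numpy as np

def rank(query, database):
    sims = database @ query / (np.linalg.norm(database, axis=1) * np.linalg.norm(query) + 1e-12)
    return np.argsort(-sims)

def rocchio(query, database, relevant, non_relevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q += beta * database[relevant].mean(axis=0)
    if len(non_relevant):
        q -= gamma * database[non_relevant].mean(axis=0)
    return q

rng = np.random.default_rng(0)
db = rng.random((500, 128))                     # BoVW vectors of word snippets
q = rng.random(128)                             # query word image, BoVW-encoded
first = rank(q, db)[:10]
relevant, non_relevant = first[:3], first[3:]   # stand-in for user feedback
q2 = rocchio(q, db, relevant, non_relevant)
second = rank(q2, db)[:10]                      # re-ranked list after feedback
```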
 

 
Author Marçal Rusiñol; Lluis Pere de las Heras; Oriol Ramos Terrades
  Title Flowchart Recognition for Non-Textual Information Retrieval in Patent Search Type Journal Article
  Year 2014 Publication Information Retrieval Abbreviated Journal IR  
  Volume 17 Issue 5-6 Pages 545-562  
  Keywords Flowchart recognition; Patent documents; Text/graphics separation; Raster-to-vector conversion; Symbol recognition  
  Abstract Relatively little research has been done on the topic of patent image retrieval, and in general most approaches perform the retrieval in terms of a similarity measure between the query image and the images in the corpus. However, systems aimed at overcoming the semantic gap between the visual description of patent images and their conveyed concepts would be very helpful for patent professionals. In this paper we present a flowchart recognition method aimed at achieving a structured representation of flowchart images that can be further queried semantically. The proposed method was submitted to the CLEF-IP 2012 flowchart recognition task. We report the obtained results on this dataset.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 1386-4564 ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.077 Approved no  
  Call Number Admin @ si @ RHR2013 Serial 2342  
Permanent link to this record
 

 
Author Juan Ramon Terven Salinas; Joaquin Salas; Bogdan Raducanu
  Title New Opportunities for Computer Vision-Based Assistive Technology Systems for the Visually Impaired Type Journal Article
  Year 2014 Publication Computer Abbreviated Journal COMP  
  Volume 47 Issue 4 Pages 52-58  
  Keywords  
  Abstract Computing advances and increased smartphone use give technology system designers greater flexibility in exploiting computer vision to support visually impaired users. Understanding these users' needs will certainly provide insight for the development of improved usability of computing devices.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN 0018-9162 ISBN Medium
  Area Expedition Conference  
  Notes LAMP; Approved no  
  Call Number Admin @ si @ TSR2014a Serial 2317  
Permanent link to this record
 

 
Author Bogdan Raducanu; Fadi Dornaika
  Title Embedding new observations via sparse-coding for non-linear manifold learning Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 1 Pages 480-492  
  Keywords  
  Abstract Non-linear dimensionality reduction techniques are affected by two critical aspects: (i) the design of the adjacency graphs, and (ii) the embedding of new test data, i.e., the out-of-sample problem. For the first aspect, the proposed solutions, in general, were heuristically driven. For the second aspect, the difficulty resides in finding an accurate mapping that transfers unseen data samples into an existing manifold. Past works addressing these two aspects were heavily parametric in the sense that the optimal performance is only achieved for a suitable parameter choice that should be known in advance. In this paper, we demonstrate that the sparse representation theory not only serves for automatic graph construction, as shown in recent works, but also represents an accurate alternative for out-of-sample embedding. Considering the Laplacian Eigenmaps as a case study, we applied our method to the face recognition problem. To evaluate the effectiveness of the proposed out-of-sample embedding, experiments are conducted using the K-nearest neighbor (KNN) and Kernel Support Vector Machine (KSVM) classifiers on six public face datasets. The experimental results show that the proposed model is able to achieve high categorization effectiveness as well as high consistency with non-linear embeddings/manifolds obtained in batch modes.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium
  Area Expedition Conference  
  Notes LAMP; Approved no  
  Call Number Admin @ si @ RaD2013b Serial 2316  
Permanent link to this record
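The out-of-sample step can be sketched as follows, assuming a Lasso solver for the sparse code and a toy precomputed embedding: the new sample is expressed as a sparse combination of training samples, and the same coefficients place it in the embedded space. This is an illustration of the general idea, not the paper's exact solver or the full Laplacian Eigenmaps pipeline.

```python
# Minimal sketch of sparse-coding-based out-of-sample embedding.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 50))      # training samples (rows)
Y_train = rng.standard_normal((200, 2))       # their precomputed non-linear embedding

x_new = X_train[:5].mean(axis=0)              # a new, unseen sample

# Sparse code of the new sample over the training samples used as a dictionary.
lasso = Lasso(alpha=0.05, max_iter=10000).fit(X_train.T, x_new)
alpha_code = lasso.coef_                       # one weight per training sample

# Out-of-sample embedding: apply the same sparse combination in embedded space.
y_new = Y_train.T @ alpha_code
print(y_new)
```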
 

 
Author Volkmar Frinken; Andreas Fischer; Markus Baumgartner; Horst Bunke
  Title Keyword spotting for self-training of BLSTM NN based handwriting recognition systems Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 3 Pages 1073-1082  
  Keywords Document retrieval; Keyword spotting; Handwriting recognition; Neural networks; Semi-supervised learning  
  Abstract The automatic transcription of unconstrained continuous handwritten text requires well-trained recognition systems. The semi-supervised paradigm introduces the concept of using not only labeled data but also unlabeled data in the learning process. Unlabeled data can be gathered at little or no cost. Hence it has the potential to reduce the need for labeling training data, a tedious and costly process. Given a weak initial recognizer trained on labeled data, self-training can be used to recognize unlabeled data and add words that were recognized with high confidence to the training set for re-training. This process is not trivial and requires great care as far as selecting the elements that are to be added to the training set is concerned. In this paper, we propose to use a bidirectional long short-term memory neural network handwriting recognition system for keyword spotting in order to select new elements. A set of experiments shows the high potential of self-training for bootstrapping handwriting recognition systems, both for modern and historical handwriting, and demonstrates the benefits of using keyword spotting over previously published self-training schemes.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium
  Area Expedition Conference  
  Notes DAG; 600.077; 602.101 Approved no  
  Call Number Admin @ si @ FFB2014 Serial 2297  
Permanent link to this record
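The self-training loop itself can be sketched generically: recognize unlabeled data, keep only high-confidence outputs, retrain. A logistic-regression "recognizer" and a probability threshold stand in for the BLSTM network and the keyword-spotting-based selection, both illustrative assumptions.

```python
# Minimal sketch of confidence-based self-training on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.standard_normal((100, 20))
y_lab = rng.integers(0, 2, 100)
X_unlab = rng.standard_normal((1000, 20))

model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
for _ in range(3):                                    # a few self-training rounds
    if len(X_unlab) == 0:
        break
    proba = model.predict_proba(X_unlab)
    conf = proba.max(axis=1)
    keep = conf > 0.9                                 # confidence-based selection
    X_lab = np.vstack([X_lab, X_unlab[keep]])         # add confident samples ...
    y_lab = np.concatenate([y_lab, proba[keep].argmax(axis=1)])  # ... with predicted labels
    X_unlab = X_unlab[~keep]
    model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)  # retrain
```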
 

 
Author Miguel Angel Bautista; Sergio Escalera; Oriol Pujol
  Title On the Design of an ECOC-Compliant Genetic Algorithm Type Journal Article
  Year 2014 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 47 Issue 2 Pages 865-884  
  Keywords  
  Abstract Genetic Algorithms (GA) have been previously applied to Error-Correcting Output Codes (ECOC) in state-of-the-art works in order to find a suitable coding matrix. Nevertheless, none of the presented techniques directly takes into account the properties of the ECOC matrix. As a result, the considered search space is unnecessarily large. In this paper, a novel genetic strategy to optimize the ECOC coding step is presented. This novel strategy redefines the usual crossover and mutation operators in order to take into account the theoretical properties of the ECOC framework. Thus, it reduces the search space and lets the algorithm converge faster. In addition, a novel operator that is able to enlarge the code in a smart way is introduced. The novel methodology is tested on several UCI datasets and four challenging computer vision problems. Furthermore, the analysis of the results, done in terms of performance, code length and number of Support Vectors, shows that the optimization process is able to find very efficient codes in terms of the trade-off between classification performance and the number of classifiers. Finally, per-dichotomizer classification performance results show that the novel proposal is able to obtain similar or even better results while defining a more compact number of dichotomies and SVs compared to state-of-the-art approaches.
  Address  
  Corporate Author Thesis  
  Publisher Place of Publication Editor  
  Language Summary Language Original Title  
  Series Editor Series Title Abbreviated Series Title  
  Series Volume Series Issue Edition  
  ISSN ISBN Medium
  Area Expedition Conference  
  Notes HuPBA;MILAB Approved no  
  Call Number Admin @ si @ BEP2013 Serial 2254  
Permanent link to this record
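The kind of ECOC property an ECOC-aware genetic operator must preserve can be sketched as a validity check used inside mutation: class codewords stay distinct, and no dichotomy is constant, duplicated, or the complement of another. The check and the single-bit-flip mutation below are illustrative assumptions, not the paper's operators.

```python
# Minimal sketch of an ECOC coding-matrix validity check and a mutation that
# preserves it (rows = class codewords, columns = dichotomies in {-1, +1}).
import numpy as np

def is_valid_ecoc(M):
    rows = {tuple(r) for r in M}
    if len(rows) != M.shape[0]:
        return False                                  # two classes share a codeword
    cols = [tuple(c) for c in M.T]
    for i, c in enumerate(cols):
        comp = tuple(-x for x in c)
        if len(set(c)) == 1:
            return False                              # constant column: useless dichotomy
        if c in cols[:i] or comp in cols[:i]:
            return False                              # duplicate / complementary dichotomy
    return True

def mutate(M, rng):
    """Flip one entry, retrying until the ECOC properties still hold."""
    while True:
        cand = M.copy()
        i, j = rng.integers(M.shape[0]), rng.integers(M.shape[1])
        cand[i, j] *= -1
        if is_valid_ecoc(cand):
            return cand

rng = np.random.default_rng(0)
M = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1],
              [-1, -1, -1]])                          # 4 classes x 3 dichotomies
assert is_valid_ecoc(M)
M2 = mutate(M, rng)
```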