Author Katerine Diaz; Jesus Martinez del Rincon; Marçal Rusiñol; Aura Hernandez-Sabate
  Title Feature Extraction by Using Dual-Generalized Discriminative Common Vectors Type Journal Article
  Year 2019 Publication Journal of Mathematical Imaging and Vision Abbreviated Journal JMIV
  Volume 61 Issue 3 Pages 331-351
  Keywords Online feature extraction; Generalized discriminative common vectors; Dual learning; Incremental learning; Decremental learning
  Abstract In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by simultaneously exploiting the concepts of incremental and decremental learning for supervised feature extraction and classification. Our methodology is able to update the feature representation space without recalculating the full projection or accessing the previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base in an efficient way, while retaining the previously acquired knowledge. The proposed method has been theoretically proved and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model in comparison with both batch methodologies and other state-of-the-art adaptive methods.
  Notes DAG; ADAS; 600.084; 600.118; 600.121; 600.129; IAM Approved no
  Call Number Admin @ si @ DRR2019 Serial 3172
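As a companion to this record, the following is a minimal NumPy sketch of the batch discriminative-common-vector projection that the GDCV family builds on; it is not the dual incremental/decremental algorithm contributed by the paper, and the tolerance and the use of class means are illustrative assumptions.

import numpy as np

def dcv_projection(X, y, tol=1e-10):
    """Batch DCV-style projection. X: (n_samples, n_features) array, y: 1-D array of class labels."""
    classes = np.unique(y)
    mean_per_class = {c: X[y == c].mean(axis=0) for c in classes}
    # Within-class centered samples span the range of the within-class scatter.
    centered = np.vstack([X[y == c] - mean_per_class[c] for c in classes])
    U, s, _ = np.linalg.svd(centered.T, full_matrices=False)
    range_basis = U[:, s > tol * s.max()]
    # Common vector per class: remove the within-class range component from the class mean.
    common = np.array([mean_per_class[c] - range_basis @ (range_basis.T @ mean_per_class[c])
                       for c in classes])
    # Final projection: principal directions of the (centered) common vectors.
    common_centered = common - common.mean(axis=0)
    W, sw, _ = np.linalg.svd(common_centered.T, full_matrices=False)
    return W[:, sw > tol * sw.max()]   # columns span the discriminative subspace

# Usage: Z = X @ dcv_projection(X, y); classify by nearest common vector in Z.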
 

 
Author Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Marçal Rusiñol; Francesc J. Ferri
  Title Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction Type Journal Article
  Year 2018 Publication Journal of Mathematical Imaging and Vision Abbreviated Journal JMIV
  Volume 60 Issue 4 Pages 512-524
  Abstract This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel kernel extension of the well-known Discriminative Common Vectors method. Our method combines the advantages of kernel methods to model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: one based on the kernel trick (KT), and one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
  Notes DAG; ADAS; 600.086; 600.130; 600.121; 600.118; 600.129; IAM Approved no
  Call Number Admin @ si @ DMH2018a Serial 3062
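The NPT variant mentioned above replaces the implicit kernel trick with an explicit finite-dimensional embedding derived from the kernel matrix, after which any linear subspace method can be applied unchanged. The sketch below illustrates that embedding only; the RBF kernel, its bandwidth and the tolerance are assumptions, not the paper's exact construction.

import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def npt_embedding(X_train, gamma=0.1, tol=1e-10):
    """Nonlinear projection trick: explicit coordinates in the kernel feature space."""
    K = rbf_kernel(X_train, X_train, gamma)
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n               # centering matrix
    w, V = np.linalg.eigh(J @ K @ J)
    keep = w > tol * w.max()
    w, V = w[keep], V[:, keep]
    Z_train = V * np.sqrt(w)                          # (n, r) explicit training features
    K_row_mean = K.mean(axis=0, keepdims=True)
    def embed(X_new):                                 # consistent embedding for new samples
        k = rbf_kernel(X_new, X_train, gamma)
        return ((k - K_row_mean) @ J) @ V / np.sqrt(w)
    return Z_train, embed

# Any linear subspace method (e.g. a DCV/GDCV variant) can now be run on Z_train.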
 

 
Author Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Debora Gil
  Title Continuous head pose estimation using manifold subspace embedding and multivariate regression Type Journal Article
  Year 2018 Publication IEEE Access Abbreviated Journal ACCESS
  Volume 6 Pages 18325-18334
  Keywords Head pose estimation; HOG features; Generalized Discriminative Common Vectors; B-splines; Multiple linear regression
  Abstract In this paper, a continuous head pose estimation system is proposed to estimate yaw and pitch head angles from raw facial images. Our approach is based on manifold learning-based methods, due to their promising generalization properties shown for face modelling from images. The method combines histograms of oriented gradients, generalized discriminative common vectors and continuous local regression to achieve successful performance. Our proposal was tested on multiple standard face datasets, as well as in a realistic scenario. Results show a considerable performance improvement and a higher consistency of our model in comparison with other state-of-the-art methods, with angular errors varying between 9 and 17 degrees.
  ISSN 2169-3536
  Notes ADAS; 600.118; IAM Approved no
  Call Number Admin @ si @ DMH2018b Serial 3091
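The processing chain described in this abstract (HOG descriptors, a GDCV subspace, then regression of yaw and pitch) can be approximated with off-the-shelf components. The sketch below substitutes PCA for GDCV and ridge regression for the B-spline local regression, so it illustrates the data flow only; the component count and regularisation are arbitrary assumptions.

import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

def hog_features(images):
    """images: iterable of equally sized grayscale face crops (2-D arrays)."""
    return np.array([hog(im, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for im in images])

def train_head_pose(X_img, Y, n_components=50):
    """X_img: list of aligned face crops; Y: (n, 2) array with [yaw, pitch] in degrees."""
    model = make_pipeline(PCA(n_components=n_components), Ridge(alpha=1.0))
    return model.fit(hog_features(X_img), Y)          # Ridge handles the 2-D target directly

# angles = train_head_pose(train_imgs, train_angles).predict(hog_features(test_imgs))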
 

 
Author Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate
  Title Decremental generalized discriminative common vectors applied to images classification Type Journal Article
  Year 2017 Publication Knowledge-Based Systems Abbreviated Journal KBS
  Volume 131 Pages 46-57
  Keywords Decremental learning; Generalized Discriminative Common Vectors; Feature extraction; Linear subspace methods; Classification
  Abstract In this paper, a novel decremental subspace-based learning method called the Decremental Generalized Discriminative Common Vectors method (DGDCV) is presented. The method makes use of the concept of decremental learning, which we introduce in the field of supervised feature extraction and classification. By efficiently removing unnecessary data and/or classes from a knowledge base, our methodology is able to update the model without recalculating the full projection or accessing the previously processed training data, while retaining the previously acquired knowledge. The proposed method has been validated on six standard face recognition datasets, showing a considerable computational gain without compromising the accuracy of the model.
  Notes ADAS; 600.118; 600.121; IAM Approved no
  Call Number Admin @ si @ DMH2017a Serial 3003
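The decremental idea, removing samples without retraining from scratch, rests on exactly downdating the running statistics behind the subspace and then refreshing the projection. Below is a toy sketch of such an exact mean/scatter downdate, independent of the GDCV specifics.

import numpy as np

class RunningScatter:
    """Maintains the mean and total scatter so samples can be added or removed exactly."""
    def __init__(self, dim):
        self.n, self.mean, self.scatter = 0, np.zeros(dim), np.zeros((dim, dim))

    def add(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.scatter += np.outer(delta, x - self.mean)

    def remove(self, x):
        # Exact inverse of add(): downdate the scatter first, then the mean.
        assert self.n > 1
        delta = x - self.mean
        self.scatter -= np.outer(delta, delta) * self.n / (self.n - 1)
        self.mean = (self.n * self.mean - x) / (self.n - 1)
        self.n -= 1

# After add()/remove() calls, refresh the subspace, e.g. via np.linalg.eigh(rs.scatter).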
 

 
Author Katerine Diaz; Francesc J. Ferri; W. Diaz
  Title Incremental Generalized Discriminative Common Vectors for Image Classification Type Journal Article
  Year 2015 Publication IEEE Transactions on Neural Networks and Learning Systems Abbreviated Journal TNNLS
  Volume 26 Issue 8 Pages 1761-1775
  Abstract Subspace-based methods have become popular due to their ability to appropriately represent complex data in such a way that both dimensionality is reduced and discriminativeness is enhanced. Several recent works have concentrated on the discriminative common vector (DCV) method and other closely related algorithms also based on the concept of null space. In this paper, we present a generalized incremental formulation of the DCV methods, which allows the update of a given model by considering the addition of new examples, even from unseen classes. Having efficient incremental formulations of well-behaved batch algorithms allows us to conveniently adapt previously trained classifiers without the need to recompute them from scratch. The proposed generalized incremental method has been empirically validated in different case studies from different application domains (faces, objects, and handwritten digits), considering several different scenarios in which new data are continuously added at different rates, starting from an initial model.
  ISSN 2162-237X
  Notes ADAS; 600.076 Approved no
  Call Number Admin @ si @ DFD2015 Serial 2547
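The property highlighted here, adding examples even from unseen classes without recomputing from scratch, only requires that the per-class statistics behind a DCV-style model be maintainable online. The sketch below shows that bookkeeping; it is not the paper's update of the projection itself, and the Welford-style recursion is a standard choice I am assuming.

import numpy as np

class PerClassStats:
    """Per-class counts, means and scatters updated one sample at a time;
    a label never seen before simply creates a new class on the fly."""
    def __init__(self, dim):
        self.dim, self.n, self.mean, self.scatter = dim, {}, {}, {}

    def partial_fit(self, x, label):
        if label not in self.n:                       # first sample of an unseen class
            self.n[label] = 0
            self.mean[label] = np.zeros(self.dim)
            self.scatter[label] = np.zeros((self.dim, self.dim))
        self.n[label] += 1
        delta = x - self.mean[label]
        self.mean[label] += delta / self.n[label]
        self.scatter[label] += np.outer(delta, x - self.mean[label])

    def within_class_scatter(self):
        return sum(self.scatter.values())             # pooled S_w, refreshed cheaply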
 

 
Author Katerine Diaz; Francesc J. Ferri; Aura Hernandez-Sabate
  Title An overview of incremental feature extraction methods based on linear subspaces Type Journal Article
  Year 2018 Publication Knowledge-Based Systems Abbreviated Journal KBS
  Volume 145 Pages 219-235
  Abstract With the massive explosion of machine learning in our day-to-day life, incremental and adaptive learning has become a major topic, crucial for keeping classification models and their corresponding feature extraction processes up to date. This paper presents a categorized overview of incremental feature extraction based on linear subspace methods, which aim at incorporating new information into the already acquired knowledge without accessing previous data. Specifically, the paper focuses on those linear dimensionality reduction methods with orthogonal matrix constraints based on a global loss function, due to the extensive use of their batch approaches versus other linear alternatives. Thus, we cover the approaches derived from Principal Component Analysis, Linear Discriminant Analysis and Discriminative Common Vector methods. For each basic method, its incremental approaches are differentiated according to the subspace model and matrix decomposition involved in the updating process. Besides this categorization, several updating strategies are distinguished according to the amount of data used in each update and to whether a static or dynamic number of classes is considered. Moreover, the specific role of the size/dimension ratio in each method is considered. Finally, computational complexity, experimental setup and the accuracy rates according to published results are compiled and analyzed, and an empirical evaluation is carried out to compare the best approach of each kind.
  ISSN 0950-7051
  Notes ADAS; 600.118; IAM Approved no
  Call Number Admin @ si @ DFH2018 Serial 3090
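Of the families surveyed (PCA-, LDA- and DCV-based), the PCA branch has a readily available incremental implementation in scikit-learn; a quick example of chunk-wise updating without revisiting earlier data follows (chunk size, dimensionality and component count are arbitrary).

import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
stream = (rng.normal(size=(200, 64)) for _ in range(10))   # stands in for data arriving in chunks

ipca = IncrementalPCA(n_components=16)
for chunk in stream:
    ipca.partial_fit(chunk)           # updates the subspace; earlier chunks are never re-read

Z = ipca.transform(rng.normal(size=(5, 64)))               # project new samples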
 

 
Author Katerine Diaz; Aura Hernandez-Sabate; Antonio Lopez
  Title A reduced feature set for driver head pose estimation Type Journal Article
  Year 2016 Publication Applied Soft Computing Abbreviated Journal ASOC
  Volume 45 Pages 98-107
  Keywords Head pose estimation; driving performance evaluation; subspace-based methods; linear regression
  Abstract Evaluation of driving performance is of utmost importance in order to reduce the road accident rate. Since driving ability includes visual-spatial and operational attention, among others, head pose estimation of the driver is a crucial indicator of driving performance. This paper proposes a new automatic method for coarse and fine estimation of the driver's head yaw angle. We rely on a set of geometric features computed from just three representative facial keypoints, namely the centers of the eyes and the nose tip. With these geometric features, our method combines two manifold embedding methods and a linear regression one. In addition, the method has a confidence mechanism to decide whether the classification of a sample is reliable. The approach has been tested using the CMU-PIE dataset and our own driver dataset. Despite the very few facial keypoints required, the results are comparable to state-of-the-art techniques. The low computational cost of the method and its robustness make it feasible to integrate it in mass consumer devices as a real-time application.
  Notes ADAS; 600.085; 600.076; IAM Approved no
  Call Number Admin @ si @ DHL2016 Serial 2760
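The reduced feature set relies on only three facial keypoints: the two eye centers and the nose tip. The sketch below shows one plausible set of scale-normalised geometric features from those points, fed to a linear regressor for yaw; the specific features and the omission of the confidence mechanism are my simplifications, not the paper's exact design.

import numpy as np
from sklearn.linear_model import LinearRegression

def geometric_features(left_eye, right_eye, nose):
    """Each argument is an (x, y) point. Returns scale/translation-normalised features."""
    left_eye, right_eye, nose = map(np.asarray, (left_eye, right_eye, nose))
    eye_mid = (left_eye + right_eye) / 2
    eye_dist = np.linalg.norm(right_eye - left_eye) + 1e-9
    v = (nose - eye_mid) / eye_dist                   # nose offset in eye-distance units
    eye_vec = (right_eye - left_eye) / eye_dist
    return np.array([v[0], v[1], eye_vec[1], np.linalg.norm(v)])

# X: stacked features for training faces; yaw: ground-truth yaw angles in degrees.
# model = LinearRegression().fit(X, yaw); yaw_pred = model.predict(X_new)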
 

 
Author Juan Jose Rubio; Takahiro Kashiwa; Teera Laiteerapong; Wenlong Deng; Kohei Nagai; Sergio Escalera; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
  Title Multi-class structural damage segmentation using fully convolutional networks Type Journal Article
  Year 2019 Publication Computers in Industry Abbreviated Journal COMPUTIND
  Volume 112 Pages 103121
  Keywords Bridge damage detection; Deep learning; Semantic segmentation
  Abstract Structural Health Monitoring (SHM) has benefited from computer vision and, more recently, deep learning approaches to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of bridge deck areas for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of images of bridge deck damage. These data allow us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs to perform automated semantic segmentation of surface damages. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%.
  Notes HuPBA; no proj; MILAB; ADAS Approved no
  Call Number Admin @ si @ RKL2019 Serial 3315
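A minimal PyTorch sketch of the kind of fully convolutional setup described, using torchvision's FCN with three output classes (background, delamination, rebar exposure); the backbone, loss and optimiser are generic assumptions rather than the paper's configuration.

import torch
from torch import nn
from torchvision.models.segmentation import fcn_resnet50

model = fcn_resnet50(num_classes=3)                  # background, delamination, rebar exposure
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """images: (B, 3, H, W) float tensor; masks: (B, H, W) long tensor of class ids."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]                    # (B, 3, H, W) per-pixel class scores
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()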
 

 
Author Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez
  Title Road Geometry Classification by Adaptive Shape Models Type Journal Article
  Year 2013 Publication IEEE Transactions on Intelligent Transportation Systems Abbreviated Journal TITS
  Volume 14 Issue 1 Pages 459-468
  Keywords road detection
  Abstract Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometries are classified by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models in which spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions.
  ISSN 1524-9050
  Notes ADAS; ISE Approved no
  Call Number Admin @ si @ AGD2013; ADAS @ adas @ Serial 2269
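The adaptive shape models steer spatial pyramids according to the spatial structure of road images. The plain (non-adaptive) spatial-pyramid histogram below is the building block being adapted; the grid levels and per-cell normalisation are illustrative choices, not the paper's.

import numpy as np

def spatial_pyramid_histogram(label_map, n_labels, levels=(1, 2, 4)):
    """Concatenate per-cell label histograms over a pyramid of grids.
    label_map: (H, W) integer map (e.g. road / not-road or quantised features)."""
    H, W = label_map.shape
    feats = []
    for g in levels:                                  # g x g grid at this pyramid level
        for i in range(g):
            for j in range(g):
                cell = label_map[i * H // g:(i + 1) * H // g,
                                 j * W // g:(j + 1) * W // g]
                hist = np.bincount(cell.ravel(), minlength=n_labels).astype(float)
                feats.append(hist / max(hist.sum(), 1.0))
        # (an adaptive variant would place the grid lines according to the road shape)
    return np.concatenate(feats)

# Feed the concatenated histogram to any classifier to label the road geometry class.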
 

 
Author Jose Manuel Alvarez; Theo Gevers; Antonio Lopez
  Title Learning photometric invariance for object detection Type Journal Article
  Year 2010 Publication International Journal of Computer Vision Abbreviated Journal IJCV
  Volume 90 Issue 1 Pages 45-61
  Keywords road detection
  Abstract Impact factor 3.508 (the latest available, from JCR 2009 SCI); ranked 4/103 (first quartile) in the category Computer Science, Artificial Intelligence.
Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions, which negatively affects the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restrictive to model real-world scenes in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the field of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
  Publisher Springer US
  ISSN 0920-5691
  Notes ADAS; ISE Approved no
  Call Number ADAS @ adas @ AGL2010c Serial 1451
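The ensemble is assembled from standard photometric color models, both variant and invariant. The sketch below computes a few classic members of such a set; the exact model set and the multi-view fusion scheme of the paper are not reproduced here.

import numpy as np

def color_models(rgb):
    """rgb: (H, W, 3) float image in [0, 1]. Returns a dict of per-pixel color models."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-6
    intensity = (R + G + B) / 3.0                          # varies with illumination strength
    norm_rgb = rgb / (R + G + B + eps)[..., None]          # invariant to shading/intensity
    o1 = (R - G) / np.sqrt(2.0)                            # opponent color channels
    o2 = (R + G - 2.0 * B) / np.sqrt(6.0)
    hue = np.arctan2(np.sqrt(3.0) * (G - B), 2.0 * R - G - B + eps)  # also invariant to highlights
    return {"intensity": intensity, "normalized_rgb": norm_rgb,
            "opponent": np.stack([o1, o2], axis=-1), "hue": hue}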