Records | |||||
---|---|---|---|---|---|
Author | Naveen Onkarappa; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Angel Sappa | ||||
Title | Cross-spectral Stereo Correspondence using Dense Flow Fields | Type | Conference Article | ||
Year | 2014 | Publication | 9th International Conference on Computer Vision Theory and Applications | Abbreviated Journal | |
Volume | 3 | Issue | Pages | 613-617 | |
Keywords | Cross-spectral Stereo Correspondence; Dense Optical Flow; Infrared and Visible Spectrum | ||||
Abstract | This manuscript addresses the cross-spectral stereo correspondence problem. It proposes the use of a dense flow field based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments show the validity of the proposed approach. | ||||
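The matching idea in the abstract above, replacing raw cross-spectral intensities with dense flow fields and then applying a classical cost function, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the SAD cost, window size, and disparity search range are assumptions, and the flow fields are taken as given.

```python
import numpy as np

def flow_sad_cost(flow_a, flow_b, x, y, d, win=3):
    """Sum of absolute differences between the flow vectors of a window in
    image A at (x, y) and the same window shifted by disparity d in B.
    flow_* have shape (H, W, 2): per-pixel (u, v) optical-flow vectors."""
    h = win // 2
    pa = flow_a[y - h:y + h + 1, x - h:x + h + 1]
    pb = flow_b[y - h:y + h + 1, x - d - h:x - d + h + 1]
    return np.abs(pa - pb).sum()

def best_disparity(flow_a, flow_b, x, y, max_d=8, win=3):
    """Pick the disparity with the lowest flow-space SAD cost."""
    costs = [flow_sad_cost(flow_a, flow_b, x, y, d, win) for d in range(max_d)]
    return int(np.argmin(costs))
```

The key point is that the cost operates on flow vectors rather than raw intensities, so the low photometric correlation between spectra no longer matters.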
Address | Lisboa; Portugal; January 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISAPP | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OAV2014 | Serial | 2477 | ||
Permanent link to this record | |||||
Author | Miguel Oliveira; Victor Santos; Angel Sappa | ||||
Title | Multimodal Inverse Perspective Mapping | Type | Journal Article | ||
Year | 2015 | Publication | Information Fusion | Abbreviated Journal | IF |
Volume | 24 | Issue | Pages | 108–121 | |
Keywords | Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles | ||||
Abstract | Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free-space estimation. There is, however, a significant limitation in inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints. | ||||
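The plain inverse perspective mapping step described above reduces to applying a ground-plane homography, with obstacle pixels (e.g., flagged from the laser range finder) excluded before remapping. A minimal sketch, assuming the homography `H_mat` and the boolean obstacle mask are already available; it is not the paper's full multimodal pipeline.

```python
import numpy as np

def ipm_points(pts, H_mat):
    """Map Nx2 pixel coordinates to ground-plane coordinates with the
    homography H_mat (3x3)."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    mapped = pts_h @ H_mat.T
    return mapped[:, :2] / mapped[:, 2:3]              # dehomogenize

def ipm_with_mask(pts, H_mat, obstacle_mask):
    """Drop points flagged as obstacles before remapping, mirroring the
    idea of not computing the mapping where obstacles are present."""
    return ipm_points(pts[~obstacle_mask], H_mat)
```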
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OSS2015c | Serial | 2532 | ||
Permanent link to this record | |||||
Author | T. Mouats; N. Aouf; Angel Sappa; Cristhian A. Aguilera-Carrasco; Ricardo Toledo | ||||
Title | Multi-Spectral Stereo Odometry | Type | Journal Article | ||
Year | 2015 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 16 | Issue | 3 | Pages | 1210-1224 |
Keywords | Egomotion estimation; feature matching; multispectral odometry (MO); optical flow; stereo odometry; thermal imagery | ||||
Abstract | In this paper, we investigate the problem of visual odometry for ground vehicles based on the simultaneous utilization of multispectral cameras. The setup encompasses a stereo rig composed of an optical (visible) and a thermal sensor. The novelty resides in the localization of the cameras as a stereo setup rather than as two monocular cameras of different spectra. To the best of our knowledge, this is the first time such a task has been attempted. Log-Gabor wavelets at different orientations and scales are used to extract interest points from both images. These are then described using a combination of frequency and spatial information within the local neighborhood. Matches between the pairs of multimodal images are computed using the cosine similarity function on the descriptors. A pyramidal Lucas–Kanade tracker is also introduced to tackle temporal feature matching within challenging sequences of the datasets. The vehicle egomotion is computed from the triangulated 3-D points corresponding to the matched features. A windowed version of bundle adjustment incorporating Gauss–Newton optimization is used for motion estimation. An outlier removal scheme is also included within the framework. Multispectral datasets corresponding to real outdoor scenarios captured with our multimodal setup were generated and used as a test bed. Finally, detailed results validating the proposed strategy are presented. | ||||
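The cross-spectral matching step mentioned in the abstract (cosine similarity between descriptors) can be sketched as a similarity matrix followed by mutual nearest-neighbour selection. This is an illustrative sketch; the mutual-consistency check is an assumption, not necessarily the authors' exact matching rule.

```python
import numpy as np

def cosine_matches(desc_a, desc_b):
    """Match rows of desc_a to rows of desc_b by cosine similarity,
    keeping only mutual nearest neighbours."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                        # cosine similarity matrix
    best_b = sim.argmax(axis=1)          # best match in B for each A row
    best_a = sim.argmax(axis=0)          # best match in A for each B row
    return [(i, j) for i, j in enumerate(best_b) if best_a[j] == i]
```

Because cosine similarity normalizes descriptor magnitude away, it tolerates the gain differences typical between visible and thermal imagery.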
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ MAS2015a | Serial | 2533 | ||
Permanent link to this record | |||||
Author | Mohammad Rouhani; E. Boyer; Angel Sappa | ||||
Title | Non-Rigid Registration meets Surface Reconstruction | Type | Conference Article | ||
Year | 2014 | Publication | International Conference on 3D Vision | Abbreviated Journal | |
Volume | Issue | Pages | 617-624 | ||
Keywords | |||||
Abstract | Non-rigid registration is an important task in computer vision with many applications in shape and motion modeling. A fundamental step of the registration is the data association between the source and the target sets. Such association proves difficult in practice, due to the discrete nature of the information and its corruption by various types of noise, e.g. outliers and missing data. In this paper we investigate the benefit of implicit representations for the non-rigid registration of 3D point clouds. First, the target points are described with small quadratic patches that are blended through partition-of-unity weighting. Then, the discrete association between the source and the target can be replaced by a continuous distance field induced by the interface. By combining this distance field with a proper deformation term, the registration energy can be expressed in a linear least-squares form that is easy and fast to solve. This significantly eases the registration by avoiding direct association between points. Moreover, a hierarchical approach can be easily implemented by employing coarse-to-fine representations. Experimental results are provided for point clouds from multi-view datasets. The qualitative and quantitative comparisons show that our framework outperforms alternative approaches and is robust. | ||||
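The blending idea, local quadratic patches combined through partition-of-unity weights into one continuous field, can be illustrated in 1-D. The Gaussian weighting and the patch parameterization below are assumptions for illustration; the paper works with 3-D point clouds.

```python
import numpy as np

def blended_field(x, centers, local_fits, sigma=1.0):
    """Evaluate a continuous field at points x by blending local quadratic
    patches q_i(x) = a*x^2 + b*x + c with normalized Gaussian weights
    (a partition of unity) centred at each patch centre."""
    w = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)    # weights now sum to 1 at every x
    vals = np.stack([a * x**2 + b * x + c for a, b, c in local_fits], axis=1)
    return (w * vals).sum(axis=1)
```

Since the weights sum to one everywhere, the blend reproduces any function that all patches agree on, which is the property that makes the induced distance field continuous.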
Address | Tokyo; Japan; December 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | 3DV | ||
Notes | ADAS; 600.055; 600.076 | Approved | no | ||
Call Number | Admin @ si @ RBS2014 | Serial | 2534 | ||
Permanent link to this record | |||||
Author | Naveen Onkarappa; Angel Sappa | ||||
Title | Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario | Type | Conference Article | ||
Year | 2013 | Publication | 15th International Conference on Computer Analysis of Images and Patterns | Abbreviated Journal | |
Volume | 8048 | Issue | Pages | 483-490 | |
Keywords | Optical flow; regularization; Driver Assistance Systems; Performance Evaluation | ||||
Abstract | Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform equally well when evaluated on driving scenarios. This drop in performance is due to several challenges that arise in real scenes during driving. In this direction, in this paper we propose a modification to the regularization term of a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists in using the Laplacian derivatives of the flow components in the regularization term instead of the gradients of the flow components. We show the improvements in results on a standard real image sequence dataset (KITTI). | ||||
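The proposed change swaps first-order flow gradients for Laplacian derivatives in the smoothness term. A discrete sketch of such a regularizer follows; the 5-point stencil and zero boundary handling are implementation assumptions, not the paper's exact discretization.

```python
import numpy as np

def laplacian(f):
    """Discrete 5-point Laplacian of a 2-D flow component (interior only)."""
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] +
                       f[1:-1, :-2] + f[1:-1, 2:] - 4 * f[1:-1, 1:-1])
    return lap

def laplacian_regularizer(u, v):
    """Smoothness energy using Laplacian derivatives of both flow
    components instead of first-order gradients."""
    return (laplacian(u) ** 2).sum() + (laplacian(v) ** 2).sum()
```

Note the practical difference: a gradient-based term penalizes any spatial variation of the flow, while this term vanishes on affine flow fields, which dominate in planar road scenes.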
Address | York; UK; August 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-40245-6 | Medium | |
Area | Expedition | Conference | CAIP | ||
Notes | ADAS; 600.055; 601.215 | Approved | no | ||
Call Number | Admin @ si @ OnS2013b | Serial | 2244 | ||
Permanent link to this record | |||||
Author | Naveen Onkarappa; Angel Sappa | ||||
Title | Synthetic sequences and ground-truth flow field generation for algorithm validation | Type | Journal Article | ||
Year | 2015 | Publication | Multimedia Tools and Applications | Abbreviated Journal | MTAP |
Volume | 74 | Issue | 9 | Pages | 3121-3135 |
Keywords | Ground-truth optical flow; Synthetic sequence; Algorithm validation | ||||
Abstract | Research in computer vision advances through the availability of good datasets that help to improve algorithms, validate results and obtain comparative analyses. Datasets can be real or synthetic. For some computer vision problems, such as optical flow, it is not possible to directly obtain high-accuracy ground-truth optical flow in natural outdoor scenarios with any sensor, although ground-truth data of real scenarios can be obtained in a laboratory setup with limited motion. In this difficult situation, computer graphics offers a viable option for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and generate sequences as well as ground-truth flow fields. In particular, we generate a dataset containing sequences of driving scenarios. The sequences in the dataset vary in the speed of the on-board vision system, the road texture, the complexity of the vehicle's motion and the presence of independently moving vehicles in the scene. This dataset enables the analysis and adaptation of existing optical flow methods, and can lead to new approaches, particularly for driver assistance systems. | ||||
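The core mechanism, ground-truth flow obtained from known scene geometry and camera motion, can be sketched for the simplest case of a planar scene under a known homography; the paper's renderer-based pipeline is far richer, so this is only an illustrative reduction.

```python
import numpy as np

def gt_flow_from_homography(H_mat, width, height):
    """Ground-truth flow of a planar scene: each pixel's displacement is
    its homography-mapped position minus its original position."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).astype(float)
    mapped = pts @ H_mat.T
    mapped = mapped[..., :2] / mapped[..., 2:3]        # dehomogenize
    return mapped - np.stack([xs, ys], axis=-1).astype(float)
```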
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer US | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1380-7501 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 601.215; 600.076 | Approved | no | ||
Call Number | Admin @ si @ OnS2014b | Serial | 2472 | ||
Permanent link to this record | |||||
Author | Naveen Onkarappa; Angel Sappa | ||||
Title | A Novel Space Variant Image Representation | Type | Journal Article | ||
Year | 2013 | Publication | Journal of Mathematical Imaging and Vision | Abbreviated Journal | JMIV |
Volume | 47 | Issue | 1-2 | Pages | 48-59 |
Keywords | Space-variant representation; Log-polar mapping; Onboard vision applications | ||||
Abstract | Traditionally, in machine vision images are represented using cartesian coordinates with uniform sampling along the axes. On the contrary, biological vision systems represent images using polar coordinates with non-uniform sampling. Because of the various advantages provided by space-variant representations, many researchers are interested in space-variant computer vision. In this direction, the current work proposes a novel and simple space-variant representation of images. The proposed representation is compared with the classical log-polar mapping. The log-polar representation is motivated by biological vision, having the characteristic of higher resolution at the fovea and reduced resolution at the periphery. Contrary to the log-polar mapping, the proposed representation has higher resolution at the periphery and lower resolution at the fovea. Our proposal proves to be a better representation in navigational scenarios such as driver assistance systems and robotics. The experimental results involve analysis of optical flow fields computed on both the proposed and log-polar representations. Additionally, an egomotion estimation application is shown as an illustrative example. The experimental analysis comprises results from synthetic as well as real sequences. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer US | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0924-9907 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.055; 605.203; 601.215 | Approved | no | ||
Call Number | Admin @ si @ OnS2013a | Serial | 2243 | ||
Permanent link to this record | |||||
Author | Gemma Roig; Xavier Boix; R. de Nijs; Sebastian Ramos; K. Kühnlenz; Luc Van Gool | ||||
Title | Active MAP Inference in CRFs for Efficient Semantic Segmentation | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 2312 - 2319 | ||
Keywords | Semantic Segmentation | ||||
Abstract | Most MAP inference algorithms for CRFs optimize an energy function knowing all the potentials. In this paper, we focus on CRFs where the computational cost of instantiating the potentials is orders of magnitude higher than that of MAP inference. This is often the case in semantic image segmentation, where most potentials are instantiated by slow classifiers fed with costly features. We introduce Active MAP inference 1) to select on the fly a subset of potentials to be instantiated in the energy function, leaving the rest of the parameters of the potentials unknown, and 2) to estimate the MAP labeling from such an incomplete energy function. Results for semantic segmentation benchmarks, namely PASCAL VOC 2010 [5] and MSRC-21 [19], show that Active MAP inference achieves similar levels of accuracy but with major efficiency gains. | ||||
Address | Sydney; Australia; December 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1550-5499 | ISBN | Medium | ||
Area | Expedition | Conference | ICCV | ||
Notes | ADAS; 600.057 | Approved | no | ||
Call Number | ADAS @ adas @ RBN2013 | Serial | 2377 | ||
Permanent link to this record | |||||
Author | Javier Marin; David Vazquez; Antonio Lopez; Jaume Amores; Bastian Leibe | ||||
Title | Random Forests of Local Experts for Pedestrian Detection | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 2592 - 2599 | ||
Keywords | ADAS; Random Forest; Pedestrian Detection | ||||
Abstract | Pedestrian detection is one of the most challenging tasks in computer vision, and has received a lot of attention in recent years. Recently, some authors have shown the advantages of using combinations of part/patch-based detectors in order to cope with the large variability of poses and the existence of partial occlusions. In this paper, we propose a pedestrian detection method that efficiently combines multiple local experts by means of a Random Forest ensemble. The proposed method works with rich block-based representations such as HOG and LBP, in such a way that the same features are reused by the multiple local experts, so that no extra computational cost is needed with respect to a holistic method. Furthermore, we demonstrate how to integrate the proposed approach with a cascaded architecture in order to achieve not only high accuracy but also acceptable efficiency. In particular, the resulting detector operates at five frames per second on a laptop. We tested the proposed method on well-known challenging datasets such as Caltech, ETH, Daimler, and INRIA. The method proposed in this work consistently ranks among the top performers on all the datasets, being either the best method or within a small margin of the best one. | ||||
Address | Sydney; Australia; December 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1550-5499 | ISBN | Medium | ||
Area | Expedition | Conference | ICCV | ||
Notes | ADAS; 600.057; 600.054 | Approved | no | ||
Call Number | ADAS @ adas @ MVL2013 | Serial | 2333 | ||
Permanent link to this record | |||||
Author | David Vazquez; Javier Marin; Antonio Lopez; Daniel Ponsa; David Geronimo | ||||
Title | Virtual and Real World Adaptation for Pedestrian Detection | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 36 | Issue | 4 | Pages | 797-809 |
Keywords | Domain Adaptation; Pedestrian Detection | ||||
Abstract | Pedestrian detection is of paramount interest for many applications. Most promising detectors rely on discriminatively learnt classifiers, i.e., trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? Conducted experiments show that virtual-world based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, as real-world based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that will operate in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing on real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.057; 600.054; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ VML2014 | Serial | 2275 | ||
Permanent link to this record | |||||
Author | Alejandro Gonzalez Alzate; Sebastian Ramos; David Vazquez; Antonio Lopez; Jaume Amores | ||||
Title | Spatiotemporal Stacked Sequential Learning for Pedestrian Detection | Type | Conference Article | ||
Year | 2015 | Publication | Pattern Recognition and Image Analysis, Proceedings of the 7th Iberian Conference, IbPRIA 2015 | Abbreviated Journal |
Volume | Issue | Pages | 3-12 | ||
Keywords | SSL; Pedestrian Detection | ||||
Abstract | Pedestrian classifiers decide which image windows contain a pedestrian. In practice, such classifiers provide a relatively high response at neighboring windows overlapping a pedestrian, while the responses around potential false positives are expected to be lower. An analogous reasoning applies to image sequences. If there is a pedestrian located within a frame, the same pedestrian is expected to appear close to the same location in neighboring frames. Therefore, such a location has a chance of receiving high classification scores during several frames, while false positives are expected to be more spurious. In this paper we propose to exploit such correlations to improve the accuracy of base pedestrian classifiers. In particular, we propose to use two-stage classifiers which rely not only on the image descriptors required by the base classifiers but also on the response of such base classifiers in a given spatiotemporal neighborhood. More specifically, we train pedestrian classifiers using a stacked sequential learning (SSL) paradigm. We use a new pedestrian dataset we have acquired from a car to evaluate our proposal at different frame rates. We also test on a well-known dataset: Caltech. The obtained results show that our SSL proposal boosts detection accuracy significantly with a minimal impact on the computational cost. Interestingly, SSL improves accuracy most in the most dangerous situations, i.e. when a pedestrian is close to the camera. | ||||
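The stacking step, augmenting the descriptor with base-classifier responses from a spatiotemporal neighborhood, can be sketched as follows. The clamping at sequence ends and the purely temporal (fixed-location) neighborhood are simplifying assumptions for illustration.

```python
import numpy as np

def stacked_descriptor(features, scores, t, k=1):
    """Second-stage SSL input for frame t: the original descriptor plus
    base-classifier scores from the 2k+1 surrounding frames (indices
    clamped at the sequence ends)."""
    idx = np.clip(np.arange(t - k, t + k + 1), 0, len(scores) - 1)
    return np.concatenate([features[t], scores[idx]])
```

A second-stage classifier trained on these stacked descriptors can then learn that a true pedestrian yields consistently high scores across neighboring frames, whereas a spurious false positive does not.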
Address | Santiago de Compostela; Spain; June 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | ACDC | Expedition | Conference | IbPRIA | |
Notes | ADAS; 600.057; 600.054; 600.076 | Approved | no | ||
Call Number | GRV2015; ADAS @ adas @ GRV2015 | Serial | 2454 | ||
Permanent link to this record | |||||
Author | Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez | ||||
Title | Incremental Domain Adaptation of Deformable Part-based Models | Type | Conference Article | ||
Year | 2014 | Publication | 25th British Machine Vision Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Pedestrian Detection; Part-based models; Domain Adaptation | ||||
Abstract | Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption when learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution with regard to the features used by the classifiers. However, in practice, different factors can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario: incremental domain adaptation for object detection assuming weak labeling. Our algorithm is therefore ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and it keeps improving as more such data arrives in a continuous fashion. To achieve this, we follow a multiple instance learning (MIL) paradigm that operates incrementally on a per-image basis. As a proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models obtain accuracy equal to that of the batch-learned models, while being more flexible for handling continuously arriving target-domain data. | ||||
Address | Nottingham; UK; September 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | BMVA Press | Place of Publication | Editor | Valstar, Michel and French, Andrew and Pridmore, Tony | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | BMVC | ||
Notes | ADAS; 600.057; 600.054; 600.076 | Approved | no | ||
Call Number | XRV2014c; ADAS @ adas @ xrv2014c | Serial | 2455 | ||
Permanent link to this record | |||||
Author | Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez | ||||
Title | Domain Adaptation of Deformable Part-Based Models | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 36 | Issue | 12 | Pages | 2367-2380 |
Keywords | Domain Adaptation; Pedestrian Detection | ||||
Abstract | The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors. | ||||
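The adaptation principle behind A-SSVM, retraining on target data while penalizing deviation from the pre-learned source weights so that source data need not be revisited, can be sketched for a plain linear hinge-loss classifier. The subgradient solver and hyperparameters are assumptions; the paper works with structured SVMs and exploits DPM part structure, which this sketch omits.

```python
import numpy as np

def adapt_linear(w_src, X, y, lam=0.1, lr=0.01, iters=200):
    """Adapt a pre-learned linear classifier to target data (X, y in
    {-1,+1}) by minimizing hinge loss plus lam*||w - w_src||^2, i.e.
    regularizing toward the source weights instead of toward zero."""
    w = w_src.copy()
    for _ in range(iters):
        margins = y * (X @ w)
        active = margins < 1            # hinge-loss subgradient support
        grad = lam * (w - w_src) - (y[active, None] * X[active]).sum(axis=0) / len(y)
        w -= lr * grad
    return w
```

The regularizer is the whole trick: with few target samples, the penalty keeps the adapted classifier anchored to the source solution instead of overfitting.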
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 600.057; 600.054; 601.217; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ XRV2014b | Serial | 2436 | ||
Permanent link to this record | |||||
Author | Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez | ||||
Title | Cost-sensitive Structured SVM for Multi-category Domain Adaptation | Type | Conference Article | ||
Year | 2014 | Publication | 22nd International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 3886 - 3891 | ||
Keywords | Domain Adaptation; Pedestrian Detection | ||||
Abstract | Domain adaptation addresses the problem of the accuracy drop that a classifier may suffer when the training data (source domain) and the testing data (target domain) are drawn from different distributions. In this work, we focus on domain adaptation for the structured SVM (SSVM). We propose a cost-sensitive domain adaptation method for SSVM, namely COSS-SSVM. In particular, during the re-training of an adapted classifier based on target and source data, the idea that we explore consists in introducing a non-zero cost even for correctly classified source-domain samples. In this way, we aim to learn a more target-oriented classifier by not rewarding (zero loss) properly classified source-domain training samples. We assess the effectiveness of COSS-SSVM on multi-category object recognition. | ||||
Address | Stockholm; Sweden; August 2014 | ||||
Corporate Author | Thesis | ||||
Publisher | IEEE | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1051-4651 | ISBN | Medium | ||
Area | Expedition | Conference | ICPR | ||
Notes | ADAS; 600.057; 600.054; 601.217; 600.076 | Approved | no | ||
Call Number | ADAS @ adas @ XRV2014a | Serial | 2434 | ||
Permanent link to this record | |||||
Author | Joan Serrat; Felipe Lumbreras; Antonio Lopez | ||||
Title | Cost estimation of custom hoses from STL files and CAD drawings | Type | Journal Article | ||
Year | 2013 | Publication | Computers in Industry | Abbreviated Journal | COMPUTIND |
Volume | 64 | Issue | 3 | Pages | 299-309 |
Keywords | On-line quotation; STL format; Regression; Gaussian process | ||||
Abstract | We present a method for the cost estimation of custom hoses from CAD models. These can come in two formats, both easy to generate: an STL file or the image of a CAD drawing showing several orthogonal projections. The challenges in either case are, first, to obtain from them a high-level 3D description of the shape and, second, to learn a regression function for the prediction of the manufacturing time based on geometric features of the reconstructed shape. The chosen description is the 3D line along the medial axis of the tube together with the diameter of the circular sections along it. In order to extract it from STL files, we have adapted RANSAC, a robust parametric fitting algorithm. As for CAD drawing images, we propose a new technique for 3D reconstruction from data entered on any number of orthogonal projections. The regression function is a Gaussian process, which does not constrain the function to adopt any specific form and is governed by just two parameters. We assess the accuracy of the manufacturing time estimation by k-fold cross-validation on 171 STL file models for which the time is provided by an expert. The results show the feasibility of the method, whereby the relative error for 80% of the testing samples is below 15%. | ||||
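The regression component, a Gaussian process governed by just two parameters, can be sketched with an RBF kernel whose length scale and noise level are those two knobs. This is a generic GP mean predictor, not the paper's fitted model or its geometric features.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-2):
    """Gaussian-process regression mean with an RBF kernel; the two
    governing parameters are the kernel length scale and the noise
    level, mirroring the two-parameter model described above."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)   # (K + noise*I)^-1 y
    return k(X_test, X_train) @ alpha
```

In the paper's setting, `X_train` would hold geometric features of reconstructed hoses and `y_train` the expert-provided manufacturing times.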
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.057; 600.054; 605.203 | Approved | no | ||
Call Number | Admin @ si @ SLL2013; ADAS @ adas @ | Serial | 2161 | ||
Permanent link to this record |