Records | |||||
---|---|---|---|---|---|
Author | Hassan Ahmed Sial; Ramon Baldrich; Maria Vanrell; Dimitris Samaras | ||||
Title | Light Direction and Color Estimation from Single Image with Deep Regression | Type | Conference Article | ||
Year | 2020 | Publication | London Imaging Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
We present a method to estimate the direction and color of the scene light source from a single image. Our method is based on two main ideas: (a) we use a new synthetic dataset with strong shadow effects with similar constraints to the SID dataset; (b) we define a deep architecture trained on the mentioned dataset to estimate the direction and color of the scene light source. Apart from showing good performance on synthetic images, we additionally propose a preliminary procedure to obtain light positions of the Multi-Illumination dataset, and, in this way, we also prove that our trained model achieves good performance when it is applied to real scenes. | ||||
Address | Virtual; September 2020 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | LIM | ||
Notes | CIC; 600.118; 600.140; | Approved | no | ||
Call Number | Admin @ si @ SBV2020 | Serial | 3460 | ||
Permanent link to this record | |||||
Author | Hugo Bertiche; Meysam Madadi; Sergio Escalera | ||||
Title | PBNS: Physically Based Neural Simulation for Unsupervised Garment Pose Space Deformation | Type | Conference Article | ||
Year | 2021 | Publication | 14th ACM SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
We present a methodology to automatically obtain a Pose Space Deformation (PSD) basis for rigged garments through deep learning. Classical approaches rely on Physically Based Simulation (PBS) to animate clothes. These are general solutions that, given a sufficiently fine-grained discretization of space and time, can achieve highly realistic results. However, they are computationally expensive, and any scene modification prompts the need for re-simulation. Linear Blend Skinning (LBS) with PSD offers a lightweight alternative to PBS, though it needs huge volumes of data to learn a proper PSD. We propose using deep learning, formulated as an implicit PBS, to learn realistic cloth Pose Space Deformations without supervision in a constrained scenario: dressed humans. Furthermore, we show it is possible to train these models in an amount of time comparable to a PBS of a few sequences. To the best of our knowledge, we are the first to propose a neural simulator for cloth.
While deep-based approaches in the domain are becoming a trend, these are data-hungry models. Moreover, authors often propose complex formulations to better learn wrinkles from PBS data. Supervised learning leads to physically inconsistent predictions that require collision solving before use. Also, dependency on PBS data limits the scalability of these solutions, while their formulation hinders their applicability and compatibility. By proposing an unsupervised methodology to learn PSD for LBS models (the 3D animation standard), we overcome both of these drawbacks. The results obtained show cloth consistency in the animated garments and meaningful pose-dependent folds and wrinkles. Our solution is extremely efficient, handles multiple layers of cloth, allows unsupervised outfit resizing and can be easily applied to any custom 3D avatar. |
||||
Address | Virtual; December 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SIGGRAPH | ||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ BME2021b | Serial | 3641 | ||
Permanent link to this record | |||||
Author | Hugo Bertiche; Meysam Madadi; Sergio Escalera | ||||
Title | PBNS: Physically Based Neural Simulation for Unsupervised Garment Pose Space Deformation | Type | Journal Article | ||
Year | 2021 | Publication | ACM Transactions on Graphics | Abbreviated Journal | |
Volume | 40 | Issue | 6 | Pages | 1-14 |
Keywords | |||||
Abstract |
We present a methodology to automatically obtain a Pose Space Deformation (PSD) basis for rigged garments through deep learning. Classical approaches rely on Physically Based Simulation (PBS) to animate clothes. These are general solutions that, given a sufficiently fine-grained discretization of space and time, can achieve highly realistic results. However, they are computationally expensive, and any scene modification prompts the need for re-simulation. Linear Blend Skinning (LBS) with PSD offers a lightweight alternative to PBS, though it needs huge volumes of data to learn a proper PSD. We propose using deep learning, formulated as an implicit PBS, to learn realistic cloth Pose Space Deformations without supervision in a constrained scenario: dressed humans. Furthermore, we show it is possible to train these models in an amount of time comparable to a PBS of a few sequences. To the best of our knowledge, we are the first to propose a neural simulator for cloth.
While deep-based approaches in the domain are becoming a trend, these are data-hungry models. Moreover, authors often propose complex formulations to better learn wrinkles from PBS data. Supervised learning leads to physically inconsistent predictions that require collision solving before use. Also, dependency on PBS data limits the scalability of these solutions, while their formulation hinders their applicability and compatibility. By proposing an unsupervised methodology to learn PSD for LBS models (the 3D animation standard), we overcome both of these drawbacks. The results obtained show cloth consistency in the animated garments and meaningful pose-dependent folds and wrinkles. Our solution is extremely efficient, handles multiple layers of cloth, allows unsupervised outfit resizing and can be easily applied to any custom 3D avatar. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ BME2021c | Serial | 3643 | ||
Permanent link to this record | |||||
Author | Hugo Bertiche; Meysam Madadi; Sergio Escalera | ||||
Title | Deep Parametric Surfaces for 3D Outfit Reconstruction from Single View Image | Type | Conference Article | ||
Year | 2021 | Publication | 16th IEEE International Conference on Automatic Face and Gesture Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1-8 | ||
Keywords | |||||
Abstract |
We present a methodology to retrieve analytical surfaces parametrized as a neural network. Previous works on 3D reconstruction yield point clouds, voxelized objects or meshes. Instead, our approach yields 2-manifolds in Euclidean space through deep learning. To this end, we implement a novel formulation of fully connected layers as parametrized manifolds that allows continuous predictions with differential geometry. Based on this property, we propose a novel smoothness loss. Results on the CLOTH3D++ dataset show the possibility of inferring different topologies and the benefits of the smoothness term based on differential geometry. | ||||
Address | Virtual; December 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | FG | ||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ BME2021 | Serial | 3640 | ||
Permanent link to this record | |||||
Author | Cristina Cañero; Petia Radeva; Oriol Pujol; Ricardo Toledo; Debora Gil; J. Saludes; Juan J. Villanueva; B. Garcia del Blanco; J. Mauri; E. Fernandez-Nofrerias; J.A. Gomez-Hospital; E. Iraculis; J. Comin; C. Quiles; F. Jara; A. Cequier; E. Esplugas | ||||
Title | Optimal Stent Implantation: Three-dimensional Evaluation of the Mutual Position of Stent and Vessel via Intracoronary Echography | Type | Conference Article | ||
Year | 1999 | Publication | Proceedings of the International Conference on Computers in Cardiology (CIC'99) | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
We present a new automatic technique to visualize and quantify the mutual position of the stent and the vessel wall by considering their three-dimensional reconstruction. Two deformable generalized cylinders adapt to the image features in all IVUS planes corresponding to the vessel wall and the stent, in order to reconstruct the boundaries of the stent and the vessel in space. The image features that characterize the stent and the vessel wall are determined in terms of edge and ridge image detectors, taking into account the gray level of the image pixels. We show that the 3D reconstruction by deformable cylinders is accurate and robust due to the spatial data coherence in the considered volumetric IVUS image. The main clinical utility of the stent and vessel reconstruction by deformable cylinders is the possibility to visualize and assess optimal stent implantation. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; RV; IAM; ADAS; HuPBA | Approved | no | ||
Call Number | IAM @ iam @ CRP1999a | Serial | 1491 | ||
Permanent link to this record | |||||
Author | Minesh Mathew; Dimosthenis Karatzas; C.V. Jawahar | ||||
Title | DocVQA: A Dataset for VQA on Document Images | Type | Conference Article | ||
Year | 2021 | Publication | IEEE Winter Conference on Applications of Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 2200-2209 | ||
Keywords | |||||
Abstract |
We present a new dataset for Visual Question Answering (VQA) on document images called DocVQA. The dataset consists of 50,000 questions defined on 12,000+ document images. A detailed analysis of the dataset in comparison with similar datasets for VQA and reading comprehension is presented. We report several baseline results obtained by adopting existing VQA and reading comprehension models. Although the existing models perform reasonably well on certain types of questions, there is a large performance gap compared to human performance (94.36% accuracy). The models particularly need to improve on questions where understanding the structure of the document is crucial. The dataset, code and leaderboard are available at docvqa.org. | ||||
Address | Virtual; January 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WACV | ||
Notes | DAG; 600.121 | Approved | no | ||
Call Number | Admin @ si @ MKJ2021 | Serial | 3498 | ||
Permanent link to this record | |||||
Author | Albert Gordo; Florent Perronnin; Ernest Valveny | ||||
Title | Large-scale document image retrieval and classification with runlength histograms and binary embeddings | Type | Journal Article | ||
Year | 2013 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 46 | Issue | 7 | Pages | 1898-1905 |
Keywords | visual document descriptor; compression; large-scale; retrieval; classification | ||||
Abstract |
We present a new document image descriptor based on multi-scale runlength histograms. This descriptor does not rely on layout analysis and can be computed efficiently. We show how this descriptor can achieve state-of-the-art results on two very different public datasets in classification and retrieval tasks. Moreover, we show how we can compress and binarize these descriptors to make them suitable for large-scale applications. We can achieve state-of-the-art results in classification using binary descriptors of as few as 16 to 64 bits. |
||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; 600.042; 600.045; 605.203 | Approved | no | ||
Call Number | Admin @ si @ GPV2013 | Serial | 2306 | ||
Permanent link to this record | |||||
Author | Muhammad Muzzamil Luqman; Josep Llados; Jean-Yves Ramel; Thierry Brouard | ||||
Title | A Fuzzy-Interval Based Approach For Explicit Graph Embedding | Type | Conference Article | ||
Year | 2010 | Publication | 20th International Conference on Pattern Recognition: Recognizing Patterns in Signals, Speech, Images and Videos | Abbreviated Journal |
Volume | 6388 | Issue | Pages | 93–98 | |
Keywords | |||||
Abstract |
We present a new method for explicit graph embedding. Our algorithm extracts a feature vector for an undirected attributed graph. The proposed feature vector encodes details about the number of nodes, the number of edges, the node degrees, and the attributes of nodes and edges in the graph. The first two features are the number of nodes and the number of edges. These are followed by w features for node degrees, m features for the k node attributes and n features for the l edge attributes, which represent the distribution of node degrees, node attribute values and edge attribute values, and are obtained by defining, in an unsupervised fashion, fuzzy intervals over the lists of node degrees, node attributes and edge attributes. Experimental results are provided for the sample data of the ICPR 2010 GEPR contest. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer, Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-17710-1 | Medium | |
Area | Expedition | Conference | ICPR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ LLR2010 | Serial | 1459 | ||
Permanent link to this record | |||||
Author | Muhammad Muzzamil Luqman; Thierry Brouard; Jean-Yves Ramel; Josep Llados | ||||
Title | Vers une approche floue d'encapsulation de graphes : application à la reconnaissance de symboles [Towards a fuzzy approach to graph embedding: application to symbol recognition] | Type | Conference Article | ||
Year | 2010 | Publication | Colloque International Francophone sur l'Écrit et le Document | Abbreviated Journal | |
Volume | Issue | Pages | 169-184 | ||
Keywords | Fuzzy interval; Graph embedding; Bayesian network; Symbol recognition | ||||
Abstract |
We present a new methodology for symbol recognition that employs a structural approach for representing visual associations in symbols and a statistical classifier for recognition. A graphic symbol is vectorized, its topological and geometrical details are encoded by an attributed relational graph, and a signature is computed for it. Data-adapted fuzzy intervals are introduced to address the sensitivity of structural representations to noise. The joint probability distribution of signatures is encoded by a Bayesian network, which serves as a mechanism for pruning irrelevant features and choosing a subset of interesting features from the structural signatures of the underlying symbol set, and is deployed in a supervised learning scenario for recognizing query symbols. Experimental results on pre-segmented 2D linear architectural and electronic symbols from the GREC databases are presented. | ||||
Address | Sousse, Tunisia | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CIFED | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ LBR2010a | Serial | 1293 | ||
Permanent link to this record | |||||
Author | Victor Ponce; Sergio Escalera; Marc Perez; Oriol Janes; Xavier Baro | ||||
Title | Non-Verbal Communication Analysis in Victim-Offender Mediations | Type | Journal Article | ||
Year | 2015 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 67 | Issue | 1 | Pages | 19-27 |
Keywords | Victim–Offender Mediation; Multi-modal human behavior analysis; Face and gesture recognition; Social signal processing; Computer vision; Machine learning | ||||
Abstract |
We present a non-invasive ambient intelligence framework for the semi-automatic analysis of non-verbal communication applied to the restorative justice field. We propose the use of computer vision and social signal processing technologies in real scenarios of Victim–Offender Mediations, applying feature extraction techniques to multi-modal audio-RGB-depth data. We compute a set of behavioral indicators that define communicative cues from the fields of psychology and observational methodology. We test our methodology on data captured in real Victim–Offender Mediation sessions in Catalonia. We define the ground truth based on expert opinions when annotating the observed social responses. Using different state-of-the-art binary classification approaches, our system achieves recognition accuracies of 86% when predicting satisfaction, and 79% when predicting both agreement and receptivity. Applying a regression strategy, we obtain a mean deviation for the predictions between 0.5 and 0.7 in the range [1–5] for the computed social signals. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MV | Approved | no | ||
Call Number | Admin @ si @ PEP2015 | Serial | 2583 | ||
Permanent link to this record | |||||
Author | Wenjuan Gong; Jordi Gonzalez; Xavier Roca | ||||
Title | Human Action Recognition based on Estimated Weak Poses | Type | Journal Article | ||
Year | 2012 | Publication | EURASIP Journal on Advances in Signal Processing | Abbreviated Journal | EURASIPJ |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract |
We present a novel method for human action recognition (HAR) based on estimated poses from image sequences. We use 3D human pose data as additional information and propose a compact human pose representation, called a weak pose, in a low-dimensional space, while still keeping the most discriminative information for a given pose. With poses predicted from image features, we map the problem from image feature space to pose space, where a Bag of Poses (BOP) model is learned for the final goal of HAR. The BOP model is a modified version of the classical bag-of-words pipeline, building the vocabulary from the most representative weak poses for a given action. Compared with standard k-means clustering, our vocabulary selection criterion proves more efficient and robust against the inherent challenges of action recognition. Moreover, since the ordering of the poses is discriminative for action recognition, the BOP model incorporates temporal information: in essence, groups of consecutive poses are considered together when computing the vocabulary and assignment. We tested our method on two well-known datasets, HumanEva and IXMAS, to demonstrate that weak poses help improve action recognition accuracies. The proposed method is scene-independent and comparable with the state-of-the-art methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ GGR2012 | Serial | 2003 | ||
Permanent link to this record | |||||
Author | Fares Alnajar; Theo Gevers; Roberto Valenti; Sennay Ghebreab | ||||
Title | Calibration-free Gaze Estimation using Human Gaze Patterns | Type | Conference Article | ||
Year | 2013 | Publication | 15th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 137-144 | ||
Keywords | |||||
Abstract |
We present a novel method to auto-calibrate gaze estimators based on gaze patterns obtained from other viewers. Our method is based on the observation that the gaze patterns of humans are indicative of where a new viewer will look [12]. When a new viewer is looking at a stimulus, we first estimate a topology of gaze points (initial gaze points). Next, these points are transformed so that they match the gaze patterns of other humans in order to find the correct gaze points. In a flexible uncalibrated setup with a web camera and no chin rest, the proposed method was tested on ten subjects and ten images. The method estimates the gaze points after looking at a stimulus for a few seconds with an average accuracy of 4.3°. Although the reported performance is lower than what could be achieved with dedicated hardware or a calibrated setup, the proposed method still provides sufficient accuracy to trace the viewer's attention. This is promising considering that auto-calibration is done in a flexible setup, without the use of a chin rest, and based only on a few seconds of gaze initialization data. To the best of our knowledge, this is the first work to use human gaze patterns in order to auto-calibrate gaze estimators. | ||||
Address | Sydney | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ AGV2013 | Serial | 2365 | ||
Permanent link to this record | |||||
Author | Hugo Bertiche; Meysam Madadi; Emilio Tylson; Sergio Escalera | ||||
Title | DeePSD: Automatic Deep Skinning And Pose Space Deformation For 3D Garment Animation | Type | Conference Article | ||
Year | 2021 | Publication | 19th IEEE International Conference on Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | 5471-5480 | ||
Keywords | |||||
Abstract |
We present a novel solution to the garment animation problem through deep learning. Our contribution allows animating any template outfit with arbitrary topology and geometric complexity. Recent works develop models for garment editing, resizing and animation at the same time by leveraging the support body model (encoding garments as body homotopies). This leads to complex engineering solutions that suffer in scalability, applicability and compatibility. By limiting our scope to garment animation only, we are able to propose a simple model that can animate any outfit, independently of its topology, vertex order or connectivity. Our proposed architecture maps outfits to animated 3D models in the standard format for 3D animation (blend weights and blend shapes matrices), automatically providing compatibility with any graphics engine. We also propose a methodology to complement supervised learning with unsupervised physically based learning that implicitly solves collisions and enhances cloth quality. | ||||
Address | Virtual; October 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCV | ||
Notes | HUPBA; no menciona | Approved | no | ||
Call Number | Admin @ si @ BMT2021 | Serial | 3606 | ||
Permanent link to this record | |||||
Author | Reza Azad; Afshin Bozorgpour; Maryam Asadi-Aghbolaghi; Dorit Merhof; Sergio Escalera | ||||
Title | Deep Frequency Re-Calibration U-Net for Medical Image Segmentation | Type | Conference Article | ||
Year | 2021 | Publication | IEEE/CVF International Conference on Computer Vision Workshops | Abbreviated Journal | |
Volume | Issue | Pages | 3274-3283 | ||
Keywords | |||||
Abstract |
| ||||
Address | VIRTUAL; October 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | HUPBA; no proj | Approved | no | ||
Call Number | Admin @ si @ ABA2021 | Serial | 3645 | ||
Permanent link to this record | |||||
Author | German Ros; J. Guerrero; Angel Sappa; Antonio Lopez | ||||
Title | VSLAM pose initialization via Lie groups and Lie algebras optimization | Type | Conference Article | ||
Year | 2013 | Publication | Proceedings of IEEE International Conference on Robotics and Automation | Abbreviated Journal | |
Volume | Issue | Pages | 5740 - 5747 | ||
Keywords | SLAM | ||||
Abstract |
We present a novel technique for estimating initial 3D poses in the context of localization and Visual SLAM problems. The presented approach can deal with noise, outliers and large amounts of input data, and still performs in real time on a standard CPU. Our method produces solutions with an accuracy comparable to those produced by RANSAC, but can be much faster when the percentage of outliers is high or for large amounts of input data. In the current work we propose to formulate pose estimation as an optimization problem on Lie groups, considering their manifold structure as well as their associated Lie algebras. This allows us to perform a fast and simple optimization while conserving all the constraints imposed by the Lie group SE(3). Additionally, we present several key design concepts related to the cost function and its Jacobian, aspects that are critical for the good performance of the algorithm. | ||||
Address | Karlsruhe; Germany; May 2013 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1050-4729 | ISBN | 978-1-4673-5641-1 | Medium | |
Area | Expedition | Conference | ICRA | ||
Notes | ADAS; 600.054; 600.055; 600.057 | Approved | no | ||
Call Number | Admin @ si @ RGS2013a; ADAS @ adas @ | Serial | 2225 | ||
Permanent link to this record |