Records
Author: Jose Luis Gomez; Gabriel Villalonga; Antonio Lopez
Title: Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models
Type: Journal Article
Year: 2023
Publication: Sensors – Special Issue on “Machine Learning for Autonomous Driving Perception and Prediction”
Abbreviated Journal: SENS
Volume: 23
Issue: 2
Pages: 621
Keywords: Domain adaptation; Semi-supervised learning; Semantic segmentation; Autonomous driving
Abstract: Semantic image segmentation is a central and challenging task in autonomous driving, addressed by training deep models. Since this training requires costly human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It consists of a self-training stage, which provides two domain-adapted models, and a model collaboration loop for the mutual improvement of these two models. These models are then used to provide the final semantic segmentation labels (pseudo-labels) for the real-world images. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., it requires neither modifying loss functions nor explicit feature alignment. We test our proposal on standard synthetic and real-world datasets for on-board semantic segmentation. Our procedure shows improvements ranging from ∼13 to ∼26 mIoU points over baselines, thus establishing new state-of-the-art results.
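Read as an algorithm, the collaboration loop described in the abstract exchanges only pseudo-labels between two otherwise opaque models. The sketch below illustrates that reading; `pseudo_label`, `finetune`, the confidence threshold, and the number of rounds are hypothetical placeholders for exposition, not the authors' implementation.

```python
# Minimal sketch of pseudo-label-level co-training between two black-box
# segmentation models. All helper names and hyperparameters are assumptions.
import numpy as np

def pseudo_label(model, images, conf_thresh=0.9):
    """Per-pixel argmax labels; low-confidence pixels are marked ignore (-1)."""
    probs = model(images)                      # (N, C, H, W) softmax scores
    conf, labels = probs.max(axis=1), probs.argmax(axis=1)
    labels[conf < conf_thresh] = -1            # drop unreliable pixels
    return labels

def cotrain(model_a, model_b, target_images, finetune, rounds=5):
    """Each round, every model is fine-tuned on its peer's pseudo-labels,
    so the two models correct each other without any change to their
    loss functions or internal features."""
    for _ in range(rounds):
        labels_a = pseudo_label(model_a, target_images)
        labels_b = pseudo_label(model_b, target_images)
        model_a = finetune(model_a, target_images, labels_b)  # peer's labels
        model_b = finetune(model_b, target_images, labels_a)
    return model_a, model_b
```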
Notes: ADAS; no proj
Approved: no
Call Number: Admin @ si @ GVL2023
Serial: 3705

Author: M. Altillawi; S. Li; S.M. Prakhya; Z. Liu; Joan Serrat
Title: Implicit Learning of Scene Geometry From Poses for Global Localization
Type: Journal Article
Year: 2024
Publication: IEEE Robotics and Automation Letters
Abbreviated Journal: ROBOTAUTOMLET
Volume: 9
Issue: 2
Pages: 955-962
Keywords: Localization; Localization and mapping; Deep learning for visual perception; Visual learning
Abstract: Global visual localization estimates the absolute pose of a camera from a single image in a previously mapped area. Obtaining the pose from a single image enables many robotics and augmented/virtual reality applications. Inspired by the latest advances in deep learning, many existing approaches directly learn and regress the 6 DoF pose from an input image. However, these methods do not fully utilize the underlying scene geometry for pose regression. The challenge in monocular relocalization is the minimal availability of supervised training data, which is just the corresponding 6 DoF poses of the images. In this letter, we propose to utilize these minimal available labels (i.e., poses) to learn the underlying 3D geometry of the scene and use the geometry to estimate the 6 DoF camera pose. We present a learning method that uses these pose labels and rigid alignment to learn two 3D geometric representations (X, Y, Z coordinates) of the scene, one in the camera coordinate frame and the other in the global coordinate frame. Given a single image, it estimates these two 3D scene representations, which are then aligned to estimate a pose that matches the pose label. This formulation allows for the active inclusion of additional learning constraints to minimize 3D alignment errors between the two 3D scene representations and 2D re-projection errors between the 3D global scene representation and 2D image pixels, resulting in improved localization accuracy. During inference, our model estimates the 3D scene geometry in the camera and global frames and rigidly aligns them to obtain the pose in real time. We evaluate our work on three common visual localization datasets, conduct ablation studies, and show that our method exceeds the pose accuracy of state-of-the-art regression methods on all datasets.
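At inference, the method rigidly aligns the predicted camera-frame and global-frame point maps to obtain the pose. A closed-form Kabsch/Procrustes solve is the standard tool for such a rigid alignment, sketched below under that assumption; the function name and the reflection handling are illustrative, and the letter's exact alignment (e.g., any per-point weighting) may differ.

```python
# Hedged sketch of the rigid-alignment step: given corresponding 3D points
# in the camera frame and in the global frame, an SVD-based Kabsch solve
# recovers rotation R and translation t, i.e., the 6 DoF camera pose.
import numpy as np

def rigid_align(pts_cam, pts_world):
    """Return R (3x3), t (3,) with pts_world ≈ R @ pts_cam + t.
    pts_cam, pts_world: (N, 3) corresponding 3D points, N >= 3."""
    mu_c, mu_w = pts_cam.mean(axis=0), pts_world.mean(axis=0)
    H = (pts_cam - mu_c).T @ (pts_world - mu_w)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection so the result is a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_w - R @ mu_c
    return R, t                                    # camera-to-global transform
```

The returned (R, t) maps camera-frame points into the global frame, which is exactly the absolute pose a relocalization system reports.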
ISSN: 2377-3766
Notes: ADAS
Approved: no
Call Number: Admin @ si @
Serial: 3857