%0 Journal Article
%T Multimodal Inverse Perspective Mapping
%A Miguel Oliveira
%A Victor Santos
%A Angel Sappa
%J Information Fusion
%D 2015
%V 24
%F Miguel Oliveira2015
%O ADAS; 600.055; 600.076
%X Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free space estimation. There is, however, a significant limitation in inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on the use of multimodal sensor fusion. Data from a laser range finder is fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.
%K Inverse perspective mapping
%K Multimodal sensor fusion
%K Intelligent vehicles
%U http://dx.doi.org/10.1016/j.inffus.2014.09.003
%P 108–121