PT Journal
AU Oliveira, Miguel
   Santos, Victor
   Sappa, Angel
TI Multimodal Inverse Perspective Mapping
SO Information Fusion
JI IF
PY 2015
BP 108
EP 121
VL 24
DI 10.1016/j.inffus.2014.09.003
DE Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles
AB Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system in which perspective effects are removed. Removing these perspective-associated effects facilitates road and obstacle detection and also assists in free-space estimation. There is, however, a significant limitation of inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time compared with classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as with dynamic viewpoints.
ER
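The abstract describes masking out obstacle regions, detected by the laser range finder, before computing the inverse perspective mapping. The Python sketch below illustrates that idea only in outline: it assumes a precomputed ground-plane homography and a binary obstacle mask already projected into image coordinates, and it is not the paper's implementation; function names and parameters (masked_ipm, obstacle_mask, out_size) are illustrative.

# Hedged sketch: obstacle-masked inverse perspective mapping (IPM).
# Assumes a 3x3 ground-plane homography H (image -> bird's-eye view) and a
# binary obstacle mask in image coordinates derived from laser range data.
# Names and parameters are illustrative assumptions, not the paper's API.
import cv2
import numpy as np

def masked_ipm(image, H, obstacle_mask, out_size=(500, 500)):
    """Warp only non-obstacle pixels onto the ground plane.

    image         : HxWx3 camera frame
    H             : 3x3 homography mapping image pixels to the bird's-eye view
    obstacle_mask : HxW uint8 mask, 255 where the laser reports an obstacle
    out_size      : (width, height) of the bird's-eye view image
    """
    # Suppress pixels flagged as obstacles so they are not projected onto the
    # ground plane, where non-planar objects would appear as smeared artifacts.
    free_space = cv2.bitwise_and(image, image, mask=cv2.bitwise_not(obstacle_mask))

    # Classical IPM step: a single planar homography warp of the remaining pixels.
    return cv2.warpPerspective(free_space, H, out_size)

if __name__ == "__main__":
    # Toy usage with a synthetic frame and a hypothetical obstacle region.
    frame = np.full((480, 640, 3), 128, dtype=np.uint8)
    mask = np.zeros((480, 640), dtype=np.uint8)
    mask[200:300, 250:400] = 255  # hypothetical obstacle reported by the laser

    # Example homography from four image points to four ground-plane points.
    src = np.float32([[100, 480], [540, 480], [400, 250], [240, 250]])
    dst = np.float32([[150, 500], [350, 500], [350, 0], [150, 0]])
    H = cv2.getPerspectiveTransform(src, dst)

    top_view = masked_ipm(frame, H, mask, out_size=(500, 500))

In this sketch the fusion is reduced to a per-pixel mask; the reported reduction in computation time is consistent with skipping the warp for masked regions, but how the paper schedules that computation is not specified in the abstract.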