PT Journal
AU Alejandro Gonzalez Alzate
   David Vazquez
   Antonio Lopez
   Jaume Amores
TI On-Board Object Detection: Multicue, Multimodal, and Multiview Random Forest of Local Experts
SO IEEE Transactions on Cybernetics
JI IEEE Trans. Cybern.
PY 2017
BP 3980
EP 3990
VL 47
IS 11
DI 10.1109/TCYB.2016.2593940
DE Multicue; multimodal; multiview; object detection
AB Despite recent significant advances, object detection remains an extremely challenging problem in real scenarios. To develop a detector that operates successfully under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multiview (MV) classifier that accounts for different object views and poses. In this paper, we provide an extensive evaluation that gives insight into how each of these aspects (multicue, multimodality, and strong MV classifier) affects accuracy, both individually and when integrated together. For the multimodality component, we explore the fusion of RGB and depth maps obtained by high-definition light detection and ranging, a modality that is receiving increasing attention. As our analysis reveals, although all the aforementioned aspects significantly improve accuracy, the fusion of visible-spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the top performers on the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient.
ER