Meysam Madadi, Sergio Escalera, Jordi Gonzalez, Xavier Roca, & Felipe Lumbreras. (2015). Multi-part body segmentation based on depth maps for soft biometry analysis. PRL - Pattern Recognition Letters, 56, 14–21.
Abstract: This paper presents a novel method for extracting biometric measures using depth sensors. Given multi-part labeled training data, a new subject is aligned to the best-matching model of the dataset, and soft biometrics such as lengths or circumference sizes of limbs and body are computed. The process is performed by training relevant pose clusters, defining a representative model, and fitting a 3D shape context descriptor within an iterative matching procedure. Robust measures are obtained by applying orthogonal planes to the body hull. We test our approach on a novel full-body RGB-Depth data set, showing accurate estimation of soft biometrics and better segmentation accuracy than a random-forest approach, without requiring large training data.
Keywords: 3D shape context; 3D point cloud alignment; Depth maps; Human body segmentation; Soft biometry analysis
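The circumference measures described in the abstract can be illustrated with a minimal sketch: slice a 3D point cloud with a plane orthogonal to the body axis and take the convex-hull perimeter of the resulting cross-section as the circumference estimate. This is an illustrative stand-in, not the authors' implementation; the function names are invented, and a synthetic cylinder plays the role of a real limb scan.

```python
import math
import numpy as np

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in order."""
    pts = sorted(map(tuple, points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def circumference_at_height(cloud, z0, tol):
    """Cut the cloud with the plane z = z0 (thickness 2*tol) and
    measure the perimeter of the cross-section's 2D convex hull."""
    section = cloud[np.abs(cloud[:, 2] - z0) < tol][:, :2]
    hull = convex_hull(section)
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

# Toy "limb": a vertical cylinder of radius 0.05 m and height 0.4 m.
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * math.pi, 5000)
z = rng.uniform(0.0, 0.4, 5000)
cloud = np.stack([0.05 * np.cos(theta), 0.05 * np.sin(theta), z], axis=1)

c = circumference_at_height(cloud, z0=0.2, tol=0.02)
# For a cylinder the estimate should approach 2*pi*r.
```

On a real depth-sensor scan the slice would be taken after the multi-part alignment step, so the cutting plane is orthogonal to the matched limb's axis rather than to a global vertical.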
Angel Sappa, P. Carvajal, Cristhian A. Aguilera-Carrasco, Miguel Oliveira, Dennis Romero, & Boris X. Vintimilla. (2016). Wavelet based visible and infrared image fusion: a comparative study. SENS - Sensors, 16(6), 1–15.
Abstract: This paper evaluates different wavelet-based cross-spectral image fusion strategies adopted to merge visible and infrared images. The objective is to find the best setup independently of the evaluation metric used to measure performance. Quantitative performance results are obtained with state-of-the-art approaches together with adaptations proposed in the current work. The options evaluated result from combining different setups in the wavelet image decomposition stage with different fusion strategies for the final merging stage that generates the resulting representation. Most approaches evaluate results according to the application for which they are intended; sometimes a human observer is selected to judge the quality of the obtained results. In the current work, quantitative values are considered in order to find correlations between setups and the performance of the obtained results; these correlations can be used to define a criterion for selecting the best fusion strategy for a given pair of cross-spectral images. The whole procedure is evaluated with a large set of correctly registered visible and infrared image pairs, including both Near InfraRed (NIR) and Long Wave InfraRed (LWIR).
Keywords: Image fusion; fusion evaluation metrics; visible and infrared imaging; discrete wavelet transform
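As an illustration of the kind of pipeline this study compares, the sketch below fuses two registered grayscale images with a single-level 2D Haar transform: approximation coefficients are averaged and detail coefficients are merged with a maximum-absolute-value rule. This is one generic setup among those the paper varies, not its best-performing configuration; the function names are invented for the example, and the images are assumed to have even dimensions.

```python
import numpy as np

def haar2(img):
    """Single-level 2D Haar decomposition (averaging convention)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2           # row low-pass
    hi = (img[:, 0::2] - img[:, 1::2]) / 2           # row high-pass
    ll, lh = (lo[0::2] + lo[1::2]) / 2, (lo[0::2] - lo[1::2]) / 2
    hl, hh = (hi[0::2] + hi[1::2]) / 2, (hi[0::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2: undo the column stage, then the row stage."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def fuse(visible, infrared):
    """Average the approximations; keep the stronger detail coefficient."""
    ca, cb = haar2(visible), haar2(infrared)
    ll = (ca[0] + cb[0]) / 2
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(ca[1:], cb[1:])]
    return ihaar2(ll, *details)
```

With a multi-level decomposition the same merge rules are applied at every scale; the paper's comparison varies exactly these choices, i.e. the wavelet setup in the decomposition stage and the coefficient merge rule in the fusion stage.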
Yi Xiao, Felipe Codevilla, Akhil Gurram, Onay Urfalioglu, & Antonio Lopez. (2020). Multimodal end-to-end autonomous driving. TITS - IEEE Transactions on Intelligent Transportation Systems, 1–11.
Abstract: A crucial component of an autonomous vehicle (AV) is the artificial intelligence (AI) that is able to drive towards a desired destination. Today, there are different paradigms addressing the development of AI drivers. On the one hand, we find modular pipelines, which divide the driving task into sub-tasks such as perception and maneuver planning and control. On the other hand, we find end-to-end driving approaches that try to learn a direct mapping from input raw sensor data to vehicle control signals. The latter are relatively less studied, but are gaining popularity since they are less demanding in terms of sensor data annotation. This paper focuses on end-to-end autonomous driving. So far, most proposals relying on this paradigm assume RGB images as input sensor data. However, AVs will not be equipped only with cameras, but also with active sensors providing accurate depth information (e.g., LiDARs). Accordingly, this paper analyses whether combining RGB and depth modalities, i.e. using RGBD data, produces better end-to-end AI drivers than relying on a single modality. We consider multimodality based on early, mid and late fusion schemes, in both multisensory and single-sensor (monocular depth estimation) settings. Using the CARLA simulator and conditional imitation learning (CIL), we show that early fusion multimodality indeed outperforms single-modality.
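Early fusion, the scheme the abstract reports as strongest, amounts to stacking the depth map as an extra channel of the RGB tensor before the first network layer, so all later operations see both modalities jointly; late fusion instead merges per-modality outputs at the end. The sketch below shows only this tensor-level wiring; the shapes and function names are illustrative assumptions, not taken from the paper's actual CIL network.

```python
import numpy as np

def early_fusion(rgb, depth):
    """Stack depth as a fourth channel: (H, W, 3) + (H, W) -> (H, W, 4).
    The fused tensor feeds a single network from the first layer on."""
    return np.concatenate([rgb, depth[..., np.newaxis]], axis=-1)

def late_fusion(pred_rgb, pred_depth, w=0.5):
    """Late fusion runs one network per modality and merges the control
    outputs, here by a simple weighted average (an assumed merge rule)."""
    return w * pred_rgb + (1.0 - w) * pred_depth

rgb = np.zeros((88, 200, 3), dtype=np.float32)   # illustrative input size
depth = np.zeros((88, 200), dtype=np.float32)    # pixel-aligned depth map
x = early_fusion(rgb, depth)                     # shape (88, 200, 4)
```

Mid fusion sits between the two: each modality gets its own early layers, and the intermediate feature maps (rather than raw channels or final outputs) are concatenated.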
Felipe Lumbreras, & Joan Serrat. (1996). Segmentation of petrographical images of marbles. Computers and Geosciences, 22(5), 547–558.
A.F. Sole, S. Ngan, G. Sapiro, X. Hu, & Antonio Lopez. (2001). Anisotropic 2-D and 3-D averaging of fMRI signals. IEEE Transactions on Medical Imaging, 20(2), 86–93 (IF: 3.142).