Zhijie Fang and Antonio Lopez. 2018. Is the Pedestrian going to Cross? Answering by 2D Pose Estimation. IEEE Intelligent Vehicles Symposium, 1271–1276.
Abstract: Our recent work suggests that, thanks to today's powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (Daimler dataset), i.e. sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has recently appeared that allows developing methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering such a question. We combine CNN-based pedestrian detection, tracking and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results.
Akhil Gurram, Onay Urfalioglu, Ibrahim Halfaoui, Fahd Bouzaraa and Antonio Lopez. 2018. Monocular Depth Estimation by Learning from Heterogeneous Datasets. IEEE Intelligent Vehicles Symposium, 2176–2181.
Abstract: Depth estimation provides essential information for autonomous driving and driver assistance. In particular, Monocular Depth Estimation is interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for the continuous calibration required by stereo-vision approaches. State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which are usually difficult to annotate (e.g., crowded urban images). Moreover, so far it has been common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging depth and semantic information coming from heterogeneous datasets. To illustrate the benefits of our approach, we combine the KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on Monocular Depth Estimation.
Santi Puch, Irina Sanchez, Aura Hernandez-Sabate, Gemma Piella and Vesna Prckovska. 2018. Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation. International MICCAI Brainlesion Workshop (LNCS), 393–405.
Abstract: In this work, we introduce the Global Planar Convolution (GPC) module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on these two, ContextNet, that includes the proposed Global Planar Convolution module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to exhibit slow rates of convergence. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. Finally, we participate in the 2018 edition of the BraTS challenge with our best-performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge.
Keywords: Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution
German Ros, Angel Sappa, Daniel Ponsa and Antonio Lopez. 2012. Visual SLAM for Driverless Cars: A Brief Survey. IEEE Workshop on Navigation, Perception, Accurate Positioning and Mapping for Intelligent Vehicles.
Naveen Onkarappa and Angel Sappa. 2012. An Empirical Study on Optical Flow Accuracy Depending on Vehicle Speed. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 1138–1143.
Abstract: Driver assistance and safety systems are receiving increasing attention as building blocks for automatic navigation and safety. Optical flow, as a motion estimation technique, plays a major role in making these systems a reality. Towards this end, the current paper demonstrates the suitability of a polar representation for optical flow estimation in such systems. Furthermore, the influence of individual regularization terms on the accuracy of optical flow on image sequences of different speeds is empirically evaluated. In addition, a new synthetic dataset of image sequences at different speeds is generated, along with the ground-truth optical flow.
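As background for the abstract above (this is not the paper's code): converting a Cartesian optical-flow field (u, v) to the polar magnitude-orientation form it advocates is a direct NumPy computation.

```python
import numpy as np

def flow_to_polar(u, v):
    """Convert Cartesian optical-flow components to polar form.

    u, v: arrays of horizontal/vertical per-pixel displacements.
    Returns (magnitude, orientation), orientation in radians in (-pi, pi].
    """
    magnitude = np.hypot(u, v)      # sqrt(u**2 + v**2), overflow-safe
    orientation = np.arctan2(v, u)  # four-quadrant angle
    return magnitude, orientation

# A 3-4-5 displacement has magnitude 5 and angle atan2(4, 3)
mag, ang = flow_to_polar(np.array([3.0]), np.array([4.0]))
```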
Miguel Oliveira, Angel Sappa and V. Santos. 2012. Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 299–303.
Abstract: The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions. A probabilistic segmentation framework, using 3D Gaussian Mixture Models, is proposed. The regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image dataset of road scenarios is used to establish a performance comparison of the proposed method with seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Moreover, the proposed single-step 3D color space probabilistic segmentation reduces processing time over similar approaches.
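A rough illustration of the probabilistic color-space segmentation step described above (not the authors' implementation: this sketch fits a diagonal-covariance GMM with a minimal EM loop, and the component count and initialization are arbitrary simplifications):

```python
import numpy as np

def fit_color_gmm(pixels, k=2, iters=50, seed=0):
    """Fit a k-component diagonal-covariance GMM to (n, 3) color vectors via
    EM; return hard labels and per-pixel posteriors (responsibilities)."""
    rng = np.random.default_rng(seed)
    n, d = pixels.shape
    # Farthest-point initialization so the means start in distinct clusters
    idx = [int(rng.integers(n))]
    for _ in range(k - 1):
        dists = ((pixels[:, None, :] - pixels[idx][None]) ** 2).sum(-1)
        idx.append(int(dists.min(axis=1).argmax()))
    means = pixels[idx].astype(float)
    var = np.tile(pixels.var(axis=0) + 1e-6, (k, 1))
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities from per-component Gaussian log-densities
        diff = pixels[:, None, :] - means[None, :, :]          # (n, k, d)
        log_p = (-0.5 * ((diff ** 2 / var).sum(-1)
                         + np.log(2 * np.pi * var).sum(-1))
                 + np.log(weights))
        log_p -= log_p.max(axis=1, keepdims=True)              # stabilize
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n
        means = resp.T @ pixels / nk[:, None]
        diff = pixels[:, None, :] - means[None, :, :]
        var = (resp[:, :, None] * diff ** 2).sum(axis=0) / nk[:, None] + 1e-6
    return resp.argmax(axis=1), resp

# Toy data: 100 reddish pixels followed by 100 bluish pixels
rng = np.random.default_rng(1)
pix = np.vstack([rng.normal([200, 30, 30], 5, (100, 3)),
                 rng.normal([30, 30, 200], 5, (100, 3))])
labels, resp = fit_color_gmm(pix, k=2)
```

In the paper the resulting regions drive local color-correction functions; here only the segmentation itself is sketched.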
Jose Carlos Rubio, Joan Serrat and Antonio Lopez. 2012. Multiple Target Tracking and Identity Linking under Split, Merge and Occlusion of Targets and Observations. 1st International Conference on Pattern Recognition Applications and Methods.
Ferran Diego, G.D. Evangelidis and Joan Serrat. 2012. Night-time outdoor surveillance by mobile cameras. 1st International Conference on Pattern Recognition Applications and Methods, 365–371.
Abstract: This paper addresses the problem of video surveillance by mobile cameras. We present a method that allows online change detection in night-time outdoor surveillance. Because of the camera movement, background frames are not available and must be “localized” in former sequences and registered with the current frames. To this end, we propose a Frame Localization And Registration (FLAR) approach that solves the problem efficiently. Frames of former sequences define a database which is queried by the current frames in turn. To quickly retrieve nearest neighbors, the database is indexed through a visual dictionary method based on the SURF descriptor. Furthermore, frame localization benefits from a temporal filter that exploits the temporal coherence of videos. Next, the recently proposed ECC alignment scheme is used to spatially register the synchronized frames. Finally, change detection methods are applied to the aligned frames in order to mark suspicious areas. Experiments with real night sequences recorded by in-vehicle cameras demonstrate the performance of the proposed method and verify its efficiency and effectiveness against other methods.
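The retrieve-then-compare pattern the abstract describes can be illustrated in miniature, with strong simplifications: a coarse intensity histogram stands in for the SURF-based visual dictionary, and frames are assumed already registered, so the ECC alignment step is skipped.

```python
import numpy as np

def frame_descriptor(frame, bins=16):
    """Coarse global descriptor: a normalized intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0, 256), density=True)
    return hist

def retrieve_nearest(query, database):
    """Index of the database frame whose descriptor is closest to the query's."""
    q = frame_descriptor(query)
    dists = [np.linalg.norm(q - frame_descriptor(f)) for f in database]
    return int(np.argmin(dists))

def change_mask(current, background, thresh=40):
    """Mark pixels whose intensity differs strongly from the background frame."""
    return np.abs(current.astype(int) - background.astype(int)) > thresh

# Toy database: one dark frame, one bright frame
database = [np.full((10, 10), 20, np.uint8), np.full((10, 10), 200, np.uint8)]
# Query: the dark scene with a new bright object in it
query = np.full((10, 10), 20, np.uint8)
query[2:5, 2:5] = 220
idx = retrieve_nearest(query, database)   # picks the dark background frame
mask = change_mask(query, database[idx])  # highlights the 3x3 object
```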
Miguel Oliveira, V. Santos and Angel Sappa. 2012. Short-Term Path Planning Using a Multiple-Hypothesis Evaluation Approach for an Autonomous Driving Competition. IEEE 4th Workshop on Planning, Perception and Navigation for Intelligent Vehicles.
David Vazquez, Antonio Lopez, Daniel Ponsa and Javier Marin. 2011. Virtual Worlds and Active Learning for Human Detection. 13th International Conference on Multimodal Interaction. New York, NY, USA, ACM DL, 393–400.
Abstract: Image-based human detection is of paramount interest due to its potential applications in fields such as advanced driving assistance, surveillance and media analysis. However, even detecting non-occluded standing humans remains a challenge of intensive research. The most promising human detectors rely on classifiers developed in the discriminative paradigm, i.e., trained with labelled samples. However, labelling is a labor-intensive manual step, especially in cases like human detection, where it is necessary to provide at least bounding boxes framing the humans for training. To overcome this problem, some authors have proposed the use of a virtual world where the labels of the different objects are obtained automatically. This means that the human models (classifiers) are learnt using the appearance of rendered images, i.e., using realistic computer graphics. Later, these models are used for human detection in images of the real world. The results of this technique are surprisingly good. However, they are not always as good as the classical approach of training and testing with data coming from the same camera, or similar ones. Accordingly, in this paper we address the challenge of using a virtual world for gathering (while playing a videogame) a large amount of automatically labelled samples (virtual humans and background) and then training a classifier that performs as well, on real-world images, as one trained on manually labelled real-world samples. To do so, we cast the problem as one of domain adaptation, assuming that a small amount of manually labelled samples from real-world images is required. To collect these labelled samples we propose a non-standard active learning technique. Therefore, ultimately our human model is learnt from a combination of virtual- and real-world labelled samples (Fig. 1), which has not been done before. We present quantitative results showing that this approach is valid.
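The active-learning idea referenced above, asking a human to label only the samples the current classifier is least certain about, can be sketched generically. The paper's own technique is non-standard and differs; this is plain margin-based uncertainty sampling over hypothetical detector scores.

```python
import numpy as np

def select_for_labeling(scores, budget):
    """Pick the `budget` samples whose decision scores are closest to the
    boundary (score 0), i.e. those the classifier is least certain about."""
    order = np.argsort(np.abs(np.asarray(scores)))
    return order[:budget]

# Hypothetical detector scores: large magnitude = confident, near 0 = uncertain
scores = [2.5, -0.1, 0.05, -3.0, 0.4]
to_label = select_for_labeling(scores, budget=2)  # indices 2 and 1
```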
Keywords: Pedestrian Detection; Human Detection; Virtual Worlds; Domain Adaptation; Active Learning