Idoia Ruiz, Lorenzo Porzi, Samuel Rota Bulo, Peter Kontschieder and Joan Serrat. 2021. Weakly Supervised Multi-Object Tracking and Segmentation. IEEE Winter Conference on Applications of Computer Vision Workshops, 125–133.
Abstract: We introduce the problem of weakly supervised Multi-Object Tracking and Segmentation, i.e. joint weakly supervised instance segmentation and multi-object tracking, in which we do not provide any kind of mask annotation. To address it, we design a novel synergistic training strategy that takes advantage of multi-task learning: the classification and tracking tasks guide the training of the unsupervised instance segmentation. For that purpose, we extract weak foreground localization information, provided by Grad-CAM heatmaps, to generate a partial ground truth to learn from. Additionally, RGB image-level information is employed to refine the mask prediction at the edges of the objects. We evaluate our method on KITTI MOTS, the most representative benchmark for this task, reducing the performance gap on the MOTSP metric between the fully supervised and weakly supervised approaches to just 12% and 12.7% for cars and pedestrians, respectively.
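The core weak-supervision step above — turning a Grad-CAM heatmap into a partial ground truth with confident foreground, confident background, and ignore regions — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the threshold values are assumptions chosen for the example.

```python
import numpy as np

def partial_mask_from_heatmap(heatmap, lo=0.2, hi=0.7):
    """Turn a Grad-CAM heatmap (values in [0, 1]) into a partial
    ground-truth mask: 1 = foreground, 0 = background, -1 = ignore.
    The lo/hi thresholds are illustrative, not the paper's values."""
    mask = np.full(heatmap.shape, -1, dtype=np.int8)  # ignore by default
    mask[heatmap >= hi] = 1   # confident foreground
    mask[heatmap <= lo] = 0   # confident background
    return mask
```

Pixels in the ignore band contribute no loss, so the segmentation head only learns from regions where the weak localization is trustworthy.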
German Ros, Sebastian Ramos, Manuel Granados, Amir Bakhtiary, David Vazquez and Antonio Lopez. 2015. Vision-based Offline-Online Perception Paradigm for Autonomous Driving. IEEE Winter Conference on Applications of Computer Vision, 231–238.
Abstract: Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicle is essential for safe driving, which requires computing accurate geometric and semantic information in real time. In this paper, we challenge state-of-the-art computer vision algorithms to build a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows the pre-computed accurate semantics and 3D geometry to be retrieved in real time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We quantitatively evaluate our proposal on the KITTI dataset and discuss the related open challenges for the computer vision community.
Keywords: Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation
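The online stage described above amounts to a lookup: re-localize against the offline map, then retrieve the pre-computed semantics. A toy sketch of that retrieval, assuming poses are already known (the real system re-localizes from visual features, not raw poses):

```python
import numpy as np

def retrieve_semantics(pose, map_poses, map_semantics):
    """Online-stage sketch: given the current camera pose (x, y),
    fetch the pre-computed semantic map of the nearest offline
    keyframe. A stand-in for the paper's re-localization process."""
    d = np.linalg.norm(np.asarray(map_poses, dtype=float)
                       - np.asarray(pose, dtype=float), axis=1)
    return map_semantics[int(np.argmin(d))]
```

Because the expensive dense 3D semantic mapping happens offline, the online cost is just this nearest-keyframe query plus dynamic-obstacle detection.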
Daniel Hernandez, Antonio Espinosa, David Vazquez, Antonio Lopez and Juan Carlos Moure. 2017. GPU-accelerated Real-time Stixel Computation. IEEE Winter Conference on Applications of Computer Vision, 1054–1062.
Abstract: The Stixel World is a medium-level, compact representation of road scenes that abstracts millions of disparity pixels into hundreds or thousands of stixels. The goal of this work is to implement and evaluate a complete multi-stixel estimation pipeline on an embedded, energy-efficient, GPU-accelerated device. This work presents a full GPU-accelerated implementation of stixel estimation that produces reliable results at 26 frames per second (real time) on the Tegra X1 for disparity images of 1024×440 pixels and stixel widths of 5 pixels, and achieves more than 400 frames per second on a high-end Titan X GPU card.
Keywords: Autonomous Driving; GPU; Stixel
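The abstraction the paper parallelizes — collapsing columns of disparity pixels into per-stixel profiles of a chosen width — can be illustrated with a toy CPU version. This is only the column-grouping step, not the paper's full dynamic-programming multi-stixel segmentation:

```python
import numpy as np

def stixel_profiles(disparity, stixel_width=5):
    """Collapse a disparity image (H x W) into per-stixel disparity
    profiles (H x W // stixel_width) by taking the median across each
    group of columns. A toy stand-in for the GPU pipeline's first
    data-reduction stage; the real method then segments each profile."""
    h, w = disparity.shape
    n = w // stixel_width
    cols = disparity[:, :n * stixel_width].reshape(h, n, stixel_width)
    return np.median(cols, axis=2)
```

Each output column is independent of the others, which is exactly the structure that makes the computation map well onto a GPU.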
G.D. Evangelidis, Ferran Diego, Joan Serrat and Antonio Lopez. 2011. Slice Matching for Accurate Spatio-Temporal Alignment. In ICCV Workshop on Visual Surveillance.
Abstract: Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly-moving, or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily aims at visual surveillance, but the method can be adopted as-is by other related video applications, such as object transfer (augmented reality) or high-dynamic-range video. We build upon a slice matching scheme to first synchronize the sequences, and we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare it with related previous works.
Keywords: video alignment
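The synchronization idea — reduce each frame to a slice signature and search for the temporal offset that best matches the two signature sequences — can be sketched as follows. This toy version uses a single mean-of-centre-row value per frame and brute-force SSD search; the paper matches full spatio-temporal slices.

```python
import numpy as np

def temporal_offset(seq_a, seq_b):
    """Estimate the frame offset of the shorter sequence seq_b inside
    seq_a (both lists of 2-D grayscale frames) by comparing 1-D slice
    signatures with sum-of-squared-differences. Illustrative only."""
    sig_a = np.array([f[f.shape[0] // 2].mean() for f in seq_a])
    sig_b = np.array([f[f.shape[0] // 2].mean() for f in seq_b])
    best, best_score = 0, -np.inf
    for off in range(len(sig_a) - len(sig_b) + 1):
        win = sig_a[off:off + len(sig_b)]
        score = -np.sum((win - sig_b) ** 2)  # SSD as (negated) similarity
        if score > best_score:
            best, best_score = off, score
    return best
```

Once the offset is known, the spatial alignment stage can register the now-corresponding frame pairs.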
Chris Bahnsen, David Vazquez, Antonio Lopez and Thomas B. Moeslund. 2019. Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. 14th International Conference on Computer Vision Theory and Applications, 123–130.
Abstract: Rainfall is a problem in automated traffic surveillance. Rain streaks occlude the road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is by artificially removing the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to incorporate the fact that rain falls throughout the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it to traffic surveillance images from the SYNTHIA and AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rainy images. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.
Keywords: Rain Removal; Traffic Surveillance; Image Denoising
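The PSNR metric that the paper finds to be a poor predictor of detection performance is straightforward to compute; a minimal sketch:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB between a rain-free reference
    and a rain-removed image. High PSNR measures pixel fidelity, not
    downstream detector accuracy, which is the paper's point."""
    mse = np.mean((reference.astype(np.float64) - restored) ** 2)
    if mse == 0:
        return np.inf  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Because PSNR averages pixel error uniformly, a restoration that smooths away small but detection-critical structures (e.g. distant pedestrians) can still score well.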
Fadi Dornaika and Angel Sappa. 2007. Improving Appearance-Based 3D Face Tracking Using Sparse Stereo Data. In J. Braz, A.R., H. Araujo and J. Jorge, eds. Advances in Computer Graphics and Computer Vision. Springer Verlag, 354–366.
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Depth-based Background Estimation. 7th International Conference on Computer Vision Theory and Applications, 323–328.
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do that, we minimize a cost function that penalizes the deviations from the following assumptions: background represents objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out by using stationariness and motion boundary constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach to recover an occlusion-free background in a set of sequences.
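The first assumption above — background pixels belong to the surface farthest from the camera — corresponds to a simple per-pixel selection rule. The sketch below implements only that data term; the paper additionally uses stationariness cues and smooths the labelling with graph cuts.

```python
import numpy as np

def select_background(frames, depths):
    """Per-pixel background composition: from a stack of aligned
    frames (N x H x W) and matching rough depth maps, keep each pixel
    from the frame where its estimated depth is maximal (background =
    farthest surface). Data term only; no smoothness, no motion cues."""
    frames = np.asarray(frames)
    idx = np.argmax(np.asarray(depths), axis=0)          # H x W frame labels
    h, w = idx.shape
    return frames[idx, np.arange(h)[:, None], np.arange(w)]
```

Choosing each pixel independently like this produces a noisy labelling; that is precisely why the full method casts the selection as a cost function minimized with graph cuts.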
Patricia Marquez, Debora Gil, R. Mester and Aura Hernandez-Sabate. 2014. Local Analysis of Confidence Measures for Optical Flow Quality Evaluation. 9th International Conference on Computer Vision Theory and Applications, 450–457.
Abstract: In recent years, Optical Flow (OF) techniques have been developed to face the complexity of real sequences. Even using the most appropriate technique for a specific problem, at some points the output flow might fail to achieve the minimum error required by the system. Confidence measures, computed from either the input data or the OF output, should discard those points where OF is not accurate enough for further use. It follows that evaluating the capability of a confidence measure to bound OF error is as important as the definition of the measure itself. In this paper we analyze different confidence measures and point out their advantages and limitations for use in real-world settings. We also explore their agreement with current tools for evaluating confidence measure performance.
Keywords: Optical Flow; Confidence Measure; Performance Evaluation
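A minimal example of an input-data confidence measure of the kind the paper compares: local gradient magnitude. Weakly textured regions (low gradient) are exactly where flow estimates suffer from the aperture problem, so low values flag pixels whose flow should be discarded. This is a deliberately simple stand-in, not one of the paper's specific measures.

```python
import numpy as np

def gradient_confidence(image):
    """Per-pixel confidence for optical flow from the input image
    alone: gradient magnitude. Flat (textureless) regions score ~0,
    meaning any flow estimated there should not be trusted."""
    gy, gx = np.gradient(image.astype(np.float64))  # axis 0 then axis 1
    return np.hypot(gx, gy)
```

Thresholding such a map sparsifies the flow field, trading density for a bound on the expected error — the trade-off whose evaluation the paper analyzes.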
P. Ricaurte, C. Chilan, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla and Angel Sappa. 2014. Performance Evaluation of Feature Point Descriptors in the Infrared Domain. 9th International Conference on Computer Vision Theory and Applications, 545–550.
Abstract: This paper presents a comparative evaluation of classical feature point descriptors when they are used in the long-wave infrared spectral band. Robustness to changes in rotation, scaling, blur, and additive noise is evaluated using a state-of-the-art framework. Statistical results using an outdoor image dataset are presented, together with a discussion of the differences with respect to the results obtained when images from the visible spectrum are considered.
Keywords: Infrared Imaging; Feature Point Descriptors
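The core measurement in such an evaluation is a matching score: after transforming an image, what fraction of descriptors still find their true counterpart as nearest neighbour? A toy sketch, assuming row i of both arrays describes the same keypoint (the real protocol uses the ground-truth transform to establish correspondences):

```python
import numpy as np

def matching_score(desc_ref, desc_test):
    """Fraction of reference descriptors whose Euclidean nearest
    neighbour among the test descriptors is the correct match
    (same row index). Illustrative robustness metric only."""
    d = np.linalg.norm(desc_ref[:, None, :] - desc_test[None, :, :], axis=2)
    return float(np.mean(np.argmin(d, axis=1) == np.arange(len(desc_ref))))
```

Running this over descriptors extracted before and after rotation, scaling, blur, or added noise yields one robustness curve per descriptor, which is how the infrared and visible-spectrum behaviours can be compared.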
Naveen Onkarappa, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla and Angel Sappa. 2014. Cross-spectral Stereo Correspondence using Dense Flow Fields. 9th International Conference on Computer Vision Theory and Applications, 613–617.
Abstract: This manuscript addresses the cross-spectral stereo correspondence problem. It proposes using a dense flow-field-based representation instead of the original cross-spectral images, which have a low correlation. In this way, working in the flow-field space, classical cost functions can be used as similarity measures. Preliminary experimental results on urban environments have been obtained, showing the validity of the proposed approach.
Keywords: Cross-spectral Stereo Correspondence; Dense Optical Flow; Infrared and Visible Spectrum
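The key move above is to evaluate a classical cost such as SAD on the two cameras' dense flow fields, which remain comparable across spectra, rather than on the poorly correlated visible/LWIR intensities. A toy disparity search along one scanline under that idea (function names and window sizes are illustrative):

```python
import numpy as np

def sad_cost(patch_a, patch_b):
    """Sum of absolute differences between two flow-field patches."""
    return float(np.abs(patch_a - patch_b).sum())

def best_disparity(flow_left, flow_right, x, y, win=1, max_d=8):
    """Pick the disparity minimising SAD between flow-field windows
    around (y, x). flow_left / flow_right are H x W x 2 arrays of
    per-pixel motion vectors, one per camera. Toy sketch only."""
    ref = flow_left[y - win:y + win + 1, x - win:x + win + 1]
    costs = [sad_cost(ref, flow_right[y - win:y + win + 1,
                                      x - d - win:x - d + win + 1])
             for d in range(min(max_d, x - win) + 1)]
    return int(np.argmin(costs))
```

Because the similarity is computed in flow space, the same SAD machinery that fails on raw cross-spectral intensities recovers the correct match.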