Jose Manuel Alvarez, Felipe Lumbreras, Theo Gevers and Antonio Lopez. 2010. Geographic Information for Vision-Based Road Detection. IEEE Intelligent Vehicles Symposium. 621–626.
Abstract: Road detection is a vital task for the development of autonomous vehicles. The knowledge of the free road surface ahead of the target vehicle can be used for autonomous driving and road departure warning, as well as to support advanced driver assistance systems like vehicle or pedestrian detection. Using vision to detect the road has several advantages over other sensors: richness of features, easy integration, low cost, and low power consumption. Common vision-based road detection approaches use low-level features (such as color or texture) as visual cues to group pixels exhibiting similar properties. However, it is difficult to devise a clustering algorithm that always succeeds, since roads are outdoor scenes imaged from a mobile platform. In this paper, we propose a novel high-level approach to vision-based road detection based on geographical information. The key idea of the algorithm is to exploit geographical information to provide a rough detection of the road. This segmentation is then refined at low level using color information to produce the final result. The presented results show the validity of our approach.
Keywords: road detection
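As a rough illustration of the two-stage idea in the abstract, the sketch below refines a hypothetical geographic road prior with a per-image color model: a single Gaussian fitted to the pixels under the prior, then thresholded by Mahalanobis distance. All names and the synthetic data are illustrative placeholders; the paper's actual geographic projection and refinement steps are more involved.

```python
import numpy as np

def refine_road_mask(image, prior_mask, thresh=3.0):
    """Refine a coarse road prior with a per-image color model.

    image      -- HxWx3 float array (e.g., normalized RGB)
    prior_mask -- HxW boolean array, True where geographic data
                  suggests road (hypothetical input)
    """
    road_pixels = image[prior_mask].reshape(-1, 3)
    mu = road_pixels.mean(axis=0)
    cov = np.cov(road_pixels, rowvar=False) + 1e-6 * np.eye(3)
    inv_cov = np.linalg.inv(cov)

    # Mahalanobis distance of every pixel to the road color model
    diff = image.reshape(-1, 3) - mu
    d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
    return (d2 < thresh ** 2).reshape(image.shape[:2])

# Toy usage with synthetic data
rng = np.random.default_rng(0)
img = rng.random((120, 160, 3))
img[60:, :, :] = 0.4 + 0.05 * rng.standard_normal((60, 160, 3))  # "road" band
prior = np.zeros((120, 160), dtype=bool)
prior[80:110, 40:120] = True  # rough geographic prior
mask = refine_road_mask(img, prior)
```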
Mohamed Ramzy Ibrahim, Robert Benavente, Daniel Ponsa and Felipe Lumbreras. 2024. SWViT-RRDB: Shifted Window Vision Transformer Integrating Residual in Residual Dense Block for Remote Sensing Super-Resolution. 19th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications.
Abstract: Remote sensing applications, affected by acquisition season and sensor variety, require high-resolution images. Transformer-based models improve satellite image super-resolution but are less effective than convolutional neural networks (CNNs) at extracting local details, which are crucial for image clarity. This paper introduces SWViT-RRDB, a new deep learning model for satellite imagery super-resolution. SWViT-RRDB, which combines a transformer with convolution and attention blocks, overcomes the limitations of existing models by better representing small objects in satellite images. In this model, a pipeline of residual fusion group (RFG) blocks is used to combine multi-headed self-attention (MSA) with the residual in residual dense block (RRDB), merging global and local image information for better super-resolution. Additionally, an overlapping cross-attention block (OCAB) is used to enhance fusion and allow interaction between neighboring pixels, maintaining long-range pixel dependencies across the image. The SWViT-RRDB model and its larger variants outperform state-of-the-art (SoTA) models on two different satellite datasets in terms of PSNR and SSIM.
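The toy PyTorch block below only gestures at the fusion idea (global multi-head self-attention followed by a local residual-dense convolutional path); it is not the authors' SWViT-RRDB, and every module name, channel count, and depth here is a hypothetical simplification.

```python
import torch
import torch.nn as nn

class RDB(nn.Module):
    """A small residual dense block (local, convolutional path)."""
    def __init__(self, ch=32, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        c = ch
        for _ in range(layers):
            self.convs.append(nn.Conv2d(c, growth, 3, padding=1))
            c += growth
        self.fuse = nn.Conv2d(c, ch, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))

class FusionGroup(nn.Module):
    """Toy 'residual fusion group': global self-attention + local RDB."""
    def __init__(self, ch=32, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)
        self.rdb = RDB(ch)

    def forward(self, x):
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))  # B x HW x C
        attended, _ = self.attn(tokens, tokens, tokens)
        x = x + attended.transpose(1, 2).reshape(b, c, h, w)
        return self.rdb(x)

x = torch.randn(1, 32, 24, 24)
print(FusionGroup()(x).shape)  # torch.Size([1, 32, 24, 24])
```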
Hugo Berti, Angel Sappa and Osvaldo Agamennoni. 2007. Autonomous robot navigation with a global and asymptotic convergence. IEEE International Conference on Robotics and Automation. 2712–2717.
Diego Cheda, Daniel Ponsa and Antonio Lopez. 2012. Monocular Depth-based Background Estimation. 7th International Conference on Computer Vision Theory and Applications. 323–328.
Abstract: In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do so, we minimize a cost function that penalizes deviations from the following assumptions: the background consists of objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationarity and motion-boundary constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach by recovering an occlusion-free background in a set of sequences.
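A minimal sketch of the pixel-selection idea, with the graph-cuts smoothness term deliberately dropped: each background pixel is taken from the frame that maximizes a depth score while avoiding motion. The depth_scores and motion_masks inputs are hypothetical stand-ins for the learned close/distant classifier and motion measurements described in the abstract.

```python
import numpy as np

def compose_background(frames, depth_scores, motion_masks, motion_penalty=10.0):
    """Per-pixel background selection from aligned frames.

    frames        -- T x H x W x 3 aligned images
    depth_scores  -- T x H x W, higher = farther from the camera
    motion_masks  -- T x H x W boolean, True where a pixel is moving
    The paper minimizes a smoothness-aware cost with graph cuts; this
    sketch drops the smoothness term and picks the per-pixel optimum.
    """
    cost = -depth_scores + motion_penalty * motion_masks  # lower = more background-like
    best = np.argmin(cost, axis=0)                        # H x W winning frame index
    h, w = best.shape
    yy, xx = np.mgrid[0:h, 0:w]
    return frames[best, yy, xx]

# Toy usage with random data
T, H, W = 4, 8, 8
frames = np.random.rand(T, H, W, 3)
depth = np.random.rand(T, H, W)
moving = np.random.rand(T, H, W) > 0.8
bg = compose_background(frames, depth, moving)
print(bg.shape)  # (8, 8, 3)
```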
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat and Antonio Lopez. 2006. Factorization with Missing and Noisy Data. 6th International Conference on Computational Science. 555–562.
Cristhian A. Aguilera-Carrasco, Angel Sappa and Ricardo Toledo. 2015. LGHD: A Feature Descriptor for Matching Across Non-Linear Intensity Variations. 22nd IEEE International Conference on Image Processing. 178–181.
Fahad Shahbaz Khan, Muhammad Anwer Rao, Joost Van de Weijer, Andrew Bagdanov, Maria Vanrell and Antonio Lopez. 2012. Color Attributes for Object Detection. 25th IEEE Conference on Computer Vision and Pattern Recognition. IEEE Xplore, 3306–3313.
Abstract: State-of-the-art object detectors typically use shape information as a low-level feature representation to capture the local structure of an object. This paper shows that early fusion of shape and color, as is popular in image classification, leads to a significant drop in performance for object detection. Moreover, such approaches also yield suboptimal results for object categories with varying importance of color and shape. In this paper we propose the use of color attributes as an explicit color representation for object detection. Color attributes are compact, computationally efficient, and, when combined with traditional shape features, provide state-of-the-art results for object detection. Our method is tested on the PASCAL VOC 2007 and 2009 datasets and the results clearly show that our method improves over state-of-the-art techniques despite its simplicity. We also introduce a new dataset consisting of cartoon character images in which color plays a pivotal role. On this dataset, our approach yields a significant gain of 14% in mean AP over conventional state-of-the-art methods.
Keywords: pedestrian detection
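To make the "explicit color representation fused with shape" idea concrete, the sketch below concatenates a crude hue histogram (a stand-in for the learned 11-D color-name descriptor used in the paper) with a tiny HOG-like orientation histogram. Both descriptors are illustrative simplifications, not the paper's features.

```python
import numpy as np

def color_attribute_hist(patch, bins=11):
    """Crude stand-in for an 11-D color-name descriptor: a hue histogram.
    The paper maps RGB to learned color-name probabilities; fixed hue
    binning is used here only to keep the sketch self-contained."""
    r, g, b = patch[..., 0], patch[..., 1], patch[..., 2]
    hue = np.arctan2(np.sqrt(3) * (g - b), 2 * r - g - b)  # rough hue angle
    hist, _ = np.histogram(hue, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def shape_hist(patch, bins=9):
    """Tiny HOG-like descriptor: one orientation histogram per patch."""
    gray = patch.mean(-1)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / max(hist.sum(), 1e-6)

def window_descriptor(patch):
    # Late fusion by concatenation: shape + explicit color attributes
    return np.concatenate([shape_hist(patch), color_attribute_hist(patch)])

patch = np.random.rand(64, 64, 3)
print(window_descriptor(patch).shape)  # (20,)
```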
Jose Carlos Rubio, Joan Serrat and Antonio Lopez. 2012. Unsupervised co-segmentation through region matching. 25th IEEE Conference on Computer Vision and Pattern Recognition. IEEE Xplore, 749–756.
Abstract: Co-segmentation is defined as jointly partitioning multiple images depicting the same or similar object into foreground and background. Our method consists of a multiple-scale multiple-image generative model, which jointly estimates the foreground and background appearance distributions from several images in an unsupervised manner. In contrast to other co-segmentation methods, our approach does not require the images to have similar foregrounds and different backgrounds to function properly. Region matching is applied to exploit inter-image information by establishing correspondences between the common objects that appear in the scene. Moreover, computing many-to-many associations of regions allows further applications, like recognition of object parts across images. We report results on iCoseg, a challenging dataset that presents extreme variability in camera viewpoint, illumination and object deformations and poses. We also show that our method is robust against large intra-class variability in the MSRC database.
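A loose sketch of the region-matching step: given appearance descriptors of regions from two images, pairwise cosine similarities above a threshold yield many-to-many correspondences. The descriptors, threshold, and matching rule here are hypothetical placeholders for the paper's generative model.

```python
import numpy as np

def match_regions(desc_a, desc_b, sim_thresh=0.9):
    """Many-to-many region matching by cosine similarity.

    desc_a, desc_b -- (Na x D), (Nb x D) appearance descriptors of
    segmented regions (e.g., color histograms). Returns index pairs of
    regions whose similarity exceeds the threshold, allowing one region
    to match several counterparts in the other image.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T
    return np.argwhere(sim > sim_thresh)  # rows: (region in A, region in B)

rng = np.random.default_rng(1)
da, db = rng.random((5, 16)), rng.random((7, 16))
print(match_regions(da, db, 0.85))
```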
Chris Bahnsen, David Vazquez, Antonio Lopez and Thomas B. Moeslund. 2019. Learning to Remove Rain in Traffic Surveillance by Using Synthetic Data. 14th International Conference on Computer Vision Theory and Applications. 123–130.
Abstract: Rainfall is a problem in automated traffic surveillance. Rain streaks occlude the road users and degrade the overall visibility, which in turn decreases object detection performance. One way of alleviating this is by artificially removing the rain from the images. This requires knowledge of corresponding rainy and rain-free images. Such images are often produced by overlaying synthetic rain on top of rain-free images. However, this method fails to incorporate the fact that rain falls in the entire three-dimensional volume of the scene. To overcome this, we introduce training data from the SYNTHIA virtual world that models rain streaks in the entirety of a scene. We train a conditional Generative Adversarial Network for rain removal and apply it on traffic surveillance images from SYNTHIA and the AAU RainSnow datasets. To measure the applicability of the rain-removed images in a traffic surveillance context, we run the YOLOv2 object detection algorithm on the original and rain-removed frames. The results on SYNTHIA show an 8% increase in detection accuracy compared to the original rain image. Interestingly, we find that high PSNR or SSIM scores do not imply good object detection performance.
Keywords: Rain Removal; Traffic Surveillance; Image Denoising
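The snippet below sketches one pix2pix-style training step for conditional-GAN rain removal, assuming tiny hypothetical generator/discriminator stubs and random tensors in place of SYNTHIA frames. It illustrates the objective (adversarial term plus L1 to the rain-free ground truth), not the paper's exact networks.

```python
import torch
import torch.nn as nn

# Hypothetical stub networks; the real G/D would be far deeper.
G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 3, 3, padding=1))           # rainy -> derained
D = nn.Sequential(nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))           # (rainy, candidate) -> real/fake map
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

rainy = torch.rand(4, 3, 64, 64)   # stand-in for synthetic rainy frames
clean = torch.rand(4, 3, 64, 64)   # corresponding rain-free ground truth

# Discriminator step: real pair vs. generated pair
fake = G(rainy).detach()
d_real = D(torch.cat([rainy, clean], 1))
d_fake = D(torch.cat([rainy, fake], 1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D while staying close to the ground truth (L1)
fake = G(rainy)
d_fake = D(torch.cat([rainy, fake], 1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, clean)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```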
Naveen Onkarappa and Angel Sappa. 2010. On-Board Monocular Vision System Pose Estimation through a Dense Optical Flow. 7th International Conference on Image Analysis and Recognition. Springer Berlin Heidelberg, 230–239. (LNCS)
Abstract: This paper presents a robust technique for estimating the pose of an on-board monocular vision system. The proposed approach is based on a dense optical flow that is robust against shadows, reflections and illumination changes. A RANSAC-based scheme is used to cope with outliers in the optical flow. The proposed technique is intended to be used in driver assistance systems for applications such as obstacle or pedestrian detection. Experimental results on different scenarios, from both synthetic and real sequences, show the usefulness of the proposed approach.
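The RANSAC scheme mentioned in the abstract can be sketched as follows: fit a simple motion model to minimal samples of flow vectors, keep the hypothesis with the most inliers, and refit on those inliers. The 2D affine model and tolerances here are illustrative assumptions; the paper derives camera pose from the flow rather than this toy model.

```python
import numpy as np

def ransac_affine(pts, flow, iters=200, tol=0.5, rng=None):
    """Robustly fit a 2D affine motion model to dense optical flow.

    pts  -- N x 2 pixel coordinates, flow -- N x 2 flow vectors.
    Returns the 2x3 affine matrix and the inlier mask. A loop like
    this discards flow outliers (moving objects, reflections) before
    the motion model is turned into a pose estimate.
    """
    rng = rng or np.random.default_rng(0)
    X = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coordinates
    best_inliers, best_A = None, None
    for _ in range(iters):
        idx = rng.choice(len(pts), size=3, replace=False)
        A, *_ = np.linalg.lstsq(X[idx], flow[idx], rcond=None)  # 3 x 2
        err = np.linalg.norm(X @ A - flow, axis=1)
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_A = inliers, A
    # Refit on all inliers of the best hypothesis
    A, *_ = np.linalg.lstsq(X[best_inliers], flow[best_inliers], rcond=None)
    return A.T, best_inliers

pts = np.random.rand(500, 2) * 100
flow = pts @ np.array([[0.01, 0], [0, 0.01]]) + np.array([1.0, 0.5])  # synthetic ego-motion
flow[:50] += np.random.rand(50, 2) * 5                                # outliers
A, inl = ransac_affine(pts, flow)
print(A.round(2), inl.sum())
```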