|
Records |
Links |
|
Author |
Alejandro Gonzalez Alzate; Gabriel Villalonga; Jiaolong Xu; David Vazquez; Jaume Amores; Antonio Lopez |
|
|
Title |
Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection |
Type |
Conference Article |
|
Year |
2015 |
Publication |
IEEE Intelligent Vehicles Symposium (IV2015) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
356-361 |
|
|
Keywords |
Pedestrian Detection |
|
|
Abstract |
Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. To develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality and strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help to improve performance, the fusion of visible spectrum and depth information boosts accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can easily be replaced with more sophisticated, recently proposed ones, such as convolutional neural networks for feature representation, to further improve accuracy. |
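The abstract describes feature-level fusion of RGB and LIDAR-derived depth descriptors feeding a random forest. A minimal sketch of that fusion idea follows; all arrays are random stand-ins (real descriptors would be e.g. HOG/LBP on RGB crops and on dense depth maps), and the paper's random forest of local experts additionally ties tree nodes to local patches, which this sketch omits:

```python
# Minimal sketch of early (feature-level) RGB + depth fusion with a
# random forest. Array names and sizes are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_windows = 1000
rgb_feats = rng.normal(size=(n_windows, 512))    # descriptors on RGB windows
depth_feats = rng.normal(size=(n_windows, 512))  # same descriptors on depth maps
labels = rng.integers(0, 2, size=n_windows)      # pedestrian vs. background

fused = np.hstack([rgb_feats, depth_feats])      # early fusion by concatenation

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused, labels)
scores = clf.predict_proba(fused)[:, 1]          # per-window detection score
```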
|
|
Address |
Seoul; Korea; June 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
ACDC |
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS; 600.076; 600.057; 600.054 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ GVX2015 |
Serial |
2625 |
|
Permanent link to this record |
|
|
|
|
Author |
Zhijie Fang; Antonio Lopez |
|
|
Title |
Is the Pedestrian going to Cross? Answering by 2D Pose Estimation |
Type |
Conference Article |
|
Year |
2018 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1271-1276 |
|
|
Keywords |
|
|
|
Abstract |
Our recent work suggests that, thanks to today's powerful CNNs, image-based 2D pose estimation is a promising cue for determining pedestrian intentions such as crossing the road in the path of the ego-vehicle, stopping before entering the road, and starting to walk or bending towards the road. This statement is based on results obtained on non-naturalistic sequences (Daimler dataset), i.e. sequences choreographed specifically for performing the study. Fortunately, a new publicly available dataset (JAAD) has recently appeared, allowing the development of methods for detecting pedestrian intentions in naturalistic driving conditions; more specifically, for addressing the relevant question: is the pedestrian going to cross? Accordingly, in this paper we use JAAD to assess the usefulness of 2D pose estimation for answering such a question. We combine CNN-based pedestrian detection, tracking and pose estimation to predict the crossing action from monocular images. Overall, the proposed pipeline provides new state-of-the-art results. |
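As a rough illustration of the final stage of such a pipeline, a hedged sketch follows. It assumes an upstream detector, tracker and pose estimator already produce fixed-length tracks of 2D keypoints; the track length, keypoint count and the choice of a random forest are assumptions, not the paper's exact setup:

```python
# Hedged sketch: intention classifier over fixed-length tracks of 2D pose
# keypoints. Upstream detection/tracking/pose stages are assumed to exist.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

T, K = 14, 18                                    # frames per track, keypoints per frame
n_tracks = 500
rng = np.random.default_rng(0)
tracks = rng.normal(size=(n_tracks, T * K * 3))  # (x, y, conf) per keypoint, flattened
will_cross = rng.integers(0, 2, size=n_tracks)   # ground-truth crossing label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(tracks, will_cross)
p_cross = clf.predict_proba(tracks[:1])[0, 1]    # P(crossing) for one track
```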
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS; 600.124; 600.116; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ FaL2018 |
Serial |
3181 |
|
Permanent link to this record |
|
|
|
|
Author |
Akhil Gurram; Onay Urfalioglu; Ibrahim Halfaoui; Fahd Bouzaraa; Antonio Lopez |
|
|
Title |
Monocular Depth Estimation by Learning from Heterogeneous Datasets |
Type |
Conference Article |
|
Year |
2018 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2176-2181 |
|
|
Keywords |
|
|
|
Abstract |
Depth estimation provides essential information for autonomous driving and driver assistance. In particular, Monocular Depth Estimation is interesting from a practical point of view, since using a single camera is cheaper than many other options and avoids the need for the continuous calibration required by stereo-vision approaches. State-of-the-art methods for Monocular Depth Estimation are based on Convolutional Neural Networks (CNNs). A promising line of work consists of introducing additional semantic information about the traffic scene when training CNNs for depth estimation. In practice, this means that the depth data used for CNN training is complemented with images having pixel-wise semantic labels, which are usually difficult to annotate (e.g. crowded urban images). Moreover, so far it has been common practice to assume that the same raw training data is associated with both types of ground truth, i.e., depth and semantic labels. The main contribution of this paper is to show that this hard constraint can be circumvented, i.e., that we can train CNNs for depth estimation by leveraging depth and semantic information coming from heterogeneous datasets. To illustrate the benefits of our approach, we combine the KITTI depth and Cityscapes semantic segmentation datasets, outperforming state-of-the-art results on Monocular Depth Estimation. |
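The core trick the abstract describes, training on heterogeneous ground truth, can be sketched as one shared encoder with two task heads, where each loss only sees batches from the dataset carrying the corresponding labels. A minimal PyTorch-style sketch; the single-layer modules and all names are illustrative stand-ins, not the paper's architecture:

```python
# Shared backbone, two heads, losses fed from different datasets.
import torch
import torch.nn as nn

encoder = nn.Conv2d(3, 64, 3, padding=1)      # stand-in for a shared CNN backbone
depth_head = nn.Conv2d(64, 1, 1)              # per-pixel depth regression head
seg_head = nn.Conv2d(64, 19, 1)               # e.g. 19 Cityscapes classes

params = (list(encoder.parameters()) + list(depth_head.parameters())
          + list(seg_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

def train_step(depth_batch, seg_batch):
    img_d, gt_depth = depth_batch             # sample with depth labels only
    img_s, gt_labels = seg_batch              # sample with semantic labels only
    loss = nn.functional.l1_loss(depth_head(encoder(img_d)), gt_depth)
    loss = loss + nn.functional.cross_entropy(seg_head(encoder(img_s)), gt_labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy batches standing in for KITTI (depth) and Cityscapes (semantics).
img_d, gt_depth = torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
img_s, gt_labels = torch.randn(2, 3, 64, 64), torch.randint(0, 19, (2, 64, 64))
print(train_step((img_d, gt_depth), (img_s, gt_labels)))
```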
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS; 600.124; 600.116; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GUH2018 |
Serial |
3183 |
|
Permanent link to this record |
|
|
|
|
Author |
German Ros; Angel Sappa; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Visual SLAM for Driverless Cars: A Brief Survey |
Type |
Conference Article |
|
Year |
2012 |
Publication |
IEEE Workshop on Navigation, Perception, Accurate Positioning and Mapping for Intelligent Vehicles |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
SLAM |
|
|
Abstract |
|
|
|
Address |
Alcalá de Henares |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IVW |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSP2012; ADAS @ adas |
Serial |
2019 |
|
Permanent link to this record |
|
|
|
|
Author |
Eugenio Alcala; Laura Sellart; Vicenc Puig; Joseba Quevedo; Jordi Saludes; David Vazquez; Antonio Lopez |
|
|
Title |
Comparison of two non-linear model-based control strategies for autonomous vehicles |
Type |
Conference Article |
|
Year |
2016 |
Publication |
24th Mediterranean Conference on Control and Automation |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
846-851 |
|
|
Keywords |
Autonomous Driving; Control |
|
|
Abstract |
This paper presents a comparison of two nonlinear model-based control strategies for autonomous cars. A control-oriented vehicle model based on a bicycle model is used. The two control strategies follow a model reference approach, from which the error dynamics model is developed. Both controllers receive as input the longitudinal, lateral and orientation errors, and generate as control outputs the steering angle and the velocity of the vehicle. The first control approach is based on a non-linear control law designed by means of the Lyapunov direct approach. The second approach is based on a sliding-mode control that defines a set of sliding surfaces over which the error trajectories will converge. The main advantage of the sliding-mode technique is its robustness against non-linearities and parametric uncertainties in the model. However, the main drawback of first-order sliding mode is chattering, so a high-order sliding-mode control has been implemented. To test and compare the proposed control strategies, different path-following scenarios are used in simulation. |
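To make the sliding-mode idea concrete, here is a hedged toy sketch: a first-order sliding-mode steering law on a kinematic bicycle model tracking the line y = 0. The model, surface and gains are illustrative only; the paper's controllers act on error dynamics in a model-reference setup and use a high-order sliding mode precisely to reduce the chattering this first-order law exhibits:

```python
# First-order sliding-mode steering on a kinematic bicycle model (toy).
import math

L, v, dt = 2.5, 10.0, 0.01          # wheelbase [m], speed [m/s], step [s]
lam, k = 1.0, 0.3                   # surface slope and switching gain
x, y, th = 0.0, 1.0, 0.0            # start 1 m off the reference path y = 0

for _ in range(1000):
    e, e_dot = y, v * math.sin(th)  # lateral error and its rate
    s = e_dot + lam * e             # sliding surface s = de + lam*e
    delta = -k * (1 if s > 0 else -1 if s < 0 else 0)  # switching control law
    x += v * math.cos(th) * dt      # kinematic bicycle model integration
    y += v * math.sin(th) * dt
    th += v / L * math.tan(delta) * dt

print(f"final lateral error: {y:.4f} m")  # decays to ~0, with visible chatter
```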
|
|
Address |
Athens; Greece; June 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MED |
|
|
Notes |
ADAS; 600.085; 600.082; 600.076 |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ ASP2016 |
Serial |
2750 |
|
Permanent link to this record |
|
|
|
|
Author |
Santi Puch; Irina Sanchez; Aura Hernandez-Sabate; Gemma Piella; Vesna Prckovska |
|
|
Title |
Global Planar Convolutions for Improved Context Aggregation in Brain Tumor Segmentation |
Type |
Conference Article |
|
Year |
2018 |
Publication |
International MICCAI Brainlesion Workshop |
Abbreviated Journal |
|
|
|
Volume |
11384 |
Issue |
|
Pages |
393-405 |
|
|
Keywords |
Brain tumors; 3D fully-convolutional CNN; Magnetic resonance imaging; Global planar convolution |
|
|
Abstract |
In this work, we introduce the Global Planar Convolution (GPC) module as a building block for fully-convolutional networks that aggregates global information and, therefore, enhances the context perception capabilities of segmentation networks in the context of brain tumor segmentation. We implement two baseline architectures (3D UNet and a residual version of 3D UNet, ResUNet) and present a novel architecture based on them, ContextNet, which includes the proposed GPC module. We show that the addition of such a module eliminates the need to build networks with several representation levels, which tend to be over-parametrized and to exhibit slow convergence rates. Furthermore, we provide a visual demonstration of the behavior of GPC modules via visualization of intermediate representations. Finally, we participate in the 2018 edition of the BraTS challenge with our best performing models, which are based on ContextNet, and report the evaluation scores on the validation and test sets of the challenge. |
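The exact design of the GPC module is in the paper; as a loudly speculative illustration of the general idea of plane-spanning convolutions aggregating global context in a 3D volume, one might write something like the following (every design choice below is a guess for illustration, not the authors' module):

```python
# Speculative sketch: kernels that span whole planes of a 3D feature
# volume, one per orthogonal orientation, summed to inject in-plane context.
import torch
import torch.nn as nn

class PlanarContext3d(nn.Module):
    def __init__(self, ch, size):                # size = (D, H, W), odd values
        super().__init__()
        d, h, w = size
        self.axial    = nn.Conv3d(ch, ch, (1, h, w), padding=(0, h // 2, w // 2))
        self.coronal  = nn.Conv3d(ch, ch, (d, 1, w), padding=(d // 2, 0, w // 2))
        self.sagittal = nn.Conv3d(ch, ch, (d, h, 1), padding=(d // 2, h // 2, 0))

    def forward(self, x):
        # Each branch sees an entire plane at once; the sum mixes all three.
        return self.axial(x) + self.coronal(x) + self.sagittal(x)

x = torch.randn(1, 8, 17, 17, 17)
out = PlanarContext3d(8, (17, 17, 17))(x)
print(out.shape)  # torch.Size([1, 8, 17, 17, 17]) -- spatial shape preserved
```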
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MICCAIW |
|
|
Notes |
ADAS; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PSH2018 |
Serial |
3251 |
|
Permanent link to this record |
|
|
|
|
Author |
M. Cruz; Cristhian A. Aguilera-Carrasco; Boris X. Vintimilla; Ricardo Toledo; Angel Sappa |
|
|
Title |
Cross-spectral image registration and fusion: an evaluation study |
Type |
Conference Article |
|
Year |
2015 |
Publication |
2nd International Conference on Machine Vision and Machine Learning |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
multispectral imaging; image registration; data fusion; infrared and visible spectra |
|
|
Abstract |
This paper presents a preliminary study on the registration and fusion of cross-spectral imaging. The objective is to evaluate the validity of widely used computer vision approaches when they are applied to different spectral bands. In particular, we are interested in merging images from the infrared (both long-wave infrared, LWIR, and near infrared, NIR) and the visible spectrum (VS). Experimental results with different datasets are presented. |
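As an example of the kind of widely used approach such a study would evaluate, here is a hedged sketch using OpenCV's ECC alignment on a visible/NIR pair followed by a naive average fusion. The file names are placeholders, and ECC is known to struggle when cross-spectral intensities are weakly correlated, which is exactly what an evaluation like this measures:

```python
# ECC alignment of a NIR image to a visible one, plus naive average fusion.
import cv2
import numpy as np

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
nir = cv2.imread("nir.png", cv2.IMREAD_GRAYSCALE)

warp = np.eye(2, 3, dtype=np.float32)  # initial affine warp (identity)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
# Explicit mask / Gaussian filter-size arguments per the OpenCV 4.x signature.
_, warp = cv2.findTransformECC(vis, nir, warp, cv2.MOTION_AFFINE,
                               criteria, None, 5)

h, w = vis.shape
aligned = cv2.warpAffine(nir, warp, (w, h),
                         flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
fused = cv2.addWeighted(vis, 0.5, aligned, 0.5, 0)  # simplest possible fusion
cv2.imwrite("fused.png", fused)
```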
|
|
Address |
Barcelona; July 2015 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
MVML |
|
|
Notes |
ADAS; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ CAV2015 |
Serial |
2629 |
|
Permanent link to this record |
|
|
|
|
Author |
Diego Porres |
|
|
Title |
Discriminator Synthesis: On reusing the other half of Generative Adversarial Networks |
Type |
Conference Article |
|
Year |
2021 |
Publication |
Machine Learning for Creativity and Design, NeurIPS Workshop |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Generative Adversarial Networks have long since revolutionized the world of computer vision and, tied to it, the world of art. Arduous efforts have gone into fully utilizing and stabilizing training so that the outputs of the Generator network have the highest possible fidelity, but little effort has gone into using the Discriminator after training is complete. In this work, we propose to reuse the latter and show a way to exploit the features it has learned from the training dataset both to alter an image and to generate one from scratch. We name this method Discriminator Dreaming, and the full code can be found at this https URL. |
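The described idea is in the spirit of DeepDream applied to a trained discriminator: ascend the gradient of the discriminator's internal activations with respect to the input image. A hedged sketch with a toy stand-in discriminator (the paper's actual networks and objective may differ):

```python
# DeepDream-style gradient ascent on a discriminator's features (toy).
import torch
import torch.nn as nn

disc_features = nn.Sequential(           # placeholder for D's feature trunk
    nn.Conv2d(3, 32, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 3, padding=1), nn.LeakyReLU(0.2),
)

img = torch.randn(1, 3, 64, 64, requires_grad=True)  # noise, or a real photo
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    loss = -disc_features(img).pow(2).mean()  # ascend the feature activations
    opt.zero_grad(); loss.backward(); opt.step()

dreamed = img.detach().clamp(-1, 1)      # the altered / generated image
```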
|
|
Address |
Virtual; December 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NEURIPSW |
|
|
Notes |
ADAS; 601.365 |
Approved |
no |
|
|
Call Number |
Admin @ si @ Por2021 |
Serial |
3597 |
|
Permanent link to this record |
|
|
|
|
Author |
Jiaolong Xu; Sebastian Ramos; Xu Hu; David Vazquez; Antonio Lopez |
|
|
Title |
Multi-task Bilinear Classifiers for Visual Domain Adaptation |
Type |
Conference Article |
|
Year |
2013 |
Publication |
Advances in Neural Information Processing Systems Workshop |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Domain Adaptation; Pedestrian Detection; ADAS |
|
|
Abstract |
We propose a method that aims to lessen the significant accuracy degradation that a discriminative classifier can suffer when it is trained in a specific domain (source domain) and applied in a different one (target domain). The principal reason for this degradation is the discrepancy in the distribution of the features that feed the classifier in the different domains. Therefore, we propose a domain adaptation method that maps the features from the different domains into a common subspace and learns a discriminative domain-invariant classifier within it. Our algorithm combines bilinear classifiers and multi-task learning for domain adaptation. The bilinear classifier encodes the feature transformation and classification parameters by a matrix decomposition. In this way, specific feature transformations for multiple domains and a shared classifier are jointly learned in a multi-task learning framework. Focusing on domain adaptation for visual object detection, we apply this method to the state-of-the-art deformable part-based model for cross-domain pedestrian detection. Experimental results show that our method significantly reduces the domain drift and improves accuracy when compared to several baselines. |
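One compact way to write the bilinear decomposition described above (the notation is ours, chosen for illustration, not necessarily the paper's):

```latex
% Shared classifier w, per-domain feature transformation T_d:
\[
  f_d(\mathbf{x}) \;=\; \mathbf{w}^{\top} T_d\, \mathbf{x}
\]
% The effective per-domain weight vector w_d = T_d^T w thus factorizes into
% a classifier shared across domains and a domain-specific linear map,
% and both factors are learned jointly in the multi-task objective.
```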
|
|
Address |
Lake Tahoe; Nevada; USA; December 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NIPSW |
|
|
Notes |
ADAS; 600.054; 600.057; 601.217; ISE |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ XRH2013 |
Serial |
2340 |
|
Permanent link to this record |
|
|
|
|
Author |
Guim Perarnau; Joost Van de Weijer; Bogdan Raducanu; Jose Manuel Alvarez |
|
|
Title |
Invertible Conditional GANs for Image Editing |
Type |
Conference Article |
|
Year |
2016 |
Publication |
30th Annual Conference on Neural Information Processing Systems Workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Generative Adversarial Networks (GANs) have recently been shown to successfully approximate complex data distributions. A relevant extension of this model is conditional GANs (cGANs), where the introduction of external information allows determining specific representations of the generated images. In this work, we evaluate encoders to invert the mapping of a cGAN, i.e., mapping a real image into a latent space and a conditional representation. This allows, for example, to reconstruct and modify real images of faces conditioning on arbitrary attributes. Additionally, we evaluate the design of cGANs. The combination of an encoder with a cGAN, which we call Invertible cGAN (IcGAN), enables re-generating real images with deterministic complex modifications. |
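The editing loop the abstract describes, encode a real image into (z, y), alter the attribute vector, and decode with the generator, can be sketched as follows; the single-layer encoders/generator and the sizes z_dim and y_dim are illustrative stand-ins, not the paper's trained networks:

```python
# IcGAN-style edit: encode to (z, y), flip an attribute, decode (toy).
import torch
import torch.nn as nn

z_dim, y_dim, px = 100, 18, 64 * 64 * 3    # assumed latent/attribute/pixel sizes

enc_z = nn.Linear(px, z_dim)               # E_z: image -> latent code
enc_y = nn.Linear(px, y_dim)               # E_y: image -> attribute vector
gen = nn.Linear(z_dim + y_dim, px)         # G: (z, y) -> image

x = torch.randn(1, 3, 64, 64)              # a "real" face image
flat = x.flatten(1)
z, y = enc_z(flat), torch.sigmoid(enc_y(flat))

y_edit = y.clone()
y_edit[0, 0] = 1.0                         # flip one attribute (hypothetical)
x_edit = gen(torch.cat([z, y_edit], dim=1)).view(1, 3, 64, 64)
```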
|
|
Address |
Barcelona; Spain; December 2016 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NIPSW |
|
|
Notes |
LAMP; ADAS; 600.068 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PWR2016 |
Serial |
2906 |
|
Permanent link to this record |