Author: Yi Xiao; Felipe Codevilla; Akhil Gurram; Onay Urfalioglu; Antonio Lopez
Title: Multimodal end-to-end autonomous driving
Type: Journal Article  Year: 2020
Publication: IEEE Transactions on Intelligent Transportation Systems  Abbreviated Journal: TITS
Pages: 1-11
Abstract: A crucial component of an autonomous vehicle (AV) is the artificial intelligence (AI) that is able to drive towards a desired destination. Today, there are different paradigms addressing the development of AI drivers. On the one hand, we find modular pipelines, which divide the driving task into sub-tasks such as perception, maneuver planning, and control. On the other hand, we find end-to-end driving approaches that try to learn a direct mapping from input raw sensor data to vehicle control signals. The latter are relatively less studied, but are gaining popularity since they are less demanding in terms of sensor data annotation. This paper focuses on end-to-end autonomous driving. So far, most proposals relying on this paradigm assume RGB images as input sensor data. However, AVs will not be equipped only with cameras, but also with active sensors providing accurate depth information (e.g., LiDARs). Accordingly, this paper analyses whether combining RGB and depth modalities, i.e. using RGBD data, produces better end-to-end AI drivers than relying on a single modality. We consider multimodality based on early, mid and late fusion schemes, both in multisensory and single-sensor (monocular depth estimation) settings. Using the CARLA simulator and conditional imitation learning (CIL), we show how, indeed, early fusion multimodality outperforms single-modality. (See the sketch after this record.)
Notes: ADAS  Approved: no
Call Number: Admin @ si @ XCG2020  Serial: 3490
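
A minimal PyTorch sketch of the early-fusion idea described in the abstract above: depth is stacked as a fourth input channel so the first convolution already sees both modalities, and a small command-conditioned branch head mimics the CIL output structure. This is an illustrative toy, not the authors' architecture; the EarlyFusionCIL name and all layer sizes are assumptions. Mid or late fusion would instead run separate encoders per modality and merge feature maps or outputs.

    import torch
    import torch.nn as nn

    class EarlyFusionCIL(nn.Module):
        """Toy command-conditioned driving net with early RGB-D fusion."""
        def __init__(self, n_commands=4):
            super().__init__()
            # Early fusion: RGB (3 channels) + depth (1 channel) are
            # concatenated, so the first conv mixes modalities from the start.
            self.backbone = nn.Sequential(
                nn.Conv2d(4, 32, 5, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            # One small head per high-level navigation command (CIL-style).
            self.branches = nn.ModuleList(
                nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))
                for _ in range(n_commands)
            )

        def forward(self, rgb, depth, command):
            x = torch.cat([rgb, depth], dim=1)          # B x 4 x H x W
            feat = self.backbone(x)
            out = torch.stack([b(feat) for b in self.branches], dim=1)
            idx = command.view(-1, 1, 1).expand(-1, 1, out.size(-1))
            return out.gather(1, idx).squeeze(1)        # steer, throttle, brake

    net = EarlyFusionCIL()
    rgb, depth = torch.rand(2, 3, 88, 200), torch.rand(2, 1, 88, 200)
    print(net(rgb, depth, torch.tensor([0, 2])).shape)  # torch.Size([2, 3])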
 

 
Author: Sergio Vera; Debora Gil; Antonio Lopez; Miguel Angel Gonzalez Ballester
Title: Multilocal Creaseness Measure
Type: Journal  Year: 2012
Publication: The Insight Journal  Abbreviated Journal: IJ
Keywords: Ridges, Valley, Creaseness, Structure Tensor, Skeleton
Abstract: This document describes the implementation, using the Insight Toolkit, of an algorithm for detecting creases (ridges and valleys) in N-dimensional images, based on the Local Structure Tensor of the image. In addition to the filter used to calculate the creaseness image, a filter for the computation of the structure tensor is also included in this submission. (See the sketch after this record.)
Corporate Author: Alma IT Systems
Language: english  Summary Language: english
Notes: IAM; ADAS  Approved: no
Call Number: IAM @ iam @ VGL2012  Serial: 1840
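
A rough NumPy/SciPy sketch of a structure-tensor creaseness measure of the kind the abstract describes; the ITK filters in the submission are the authoritative implementation, they handle N-dimensional images, and this sketch is 2-D only. The function name and the two scale parameters are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def creaseness(img, sigma_d=1.0, sigma_i=4.0):
        """Ridge/valley measure from the local structure tensor (2-D case)."""
        # Gaussian-derivative gradients at differentiation scale sigma_d.
        gx = gaussian_filter(img, sigma_d, order=(0, 1))
        gy = gaussian_filter(img, sigma_d, order=(1, 0))
        # Structure tensor entries, averaged at integration scale sigma_i.
        j11 = gaussian_filter(gx * gx, sigma_i)
        j22 = gaussian_filter(gy * gy, sigma_i)
        j12 = gaussian_filter(gx * gy, sigma_i)
        # Dominant orientation of the tensor at each pixel.
        theta = 0.5 * np.arctan2(2 * j12, j11 - j22)
        wx, wy = np.cos(theta), np.sin(theta)
        # Resolve the 180-degree sign ambiguity using the image gradient.
        s = np.sign(wx * gx + wy * gy)
        s[s == 0] = 1
        wx, wy = wx * s, wy * s
        # Creaseness: negative divergence of the oriented vector field
        # (high on ridges, low on valleys).
        return -(np.gradient(wx, axis=1) + np.gradient(wy, axis=0))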
 

 
Author: T. Mouats; N. Aouf; Angel Sappa; Cristhian A. Aguilera-Carrasco; Ricardo Toledo
Title: Multi-Spectral Stereo Odometry
Type: Journal Article  Year: 2015
Publication: IEEE Transactions on Intelligent Transportation Systems  Abbreviated Journal: TITS
Volume: 16  Issue: 3  Pages: 1210-1224
Keywords: Egomotion estimation; feature matching; multispectral odometry (MO); optical flow; stereo odometry; thermal imagery
Abstract: In this paper, we investigate the problem of visual odometry for ground vehicles based on the simultaneous utilization of multispectral cameras. It encompasses a stereo rig composed of an optical (visible) and a thermal sensor. The novelty resides in the localization of the cameras as a stereo setup rather than as two monocular cameras of different spectra. To the best of our knowledge, this is the first time such a task has been attempted. Log-Gabor wavelets at different orientations and scales are used to extract interest points from both images. These are then described using a combination of frequency and spatial information within the local neighborhood. Matches between the pairs of multimodal images are computed using the cosine similarity function based on the descriptors. A pyramidal Lucas–Kanade tracker is also introduced to tackle temporal feature matching within challenging sequences of the data sets. The vehicle egomotion is computed from the triangulated 3-D points corresponding to the matched features. A windowed version of bundle adjustment incorporating Gauss–Newton optimization is utilized for motion estimation. An outlier removal scheme is also included within the framework to deal with outliers. Multispectral data sets were generated and used as a test bed. They correspond to real outdoor scenarios captured using our multimodal setup. Finally, detailed results validating the proposed strategy are illustrated. (See the sketch after this record.)
ISSN: 1524-9050
Notes: ADAS; 600.055; 600.076  Approved: no
Call Number: Admin @ si @ MAS2015a  Serial: 2533
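
The cross-spectral matching step lends itself to a compact NumPy sketch: descriptors from the visible and thermal images are compared with cosine similarity, as the abstract states. The ratio test below is a common extra filter, not necessarily the authors' acceptance rule, and the Log-Gabor descriptor extraction is omitted.

    import numpy as np

    def cosine_matches(desc_a, desc_b, ratio=0.8):
        """Match two descriptor sets (one descriptor per row)."""
        a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
        b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
        sim = a @ b.T                          # all-pairs cosine similarity
        order = np.argsort(-sim, axis=1)       # best candidates per row
        matches = []
        for i, (j1, j2) in enumerate(order[:, :2]):
            # Keep a match only if it clearly dominates the runner-up.
            if sim[i, j1] > 0 and sim[i, j2] < ratio * sim[i, j1]:
                matches.append((i, j1))
        return matches

    vis, lwir = np.random.rand(50, 64), np.random.rand(60, 64)
    print(len(cosine_matches(vis, lwir)))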
 

 
Author: Meysam Madadi; Sergio Escalera; Jordi Gonzalez; Xavier Roca; Felipe Lumbreras
Title: Multi-part body segmentation based on depth maps for soft biometry analysis
Type: Journal Article  Year: 2015
Publication: Pattern Recognition Letters  Abbreviated Journal: PRL
Volume: 56  Pages: 14-21
Keywords: 3D shape context; 3D point cloud alignment; Depth maps; Human body segmentation; Soft biometry analysis
Abstract: This paper presents a novel method for extracting biometric measures using depth sensors. Given multi-part labeled training data, a new subject is aligned to the best model of the dataset, and soft biometrics such as lengths or circumference sizes of limbs and body are computed. The process is performed by training relevant pose clusters, defining a representative model, and fitting a 3D shape context descriptor within an iterative matching procedure. We show robust measures by applying orthogonal plates to the body hull. We test our approach on a novel full-body RGB-Depth data set, showing accurate estimation of soft biometrics and better segmentation accuracy in comparison with a random forest approach, without requiring large training data. (See the sketch after this record.)
Notes: HuPBA; ISE; ADAS; 600.076; 600.049; 600.063; 600.054; 302.018; MILAB  Approved: no
Call Number: Admin @ si @ MEG2015  Serial: 2588
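
Once a body scan is segmented into labeled parts, simple soft biometrics can be read off the per-part point clouds. The NumPy/SciPy sketch below is illustrative only (the paper instead fits orthogonal plates to the body hull): it estimates a limb length as the extent along the principal axis and a circumference as the perimeter of a thin slice's convex hull.

    import numpy as np
    from scipy.spatial import ConvexHull

    def limb_length(points):
        """Extent of an N x 3 part cloud along its first principal axis."""
        c = points - points.mean(axis=0)
        axis = np.linalg.svd(c, full_matrices=False)[2][0]
        t = c @ axis
        return t.max() - t.min()

    def circumference(points, z, tol=0.01):
        """Perimeter of a thin horizontal slice of the part at height z."""
        sl = points[np.abs(points[:, 2] - z) < tol][:, :2]
        return ConvexHull(sl).area   # in 2-D, .area is the hull perimeter

    arm = np.random.rand(2000, 3) * [0.08, 0.08, 0.6]   # fake limb cloud
    print(limb_length(arm), circumference(arm, z=0.3))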
 

 
Author: Juan Jose Rubio; Takahiro Kashiwa; Teera Laiteerapong; Wenlong Deng; Kohei Nagai; Sergio Escalera; Kotaro Nakayama; Yutaka Matsuo; Helmut Prendinger
Title: Multi-class structural damage segmentation using fully convolutional networks
Type: Journal Article  Year: 2019
Publication: Computers in Industry  Abbreviated Journal: COMPUTIND
Volume: 112  Pages: 103121
Keywords: Bridge damage detection; Deep learning; Semantic segmentation
Abstract: Structural Health Monitoring (SHM) has benefited from computer vision and, more recently, Deep Learning approaches to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of deck areas of bridges for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of images of bridge deck damage. This data allows us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs to perform automated semantic segmentation of surface damages. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%. (See the sketch after this record.)
Notes: HuPBA; no proj; MILAB; ADAS  Approved: no
Call Number: Admin @ si @ RKL2019  Serial: 3315
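
The reported figures mix per-class accuracy with a support-weighted F1 score. A small NumPy sketch of that metric is below, using the standard definitions; it is not the paper's evaluation code, which additionally scores against regions of agreement.

    import numpy as np

    def f1(pred, gt, cls):
        """F1 score of one class over dense label maps."""
        tp = np.sum((pred == cls) & (gt == cls))
        fp = np.sum((pred == cls) & (gt != cls))
        fn = np.sum((pred != cls) & (gt == cls))
        p = tp / (tp + fp + 1e-9)
        r = tp / (tp + fn + 1e-9)
        return 2 * p * r / (p + r + 1e-9)

    def weighted_f1(pred, gt, classes):
        """Per-class F1 weighted by ground-truth pixel support."""
        support = np.array([(gt == c).sum() for c in classes], float)
        scores = np.array([f1(pred, gt, c) for c in classes])
        return (scores * support).sum() / support.sum()

    gt = np.random.randint(0, 3, (4, 256, 256))   # 0=bg, 1=delamination, 2=rebar
    pred = np.where(np.random.rand(*gt.shape) < 0.9, gt, (gt + 1) % 3)
    print(weighted_f1(pred, gt, classes=[1, 2]))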
 

 
Author: Angel Sappa; Cristhian A. Aguilera-Carrasco; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo
Title: Monocular visual odometry: A cross-spectral image fusion based approach
Type: Journal Article  Year: 2016
Publication: Robotics and Autonomous Systems  Abbreviated Journal: RAS
Volume: 85  Pages: 26-36
Keywords: Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion
Abstract: This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with the monocular visible/infrared spectra are provided, showing the advantages of the proposed scheme. (See the sketch after this record.)
Publisher: Elsevier B.V.
Notes: ADAS; 600.086; 600.076  Approved: no
Call Number: Admin @ si @ SAC2016  Serial: 2811
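
A minimal PyWavelets sketch of DWT-based image fusion in the spirit described above: the approximation bands are averaged and, at every detail position, the stronger coefficient wins (a standard max-absolute rule). The paper selects the wavelet and parameters via a mutual-information metric; here both are fixed and assumed.

    import numpy as np
    import pywt

    def dwt_fuse(visible, thermal, wavelet="db2", level=2):
        """Fuse two registered single-channel images of equal size."""
        ca = pywt.wavedec2(visible.astype(float), wavelet, level=level)
        cb = pywt.wavedec2(thermal.astype(float), wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]          # average approximation bands
        for da, db in zip(ca[1:], cb[1:]):       # (cH, cV, cD) per level
            # Max-absolute rule: keep the more salient detail coefficient.
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(da, db)))
        return pywt.waverec2(fused, wavelet)

    vis = np.random.rand(128, 128)
    lwir = np.random.rand(128, 128)
    print(dwt_fuse(vis, lwir).shape)             # (128, 128)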
 

 
Author: Akhil Gurram; Ahmet Faruk Tuna; Fengyi Shen; Onay Urfalioglu; Antonio Lopez
Title: Monocular Depth Estimation through Virtual-world Supervision and Real-world SfM Self-Supervision
Type: Journal Article  Year: 2021
Publication: IEEE Transactions on Intelligent Transportation Systems  Abbreviated Journal: TITS
Volume: 23  Issue: 8  Pages: 12738-12751
Abstract: Depth information is essential for on-board perception in autonomous driving and driver assistance. Monocular depth estimation (MDE) is very appealing since it puts appearance and depth in direct pixelwise correspondence without further calibration. The best MDE models are based on Convolutional Neural Networks (CNNs) trained in a supervised manner, i.e., assuming pixelwise ground truth (GT). Usually, this GT is acquired at training time through a calibrated multi-modal suite of sensors. However, using only a monocular system at training time is cheaper and more scalable. This is possible by relying on structure-from-motion (SfM) principles to generate self-supervision. Nevertheless, problems of camouflaged objects, visibility changes, static-camera intervals, textureless areas, and scale ambiguity diminish the usefulness of such self-supervision. In this paper, we perform monocular depth estimation by virtual-world supervision (MonoDEVS) and real-world SfM self-supervision. We compensate for the SfM self-supervision limitations by leveraging virtual-world images with accurate semantic and depth supervision and by addressing the virtual-to-real domain gap. Our MonoDEVSNet outperforms previous MDE CNNs trained on monocular and even stereo sequences. (See the sketch after this record.)
Notes: ADAS; 600.118  Approved: no
Call Number: Admin @ si @ GTS2021  Serial: 3598
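
The training signal combines dense supervision on virtual images with SfM-style photometric self-supervision on real sequences. A highly simplified PyTorch sketch of such a mixed objective follows; the view-synthesis warping, domain-gap handling, and loss weighting of the actual MonoDEVSNet are omitted, and alpha is an assumed knob.

    import torch
    import torch.nn.functional as F

    def mixed_depth_loss(pred_virt, gt_virt, real_frame, real_warped, alpha=0.5):
        """Virtual-world supervised L1 + real-world photometric L1."""
        # Supervised term: exact depth GT is available for virtual images.
        sup = F.l1_loss(pred_virt, gt_virt)
        # Self-supervised term: a real frame vs. its reconstruction warped
        # from a neighboring view using predicted depth and egomotion
        # (the warping step itself is not shown here).
        photo = F.l1_loss(real_warped, real_frame)
        return alpha * sup + (1.0 - alpha) * photo

    loss = mixed_depth_loss(torch.rand(2, 1, 96, 320), torch.rand(2, 1, 96, 320),
                            torch.rand(2, 3, 96, 320), torch.rand(2, 3, 96, 320))
    print(loss.item())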
 

 
Author: Hannes Mueller; Andre Groeger; Jonathan Hersh; Andrea Matranga; Joan Serrat
Title: Monitoring war destruction from space using machine learning
Type: Journal Article  Year: 2021
Publication: Proceedings of the National Academy of Sciences of the United States of America  Abbreviated Journal: PNAS
Volume: 118  Issue: 23  Pages: e2025400118
Abstract: Existing data on building destruction in conflict zones rely on eyewitness reports or manual detection, which makes them generally scarce, incomplete, and potentially biased. This lack of reliable data imposes severe limitations on media reporting, humanitarian relief efforts, human-rights monitoring, reconstruction initiatives, and academic studies of violent conflict. This article introduces an automated method of measuring destruction in high-resolution satellite images using deep-learning techniques combined with label augmentation and spatial and temporal smoothing, which exploit the underlying spatial and temporal structure of destruction. As a proof of concept, we apply this method to the Syrian civil war and reconstruct the evolution of damage in major cities across the country. Our approach allows generating destruction data with unprecedented scope, resolution, and frequency, and makes use of the ever-higher frequency at which satellite imagery becomes available. (See the sketch after this record.)
Notes: ADAS; 600.118  Approved: no
Call Number: Admin @ si @ MGH2021  Serial: 3584
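
The spatial and temporal smoothing step can be pictured as averaging per-tile destruction scores over a small spatio-temporal neighborhood before thresholding. The toy NumPy/SciPy version below is only an illustration of that idea; the window sizes and the 0.5 threshold are assumptions, not the paper's estimator.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def smooth_destruction(scores, w_t=1, w_s=1):
        """Mean-filter a T x H x W array of per-tile classifier scores."""
        size = (2 * w_t + 1, 2 * w_s + 1, 2 * w_s + 1)
        return uniform_filter(scores.astype(float), size=size, mode="nearest")

    scores = np.random.rand(12, 40, 40)      # 12 dates over a 40 x 40 tile grid
    destroyed = smooth_destruction(scores) > 0.5
    print(destroyed.mean())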
 

 
Author: Jaume Amores
Title: MILDE: multiple instance learning by discriminative embedding
Type: Journal Article  Year: 2015
Publication: Knowledge and Information Systems  Abbreviated Journal: KAIS
Volume: 42  Issue: 2  Pages: 381-407
Keywords: Multi-instance learning; Codebook; Bag of words
Abstract: While the objective of the standard supervised learning problem is to classify feature vectors, in the multiple instance learning problem the objective is to classify bags, where each bag contains multiple feature vectors. This represents a generalization of the standard problem, and this generalization becomes necessary in many real applications such as drug activity prediction, content-based image retrieval, and others. While the existing paradigms are based on learning the discriminant information either at the instance level or at the bag level, we propose to incorporate both levels of information. This is done by defining a discriminative embedding of the original space based on the responses of cluster-adapted instance classifiers. Results clearly show the advantage of the proposed method over the state of the art; we tested the performance on a variety of well-known databases that come from real problems, and we also included an analysis of the performance using synthetically generated data. (See the sketch after this record.)
Publisher: Springer London
ISSN: 0219-1377
Notes: ADAS; 601.042; 600.057; 600.076  Approved: no
Call Number: Admin @ si @ Amo2015  Serial: 2383
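
The core idea, embedding each bag via the responses of cluster-adapted instance classifiers, fits in a short scikit-learn sketch. Instances inherit their bag's label (a common MIL approximation), the number of clusters and the classifier choice are assumptions rather than the paper's configuration, and degenerate single-class clusters are given a constant response.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    def milde_embedding(bags, bag_labels, k=8):
        """Return one k-dim vector per bag: max response per cluster clf."""
        X = np.vstack(bags)
        y = np.concatenate([[lab] * len(b) for b, lab in zip(bags, bag_labels)])
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        clfs = []
        for c in range(k):
            m = km.labels_ == c
            if np.unique(y[m]).size < 2:     # degenerate cluster: skip training
                clfs.append(None)
                continue
            clfs.append(LogisticRegression(max_iter=1000).fit(X[m], y[m]))

        def embed(bag):
            # A bag's c-th coordinate: strongest positive response among
            # its instances under cluster c's classifier.
            return np.array([0.5 if clf is None
                             else clf.predict_proba(bag)[:, 1].max()
                             for clf in clfs])

        return np.vstack([embed(b) for b in bags])

    rng = np.random.default_rng(0)
    labels = [0, 1] * 10
    bags = [rng.normal(lab, 1.0, size=(10, 5)) for lab in labels]
    print(milde_embedding(bags, labels).shape)   # (20, 8)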
 

 
Author: Aura Hernandez-Sabate; Meritxell Joanpere; Nuria Gorgorio; Lluis Albarracin
Title: Mathematics learning opportunities when playing a Tower Defense Game
Type: Journal  Year: 2015
Publication: International Journal of Serious Games  Abbreviated Journal: IJSG
Volume: 2  Issue: 4  Pages: 57-71
Keywords: Tower Defense game; learning opportunities; mathematics; problem solving; game design
Abstract: A qualitative research study is presented herein with the purpose of identifying mathematics learning opportunities in students between 10 and 12 years old while playing a commercial version of a Tower Defense game. These learning opportunities are understood as mathematicisable moments of the game and involve the establishment of relationships between the game and mathematical problem solving. Based on the analysis of these mathematicisable moments, we conclude that the game can promote problem-solving processes and learning opportunities that can be associated with different mathematical contents that appear in mathematics curricula, though it seems that teacher intervention or new game elements might be needed to facilitate these processes.
Notes: ADAS; 600.076; IAM  Approved: no
Call Number: Admin @ si @ HJG2015  Serial: 2730