Records | |||||
---|---|---|---|---|---|
Author | Jean-Pascal Jacob; Mariella Dimiccoli; L. Moisan | ||||
Title | Active skeleton for bacteria modelling | Type | Journal Article | ||
Year | 2017 | Publication | Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization | Abbreviated Journal | CMBBE |
Volume | 5 | Issue | 4 | Pages | 274-286 |
Keywords | |||||
Abstract | The investigation of spatio-temporal dynamics of bacterial cells and their molecular components requires automated image analysis tools to track cell shape properties and molecular component locations inside the cells. In the study of bacteria aging, the molecular components of interest are protein aggregates accumulated near bacteria boundaries. This particular location makes the correspondence between aggregates and cells very ambiguous, since accurately computing bacteria boundaries in phase-contrast time-lapse imaging is a challenging task. This paper proposes an active skeleton formulation for bacteria modelling which provides several advantages: easy computation of shape properties (perimeter, length, thickness and orientation), improved boundary accuracy in noisy images and a natural bacteria-centred coordinate system that permits the intrinsic location of molecular components inside the cell. Starting from an initial skeleton estimate, the medial axis of the bacterium is obtained by minimising an energy function which incorporates bacteria shape constraints. Experimental results on biological images and comparative evaluation of the performances validate the proposed approach for modelling cigar-shaped bacteria like Escherichia coli. The ImageJ plugin of the proposed method can be found online at http://fluobactracker.inrialpes.fr. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Taylor & Francis Group | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; | Approved | no | ||
Call Number | Admin @ si @JDM2017 | Serial | 2784 | ||
Permanent link to this record | |||||
Author | Svebor Karaman; Giuseppe Lisanti; Andrew Bagdanov; Alberto del Bimbo | ||||
Title | From re-identification to identity inference: Labeling consistency by local similarity constraints | Type | Book Chapter | ||
Year | 2014 | Publication | Person Re-Identification | Abbreviated Journal | |
Volume | 2 | Issue | Pages | 287-307 | |
Keywords | re-identification; Identity inference; Conditional random fields; Video surveillance | ||||
Abstract | In this chapter, we introduce the problem of identity inference as a generalization of person re-identification. It is most appropriate to distinguish identity inference from re-identification in situations where a large number of observations must be identified without knowing a priori that groups of test images represent the same individual. The standard single- and multi-shot person re-identification tasks common in the literature are special cases of our formulation. We present an approach to solving identity inference by modeling it as a labeling problem in a Conditional Random Field (CRF). The CRF model ensures that the final labeling gives similar labels to detections that are similar in feature space. Experimental results are given on the ETHZ, i-LIDS and CAVIAR datasets. Our approach yields state-of-the-art performance for multi-shot re-identification, and our results on the more general identity inference problem demonstrate that we are able to infer the identity of a large number of examples even with very few labeled images in the gallery. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer London | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2191-6586 | ISBN | 978-1-4471-6295-7 | Medium | |
Area | Expedition | Conference | |||
Notes | LAMP; 600.079 | Approved | no | ||
Call Number | Admin @ si @KLB2014b | Serial | 2521 | ||
Permanent link to this record | |||||
Author | Aniol Lidon; Xavier Giro; Marc Bolaños; Petia Radeva; Markus Seidl; Matthias Zeppelzauer | ||||
Title | UPC-UB-STP @ MediaEval 2015 diversity task: iterative reranking of relevant images | Type | Conference Article | ||
Year | 2015 | Publication | 2015 MediaEval Retrieving Diverse Images Task | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents the results of the UPC-UB-STP team in the 2015 MediaEval Retrieving Diverse Images Task. The goal of the challenge is to provide a ranked list of Flickr photos for a predefined set of queries. Our approach first generates a ranking of images based on a query-independent estimation of their relevance. Only the top results are kept and iteratively re-ranked based on their intra-similarity to introduce diversity. | ||||
Address | Wurzen; Germany; September 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MediaEval | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @LGB2016 | Serial | 2793 | ||
Permanent link to this record | |||||
Author | Joan M. Nuñez; Jorge Bernal; F. Javier Sanchez; Fernando Vilariño | ||||
Title | Growing Algorithm for Intersection Detection (GRAID) in branching patterns | Type | Journal Article | ||
Year | 2015 | Publication | Machine Vision and Applications | Abbreviated Journal | MVAP |
Volume | 26 | Issue | 2 | Pages | 387-400 |
Keywords | Bifurcation; Crossroad; Intersection; Retina; Vessel | ||||
Abstract | Analysis of branching structures represents a very important task in fields such as medical diagnosis, road detection or biometrics. Detecting intersection landmarks becomes crucial when capturing the structure of a branching pattern. We present a very simple geometrical model to describe intersections in branching structures based on two conditions: the Bounded Tangency (BT) condition and the Shortest Branch (SB) condition. The proposed model precisely sets a geometrical characterization of intersections and allows us to introduce a new unsupervised operator for intersection extraction. We propose an implementation that handles the consequences of digital domain operation and that, unlike existing approaches, is not restricted to a particular scale and does not require the computation of the thinned pattern. The new proposal, as well as other existing approaches in the literature, is evaluated in a common framework for the first time. The performance analysis is based on two manually segmented image data sets: the DRIVE retinal image database and the COLON-VESSEL data set, a newly created data set of vascular content in colonoscopy frames. We have created an intersection landmark ground truth for each data set, besides comparing our method on the only existing ground truth. Quantitative results confirm that we are able to outperform state-of-the-art performance levels, with the advantage that neither training nor parameter tuning is needed. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | SIAI | Approved | no ||
Call Number | Admin @ si @MBS2015 | Serial | 2777 | ||
Permanent link to this record | |||||
Author | Carlos David Martinez Hinarejos; Josep Llados; Alicia Fornes; Francisco Casacuberta; Lluis de Las Heras; Joan Mas; Moises Pastor; Oriol Ramos Terrades; Joan Andreu Sanchez; Enrique Vidal; Fernando Vilariño | ||||
Title | Context, multimodality, and user collaboration in handwritten text processing: the CoMUN-HaT project | Type | Conference Article | ||
Year | 2016 | Publication | 3rd IberSPEECH | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Processing of handwritten documents is a task that is of wide interest for many purposes, such as those related to preserving cultural heritage. Handwritten text recognition techniques have been successfully applied during the last decade to obtain transcriptions of handwritten documents, and keyword spotting techniques have been applied for searching specific terms in image collections of handwritten documents. However, results on transcription and indexing are far from perfect. In this framework, the use of new data sources arises as a new paradigm that will allow for a better transcription and indexing of handwritten documents. Three main different data sources could be considered: context of the document (style, writer, historical time, topics, ...), multimodal data (representations of the document in a different modality, such as the speech signal of the dictation of the text), and user feedback (corrections, amendments, ...). The CoMUN-HaT project aims at the integration of these different data sources into the transcription and indexing task for handwritten documents: the use of context derived from the analysis of the documents, how multimodality can aid the recognition process to obtain more accurate transcriptions (including transcription in a modern version of the language), and integration into a user-in-the-loop assisted text transcription framework. This will be reflected in the construction of a transcription and indexing platform that can be used by both professional and non-professional users, contributing to crowd-sourcing activities to preserve cultural heritage and to obtain an accessible version of the involved corpus. | ||||
Address | Lisboa; Portugal; November 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IberSPEECH | ||
Notes | DAG; MV; 600.097;SIAI | Approved | no | ||
Call Number | Admin @ si @MLF2016 | Serial | 2813 | ||
Permanent link to this record | |||||
Author | Marc Masana; Joost Van de Weijer; Andrew Bagdanov | ||||
Title | On-the-fly Network pruning for object detection | Type | Conference Article | ||
Year | 2016 | Publication | International conference on learning representations | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Object detection with deep neural networks is often performed by passing a few thousand candidate bounding boxes through a deep neural network for each image. These bounding boxes are highly correlated since they originate from the same image. In this paper we investigate how to exploit feature occurrence at the image scale to prune the neural network which is subsequently applied to all bounding boxes. We show that removing units which have near-zero activation in the image allows us to significantly reduce the number of parameters in the network. Results on the PASCAL 2007 Object Detection Challenge demonstrate that up to 40% of units in some fully-connected layers can be entirely eliminated with little change in the detection result. | ||||
Address | Puerto Rico; May 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICLR | ||
Notes | LAMP; 600.068; 600.106; 600.079 | Approved | no | ||
Call Number | Admin @ si @MWB2016 | Serial | 2758 | ||
Permanent link to this record | |||||
Author | Dan Norton; Fernando Vilariño; Onur Ferhat | ||||
Title | Memory Field – Creative Engagement in Digital Collections | Type | Conference Article | ||
Year | 2015 | Publication | Internet Librarian International Conference | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | “Memory Fields” is a trans-disciplinary project aiming at the (re)valorisation of digital collections. Its main deliverable is an interface for a dual-screen installation, used to access and mix the public library digital collections. The collections being used in this case are a collection of digitised posters from the Spanish Civil War, belonging to the Arxiu General de Catalunya, and a collection of field recordings made by Dan Norton. The system generates visualisations, and the images and sounds are mixed together using the narrative primitives of video DJing. Users contribute to the digital collections by adding personal memories and observations. The comments and recollections appear as flowers growing in a “memory field”, and memories remain public in a Twitter feed (@Memoryfields). | ||||
Address | London; UK; October 2015 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ILI | ||
Notes | MV;SIAI | Approved | no | ||
Call Number | Admin @ si @NVF2015 | Serial | 2796 | ||
Permanent link to this record | |||||
Author | G. de Oliveira; A. Cartas; Marc Bolaños; Mariella Dimiccoli; Xavier Giro; Petia Radeva | ||||
Title | LEMoRe: A Lifelog Engine for Moments Retrieval at the NTCIR-Lifelog LSAT Task | Type | Conference Article | ||
Year | 2016 | Publication | 12th NTCIR Conference on Evaluation of Information Access Technologies | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Semantic image retrieval from large amounts of egocentric visual data requires powerful techniques for filling in the semantic gap. This paper introduces LEMoRe, a Lifelog Engine for Moments Retrieval, developed in the context of the Lifelog Semantic Access Task (LSAT) of the NTCIR-12 challenge, and discusses its performance variation on different trials. LEMoRe integrates classical image descriptors with high-level semantic concepts extracted by Convolutional Neural Networks (CNN), powered by a graphical user interface that uses natural language processing. Although this is just a first attempt towards interactive image retrieval from large egocentric datasets and there is considerable room for improvement of the system components and the user interface, the structure of the system itself and the way its single components cooperate are very promising. | ||||
Address | Tokyo; Japan; June 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NTCIR | ||
Notes | MILAB; | Approved | no | ||
Call Number | Admin @ si @OCB2016 | Serial | 2789 | ||
Permanent link to this record | |||||
Author | G. de Oliveira; Mariella Dimiccoli; Petia Radeva | ||||
Title | Egocentric Image Retrieval With Deep Convolutional Neural Networks | Type | Conference Article | ||
Year | 2016 | Publication | 19th International Conference of the Catalan Association for Artificial Intelligence | Abbreviated Journal | |
Volume | Issue | Pages | 71-76 | ||
Keywords | |||||
Abstract | |||||
Address | Barcelona; Spain; October 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CCIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ODR2016 | Serial | 2790 | ||
Permanent link to this record | |||||
Author | Maria Oliver; Gloria Haro; Mariella Dimiccoli; Baptiste Mazin; Coloma Ballester | ||||
Title | A computational model of amodal completion | Type | Conference Article | ||
Year | 2016 | Publication | SIAM Conference on Imaging Science | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This paper presents a computational model to recover the most likely interpretation of the 3D scene structure from a planar image, where some objects may occlude others. The estimated scene interpretation is obtained by integrating some global and local cues and provides both the complete disoccluded objects that form the scene and their ordering according to depth. Our method first computes several distal scenes which are compatible with the proximal planar image. To compute these different hypothesized scenes, we propose a perceptually inspired object disocclusion method, which works by minimizing Euler's elastica as well as by incorporating the relatability of partially occluded contours and the convexity of the disoccluded objects. Then, to estimate the preferred scene, we rely on a Bayesian model and define probabilities taking into account the global complexity of the objects in the hypothesized scenes as well as the effort of bringing these objects into their relative positions in the planar image, which is also measured by an Euler's elastica-based quantity. The model is illustrated with numerical experiments on both synthetic and real images, showing the ability of our model to reconstruct the occluded objects and the preferred perceptual order among them. We also present results on images of the Berkeley dataset with provided figure-ground ground-truth labeling. | ||||
Address | Albuquerque; New Mexico; USA; May 2016 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IS | ||
Notes | MILAB; 601.235 | Approved | no | ||
Call Number | Admin @ si @OHD2016a | Serial | 2788 | ||
Permanent link to this record | |||||
Author | Miguel Oliveira; Victor Santos; Angel Sappa; P. Dias; A. Moreira | ||||
Title | Incremental Scenario Representations for Autonomous Driving using Geometric Polygonal Primitives | Type | Journal Article | ||
Year | 2016 | Publication | Robotics and Autonomous Systems | Abbreviated Journal | RAS |
Volume | 83 | Issue | Pages | 312-325 | |
Keywords | Incremental scene reconstruction; Point clouds; Autonomous vehicles; Polygonal primitives | ||||
Abstract | When an autonomous vehicle is traveling through some scenario it receives a continuous stream of sensor data. This sensor data arrives in an asynchronous fashion and often contains overlapping or redundant information. Thus, it is not trivial how a representation of the environment observed by the vehicle can be created and updated over time. This paper presents a novel methodology to compute an incremental 3D representation of a scenario from 3D range measurements. We propose to use macro scale polygonal primitives to model the scenario. This means that the representation of the scene is given as a list of large scale polygons that describe the geometric structure of the environment. Furthermore, we propose mechanisms designed to update the geometric polygonal primitives over time whenever fresh sensor data is collected. Results show that the approach is capable of producing accurate descriptions of the scene, and that it is computationally very efficient when compared to other reconstruction techniques. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier B.V. | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS; 600.086; 600.076 | Approved | no ||
Call Number | Admin @ si @OSS2016a | Serial | 2806 | ||
Permanent link to this record | |||||
Author | Ivet Rafegas; Maria Vanrell | ||||
Title | Color encoding in biologically-inspired convolutional neural networks | Type | Journal Article | ||
Year | 2018 | Publication | Vision Research | Abbreviated Journal | VR |
Volume | 151 | Issue | Pages | 7-17 | |
Keywords | Color coding; Computer vision; Deep learning; Convolutional neural networks | ||||
Abstract | Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is performed by estimating a color selectivity index for each neuron, which allows us to describe the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as single or double color. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representation are successively entangled through all the layers of the studied network, revealing certain parallels with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | CIC; 600.051; 600.087 | Approved | no | ||
Call Number | Admin @ si @RaV2018 | Serial | 3114 | ||
Permanent link to this record | |||||
Author | Pejman Rasti; Salma Samiei; Mary Agoyi; Sergio Escalera; Gholamreza Anbarjafari | ||||
Title | Robust non-blind color video watermarking using QR decomposition and entropy analysis | Type | Journal Article | ||
Year | 2016 | Publication | Journal of Visual Communication and Image Representation | Abbreviated Journal | JVCIR |
Volume | 38 | Issue | Pages | 838-847 | |
Keywords | Video watermarking; QR decomposition; Discrete Wavelet Transformation; Chirp Z-transform; Singular value decomposition; Orthogonal–triangular decomposition | ||||
Abstract | Issues such as content identification, document and image security, audience measurement, ownership and copyright, among others, can be settled by the use of digital watermarking. Many recent video watermarking methods show drops in the visual quality of the sequences. The present work addresses this issue by introducing a robust and imperceptible non-blind color video frame watermarking algorithm. The method divides frames into moving and non-moving parts. The non-moving part of each color channel is processed separately using a block-based watermarking scheme. Blocks with an entropy lower than the average entropy of all blocks are subject to a further process for embedding the watermark image. Finally, a watermarked frame is generated by adding the moving parts back. Several signal processing attacks are applied to each watermarked frame in order to perform experiments, and the results are compared with some recent algorithms. Experimental results show that the proposed scheme is imperceptible and robust against common signal processing attacks. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB; | Approved | no | ||
Call Number | Admin @ si @RSA2016 | Serial | 2766 | ||
Permanent link to this record | |||||
Author | Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan | ||||
Title | Interactive layout analysis and transcription systems for historic handwritten documents | Type | Conference Article | ||
Year | 2010 | Publication | 10th ACM Symposium on Document Engineering | Abbreviated Journal | |
Volume | Issue | Pages | 219–222 | ||
Keywords | Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis | ||||
Abstract | The amount of digitized legacy documents has been rising dramatically over the last years, due mainly to the increasing number of on-line digital libraries publishing this kind of documents, waiting to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct the results of such systems. In contrast, multimodal interactive-predictive approaches may allow the users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process. | ||||
Address | Manchester, United Kingdom | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @RTS2010 | Serial | 1857 | ||
Permanent link to this record | |||||
Author | Angel Sappa; Cristhian A. Aguilera-Carrasco; Juan A. Carvajal Ayala; Miguel Oliveira; Dennis Romero; Boris X. Vintimilla; Ricardo Toledo | ||||
Title | Monocular visual odometry: A cross-spectral image fusion based approach | Type | Journal Article | ||
Year | 2016 | Publication | Robotics and Autonomous Systems | Abbreviated Journal | RAS |
Volume | 85 | Issue | Pages | 26-36 | |
Keywords | Monocular visual odometry; LWIR-RGB cross-spectral imaging; Image fusion | ||||
Abstract | This manuscript evaluates the usage of fused cross-spectral images in a monocular visual odometry approach. Fused images are obtained through a Discrete Wavelet Transform (DWT) scheme, where the best setup is empirically obtained by means of a mutual information based evaluation metric. The objective is to have a flexible scheme where fusion parameters are adapted according to the characteristics of the given images. Visual odometry is computed from the fused monocular images using an off-the-shelf approach. Experimental results using data sets obtained with two different platforms are presented. Additionally, comparisons with a previous approach as well as with monocular visible/infrared spectra are also provided, showing the advantages of the proposed scheme. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier B.V. | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ADAS;600.086; 600.076 | Approved | no | ||
Call Number | Admin @ si @SAC2016 | Serial | 2811 | ||
Permanent link to this record |