Records | |||||
---|---|---|---|---|---|
Author | Ernest Valveny; Enric Marti | ||||
Title | Deformable Template Matching within a Bayesian Framework for Hand-Written Graphic Symbol Recognition | Type | Journal Article | ||
Year | 2000 | Publication | Graphics Recognition: Recent Advances | Abbreviated Journal | 
Volume | 1941 | Issue | Pages | 193-208 | |
Keywords | |||||
Abstract | We describe a method for hand-drawn symbol recognition based on deformable template matching that is able to handle the uncertainty and imprecision inherent in hand drawing. Symbols are represented as a set of straight lines and their deformations as geometric transformations of these lines. Matching, however, is done over the original binary image to avoid loss of information during line detection. It is defined as an energy minimization problem, using a Bayesian framework that allows us to combine fidelity to the ideal shape of the symbol with the flexibility to modify the symbol in order to get the best fit to the binary input image. Prior to matching, we find the best global transformation of the symbol to start the recognition process, based on the distance between symbol lines and image lines. We have applied this method to the recognition of dimensions and symbols in architectural floor plans, and we show its flexibility in recognizing distorted symbols. | ||||
Address | |||||
Corporate Author | Springer Verlag | Thesis | |||
Publisher | Springer Verlag | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG;IAM; | Approved | no | ||
Call Number | IAM @ iam @ MVA2000 | Serial | 1655 | ||
Permanent link to this record | |||||
Author | Ernest Valveny; Enric Marti | ||||
Title | A model for image generation and symbol recognition through the deformation of lineal shapes | Type | Journal Article | ||
Year | 2003 | Publication | Pattern Recognition Letters | Abbreviated Journal | PRL |
Volume | 24 | Issue | 15 | Pages | 2857-2867 |
Keywords | |||||
Abstract | We describe a general framework for the recognition of distorted images of lineal shapes, which relies on three components: a model to represent lineal shapes and their deformations, a model for the generation of distorted binary images, and the combination of both models in a common probabilistic framework, where the generation of deformations is related to an internal energy and the generation of binary images to an external energy. Recognition then consists of minimizing a global energy function, performed using the EM algorithm. This general framework has been applied to the recognition of hand-drawn lineal symbols in graphic documents. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier Science Inc. | Place of Publication | New York, NY, USA | Editor | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0167-8655 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; IAM | Approved | no | ||
Call Number | IAM @ iam @ VAM2003 | Serial | 1653 | ||
Permanent link to this record | |||||
Author | G. Gasbarri; Matias Bilkis; E. Roda Salichs; J. Calsamiglia | ||||
Title | Sequential hypothesis testing for continuously-monitored quantum systems | Type | Journal Article | ||
Year | 2024 | Publication | Quantum | Abbreviated Journal | |
Volume | 8 | Issue | 1289 | Pages | |
Keywords | |||||
Abstract | We consider a quantum system that is being continuously monitored, giving rise to a measurement signal. From such a stream of data, information needs to be inferred about the underlying system's dynamics. Here we focus on hypothesis testing problems and put forward the use of sequential strategies where the signal is analyzed in real time, allowing the experiment to be concluded as soon as the underlying hypothesis can be identified with a certified prescribed success probability. We analyze the performance of sequential tests by studying the stopping-time behavior, showing a considerable advantage over currently used strategies based on a fixed, predetermined measurement time. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | xxxx | Approved | no | ||
Call Number | Admin @ si @ GBR2024 | Serial | 3847 | ||
Permanent link to this record | |||||
Author | Onur Ferhat; Fernando Vilariño; F. Javier Sanchez | ||||
Title | A cheap portable eye-tracker solution for common setups. | Type | Journal Article | ||
Year | 2014 | Publication | Journal of Eye Movement Research | Abbreviated Journal | JEMR |
Volume | 7 | Issue | 3 | Pages | 1-10 |
Keywords | |||||
Abstract | We analyze the feasibility of a cheap eye-tracker whose hardware consists of a single webcam and a Raspberry Pi device. Our aim is to discover the limits of such a system and to see whether it provides acceptable performance. We base our work on the open source Opengazer (Zielinski, 2013) and propose several improvements to create a robust, real-time system that can work on a computer at a 30 Hz sampling rate. After assessing the accuracy of our eye-tracker in detailed experiments involving 12 subjects under 4 different system setups, we install it on a Raspberry Pi to create a portable stand-alone eye-tracker that achieves 1.42° horizontal accuracy at a 3 Hz refresh rate for a building cost of 70 Euros. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | SIAI | Approved | no | ||
Call Number | Admin @ si @ FVS2014 | Serial | 2435 | ||
Permanent link to this record | |||||
Author | Carlo Gatta; Francesco Ciompi | ||||
Title | Stacked Sequential Scale-Space Taylor Context | Type | Journal Article | ||
Year | 2014 | Publication | IEEE Transactions on Pattern Analysis and Machine Intelligence | Abbreviated Journal | TPAMI |
Volume | 36 | Issue | 8 | Pages | 1694-1700 |
Keywords | |||||
Abstract | We analyze sequential image labeling methods that sample the posterior label field in order to gather contextual information. We propose an effective method that extracts local Taylor coefficients from the posterior at different scales. Results show that our proposal outperforms state-of-the-art methods on MSRC-21, CAMVID, eTRIMS8 and KAIST2 data sets. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0162-8828 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | LAMP; MILAB; 601.160; 600.079 | Approved | no | ||
Call Number | Admin @ si @ GaC2014 | Serial | 2466 | ||
Permanent link to this record | |||||
Author | Shiqi Yang; Yaxing Wang; Luis Herranz; Shangling Jui; Joost Van de Weijer | ||||
Title | Casting a BAIT for offline and online source-free domain adaptation | Type | Journal Article | ||
Year | 2023 | Publication | Computer Vision and Image Understanding | Abbreviated Journal | CVIU |
Volume | 234 | Issue | Pages | 103747 | |
Keywords | |||||
Abstract | We address the source-free domain adaptation (SFDA) problem, where only the source model is available during adaptation to the target domain. We consider two settings: the offline setting, where all target data can be visited multiple times (epochs) to arrive at a prediction for each target sample, and the online setting, where the target data must be classified directly upon arrival. Inspired by diverse-classifier-based domain adaptation methods, in this paper we introduce a second classifier while keeping the other classifier head fixed. When adapting to the target domain, the additional classifier, initialized from the source classifier, is expected to find misclassified features. Next, when updating the feature extractor, those features are pushed towards the correct side of the source decision boundary, thus achieving source-free domain adaptation. Experimental results show that the proposed method achieves competitive results for offline SFDA on several benchmark datasets compared with existing DA and SFDA methods, and that our method surpasses other SFDA methods by a large margin in the online source-free domain adaptation setting. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; MACO | Approved | no | ||
Call Number | Admin @ si @ YWH2023 | Serial | 3874 | ||
Permanent link to this record | |||||
Author | Jose Seabra; Francesco Ciompi; Oriol Pujol; J. Mauri; Petia Radeva; Joao Sanchez | ||||
Title | Rayleigh Mixture Model for Plaque Characterization in Intravascular Ultrasound | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Biomedical Engineering | Abbreviated Journal | TBME |
Volume | 58 | Issue | 5 | Pages | 1314-1324 |
Keywords | |||||
Abstract | Vulnerable plaques are the major cause of carotid and coronary vascular problems, such as heart attack or stroke. A correct modeling of plaque echomorphology and composition can help the identification of such lesions. The Rayleigh distribution is widely used to describe (nearly) homogeneous areas in ultrasound images. Since plaques may contain tissues with heterogeneous regions, more complex distributions depending on multiple parameters are usually needed, such as the Rice, K, or Nakagami distributions. In such cases, the problem formulation becomes more complex, and the optimization procedure to estimate the plaque echomorphology is more difficult. Here, we propose to model the tissue echomorphology by means of a mixture of Rayleigh distributions, known as the Rayleigh mixture model (RMM). The problem formulation is still simple, but its ability to describe complex textural patterns is very powerful. In this paper, we present a method for the automatic estimation of the RMM mixture parameters by means of the expectation-maximization algorithm, which aims at characterizing tissue echomorphology in ultrasound (US). The performance of the proposed model is evaluated with a database of in vitro intravascular US cases. We show that the mixture coefficients and Rayleigh parameters explicitly derived from the mixture model are able to accurately describe different plaque types and to significantly improve the characterization performance of an existing methodology. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB;HuPBA | Approved | no | ||
Call Number | Admin @ si @ SCP2011 | Serial | 1712 | ||
Permanent link to this record | |||||
Author | Marc Bolaños; Mariella Dimiccoli; Petia Radeva | ||||
Title | Towards Storytelling from Visual Lifelogging: An Overview | Type | Journal Article | ||
Year | 2017 | Publication | IEEE Transactions on Human-Machine Systems | Abbreviated Journal | THMS |
Volume | 47 | Issue | 1 | Pages | 77 - 90 |
Keywords | |||||
Abstract | Visual lifelogging consists of acquiring images that capture the daily experiences of the user by wearing a camera over a long period of time. The pictures taken offer considerable potential for knowledge mining concerning how people live their lives; hence, they open up new opportunities for many potential applications in fields including healthcare, security, leisure and the quantified self. However, automatically building a story from a huge collection of unstructured egocentric data presents major challenges. This paper provides a thorough review of advances made so far in egocentric data analysis and, in view of the current state of the art, indicates new lines of research to move us towards storytelling from visual lifelogging. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | MILAB; 601.235 | Approved | no | ||
Call Number | Admin @ si @ BDR2017 | Serial | 2712 | ||
Permanent link to this record | |||||
Author | Maria Elena Meza-de-Luna; Juan Ramon Terven Salinas; Bogdan Raducanu; Joaquin Salas | ||||
Title | A Social-Aware Assistant to support individuals with visual impairments during social interaction: A systematic requirements analysis | Type | Journal Article | ||
Year | 2019 | Publication | International Journal of Human-Computer Studies | Abbreviated Journal | IJHC |
Volume | 122 | Issue | Pages | 50-60 | |
Keywords | |||||
Abstract | Visual impairment affects the normal course of activities in everyday life, including mobility, education, employment, and social interaction. Most of the existing technical solutions devoted to empowering visually impaired people are in the areas of navigation (obstacle avoidance), access to printed information, and object recognition. Less effort has been dedicated so far to developing solutions that support social interactions. In this paper, we introduce a Social-Aware Assistant (SAA) that provides visually impaired people with cues to enhance their face-to-face conversations. The system consists of a perceptive component (smartglasses with an embedded video camera) and a feedback component (a haptic belt). When the vision system detects a head nodding, the belt vibrates, prompting the user to replicate (mirror) the gesture. In our experiments, sighted persons interacted with blind people wearing the SAA. We instructed the former to mirror the noddings according to the vibratory signal, while the latter interacted naturally. After the face-to-face conversation, the participants were interviewed about their experience with this new technological assistant. With the data collected during the experiment, we assessed the device's usefulness and user satisfaction both quantitatively and qualitatively. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.109; 600.120 | Approved | no | ||
Call Number | Admin @ si @ MTR2019 | Serial | 3142 | ||
Permanent link to this record | |||||
Author | Alex Gomez-Villa; Adrian Martin; Javier Vazquez; Marcelo Bertalmio; Jesus Malo | ||||
Title | On the synthesis of visual illusions using deep generative models | Type | Journal Article | ||
Year | 2022 | Publication | Journal of Vision | Abbreviated Journal | JOV |
Volume | 22 | Issue | 8 | Pages | 1-18
Keywords | |||||
Abstract | Visual illusions expand our understanding of the visual system by imposing constraints on models in two different ways: i) visual illusions for humans should induce equivalent illusions in the model, and ii) illusions synthesized from the model should be compelling for human viewers too. These constraints are alternative strategies for finding good vision models. Following the first research strategy, recent studies have shown that artificial neural network architectures also have human-like illusory percepts when stimulated with classical hand-crafted stimuli designed to fool humans. In this work we focus on the second (less explored) strategy: we propose a framework to synthesize new visual illusions using the optimization abilities of current automatic differentiation techniques. The proposed framework can be used with classical vision models as well as with more recent artificial neural network architectures. This framework, validated by psychophysical experiments, can be used to study the differences between a vision model and actual human perception and to optimize the vision model to decrease these differences. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | LAMP; 600.161; 611.007 | Approved | no | ||
Call Number | Admin @ si @ GMV2022 | Serial | 3682 | ||
Permanent link to this record | |||||
Author | Mingyi Yang; Fei Yang; Luka Murn; Marc Gorriz Blanch; Juil Sock; Shuai Wan; Fuzheng Yang; Luis Herranz | ||||
Title | Task-Switchable Pre-Processor for Image Compression for Multiple Machine Vision Tasks | Type | Journal Article | ||
Year | 2024 | Publication | IEEE Transactions on Circuits and Systems for Video Technology | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Visual content is increasingly being processed by machines for various automated content analysis tasks instead of being consumed by humans. Despite the existence of several compression methods tailored for machine tasks, few consider real-world scenarios with multiple tasks. In this paper, we aim to address this gap by proposing a task-switchable pre-processor that optimizes input images specifically for machine consumption prior to encoding by an off-the-shelf codec designed for human consumption. The proposed task-switchable pre-processor adeptly maintains relevant semantic information based on the specific characteristics of different downstream tasks, while effectively suppressing irrelevant information to reduce bitrate. To enhance the processing of semantic information for diverse tasks, we leverage pre-extracted semantic features to modulate the pixel-to-pixel mapping within the pre-processor. By switching between different modulations, multiple tasks can be seamlessly incorporated into the system. Extensive experiments demonstrate the practicality and simplicity of our approach. It significantly reduces the number of parameters required for handling multiple tasks while still delivering impressive performance. Our method showcases the potential to achieve efficient and effective compression for machine vision tasks, supporting the evolving demands of real-world applications. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | xxx | Approved | no | ||
Call Number | Admin @ si @ YYM2024 | Serial | 4007 | ||
Permanent link to this record | |||||
Author | Koen E.A. van de Sande; Theo Gevers; Cees G.M. Snoek | ||||
Title | Empowering Visual Categorization with the GPU | Type | Journal Article | ||
Year | 2011 | Publication | IEEE Transactions on Multimedia | Abbreviated Journal | TMM |
Volume | 13 | Issue | 1 | Pages | 60-70 |
Keywords | |||||
Abstract | Visual categorization is important for managing large collections of digital images and video, where textual meta-data is often incomplete or simply unavailable. The bag-of-words model has become the most powerful method for visual categorization of images and video. Despite its high accuracy, a severe drawback of this model is its high computational cost. As newer CPU and GPU architectures increase computational power primarily by increasing their level of parallelism, exploiting this parallelism becomes an important direction for handling the computational cost of the bag-of-words approach. When optimizing a system based on the bag-of-words approach, the goal is to minimize the time it takes to process batches of images. Additionally, we also consider power usage as an evaluation metric. In this paper, we analyze the bag-of-words model for visual categorization in terms of computational cost and identify two major bottlenecks: the quantization step and the classification step. We address these two bottlenecks by proposing two efficient algorithms for quantization and classification that exploit GPU hardware and the CUDA parallel programming model. The algorithms are designed to (1) keep categorization accuracy intact, (2) decompose the problem, and (3) give the same numerical results. Experiments on large-scale datasets show that, using a parallel implementation on the GeForce GTX 260 GPU, classifying unseen images is 4.8 times faster than a quad-core CPU version on the Core i7 920, while giving exactly the same numerical results. In addition, we show how the algorithms can be generalized to other applications, such as text retrieval and video retrieval. Moreover, when the obtained speedup is used to process extra video frames in a video retrieval benchmark, the accuracy of visual categorization is improved by 29%. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ SGS2011b | Serial | 1729 | ||
Permanent link to this record | |||||
Author | Jose Manuel Alvarez; Theo Gevers; Ferran Diego; Antonio Lopez | ||||
Title | Road Geometry Classification by Adaptive Shape Models | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Intelligent Transportation Systems | Abbreviated Journal | TITS |
Volume | 14 | Issue | 1 | Pages | 459-468 |
Keywords | road detection | ||||
Abstract | Vision-based road detection is important for different applications in transportation, such as autonomous driving, vehicle collision warning, and pedestrian crossing detection. Common approaches to road detection are based on low-level road appearance (e.g., color or texture) and neglect scene geometry and context. Hence, using only low-level features makes these algorithms highly dependent on structured roads, road homogeneity, and lighting conditions. Therefore, the aim of this paper is to classify road geometries for road detection through the analysis of scene composition and temporal coherence. Road geometry classification is performed by building corresponding models from training images containing prototypical road geometries. We propose adaptive shape models where spatial pyramids are steered by the inherent spatial structure of road images. To reduce the influence of lighting variations, invariant features are used. Large-scale experiments show that the proposed road geometry classifier yields a high recognition rate of 73.57% ± 13.1, clearly outperforming other state-of-the-art methods. Including road shape information improves road detection results over existing appearance-based methods. Finally, it is shown that invariant features and temporal information provide robustness against disturbing imaging conditions. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1524-9050 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS;ISE | Approved | no | ||
Call Number | Admin @ si @ AGD2013; ADAS @ adas @ | Serial | 2269 | ||
Permanent link to this record | |||||
Author | Juan Borrego-Carazo; Carles Sanchez; David Castells; Jordi Carrabina; Debora Gil | ||||
Title | BronchoPose: an analysis of data and model configuration for vision-based bronchoscopy pose estimation | Type | Journal Article | ||
Year | 2023 | Publication | Computer Methods and Programs in Biomedicine | Abbreviated Journal | CMPB |
Volume | 228 | Issue | Pages | 107241 | |
Keywords | Videobronchoscopy guiding; Deep learning; Architecture optimization; Datasets; Standardized evaluation framework; Pose estimation | ||||
Abstract | Vision-based bronchoscopy (VB) models require the registration of the virtual lung model with the frames from the video bronchoscopy to provide effective guidance during the biopsy. The registration can be achieved either by tracking the position and orientation of the bronchoscopy camera or by calibrating its deviation from the pose (position and orientation) simulated in the virtual lung model. Recent advances in neural networks and temporal image processing have provided new opportunities for guided bronchoscopy. However, such progress has been hindered by the lack of comparable experimental conditions. In the present paper, we share a novel synthetic dataset allowing for a fair comparison of methods. Moreover, this paper investigates several neural network architectures for the learning of temporal information at different levels of subject personalization. In order to improve orientation measurement, we also present a standardized comparison framework and a novel metric for camera orientation learning. Results on the dataset show that the proposed metric and architectures, as well as the standardized conditions, provide notable improvements to current state-of-the-art camera pose estimation in video bronchoscopy. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | IAM; | Approved | no | ||
Call Number | Admin @ si @ BSC2023 | Serial | 3702 | ||
Permanent link to this record | |||||
Author | Ferran Diego; Joan Serrat; Antonio Lopez | ||||
Title | Joint spatio-temporal alignment of sequences | Type | Journal Article | ||
Year | 2013 | Publication | IEEE Transactions on Multimedia | Abbreviated Journal | TMM |
Volume | 15 | Issue | 6 | Pages | 1377-1387 |
Keywords | video alignment | ||||
Abstract | Video alignment is important in different areas of computer vision, such as wide-baseline matching, action recognition, change detection, video copy detection, and frame dropping prevention. Current video alignment methods usually deal with the relatively simple case of fixed or rigidly attached cameras or simultaneous acquisition. Therefore, in this paper we propose a joint video alignment method for bringing two video sequences into spatio-temporal alignment. Specifically, the novelty of the paper is to formulate video alignment so as to fold the spatial and temporal alignment into a single alignment framework that simultaneously satisfies frame-correspondence and frame-alignment similarity, exploiting the knowledge among neighboring frames through a standard pairwise Markov random field (MRF). This new formulation is able to handle the alignment of sequences recorded at different times by independently moving cameras that follow a similar trajectory, and it also generalizes the particular cases of a fixed geometric transformation and/or a linear temporal mapping. We conduct experiments on different scenarios, such as sequences recorded simultaneously or by moving cameras, to validate the robustness of the proposed approach. The proposed method provides the highest video alignment accuracy compared to state-of-the-art methods on sequences recorded from vehicles driving along the same track at different times. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1520-9210 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ DSL2013; ADAS @ adas @ | Serial | 2228 | ||
Permanent link to this record |