Records | |||||
---|---|---|---|---|---|
Author | Partha Pratim Roy; Umapada Pal; Josep Llados | ||||
Title | Seal Object Detection in Document Images using GHT of Local Component Shapes | Type | Conference Article | ||
Year | 2010 | Publication | 10th ACM Symposium on Applied Computing | Abbreviated Journal | |
Volume | Issue | Pages | 23–27 | ||
Keywords | |||||
Abstract | Due to noise, overlapped text/signatures and their multi-oriented nature, seal (stamp) detection poses a difficult challenge. This paper deals with the automatic detection of seals in documents with cluttered backgrounds. Here, a seal object is characterized by scale- and rotation-invariant spatial feature descriptors (distance and angular position) computed from the recognition results of individual connected components (characters). Recognition of multi-scale and multi-oriented components is done using a Support Vector Machine classifier. A Generalized Hough Transform (GHT) is used to detect the seal: votes are cast for possible locations of the seal object in a document based on the spatial feature descriptors of component pairs. The peak of votes in the GHT accumulator validates the hypothesis locating the seal object in the document. Experimental results show that the method efficiently locates seal instances of arbitrary shape and orientation in documents. | ||||
Address | Sierre, Switzerland | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | SAC | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ RPL2010a | Serial | 1291 | ||
Permanent link to this record | |||||
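The pairwise GHT voting scheme summarized in the abstract above can be sketched in a few lines. This is a toy illustration, not the authors' implementation: the component labels, the pair model (expected distance plus centre offset), and the grid binning are hypothetical stand-ins for the recognised characters and spatial descriptors used in the paper.

```python
import math

def ght_vote(components, model_pairs, grid_shape, bin_size=10.0, tol=2.0):
    """Cast GHT-style votes for candidate seal centres.

    components : list of (label, (x, y)) recognised connected components.
    model_pairs: hypothetical seal model mapping a (label_a, label_b) pair
                 to (expected distance, offset of the seal centre from
                 component a).
    Returns the accumulator grid and its peak cell (the candidate seal
    location, i.e. the hypothesis with the most votes).
    """
    acc = [[0] * grid_shape[1] for _ in range(grid_shape[0])]
    for la, (xa, ya) in components:
        for lb, (xb, yb) in components:
            if (la, lb) not in model_pairs:
                continue
            exp_dist, (ox, oy) = model_pairs[(la, lb)]
            # the pair's geometry must match the model within tolerance
            if abs(math.hypot(xb - xa, yb - ya) - exp_dist) > tol:
                continue
            # vote for the seal centre implied by component a
            i, j = int((ya + oy) // bin_size), int((xa + ox) // bin_size)
            if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
                acc[i][j] += 1
    peak = max(((i, j) for i in range(grid_shape[0]) for j in range(grid_shape[1])),
               key=lambda ij: acc[ij[0]][ij[1]])
    return acc, peak
```

A consistent pair of components then accumulates votes on the same cell, so the accumulator peak localizes the seal even when individual components are noisy.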
Author | Oriol Ramos Terrades; Alejandro Hector Toselli; Nicolas Serrano; Veronica Romero; Enrique Vidal; Alfons Juan | ||||
Title | Interactive layout analysis and transcription systems for historic handwritten documents | Type | Conference Article | ||
Year | 2010 | Publication | 10th ACM Symposium on Document Engineering | Abbreviated Journal | |
Volume | Issue | Pages | 219–222 | ||
Keywords | Handwriting recognition; Interactive predictive processing; Partial supervision; Interactive layout analysis | ||||
Abstract | The number of digitized legacy documents has risen dramatically in recent years, due mainly to the increasing number of on-line digital libraries publishing such documents, which wait to be classified and finally transcribed into a textual electronic format (such as ASCII or PDF). Nevertheless, most of the available fully-automatic applications addressing this task are far from perfect, and heavy, inefficient human intervention is often required to check and correct their results. In contrast, multimodal interactive-predictive approaches allow users to participate in the process, helping the system to improve the overall performance. With this in mind, two sets of recent advances are introduced in this work: a novel interactive method for text block detection, and two multimodal interactive handwritten text transcription systems which use active learning and interactive-predictive technologies in the recognition process. | ||||
Address | Manchester, United Kingdom | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ RTS2010 | Serial | 1857 | ||
Permanent link to this record | |||||
Author | N. Serrano; L. Tarazon; D. Perez; Oriol Ramos Terrades; S. Juan | ||||
Title | The GIDOC Prototype | Type | Conference Article | ||
Year | 2010 | Publication | 10th International Workshop on Pattern Recognition in Information Systems | Abbreviated Journal | |
Volume | Issue | Pages | 82-89 | ||
Keywords | |||||
Abstract | Transcription of handwritten text in (old) documents is an important, time-consuming task for digital libraries. It might be carried out by first processing all document images off-line, and then manually supervising system transcriptions to edit incorrect parts. However, current techniques for automatic page layout analysis, text line detection and handwriting recognition are still far from perfect, and thus post-editing system output is not clearly better than simply ignoring it. A more effective approach to transcribing old text documents is to follow an interactive-predictive paradigm in which the system is guided by the user and the user is assisted by the system to complete the transcription task as efficiently as possible. Following this approach, a system prototype called GIDOC (Gimp-based Interactive transcription of old text DOCuments) has been developed to provide user-friendly, integrated support for interactive-predictive layout analysis, line detection and handwriting transcription. GIDOC is designed to work with (large) collections of homogeneous documents, that is, of similar structure and writing styles. They are annotated sequentially, by (partially) supervising hypotheses drawn from statistical models that are constantly updated with an increasing number of available annotated documents, and this is done at different annotation levels. For instance, at the level of page layout analysis, GIDOC uses a novel text block detection method in which conventional, memoryless techniques are improved with a “history” model of text block positions. Similarly, at the level of text line image transcription, GIDOC includes a handwriting recognizer which is steadily improved with a growing number of (partially) supervised transcriptions. | ||||
Address | Funchal, Portugal | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-989-8425-14-0 | Medium | ||
Area | Expedition | Conference | PRIS | ||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ STP2010 | Serial | 1868 | ||
Permanent link to this record | |||||
Author | Marco Pedersoli; Jordi Gonzalez; Andrew Bagdanov; Juan J. Villanueva | ||||
Title | Recursive Coarse-to-Fine Localization for fast Object Recognition | Type | Conference Article | ||
Year | 2010 | Publication | 11th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 6313 | Issue | II | Pages | 280–293 |
Keywords | |||||
Abstract | Cascading techniques are commonly used to speed up the scan of an image for object detection. However, cascades of detectors are slow to train due to the high number of detectors and corresponding thresholds to learn. Furthermore, they do not use any prior knowledge about the scene structure to decide where to focus the search. To handle these problems, we propose a new way to scan an image, coupling a recursive coarse-to-fine refinement with spatial constraints on the object location. To do so, we split an image into a set of uniformly distributed neighborhood regions and, for each of these, apply a local greedy search over feature resolutions. The neighborhood is defined as a scanning region that only one object can occupy; therefore, the best hypothesis is obtained as the location with maximum score, and no thresholds are needed. We present an implementation of our method using a pyramid of HOG features and evaluate it on two standard databases, the VOC2007 and INRIA datasets. Results show that Recursive Coarse-to-Fine Localization (RCFL) achieves a 12x speed-up compared to standard sliding windows. Compared with a multiple-resolution cascade approach, our method performs slightly better in both speed and Average Precision. Furthermore, in contrast to cascading approaches, the speed-up is independent of image conditions, the number of detected objects, and clutter. | ||||
Address | Crete (Greece) | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-15566-6 | Medium | |
Area | Expedition | Conference | ECCV | ||
Notes | ISE | Approved | no | ||
Call Number | DAG @ dag @ PGB2010 | Serial | 1438 | ||
Permanent link to this record | |||||
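The local greedy search over feature resolutions described in the RCFL abstract above can be illustrated with a toy score pyramid. The pyramid contents and the 2x2 parent-to-child mapping are assumptions made for illustration; the paper operates on HOG detection scores.

```python
def coarse_to_fine_localize(pyramid):
    """Greedy coarse-to-fine search over a score pyramid.

    pyramid: list of 2-D score grids, coarsest first; each level is
             assumed to double the resolution of the previous one (a toy
             stand-in for the HOG-score pyramid in the paper).
    Returns the (row, col) reached at the finest level by recursively
    descending into the best-scoring child cell, plus its score.
    """
    # start from the best-scoring cell at the coarsest level
    coarse = pyramid[0]
    r, c = max(((i, j) for i in range(len(coarse)) for j in range(len(coarse[0]))),
               key=lambda ij: coarse[ij[0]][ij[1]])
    for level in pyramid[1:]:
        # each coarse cell maps to a 2x2 block of children at the next level
        candidates = [(2 * r + di, 2 * c + dj) for di in (0, 1) for dj in (0, 1)]
        candidates = [(i, j) for i, j in candidates
                      if i < len(level) and j < len(level[0])]
        r, c = max(candidates, key=lambda ij: level[ij[0]][ij[1]])
    return (r, c), pyramid[-1][r][c]
```

Because only one refinement path is followed per neighborhood, the number of evaluated cells grows with the pyramid depth rather than with the finest grid size, which is where the claimed speed-up over dense sliding windows comes from.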
Author | Carles Fernandez; Jordi Gonzalez; Xavier Roca | ||||
Title | Automatic Learning of Background Semantics in Generic Surveilled Scenes | Type | Conference Article | ||
Year | 2010 | Publication | 11th European Conference on Computer Vision | Abbreviated Journal | |
Volume | 6313 | Issue | II | Pages | 678–692 |
Keywords | |||||
Abstract | Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, and such inferences are typically limited to abnormality. It is therefore desirable to design methods that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis. | ||||
Address | Crete (Greece) | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-15551-2 | Medium | |
Area | Expedition | Conference | ECCV | ||
Notes | ISE | Approved | no | ||
Call Number | ISE @ ise @ FGR2010 | Serial | 1439 | ||
Permanent link to this record | |||||
Author | Cesar Isaza; Joaquin Salas; Bogdan Raducanu | ||||
Title | Toward the Detection of Urban Infrastructures Edge Shadows | Type | Conference Article | ||
Year | 2010 | Publication | 12th International Conference on Advanced Concepts for Intelligent Vision Systems | Abbreviated Journal | |
Volume | 6474 | Issue | I | Pages | 30–37 |
Keywords | |||||
Abstract | In this paper, we propose a novel technique to detect the shadows cast by urban infrastructure, such as buildings, billboards, and traffic signs, using a sequence of images taken from a fixed camera. In our approach, we compute two different background models in parallel: one for the edges and one for the reflected light intensity. An algorithm is proposed to train the system to distinguish between moving edges in general and edges that belong to static objects, creating an edge background model. Then, during operation, a background intensity model allows us to separate moving from static objects. The edges included in the moving objects and those that belong to the edge background model are subtracted from the current image edges; the remaining edges are the ones cast by urban infrastructure. Our method is tested on a typical crossroad scene, and the results show that the approach is sound and promising. | ||||
Address | Sydney, Australia | ||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | Blanc-Talon et al. (eds.) |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-17687-6 | Medium | |
Area | Expedition | Conference | ACIVS | ||
Notes | OR;MV | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ ISR2010 | Serial | 1458 | ||
Permanent link to this record | |||||
Author | Alicia Fornes; Josep Llados | ||||
Title | A Symbol-dependent Writer Identification Approach in Old Handwritten Music Scores | Type | Conference Article | ||
Year | 2010 | Publication | 12th International Conference on Frontiers in Handwriting Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 634–639 | ||
Keywords | |||||
Abstract | Writer identification consists of determining the writer of a piece of handwriting from a set of writers. In this paper we introduce a symbol-dependent approach for identifying the writer of old music scores, based on two symbol recognition methods. The main idea is to use the Blurred Shape Model descriptor and a DTW-based method for detecting, recognizing and describing the music clefs and notes. The proposed approach has been evaluated on a database of old music scores, achieving very high writer identification rates. | ||||
Address | Kolkata (India) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-4244-8353-2 | Medium | ||
Area | Expedition | Conference | ICFHR | ||
Notes | DAG | Approved | no | ||
Call Number | DAG @ dag @ FoL2010 | Serial | 1321 | ||
Permanent link to this record | |||||
Author | Sergio Escalera; Petia Radeva; Jordi Vitria; Xavier Baro; Bogdan Raducanu | ||||
Title | Modelling and Analyzing Multimodal Dyadic Interactions Using Social Networks | Type | Conference Article | ||
Year | 2010 | Publication | 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Social interaction; Multimodal fusion; Influence model; Social network analysis | ||||
Abstract | Social network analysis has become a common technique for modelling and quantifying the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. First, speech detection is performed through an audio/visual fusion scheme based on stacked sequential learning. In the audio domain, speech is detected through clustering of audio features; clusters are modelled by means of a one-state Hidden Markov Model containing a diagonal-covariance Gaussian Mixture Model. In the visual domain, speech detection is performed through differential-based feature extraction from the segmented mouth region and a dynamic programming matching procedure. Second, in order to model the dyadic interactions, we employ the Influence Model, whose states encode the previously integrated audio/visual data. Third, the social network is extracted based on the estimated influences. For our study, we used a set of videos belonging to the New York Times’ Blogging Heads opinion blog. The results are reported both in terms of accuracy of the audio/visual data fusion and centrality measures used to characterize the social network. | ||||
Address | Beijing (China) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICMI-MLI | ||
Notes | OR;MILAB;HUPBA;MV | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ ERV2010 | Serial | 1427 | ||
Permanent link to this record | |||||
Author | Jose Carlos Rubio; Joan Serrat; Antonio Lopez; Daniel Ponsa | ||||
Title | Multiple-target tracking for the intelligent headlights control | Type | Conference Article | ||
Year | 2010 | Publication | 13th Annual International Conference on Intelligent Transportation Systems | Abbreviated Journal | |
Volume | Issue | Pages | 903–910 | ||
Keywords | Intelligent Headlights | ||||
Abstract | Intelligent vehicle lighting systems aim at automatically regulating the headlights' beam to illuminate as much of the road ahead as possible while avoiding dazzling other drivers. A key component of such a system is computer vision software able to distinguish blobs due to vehicles' headlights and rear lights from those due to road lamps and reflective elements such as poles and traffic signs. In a previous work, we devised a set of specialized supervised classifiers to make such decisions based on blob features related to intensity and shape. Despite the overall good performance, challenging cases remain to be solved: notably, faint and tiny blobs corresponding to quite distant vehicles. For such distant blobs, classification decisions can only be taken reliably after observing them for a few frames. Hence, incorporating tracking could improve the overall lighting-system performance by enforcing the temporal consistency of the classifier decisions. Accordingly, this paper focuses on the problem of constructing blob tracks, which is actually one of multiple-target tracking (MTT), but under two special conditions: we have to deal with frequent occlusions, as well as blob splits and merges. We approach it in a novel way by formulating the problem as maximum a posteriori inference on a Markov random field. The qualitative (in video form) and quantitative evaluation of our new MTT method shows good tracking results. In addition, the classification performance on the problematic blobs improves thanks to the proposed MTT algorithm. | ||||
Address | Madeira Island (Portugal) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ITSC | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ RSL2010 | Serial | 1422 | ||
Permanent link to this record | |||||
Author | Ferran Diego; Daniel Ponsa; Joan Serrat; Antonio Lopez | ||||
Title | Vehicle geolocalization based on video synchronization | Type | Conference Article | ||
Year | 2010 | Publication | 13th Annual International Conference on Intelligent Transportation Systems | Abbreviated Journal | |
Volume | Issue | Pages | 1511–1516 | ||
Keywords | video alignment | ||||
Abstract | This paper proposes a novel method for estimating the geospatial localization of a vehicle. It uses as input a georeferenced video sequence recorded by a forward-facing camera attached to the windscreen. The core of the proposed method is an on-line video synchronization which, for each frame recorded by the camera on a second drive along the same track, finds the corresponding frame in the georeferenced video sequence. Once the corresponding frame is found, its geospatial information is transferred to the current frame. The key advantages of this method are: 1) an increase in update rate and geospatial accuracy with respect to a standard low-cost GPS, and 2) the ability to localize the vehicle even when a GPS is not available or not reliable enough, as in certain urban areas. Experimental results for urban environments are presented, showing an average relative accuracy of 1.5 meters. | ||||
Address | Madeira Island (Portugal) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2153-0009 | ISBN | 978-1-4244-7657-2 | Medium | |
Area | Expedition | Conference | ITSC | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ DPS2010 | Serial | 1423 | ||
Permanent link to this record | |||||
Author | Ferran Diego; Jose Manuel Alvarez; Joan Serrat; Antonio Lopez | ||||
Title | Vision-based road detection via on-line video registration | Type | Conference Article | ||
Year | 2010 | Publication | 13th Annual International Conference on Intelligent Transportation Systems | Abbreviated Journal | |
Volume | Issue | Pages | 1135–1140 | ||
Keywords | video alignment; road detection | ||||
Abstract | Road segmentation is an essential functionality for supporting advanced driver assistance systems (ADAS) such as road following and vehicle and pedestrian detection. Significant efforts have been made to solve this task using vision-based techniques. The major challenge is to deal with lighting variations and the presence of objects on the road surface. In this paper, we propose a new road detection method to infer the areas of the image depicting road surfaces without performing any image segmentation. The idea is first to segment, manually or semi-automatically, the road region in a traffic-free reference video recorded on a first drive, and then to transfer these regions, on-line, to the frames of a second video sequence acquired later on a second drive along the same road. This is possible because we are able to automatically align the two videos in time and space, that is, to synchronize them and warp each frame of the first video to its corresponding frame in the second one. The geometric transform can thus transfer the road region to the present frame on-line. To cope with the varying lighting conditions of outdoor scenarios, our approach represents images in a shadowless, illuminant-invariant feature space. Furthermore, we propose a dynamic background subtraction algorithm which removes regions containing vehicles from the transferred road region in the observed frames. | ||||
Address | Madeira Island (Portugal) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2153-0009 | ISBN | 978-1-4244-7657-2 | Medium | |
Area | Expedition | Conference | ITSC | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ DAS2010 | Serial | 1424 | ||
Permanent link to this record | |||||
Author | David Aldavert; Arnau Ramisa; Ramon Lopez de Mantaras; Ricardo Toledo | ||||
Title | Real-time Object Segmentation using a Bag of Features Approach | Type | Conference Article | ||
Year | 2010 | Publication | 13th International Conference of the Catalan Association for Artificial Intelligence | Abbreviated Journal | |
Volume | 220 | Issue | Pages | 321–329 | |
Keywords | Object Segmentation; Bag Of Features; Feature Quantization; Densely sampled descriptors | ||||
Abstract | In this paper, we propose an object segmentation framework based on the popular bag of features (BoF) approach, which can process several images per second while achieving good segmentation accuracy, assigning an object category to every pixel of the image. We propose an efficient color descriptor to complement the information obtained by a typical gradient-based local descriptor. Results show that color proves to be a useful cue to increase segmentation accuracy, especially in large homogeneous regions. Then, we extend the Hierarchical K-Means codebook using the recently proposed Vector of Locally Aggregated Descriptors method. Finally, we show that the BoF method can be easily parallelized since it is applied locally, further reducing the time needed to process an image. The performance of the proposed method is evaluated on the standard PASCAL 2007 Segmentation Challenge dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | IOS Press, Amsterdam | Place of Publication | Editor | R. Alquezar, A. Moreno, J. Aguilar |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 9781607506423 | Medium | ||
Area | Expedition | Conference | CCIA | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ ARL2010b | Serial | 1417 | ||
Permanent link to this record | |||||
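The descriptor-quantization step at the heart of the bag-of-features pipeline above can be sketched as a flat nearest-neighbour assignment followed by histogram normalization. This is a deliberate simplification: the paper uses a hierarchical k-means codebook and VLAD aggregation, neither of which is modelled here, and the codebook below is a made-up example.

```python
def bof_histogram(descriptors, codebook):
    """Quantise local descriptors against a codebook and return a
    normalised bag-of-features histogram.

    descriptors: list of feature vectors (e.g. local gradient/color
                 descriptors, as in the paper).
    codebook   : list of codeword vectors (toy flat codebook; the paper
                 uses hierarchical k-means for speed).
    """
    hist = [0] * len(codebook)
    for d in descriptors:
        # assign each descriptor to its nearest codeword (squared Euclidean)
        best = min(range(len(codebook)),
                   key=lambda k: sum((a - b) ** 2 for a, b in zip(d, codebook[k])))
        hist[best] += 1
    total = sum(hist) or 1          # guard against an empty descriptor set
    return [h / total for h in hist]
```

Applied over a local window around each pixel, such histograms become the per-pixel features that a classifier labels with an object category, which is why the method parallelizes so easily.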
Author | Eloi Puertas; Sergio Escalera; Oriol Pujol | ||||
Title | Classifying Objects at Different Sizes with Multi-Scale Stacked Sequential Learning | Type | Conference Article | ||
Year | 2010 | Publication | 13th International Conference of the Catalan Association for Artificial Intelligence | Abbreviated Journal | |
Volume | 220 | Issue | Pages | 193–200 | |
Keywords | |||||
Abstract | Sequential learning is the discipline of machine learning that deals with dependent data. In this paper, we use the Multi-scale Stacked Sequential Learning (MSSL) approach to solve the task of pixel-wise classification based on contextual information. The main contribution of this work is a shifting technique applied during the testing phase that makes it possible, thanks to template images, to classify objects at different sizes. The results show that the proposed method robustly classifies such objects, capturing their spatial relationships. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | R. Alquezar, A. Moreno, J. Aguilar | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-1-60750-642-3 | Medium | ||
Area | Expedition | Conference | CCIA | ||
Notes | HUPBA;MILAB | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ PEP2010 | Serial | 1448 | ||
Permanent link to this record | |||||
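The multi-scale contextual features that stacked sequential learning feeds to its second-stage classifier can be illustrated with a toy sketch: for each pixel, the base classifier's prediction map is summarized over neighbourhoods of growing size, one value per scale. The square neighbourhoods and mean pooling here are illustrative assumptions, not the paper's exact configuration.

```python
def mssl_features(pred_map, scales):
    """Multi-scale contextual features for stacked sequential learning.

    pred_map: 2-D grid of base-classifier confidences, one per pixel.
    scales  : list of neighbourhood radii; for each radius s, the feature
              is the mean prediction over the (2s+1)x(2s+1) window
              (clipped at the image border).
    Returns, for every pixel, a list with one contextual feature per scale.
    """
    h, w = len(pred_map), len(pred_map[0])
    feats = [[[] for _ in range(w)] for _ in range(h)]
    for s in scales:
        for i in range(h):
            for j in range(w):
                vals = [pred_map[ii][jj]
                        for ii in range(max(0, i - s), min(h, i + s + 1))
                        for jj in range(max(0, j - s), min(w, j + s + 1))]
                feats[i][j].append(sum(vals) / len(vals))
    return feats
```

Stacking these features next to the original ones lets the second-stage classifier exploit spatial context at several scales, which is the property the shifting technique in the paper builds on.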
Author | Carlo Gatta; Simone Balocco; Francesco Ciompi; R. Hemetsberger; Oriol Rodriguez-Leor; Petia Radeva | ||||
Title | Real-time gating of IVUS sequences based on motion blur analysis: Method and quantitative validation | Type | Conference Article | ||
Year | 2010 | Publication | 13th International Conference on Medical Image Computing and Computer-Assisted Intervention | Abbreviated Journal | |
Volume | II | Issue | Pages | 59-67 | |
Keywords | |||||
Abstract | Intravascular Ultrasound (IVUS) is an image-guiding technique for cardiovascular diagnosis, providing cross-sectional images of vessels. During the acquisition, the catheter is pulled back (pullback) at a constant speed in order to acquire spatially consecutive images of the artery. However, during this procedure, the heart's twisting motion produces a swinging fluctuation of the probe position along the vessel axis. In this paper we propose a real-time gating algorithm based on the analysis of motion-blur variations during the IVUS sequence. Quantitative tests performed on an in-vitro ground-truth database show that our method is superior to state-of-the-art algorithms in both computational speed and accuracy. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer-Verlag Berlin | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MICCAI | ||
Notes | MILAB | Approved | no | ||
Call Number | BCNPCL @ bcnpcl @ GBC2010 | Serial | 1447 | ||
Permanent link to this record | |||||
Author | Diego Alejandro Cheda; Daniel Ponsa; Antonio Lopez | ||||
Title | Camera Egomotion Estimation in the ADAS Context | Type | Conference Article | ||
Year | 2010 | Publication | 13th International IEEE Annual Conference on Intelligent Transportation Systems | Abbreviated Journal | |
Volume | Issue | Pages | 1415–1420 | ||
Keywords | |||||
Abstract | Camera-based Advanced Driver Assistance Systems (ADAS) have concentrated many research efforts in the last decades. Proposals based on monocular cameras require knowledge of the camera pose with respect to the environment in order to reach efficient and robust performance. A common assumption in such systems is to consider the road as planar and the camera pose with respect to it as approximately known. However, in real situations, the camera pose varies over time due to the vehicle movement, the road slope, and irregularities on the road surface. Thus, the changes in camera position and orientation (i.e., the egomotion) are critical information that must be estimated at every frame to avoid poor performance. This work focuses on egomotion estimation from a monocular camera in the ADAS context. We review and compare egomotion methods on simulated and real ADAS-like sequences. Based on the results of our experiments, we show which of the considered nonlinear and linear algorithms have the best performance in this domain. | ||||
Address | Madeira Island (Portugal) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2153-0009 | ISBN | 978-1-4244-7657-2 | Medium | |
Area | Expedition | Conference | ITSC | ||
Notes | ADAS | Approved | no | ||
Call Number | ADAS @ adas @ CPL2010 | Serial | 1425 | ||
Permanent link to this record |