Author |
Naveen Onkarappa; Sujay M. Veerabhadrappa; Angel Sappa |
|
|
Title |
Optical Flow in Onboard Applications: A Study on the Relationship Between Accuracy and Scene Texture |
Type |
Conference Article |
|
Year |
2012 |
Publication |
4th International Conference on Signal and Image Processing |
Abbreviated Journal |
|
|
|
Volume |
221 |
Issue |
|
Pages |
257-267 |
|
|
Keywords |
|
|
|
Abstract |
Optical flow plays a major role in making advanced driver assistance systems (ADAS) a reality. ADAS applications are expected to perform efficiently in all kinds of environments, since a vehicle may be driven on different kinds of roads, at different times of day, and in different seasons. In this work, we study the relationship between optical flow and road type by analyzing optical flow accuracy on different road textures. Texture measures such as TeX, TeX and TeX are evaluated for this purpose. Further, the relation of the regularization weight to flow accuracy in the presence of different textures is also analyzed. Additionally, we present a framework for generating synthetic sequences of different textures in ADAS scenarios with ground-truth optical flow. |
|
|
Address |
Coimbatore, India |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1876-1100 |
ISBN |
978-81-322-0996-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICSIP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ OVS2012 |
Serial |
2356 |
|
Permanent link to this record |
|
|
|
|
Author |
G.D. Evangelidis; Ferran Diego; Joan Serrat; Antonio Lopez |
|
|
Title |
Slice Matching for Accurate Spatio-Temporal Alignment |
Type |
Conference Article |
|
Year |
2011 |
Publication |
ICCV Workshop on Visual Surveillance |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
video alignment |
|
|
Abstract |
Video synchronization and alignment is a rather recent topic in computer vision. It usually deals with the problem of aligning sequences recorded simultaneously by static, jointly- or independently-moving cameras. In this paper, we investigate the more difficult problem of matching videos captured at different times from independently-moving cameras whose trajectories are approximately coincident or parallel. To this end, we propose a novel method that aligns videos pixel-wise and thus allows their differences to be highlighted automatically. This primarily aims at visual surveillance, but the method can be adopted as is by other related video applications, like object transfer (augmented reality) or high dynamic range video. We build upon a slice matching scheme to first synchronize the sequences, while we develop a spatio-temporal alignment scheme to spatially register corresponding frames and refine the temporal mapping. We investigate the performance of the proposed method on videos recorded from vehicles driven along different types of roads and compare with related previous works. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VS |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ EDS2011; ADAS @ adas @ eds2011a |
Serial |
1861 |
|
Permanent link to this record |
|
|
|
|
Author |
G. Roig; Xavier Boix; F. de la Torre; Joan Serrat; C. Vilella |
|
|
Title |
Hierarchical CRF with product label spaces for parts-based Models |
Type |
Conference Article |
|
Year |
2011 |
Publication |
IEEE Conference on Automatic Face and Gesture Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Non-rigid object detection is a challenging and open research problem in computer vision. It is a critical part of many applications such as image search, surveillance, human-computer interaction and image auto-annotation. Most successful approaches to non-rigid object detection make use of part-based models. In particular, Conditional Random Fields (CRFs) have been successfully embedded into a discriminative parts-based model framework due to their effectiveness for learning and inference (usually based on a tree structure). However, CRF-based approaches do not incorporate global constraints and only model pairwise interactions. This is especially important when modeling object classes that may have complex part interactions (e.g., facial features or body articulations), because neglecting them yields an oversimplified model with suboptimal performance. To overcome this limitation, this paper proposes a novel hierarchical CRF (HCRF). The main contribution is to build a hierarchy of part combinations by extending the label set to a hierarchy of product label spaces. In order to keep the inference computation tractable, we propose an effective method to reduce the new label set. We test our method on two applications: facial feature detection on the Multi-PIE database and human pose estimation on the Buffy dataset. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
FG |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RBT2011 |
Serial |
1862 |
|
Permanent link to this record |
|
|
|
|
Author |
David Geronimo; Frederic Lerasle; Antonio Lopez |
|
|
Title |
State-driven particle filter for multi-person tracking |
Type |
Conference Article |
|
Year |
2012 |
Publication |
11th International Conference on Advanced Concepts for Intelligent Vision Systems |
Abbreviated Journal |
|
|
|
Volume |
7517 |
Issue |
|
Pages |
467-478 |
|
|
Keywords |
human tracking |
|
|
Abstract |
Multi-person tracking can be exploited in applications such as driver assistance, surveillance, multimedia and human-robot interaction. With the help of human detectors, particle filters offer a robust method able to filter noisy detections and provide temporal coherence. However, some traditional problems, such as occlusions with other targets or the scene, temporal drifting, or the detection of lost targets, are rarely considered, degrading system performance. Some authors propose to overcome these problems using heuristics that are not explained and formalized in the papers, for instance by defining exceptions to the model updating depending on track overlapping. In this paper we propose to formalize these events by the use of a state-graph, defining the current state of the track (e.g., potential, tracked, occluded or lost) and the transitions between states in an explicit way. This approach has the advantage of linking track states to actions such as the online updating of the underlying models, which gives flexibility to the system. It provides an explicit representation to adapt the multiple parallel trackers depending on the context, i.e., each track can make use of a specific filtering strategy, dynamic model, number of particles, etc., depending on its state. We implement this technique in a single-camera multi-person tracker and test it on public video sequences. |
|
|
Address |
Brno, Czech Republic |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer |
Place of Publication |
Heidelberg |
Editor |
J. Blanc-Talon et al. |
|
|
Language |
English |
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ACIVS |
|
|
Notes |
ADAS |
Approved |
yes |
|
|
Call Number |
GLL2012; ADAS @ adas @ gll2012a |
Serial |
1990 |
|
Permanent link to this record |
|
|
|
|
Author |
David Vazquez; Antonio Lopez; Daniel Ponsa |
|
|
Title |
Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection |
Type |
Conference Article |
|
Year |
2012 |
Publication |
21st International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3492 - 3495 |
|
|
Keywords |
Pedestrian Detection; Domain Adaptation; Virtual worlds |
|
|
Abstract |
Vision-based object detectors are crucial for different applications. They rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate and lead it to self-learn how to distinguish the objects of interest, i.e., without human intervention. However, the learning of each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoid manual labelling consisted in using samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1). |
|
|
Address |
Tsukuba Science City, Japan |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE |
Place of Publication |
Tsukuba Science City, Japan |
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1051-4651 |
ISBN |
978-1-4673-2216-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ VLP2012 |
Serial |
1981 |
|
Permanent link to this record |
|
|
|
|
Author |
Jose Carlos Rubio; Joan Serrat; Antonio Lopez; N. Paragios |
|
|
Title |
Image Contextual Representation and Matching through Hierarchies and Higher Order Graphs |
Type |
Conference Article |
|
Year |
2012 |
Publication |
21st International Conference on Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
2664 - 2667 |
|
|
Keywords |
|
|
|
Abstract |
We present a region matching algorithm which establishes correspondences between regions from two segmented images. An abstract graph-based representation encodes the image as a hierarchical graph, exploiting the scene properties at two levels. First, the similarity and spatial consistency of the image semantic objects is encoded in a graph of commute times. Second, the cluttered regions of the semantic objects are represented with a shape descriptor. Many-to-many matching of regions is especially challenging due to the instability of the segmentation under slight image changes, and we explicitly handle it through high order potentials. We demonstrate the matching approach applied to images of world famous buildings, captured under different conditions, showing the robustness of our method to large variations in illumination and viewpoint. |
|
|
Address |
Tsukuba Science City, Japan |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1051-4651 |
ISBN |
978-1-4673-2216-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RSL2012a |
Serial |
2032 |
|
Permanent link to this record |
|
|
|
|
Author |
Diego Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Monocular Egomotion Estimation based on Image Matching |
Type |
Conference Article |
|
Year |
2012 |
Publication |
1st International Conference on Pattern Recognition Applications and Methods |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
425-430 |
|
|
Keywords |
SLAM |
|
|
Abstract |
|
|
|
Address |
Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICPRAM |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPL2012a; ADAS @ adas @ |
Serial |
2011 |
|
Permanent link to this record |
|
|
|
|
Author |
Diego Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Monocular Depth-based Background Estimation |
Type |
Conference Article |
|
Year |
2012 |
Publication |
7th International Conference on Computer Vision Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
323-328 |
|
|
Keywords |
|
|
|
Abstract |
In this paper, we address the problem of reconstructing the background of a scene from a video sequence with occluding objects. The images are taken by hand-held cameras. Our method composes the background by selecting the appropriate pixels from previously aligned input images. To do so, we minimize a cost function that penalizes deviations from the following assumptions: the background consists of objects whose distance to the camera is maximal, and background objects are stationary. Distance information is roughly obtained by a supervised learning approach that allows us to distinguish between close and distant image regions. Moving foreground objects are filtered out using stationariness and motion boundary constancy measurements. The cost function is minimized by a graph cuts method. We demonstrate the applicability of our approach to recover an occlusion-free background in a set of sequences. |
|
|
Address |
Rome, Italy |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPL2012b; ADAS @ adas @ cpl2012e |
Serial |
2012 |
|
Permanent link to this record |
|
|
|
|
Author |
Diego Cheda; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Pedestrian Candidates Generation using Monocular Cues |
Type |
Conference Article |
|
Year |
2012 |
Publication |
IEEE Intelligent Vehicles Symposium |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
7-12 |
|
|
Keywords |
pedestrian detection |
|
|
Abstract |
Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model which is focused only on objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is reached. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
IEEE Xplore |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1931-0587 |
ISBN |
978-1-4673-2119-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IV |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ CPL2012c; ADAS @ adas @ cpl2012d |
Serial |
2013 |
|
Permanent link to this record |
|
|
|
|
Author |
Fernando Barrera; Felipe Lumbreras; Angel Sappa |
|
|
Title |
Evaluation of Similarity Functions in Multimodal Stereo |
Type |
Conference Article |
|
Year |
2012 |
Publication |
9th International Conference on Image Analysis and Recognition |
Abbreviated Journal |
|
|
|
Volume |
7324 |
Issue |
I |
Pages |
320-329 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents an evaluation framework for multimodal stereo matching, which allows comparing the performance of four similarity functions. Additionally, it presents details of a multimodal stereo head that supplies thermal infrared and color images, as well as aspects of its calibration and rectification. The pipeline includes a novel method for disparity selection, which is suitable for evaluating the similarity functions. Finally, a benchmark for comparing different initializations of the proposed framework is presented. Similarity functions are based on mutual information, gradient orientation and scale space representations. Their evaluation is performed using two metrics: i) disparity error, and ii) number of correct matches on planar regions. In addition to the proposed evaluation, the current paper also shows that 3D sparse representations can be recovered from such a multimodal stereo head. |
|
|
Address |
Aveiro, Portugal |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-31294-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIAR |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
BLS2012a |
Serial |
2014 |
|
Permanent link to this record |