Author |
Naveen Onkarappa; Angel Sappa |

|
|
Title |
Synthetic sequences and ground-truth flow field generation for algorithm validation |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Multimedia Tools and Applications |
Abbreviated Journal |
MTAP |
|
|
Volume |
74 |
Issue |
9 |
Pages |
3121-3135 |
|
|
Keywords |
Ground-truth optical flow; Synthetic sequence; Algorithm validation |
|
|
Abstract |
Research in computer vision advances through the availability of good datasets, which help to improve algorithms, validate results and enable comparative analyses. Datasets can be real or synthetic. For some computer vision problems, such as optical flow estimation, it is not possible to obtain highly accurate ground truth in natural outdoor scenarios directly from any sensor, although ground-truth data for real scenes can be acquired in a laboratory setup with limited motion. In this difficult situation, computer graphics offers a viable option for creating realistic virtual scenarios. In the current work we present a framework to design virtual scenes and generate sequences as well as ground-truth flow fields. In particular, we generate a dataset containing sequences of driving scenarios. The sequences vary in the speed of the on-board vision system, the road texture, the complexity of the vehicle's motion, and the presence of independently moving vehicles in the scene. This dataset enables the analysis and adaptation of existing optical flow methods, and encourages new approaches, particularly for driver assistance systems. |
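A rough, hedged sketch of how ground-truth optical flow can be derived from a synthetic scene: each pixel is back-projected with a known depth map and re-projected after a known camera motion, under a pinhole model. The function name, the pinhole setup and the use of NumPy are illustrative assumptions; the paper's actual pipeline is built on a graphics framework and also handles independently moving objects.

```python
import numpy as np

def ground_truth_flow(depth, K, R, t):
    """Hypothetical sketch: dense ground-truth flow for a static synthetic scene,
    given per-pixel depth, intrinsics K, and camera motion (R, t) between frames.
    Not the paper's pipeline; a generic pinhole-camera derivation."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    rays = np.linalg.inv(K) @ pix                  # back-project pixels
    X = rays * depth.reshape(1, -1)                # 3D points in frame 1

    X2 = R @ X + t.reshape(3, 1)                   # move into frame 2
    p2 = K @ X2
    p2 = p2[:2] / p2[2]                            # re-projected pixel coords

    return (p2 - pix[:2]).T.reshape(h, w, 2)       # per-pixel (du, dv)
```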
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer US |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1380-7501 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.055; 601.215; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OnS2014b |
Serial |
2472 |
|
Permanent link to this record |
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Marçal Rusiñol; Aura Hernandez-Sabate |


|
|
Title |
Feature Extraction by Using Dual-Generalized Discriminative Common Vectors |
Type |
Journal Article |
|
Year |
2019 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
61 |
Issue |
3 |
Pages |
331-351 |
|
|
Keywords |
Online feature extraction; Generalized discriminative common vectors; Dual learning; Incremental learning; Decremental learning |
|
|
Abstract |
In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by simultaneously exploiting both incremental and decremental learning for supervised feature extraction and classification. Our methodology is able to update the feature representation space without recalculating the full projection or accessing the previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base in an efficient way, while retaining the previously acquired knowledge. The proposed method has been theoretically proved and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model, in comparison with both batch methodologies and other state-of-the-art adaptive methods. |
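A toy, hedged sketch of the incremental/decremental idea only: adding or removing a sample updates running statistics without revisiting stored data. This is not the Dual-GDCV algorithm (which updates a discriminative subspace); all names are illustrative.

```python
import numpy as np

class RunningScatter:
    """Keeps a sum and an outer-product sum so that a scatter matrix can be
    updated when samples are added (incremental) or removed (decremental)
    without reprocessing previously seen data. Illustration only."""
    def __init__(self, dim):
        self.n, self.sum, self.outer = 0, np.zeros(dim), np.zeros((dim, dim))

    def add(self, x):
        self.n += 1
        self.sum += x
        self.outer += np.outer(x, x)

    def remove(self, x):
        # Decremental update: undo the contribution of a previously added sample.
        self.n -= 1
        self.sum -= x
        self.outer -= np.outer(x, x)

    def scatter(self):
        mean = self.sum / self.n
        return self.outer - self.n * np.outer(mean, mean)
```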
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.084; 600.118; 600.121; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DRR2019 |
Serial |
3172 |
|
Permanent link to this record |
|
|
|
|
Author |
Katerine Diaz; Jesus Martinez del Rincon; Aura Hernandez-Sabate; Marçal Rusiñol; Francesc J. Ferri |


|
|
Title |
Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction |
Type |
Journal Article |
|
Year |
2018 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
60 |
Issue |
4 |
Pages |
512-524 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT) and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model. |
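As a generic, hedged illustration of the kernel trick mentioned in the abstract, the sketch below performs a kernel-PCA-style nonlinear projection with an RBF kernel, never forming the explicit feature map. KGDCV additionally imposes the discriminative common-vector criterion, which is not reproduced here; the kernel choice, gamma and function names are assumptions.

```python
import numpy as np

def kernel_trick_features(X, X_new, n_components=10, gamma=0.5):
    """Kernel-PCA-style projection as an illustration of the kernel trick;
    not the KGDCV method itself."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    n = len(X)
    K = rbf(X, X)
    ones = np.full((n, n), 1.0 / n)
    Kc = K - ones @ K - K @ ones + ones @ K @ ones        # center in feature space

    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))

    # Project new samples using only kernel evaluations against the training set.
    K_new = rbf(X_new, X)
    one_m = np.full((len(X_new), n), 1.0 / n)
    K_new_c = K_new - one_m @ K - K_new @ ones + one_m @ K @ ones
    return K_new_c @ alphas
```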
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.086; 600.130; 600.121; 600.118; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ DMH2018a |
Serial |
3062 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |

|
|
Title |
A Novel Space Variant Image Representation |
Type |
Journal Article |
|
Year |
2013 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
47 |
Issue |
1-2 |
Pages |
48-59 |
|
|
Keywords |
Space-variant representation; Log-polar mapping; Onboard vision applications |
|
|
Abstract |
Traditionally, machine vision represents images using Cartesian coordinates with uniform sampling along the axes. In contrast, biological vision systems represent images using polar coordinates with non-uniform sampling. Because of the various advantages offered by space-variant representations, many researchers are interested in space-variant computer vision. In this direction, the current work proposes a novel and simple space-variant representation of images. The proposed representation is compared with the classical log-polar mapping. The log-polar representation is motivated by biological vision, with its characteristic higher resolution at the fovea and reduced resolution at the periphery. Contrary to the log-polar mapping, the proposed representation has higher resolution at the periphery and lower resolution at the fovea. Our proposal proves to be a better representation in navigational scenarios such as driver assistance systems and robotics. The experimental results involve analysis of optical flow fields computed on both the proposed and the log-polar representations. Additionally, an egomotion estimation application is shown as an illustrative example. The experimental analysis comprises results from synthetic as well as real sequences. |
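For context, the sketch below resamples an image with a classical log-polar scheme, the baseline the paper compares against; the proposed representation instead concentrates resolution at the periphery, which one could approximate by reversing the radial spacing. Sampling counts and function names are illustrative assumptions.

```python
import numpy as np

def log_polar_sample(img, n_rings=64, n_wedges=128):
    """Classical log-polar resampling (dense at the fovea, coarse at the
    periphery); the paper's proposal reverses this resolution profile."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cx, cy)
    radii = np.exp(np.linspace(0.0, np.log(r_max), n_rings))       # log spacing
    angles = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return img[ys, xs]                                             # rings x wedges
```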
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer US |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0924-9907 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 600.055; 605.203; 601.215 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OnS2013a |
Serial |
2243 |
|
Permanent link to this record |
|
|
|
|
Author |
Carme Julia; Angel Sappa; Felipe Lumbreras; Joan Serrat; Antonio Lopez |


|
|
Title |
Rank Estimation in Missing Data Matrix Problems |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Journal of Mathematical Imaging and Vision |
Abbreviated Journal |
JMIV |
|
|
Volume |
39 |
Issue |
2 |
Pages |
140-160 |
|
|
Keywords |
|
|
|
Abstract |
A novel technique for missing data matrix rank estimation is presented. It is focused on matrices of trajectories, where every element of the matrix corresponds to an image coordinate of a feature point of a rigid moving object at a given frame; missing data are represented as empty entries. The objective of the proposed approach is to estimate the rank of a missing data matrix in order to fill in empty entries with some matrix completion method, without using or assuming either the number of objects contained in the scene or the kind of motion they undergo. The key point of the proposed technique consists in studying the frequency behaviour of the individual trajectories, which are seen as 1D signals. The main assumption is that, due to the rigidity of the moving objects, the frequency content of the trajectories will be similar after filling in their missing entries. The proposed rank estimation approach can be used in different computer vision problems where the rank of a missing data matrix needs to be estimated. Experimental results with synthetic and real data are provided in order to empirically show the good performance of the proposed approach. |
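A loose, hedged sketch of the frequency-based intuition: for each candidate rank, the matrix is completed by iterative truncated-SVD imputation, and the completion introducing the least spurious high-frequency content in the trajectories is preferred. This is only an illustration, not the paper's exact criterion; the imputation scheme, cutoff and function names are assumptions, and rows are assumed to hold one coordinate of a feature point across frames.

```python
import numpy as np

def complete_rank_r(M, mask, r, n_iter=50):
    """Iterative truncated-SVD imputation of a missing-data matrix at rank r
    (mask is True where entries are observed). Generic illustration only."""
    X = np.where(mask, M, M[mask].mean())
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = np.where(mask, M, (U[:, :r] * s[:r]) @ Vt[:r])   # keep observed data
    return X

def high_freq_energy(X, cutoff=0.25):
    """Fraction of spectral energy above a normalized frequency cutoff,
    averaged over rows (trajectories treated as 1D signals)."""
    spec = np.abs(np.fft.rfft(X, axis=1))
    k = int(cutoff * spec.shape[1])
    return float(np.mean(spec[:, k:].sum(axis=1) / (spec.sum(axis=1) + 1e-12)))

def estimate_rank(M, mask, candidate_ranks=range(1, 10)):
    # Rigid-motion trajectories should stay smooth once completed, so prefer the
    # rank whose completion adds the least high-frequency content.
    scores = {r: high_freq_energy(complete_rank_r(M, mask, r))
              for r in candidate_ranks}
    return min(scores, key=scores.get)
```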
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0924-9907 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ JSL2011 |
Serial |
1710 |
|
Permanent link to this record |
|
|
|
|
Author |
Arnau Ramisa; David Aldavert; Shrihari Vasudevan; Ricardo Toledo; Ramon Lopez de Mantaras |

|
|
Title |
Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot |
Type |
Journal Article |
|
Year |
2012 |
Publication |
Journal of Intelligent and Robotic Systems |
Abbreviated Journal |
JIRC |
|
|
Volume |
68 |
Issue |
2 |
Pages |
185-208 |
|
|
Keywords |
|
|
|
Abstract |
This paper addresses visual object perception applied to mobile robotics. Being able to perceive household objects in unstructured environments is a key capability for making robots suitable to perform complex tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability of image formation from a moving camera under tight time constraints. The paper brings to attention some of the issues of applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. Thus, this work aims at evaluating the state of the art in object perception in an attempt to develop a lightweight solution for mobile robotics use/research in typical indoor settings. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Netherlands |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0921-0296 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RAV2012 |
Serial |
2150 |
|
Permanent link to this record |
|
|
|
|
Author |
Arnau Ramisa; Alex Goldhoorn; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras |

|
|
Title |
Combining Invariant Features and the ALV Homing Method for Autonomous Robot Navigation Based on Panoramas |
Type |
Journal Article |
|
Year |
2011 |
Publication |
Journal of Intelligent and Robotic Systems |
Abbreviated Journal |
JIRC |
|
|
Volume |
64 |
Issue |
3-4 |
Pages |
625-649 |
|
|
Keywords |
|
|
|
Abstract |
Biologically inspired homing methods, such as the Average Landmark Vector, are an interesting solution for local navigation due to their simplicity. However, they usually require a modification of the environment, placing artificial landmarks, in order to work reliably. In this paper we combine the Average Landmark Vector with invariant feature points automatically detected in panoramic images to overcome this limitation. The proposed approach has been evaluated first in simulation and, as promising results were found, also on two datasets of panoramas from real-world environments. |
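A minimal sketch of the Average Landmark Vector model, assuming landmark bearings extracted from panoramas and a shared compass reference; in the paper the landmarks are invariant feature points matched automatically rather than artificial markers, and the sign convention may differ.

```python
import numpy as np

def average_landmark_vector(bearings):
    """bearings: angles (radians) towards detected landmarks in a panorama.
    The ALV is the mean of the unit vectors pointing at the landmarks."""
    return np.stack([np.cos(bearings), np.sin(bearings)], axis=1).mean(axis=0)

def homing_vector(current_bearings, home_bearings):
    # Under the ALV model, the difference between the two average landmark
    # vectors points approximately from the current location towards home.
    return (average_landmark_vector(current_bearings)
            - average_landmark_vector(home_bearings))
```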
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Netherlands |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0921-0296 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
RV; ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGA2011 |
Serial |
1728 |
|
Permanent link to this record |
|
|
|
|
Author |
Arnau Ramisa; Adriana Tapus; David Aldavert; Ricardo Toledo; Ramon Lopez de Mantaras |

|
|
Title |
Robust Vision-Based Localization using Combinations of Local Feature Regions Detectors |
Type |
Journal Article |
|
Year |
2009 |
Publication |
Autonomous Robots |
Abbreviated Journal |
AR |
|
|
Volume |
27 |
Issue |
4 |
Pages |
373-385 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature. This signature consists of a constellation of descriptors computed over different types of local affine covariant regions extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits a reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and also to find out whether the combination of complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that, if false matches are effectively rejected, the combination of different affine covariant region detectors notably increases the performance of the approach by combining the strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure and using a standard perspective field of view of 45°.
In order to systematically test topological localization methods, another contribution of this work is a novel method to measure the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. To be effective, this must be done in several varied environments that cover all the possible situations in which the robot may have to perform localization. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0929-5593 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RTA2009 |
Serial |
1245 |
|
Permanent link to this record |
|
|
|
|
Author |
Jaume Amores |


|
|
Title |
MILDE: multiple instance learning by discriminative embedding |
Type |
Journal Article |
|
Year |
2015 |
Publication |
Knowledge and Information Systems |
Abbreviated Journal |
KAIS |
|
|
Volume |
42 |
Issue |
2 |
Pages |
381-407 |
|
|
Keywords |
Multi-instance learning; Codebook; Bag of words |
|
|
Abstract |
While the objective of the standard supervised learning problem is to classify feature vectors, in the multiple instance learning problem the objective is to classify bags, where each bag contains multiple feature vectors. This represents a generalization of the standard problem, and this generalization becomes necessary in many real applications such as drug activity prediction, content-based image retrieval, and others. While the existing paradigms are based on learning the discriminant information either at the instance level or at the bag level, we propose to incorporate both levels of information. This is done by defining a discriminative embedding of the original space based on the responses of cluster-adapted instance classifiers. Results clearly show the advantage of the proposed method over the state of the art. We tested the performance on a variety of well-known databases that come from real problems, and we also included an analysis of the performance using synthetically generated data. |
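The sketch below follows the general idea stated in the abstract: instances are clustered, one instance classifier is adapted per cluster, and each bag is embedded through the maximum classifier responses before a bag-level classifier is trained. It is a hedged approximation using scikit-learn, not the exact MILDE formulation; all names and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC

def milde_like_embedding(bags, labels, n_clusters=10):
    """bags: list of (n_i, d) instance arrays; labels: bag labels in {0, 1}.
    Returns a bag classifier and the embedding function. Illustration only."""
    X = np.vstack(bags)
    y_inst = np.concatenate([np.full(len(b), l) for b, l in zip(bags, labels)])

    assign = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    clfs = []
    for c in range(n_clusters):
        idx = assign == c
        if np.unique(y_inst[idx]).size < 2:
            continue  # skip degenerate clusters containing a single bag label
        # Instances inherit their bag's label (the usual MIL relaxation).
        clfs.append(LogisticRegression(max_iter=1000).fit(X[idx], y_inst[idx]))

    def embed(bag):
        # One coordinate per cluster: the strongest classifier response in the bag.
        return np.array([clf.decision_function(bag).max() for clf in clfs])

    Z = np.vstack([embed(b) for b in bags])
    return LinearSVC().fit(Z, labels), embed
```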
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer London |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0219-1377 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ADAS; 601.042; 600.057; 600.076 |
Approved |
no |
|
|
Call Number |
Admin @ si @ Amo2015 |
Serial |
2383 |
|
Permanent link to this record |
|
|
|
|
Author |
David Aldavert; Marçal Rusiñol; Ricardo Toledo; Josep Llados |

|
|
Title |
A Study of Bag-of-Visual-Words Representations for Handwritten Keyword Spotting |
Type |
Journal Article |
|
Year |
2015 |
Publication |
International Journal on Document Analysis and Recognition |
Abbreviated Journal |
IJDAR |
|
|
Volume |
18 |
Issue |
3 |
Pages |
223-234 |
|
|
Keywords |
Bag-of-Visual-Words; Keyword spotting; Handwritten documents; Performance evaluation |
|
|
Abstract |
The Bag-of-Visual-Words (BoVW) framework has gained popularity in the document image analysis community, specifically as a representation of handwritten words for recognition or spotting purposes. Although in the computer vision field the BoVW method has been greatly improved, most of the approaches in the document image analysis domain still rely on the basic implementation of the BoVW method, disregarding such latest refinements. In this paper, we present a review of those improvements and their application to the keyword spotting task. We thoroughly evaluate their impact against a baseline system on the well-known George Washington dataset and compare the obtained results against nine state-of-the-art keyword spotting methods. In addition, we also compare both the baseline and improved systems with the methods presented at the Handwritten Keyword Spotting Competition 2014. |
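As a hedged illustration of the kind of baseline BoVW pipeline the paper evaluates, the sketch below builds a codebook from SIFT descriptors of pre-segmented word images and ranks candidates by histogram similarity. It uses OpenCV and scikit-learn, assumes grayscale word images, and omits the refinements the paper reviews; names and parameters are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def build_codebook(word_images, n_words=512):
    """Cluster SIFT descriptors from training word images into a visual codebook."""
    sift = cv2.SIFT_create()
    descs = []
    for img in word_images:
        _, d = sift.detectAndCompute(img, None)
        if d is not None:
            descs.append(d)
    return MiniBatchKMeans(n_clusters=n_words, random_state=0).fit(
        np.vstack(descs).astype(np.float32))

def bovw_histogram(img, codebook):
    """L2-normalized visual-word histogram of a single word image."""
    _, d = cv2.SIFT_create().detectAndCompute(img, None)
    hist = np.zeros(codebook.n_clusters, dtype=np.float32)
    if d is not None:
        np.add.at(hist, codebook.predict(d.astype(np.float32)), 1.0)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def spot(query_img, candidate_word_images, codebook):
    # Rank segmented candidate word images by cosine similarity to the query.
    q = bovw_histogram(query_img, codebook)
    sims = [float(q @ bovw_histogram(w, codebook)) for w in candidate_word_images]
    return np.argsort(sims)[::-1]
```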
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1433-2833 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
DAG; ADAS; 600.055; 600.061; 601.223; 600.077; 600.097 |
Approved |
no |
|
|
Call Number |
Admin @ si @ ART2015 |
Serial |
2679 |
|
Permanent link to this record |