Miguel Oliveira, L. Seabra Lopes, G. Hyun Lim, S. Hamidreza Kasaei, Angel Sappa and A. M. Tomé. 2015. Concurrent Learning of Visual Codebooks and Object Categories in Open-Ended Domains. International Conference on Intelligent Robots and Systems, pp. 2488–2495.
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, and with similar accuracies, when compared to the classical Bag of Words approach using offline constructed codebooks.
Keywords: Visual Learning; Computer Vision; Autonomous Agents
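The core idea of the entry above, a codebook whose words are refined online as new object views arrive, can be sketched as follows. This is an illustrative toy (running means over nearest codewords, a hypothetical `OnlineCodebook` class), not the paper's actual GMM update equations:

```python
import numpy as np

class OnlineCodebook:
    """Toy incrementally-updated visual codebook: each codeword keeps a
    running mean and sample count; new descriptors refine the closest word."""

    def __init__(self, initial_words):
        self.means = np.asarray(initial_words, dtype=float)
        self.counts = np.ones(len(self.means))

    def update(self, descriptor):
        d = np.asarray(descriptor, dtype=float)
        # Assign the new descriptor to the nearest codeword.
        k = int(np.argmin(np.linalg.norm(self.means - d, axis=1)))
        self.counts[k] += 1
        # Incremental mean update: m <- m + (x - m) / n
        self.means[k] += (d - self.means[k]) / self.counts[k]
        return k

    def encode(self, descriptors):
        # Bag-of-words histogram over the current codebook.
        hist = np.zeros(len(self.means))
        for d in descriptors:
            dist = np.linalg.norm(self.means - np.asarray(d, dtype=float), axis=1)
            hist[int(np.argmin(dist))] += 1
        return hist / max(hist.sum(), 1)

# Two initial codewords; one online update shifts the first word's mean.
cb = OnlineCodebook([[0.0, 0.0], [10.0, 10.0]])
cb.update([1.0, 1.0])
hist = cb.encode([[0.0, 0.0], [9.0, 9.0]])
```

Because encoding always uses the current means, category histograms and codewords evolve together, which is the concurrency the paper exploits.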
Angel Sappa and Boris X. Vintimilla. 2006. Edge Point Linking by Means of Global and Local Schemes. IEEE Int. Conf. on Signal-Image Technology and Internet-Based Systems, Hammamet, Tunisia, December 2006, pp. 551–560.
German Ros, Sebastian Ramos, Manuel Granados, Amir Bakhtiary, David Vazquez and Antonio Lopez. 2015. Vision-based Offline-Online Perception Paradigm for Autonomous Driving. IEEE Winter Conference on Applications of Computer Vision, pp. 231–238.
Abstract: Autonomous driving is a key factor for future mobility. Properly perceiving the environment of the vehicles is essential for safe driving, which requires computing accurate geometric and semantic information in real-time. In this paper, we challenge state-of-the-art computer vision algorithms for building a perception system for autonomous driving. An inherent drawback in the computation of visual semantics is the trade-off between accuracy and computational cost. We propose to circumvent this problem by following an offline-online strategy. During the offline stage, dense 3D semantic maps are created. In the online stage, the current driving area is recognized in the maps via a re-localization process, which allows retrieving the pre-computed accurate semantics and 3D geometry in real-time. Then, by detecting the dynamic obstacles, we obtain a rich understanding of the current scene. We quantitatively evaluate our proposal on the KITTI dataset and discuss the related open challenges for the computer vision community.
Keywords: Autonomous Driving; Scene Understanding; SLAM; Semantic Segmentation
Fernando Barrera, Felipe Lumbreras and Angel Sappa. 2010. Multimodal Template Matching based on Gradient and Mutual Information using Scale-Space. 17th IEEE International Conference on Image Processing, pp. 2749–2752.
Abstract: This paper presents the combined use of gradient and mutual information for infrared and intensity template matching. We propose to jointly exploit: (i) feature matching in a multiresolution context and (ii) information propagation through scale-space representations. Our method consists of combining mutual information with a gradient-based shape descriptor and propagating them following a coarse-to-fine strategy. The main contributions of this work are: to offer a theoretical formulation towards multimodal stereo matching; to show that gradient and mutual information can be reinforced while they are propagated between consecutive levels; and to show that they are valid cost functions in multimodal template matching. Comparisons are presented showing the improvements and viability of the proposed approach.
Mohammad Rouhani and Angel Sappa. 2010. A Fast Accurate Implicit Polynomial Fitting Approach. 17th IEEE International Conference on Image Processing, pp. 1429–1432.
Abstract: This paper presents a novel hybrid approach that combines two families of state-of-the-art fitting algorithms: algebraic-based and geometric-based. It consists of two steps: first, the 3L algorithm is used as an initialization; then, the obtained result is improved through a geometric approach. The adopted geometric approach is based on a distance estimation that avoids a costly search for the real orthogonal distance. Experimental results are presented as well as quantitative comparisons.
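The distance estimation mentioned in the abstract can be illustrated with the standard first-order approximation |f(p)| / ||∇f(p)||, which avoids searching for the exact foot point on the curve. The helper below is hypothetical and only demonstrates this kind of estimator, not the authors' exact formulation:

```python
import math

def first_order_distance(f, grad, point):
    """First-order estimate of the orthogonal distance from `point` to the
    zero set of an implicit polynomial f: |f(p)| / ||grad f(p)||."""
    fp = f(point)
    gx, gy = grad(point)
    norm = math.hypot(gx, gy)
    return abs(fp) / norm if norm > 0 else float("inf")

# Unit circle: f(x, y) = x^2 + y^2 - 1
f = lambda p: p[0] ** 2 + p[1] ** 2 - 1.0
grad = lambda p: (2.0 * p[0], 2.0 * p[1])

# At (2, 0) the true orthogonal distance is 1; the estimate is |3| / 4 = 0.75.
d = first_order_distance(f, grad, (2.0, 0.0))
```

The estimate is cheap (one polynomial and one gradient evaluation) and becomes exact as the point approaches the curve, which is why it is a useful surrogate inside an iterative geometric refinement.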
Jaume Amores, David Geronimo and Antonio Lopez. 2010. Multiple instance and active learning for weakly-supervised object-class segmentation. 3rd IEEE International Conference on Machine Vision.
Abstract: In object-class segmentation, one of the most tedious tasks is to manually segment many object examples in order to learn a model of the object category. Yet, there has been little research on reducing the degree of manual annotation for object-class segmentation. In this work we explore alternative strategies which do not require full manual segmentation of the objects in the training set. In particular, we study the use of bounding boxes as a coarser and much cheaper form of segmentation, and we perform a comparative study of several Multiple-Instance Learning techniques that allow obtaining a model from this type of weak annotation. We show that some of these methods, when used with coarse segmentations, can be competitive with methods that require full manual segmentation of the objects. Furthermore, we show how to use active learning combined with this weakly supervised strategy. This strategy permits reducing the amount of annotation and optimizing the number of examples that require full manual segmentation in the training set.
Keywords: Multiple Instance Learning; Active Learning; Object-Class Segmentation
Cesar de Souza, Adrien Gaidon, Yohann Cabon and Antonio Lopez. 2017. Procedural Generation of Videos to Train Deep Action Recognition Networks. 30th IEEE Conference on Computer Vision and Pattern Recognition, pp. 2594–2604.
Abstract: Deep learning for human action recognition in videos is making significant progress, but is slowed down by its dependency on expensive manual labeling of large video collections. In this work, we investigate the generation of synthetic training data for action recognition, as it has recently shown promising results for a variety of other computer vision tasks. We propose an interpretable parametric generative model of human action videos that relies on procedural generation and other computer graphics techniques of modern game engines. We generate a diverse, realistic, and physically plausible dataset of human action videos, called PHAV for "Procedural Human Action Videos". It contains a total of 39,982 videos, with more than 1,000 examples for each action of 35 categories. Our approach is not limited to existing motion capture sequences, and we procedurally define 14 synthetic actions. We introduce a deep multi-task representation learning architecture to mix synthetic and real videos, even if the action categories differ. Our experiments on the UCF101 and HMDB51 benchmarks suggest that combining our large set of synthetic videos with small real-world datasets can boost recognition performance, significantly outperforming fine-tuning state-of-the-art unsupervised generative models of videos.
Patricia Suarez, Angel Sappa and Boris X. Vintimilla. 2017. Infrared Image Colorization based on a Triplet DCGAN Architecture. IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: This paper proposes a novel approach for colorizing near infrared (NIR) images using Deep Convolutional Generative Adversarial Network (DCGAN) architectures. The proposed approach is based on the usage of a triplet model for learning each color channel independently, in a more homogeneous way. This allows fast convergence during training, obtaining a greater similarity between the given NIR image and the corresponding ground truth. The proposed approach has been evaluated on a large dataset of NIR images and compared with a recent approach, which is also based on a GAN architecture but where all the color channels are obtained at the same time.
Simon Jégou, Michal Drozdzal, David Vazquez, Adriana Romero and Yoshua Bengio. 2017. The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation. IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Abstract: State-of-the-art approaches for semantic image segmentation are built on Convolutional Neural Networks (CNNs). The typical segmentation architecture is composed of (a) a downsampling path responsible for extracting coarse semantic features, followed by (b) an upsampling path trained to recover the input image resolution at the output of the model and, optionally, (c) a post-processing module (e.g. Conditional Random Fields) to refine the model predictions.
Recently, a new CNN architecture, Densely Connected Convolutional Networks (DenseNets), has shown excellent results on image classification tasks. The idea of DenseNets is based on the observation that if each layer is directly connected to every other layer in a feed-forward fashion then the network will be more accurate and easier to train.
In this paper, we extend DenseNets to deal with the problem of semantic segmentation. We achieve state-of-the-art results on urban scene benchmark datasets such as CamVid and Gatech, without any further post-processing module or pretraining. Moreover, due to the smart construction of the model, our approach has far fewer parameters than the currently published best entries for these datasets.
Keywords: Semantic Segmentation
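The dense connectivity the abstract describes, each layer consuming the concatenation of all previous feature maps, can be shown with a toy example. Random linear maps stand in for the conv+BN+ReLU units of the real architecture; this only illustrates how the channel count grows by the growth rate at every layer, not the actual Tiramisu model:

```python
import numpy as np

def dense_block(x, num_layers, growth_rate, rng):
    """Toy DenseNet-style block: each layer sees all previous features
    (concatenated along the channel axis) and adds `growth_rate` channels."""
    features = x  # shape: (num_pixels, channels)
    for _ in range(num_layers):
        # Random linear "layer" + ReLU as a stand-in for conv+BN+ReLU.
        w = rng.standard_normal((features.shape[1], growth_rate))
        new = np.maximum(features @ w, 0.0)
        # Dense connectivity: concatenate new features onto the running stack.
        features = np.concatenate([features, new], axis=1)
    return features

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))  # 16 "pixels", 8 input channels
out = dense_block(x, num_layers=4, growth_rate=12, rng=rng)
# Output channels: 8 + 4 * 12 = 56
```

Because each layer only adds a small number of channels while reusing everything before it, the parameter count stays low, which is the property the paper leverages for segmentation.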
Jaume Amores. 2010. Vocabulary-based Approaches for Multiple-Instance Data: a Comparative Study. 20th International Conference on Pattern Recognition, pp. 4246–4250.
Abstract: Multiple Instance Learning (MIL) has become a hot topic and many different algorithms have been proposed in recent years. Despite this fact, there is a lack of comparative studies that shed light on the characteristics of the different methods and their behavior in different scenarios. In this paper we provide such an analysis. We include methods from different families, and pay special attention to vocabulary-based approaches, a new family of methods that has not received much attention in the MIL literature. The empirical comparison includes seven databases from four heterogeneous domains, implementations of eight popular MIL methods, and a study of the behavior under synthetic conditions. Based on this analysis, we show that, with an appropriate implementation, vocabulary-based approaches outperform other MIL methods in most cases, showing more consistent performance in general.