|
Arnau Ramisa, Adriana Tapus, David Aldavert, Ricardo Toledo and Ramon Lopez de Mantaras. 2009. Robust Vision-Based Localization using Combinations of Local Feature Region Detectors. AR, 27(4), 373–385.
Abstract: This paper presents a vision-based approach for mobile robot localization. The model of the environment is topological. The new approach characterizes a place using a signature consisting of a constellation of descriptors computed over different types of local affine covariant regions, extracted from an omnidirectional image acquired by rotating a standard camera with a pan-tilt unit. This type of representation permits reliable and distinctive environment modelling. Our objectives were to validate the proposed method in indoor environments and to determine whether combining complementary local feature region detectors improves localization compared with using a single region detector. Our experimental results show that, if false matches are effectively rejected, combining different affine covariant region detectors notably increases the performance of the approach by exploiting the strengths of the individual detectors. In order to reduce the localization time, two strategies are evaluated: re-ranking the map nodes using a global similarity measure and using a standard perspective view with a field of view of 45°.
In order to systematically test topological localization methods, another contribution of this work is a novel method for measuring the degradation in localization performance as the robot moves away from the point where the original signature was acquired. This makes it possible to assess the robustness of the proposed signature. For this evaluation to be effective, it must be carried out in several varied environments that cover the situations in which the robot may have to perform localization.
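A minimal sketch of the matching-and-scoring scheme described above, written with OpenCV and using SIFT and AKAZE purely as stand-ins for the affine covariant region detectors combined in the paper; all names, thresholds and the map-node data structure are illustrative assumptions, not the authors' implementation.

import cv2

# Illustrative detector/descriptor pairs standing in for the paper's
# affine covariant region detectors.
DETECTORS = {
    "sift": cv2.SIFT_create(),
    "akaze": cv2.AKAZE_create(),
}

def describe(gray_image):
    """Compute {detector_name: descriptors} for one grayscale image."""
    feats = {}
    for name, det in DETECTORS.items():
        _, desc = det.detectAndCompute(gray_image, None)
        feats[name] = desc
    return feats

def node_score(query_feats, node_feats, ratio=0.8):
    """Count ratio-test matches, summed over all detector types."""
    score = 0
    for name in DETECTORS:
        q, n = query_feats.get(name), node_feats.get(name)
        if q is None or n is None or len(n) < 2:
            continue
        norm = cv2.NORM_HAMMING if q.dtype == "uint8" else cv2.NORM_L2
        for m, m2 in cv2.BFMatcher(norm).knnMatch(q, n, k=2):
            if m.distance < ratio * m2.distance:  # reject ambiguous (likely false) matches
                score += 1
    return score

# Localization: pick the topological map node whose signature scores highest.
# best_node = max(map_nodes, key=lambda nd: node_score(query_feats, nd.feats))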
|
|
|
Fei Yang, Luis Herranz, Joost van de Weijer, Jose Antonio Iglesias, Antonio Lopez and Mikhail Mozerov. 2020. Variable Rate Deep Image Compression with Modulated Autoencoder. SPL, 27, 331–335.
Abstract: Variable rate is a requirement for flexible and adaptable image and video compression. However, deep image compression (DIC) methods are optimized for a single fixed rate-distortion (R-D) tradeoff. While this can be addressed by training multiple models for different tradeoffs, the memory requirements increase proportionally to the number of models. Scaling the bottleneck representation of a shared autoencoder can provide variable rate compression with a single shared autoencoder. However, the R-D performance of this simple mechanism degrades at low bitrates, and the effective range of bitrates shrinks. To address these limitations, we formulate the problem of variable R-D optimization for DIC and propose modulated autoencoders (MAEs), where the representations of a shared autoencoder are adapted to the specific R-D tradeoff via a modulation network. Jointly training the shared autoencoder and the modulation network provides an effective way to navigate the R-D operational curve. Our experiments show that the proposed method can achieve almost the same R-D performance as independent models with significantly fewer parameters.
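As a rough illustration of the modulation idea (one shared autoencoder whose internal features are rescaled per R-D tradeoff), here is a hypothetical PyTorch sketch; the layer sizes, the embedding-based modulation network and the discrete set of tradeoffs are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class ModulatedEncoder(nn.Module):
    """Shared convolutional encoder whose features are rescaled per R-D tradeoff."""
    def __init__(self, channels=128, n_tradeoffs=4):
        super().__init__()
        self.conv1 = nn.Conv2d(3, channels, 5, stride=2, padding=2)
        self.conv2 = nn.Conv2d(channels, channels, 5, stride=2, padding=2)
        # Modulation network: one learned channel-wise scaling vector per tradeoff.
        self.mod1 = nn.Embedding(n_tradeoffs, channels)
        self.mod2 = nn.Embedding(n_tradeoffs, channels)

    def forward(self, x, tradeoff_idx):
        m1 = self.mod1(tradeoff_idx).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        m2 = self.mod2(tradeoff_idx).unsqueeze(-1).unsqueeze(-1)
        y = torch.relu(self.conv1(x)) * m1   # shared features, tradeoff-specific scaling
        return self.conv2(y) * m2            # bottleneck adapted to the chosen tradeoff

# One shared model covers several bitrates (img: a (1, 3, H, W) tensor):
# enc = ModulatedEncoder()
# y_low  = enc(img, torch.tensor([0]))   # low-rate operating point
# y_high = enc(img, torch.tensor([3]))   # high-rate operating point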
|
|
|
J. Pladellorens, M.J. Yzuel, J. Castell and Joan Serrat. 1993. Cálculo automático del volumen del ventrículo izquierdo. Comparación con expertos [Automatic computation of the left ventricle volume: comparison with experts].
|
|
|
Jaume Amores and Petia Radeva. 2005. Registration and Retrieval of Highly Elastic Bodies using Contextual Information. PRL, 26(11), 1720–1731.
|
|
|
Katerine Diaz, Francesc J. Ferri and W. Diaz. 2015. Incremental Generalized Discriminative Common Vectors for Image Classification. TNNLS, 26(8), 1761–1775.
Abstract: Subspace-based methods have become popular due to their ability to represent complex data in a way that both reduces dimensionality and enhances discriminativeness. Several recent works have concentrated on the discriminative common vector (DCV) method and other closely related algorithms also based on the concept of null space. In this paper, we present a generalized incremental formulation of the DCV method, which allows a given model to be updated as new examples are added, even from previously unseen classes. Having efficient incremental formulations of well-behaved batch algorithms allows us to conveniently adapt previously trained classifiers without recomputing them from scratch. The proposed generalized incremental method has been empirically validated in case studies from different application domains (faces, objects, and handwritten digits), considering several scenarios in which new data are continuously added at different rates, starting from an initial model.
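For context, a hypothetical NumPy sketch of the batch DCV computation that the incremental formulation builds on: samples are projected onto the null space of the within-class scatter, so all samples of a class collapse to a single common vector. The SVD-based construction and tolerance below are illustrative choices, not the authors' incremental algorithm.

import numpy as np

def dcv_model(X, y, tol=1e-8):
    """X: (n_samples, d) with d > n_samples (small-sample-size setting);
    returns a null-space projector and one common vector per class."""
    classes = np.unique(y)
    # Within-class difference vectors span the range of the within-class scatter S_w.
    diffs = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    _, s, Vt = np.linalg.svd(diffs, full_matrices=False)
    V_range = Vt[s > tol]                      # orthonormal basis of range(S_w)
    def project(v):                            # projection onto null(S_w)
        return v - V_range.T @ (V_range @ v)
    # All samples of a class project to the same "common vector".
    common = {c: project(X[y == c][0]) for c in classes}
    return project, common

# Classification: project a test sample and pick the nearest common vector.
# project, common = dcv_model(X_train, y_train)
# label = min(common, key=lambda c: np.linalg.norm(project(x_test) - common[c]))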
|
|
|
Antonio Lopez, Ernest Valveny and Juan J. Villanueva. 2005. Real-time quality control of surgical material packaging by artificial vision. Assembly Automation, 25(3).
|
|
|
Enric Marti, Carme Julia and Debora Gil. 2006. A PBL Experience in the Teaching of Computer Graphics. CGF, 25(1), 95–103.
Abstract: Project-Based Learning (PBL) is an educational strategy to improve students' learning capability that, in recent years, has gained progressive acceptance in undergraduate studies. This methodology is based on solving a problem or project in a student working group. In this way, PBL focuses on learning the tools necessary to correctly solve given problems. Since the learning initiative is transferred to the student, the PBL method promotes students' own abilities. This allows a better assessment of the true workload that the student carries out in the subject. It follows that the methodology conforms to the guidelines of the Bologna document, which quantifies the student workload in a subject by means of the European Credit Transfer System (ECTS). PBL is currently applied in undergraduate studies requiring strong practical training, such as medicine, nursing or law. Although this is also the case in engineering studies, surprisingly few experiences have been reported. In this paper we propose to use PBL in the educational organization of the Computer Graphics subjects in the Computer Science degree. Our PBL project focuses on the development of a C++ graphical environment based on the OpenGL libraries for the visualization and handling of different graphical objects. The starting point is a basic skeleton that already includes lighting functions, perspective projection with mouse interaction to change the point of view, and three predefined objects. Students have to complete this skeleton by adding their own functions to solve the project. A total of 10 projects have been proposed and successfully solved. The exercises range from human face rendering to articulated objects, such as robot arms or puppets. In the present paper we report in detail the statement and educational objectives for two of the projects: a solar system visualization and a chess game. We also describe our earlier educational experience based on standard classroom theory, problem and practice sessions, and the reasons that motivated the search for other learning methods. We chose PBL mainly because it improves the student's learning initiative. We have applied the PBL educational model since the beginning of the second semester, and student feedback shows increased interest in the subject. We present a comparative study of the teachers' and students' workload between PBL and the classic teaching approach, which suggests that the workload increase in PBL is not as high as it seems.
|
|
|
Miguel Oliveira, Victor Santos and Angel Sappa. 2015. Multimodal Inverse Perspective Mapping. IF, 24, 108–121.
Abstract: Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method consists of mapping images to a new coordinate system where perspective effects are removed. The removal of perspective-associated effects facilitates road and obstacle detection and also assists in free space estimation. There is, however, a significant limitation of inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. The current paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in the regions where obstacles are present. As shown in the results, this considerably improves the effectiveness of the algorithm and reduces computation time when compared with the classical inverse perspective mapping. Furthermore, the proposed approach is also able to cope with several cameras with different lenses or image resolutions, as well as dynamic viewpoints.
Keywords: Inverse perspective mapping; Multimodal sensor fusion; Intelligent vehicles
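A hypothetical Python/OpenCV sketch of the fusion idea: the classical homography-based IPM is applied only where the laser range finder reports free space, with obstacle regions masked out beforehand. The homography and the obstacle boxes are assumed to come from calibration and from projecting laser returns into the image; this is not the authors' implementation.

import cv2
import numpy as np

def multimodal_ipm(image, H, obstacle_boxes, out_size=(400, 600)):
    """image: camera frame; H: 3x3 image-to-ground homography;
    obstacle_boxes: [(x0, y0, x1, y1), ...] image regions hit by the laser;
    out_size: (width, height) of the bird's-eye view."""
    valid = np.full(image.shape[:2], 255, dtype=np.uint8)
    for x0, y0, x1, y1 in obstacle_boxes:
        valid[y0:y1, x0:x1] = 0                 # obstacles break the flat-road assumption
    bird_eye = cv2.warpPerspective(image, H, out_size)
    bird_mask = cv2.warpPerspective(valid, H, out_size)
    # Keep only pixels that the laser confirms as ground.
    return cv2.bitwise_and(bird_eye, bird_eye, mask=bird_mask)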
|
|
|
Mohammad Rouhani, Angel Sappa and E. Boyer. 2015. Implicit B-Spline Surface Reconstruction. TIP, 24(1), 22–32.
Abstract: This paper presents a fast and flexible curve and surface reconstruction technique based on implicit B-splines. This representation does not require any parameterization and is locally supported. We exploit this property to propose a reconstruction technique that solves a sparse system of equations. The method is further accelerated by reducing the dimension of the problem to the active control lattice. Moreover, surface smoothness and user interaction are supported for controlling the surface. Finally, a novel weighting technique is introduced in order to blend small patches and smooth them in the overlapping regions. The whole framework is very fast and efficient and can handle large point clouds at very low computational cost. The experimental results show the flexibility and accuracy of the proposed algorithm in describing objects with complex topologies. Comparisons with other fitting methods highlight the superiority of the proposed approach in the presence of noise and missing data.
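A simplified 2D sketch (hypothetical, not the paper's formulation) of fitting an implicit B-spline by solving a sparse least-squares system: the implicit function is constrained to be zero on the data points and ±ε at points offset along given unit normals, so its zero level set follows the curve, and a simple damping term stands in for the paper's smoothness and weighting machinery. Grid size, ε and damping are illustrative choices.

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def cubic_bspline_weights(t):
    """Weights of the 4 uniform cubic B-spline bases active at local coordinate t in [0, 1)."""
    return np.array([(1 - t) ** 3,
                     3 * t ** 3 - 6 * t ** 2 + 4,
                     -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                     t ** 3]) / 6.0

def fit_implicit_bspline(points, normals, grid=32, eps=0.05, damp=1e-3):
    """points, normals: (N, 2) arrays, points in [0, 1]^2, normals of unit length;
    returns the (grid+3, grid+3) control coefficients of f(x, y) = sum_ij c_ij B_i(x) B_j(y)."""
    samples = np.vstack([points, points + eps * normals, points - eps * normals])
    targets = np.concatenate([np.zeros(len(points)),
                              np.full(len(points), eps),
                              np.full(len(points), -eps)])
    n_ctrl = grid + 3                              # control lattice size per axis
    A = lil_matrix((len(samples), n_ctrl * n_ctrl))
    for r, (x, y) in enumerate(samples):
        gx = np.clip(x, 0.0, 1 - 1e-9) * grid
        gy = np.clip(y, 0.0, 1 - 1e-9) * grid
        ix, iy = int(gx), int(gy)
        wx, wy = cubic_bspline_weights(gx - ix), cubic_bspline_weights(gy - iy)
        for a in range(4):                         # only a 4x4 block of the lattice is
            for b in range(4):                     # active per sample (hence the sparsity)
                A[r, (ix + a) * n_ctrl + (iy + b)] = wx[a] * wy[b]
    # Damped (ridge-regularized) sparse least squares; the damping loosely plays
    # the role of the smoothness control discussed in the abstract.
    coeffs = lsqr(A.tocsr(), targets, damp=damp)[0]
    return coeffs.reshape(n_ctrl, n_ctrl)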
|
|
|
Fahad Shahbaz Khan, Jiaolong Xu, Muhammad Anwer Rao, Joost van de Weijer, Andrew Bagdanov and Antonio Lopez. 2015. Recognizing Actions through Action-specific Person Detection. TIP, 24(11), 4422–4432.
Abstract: Action recognition in still images is a challenging problem in computer vision. To facilitate comparative evaluation independently of person detection, the standard evaluation protocol for action recognition uses an oracle person detector to obtain perfect bounding box information at both training and test time. The assumption is that, in practice, a general person detector will provide candidate bounding boxes for action recognition. In this paper, we argue that this paradigm is suboptimal and that action class labels should already be considered during the detection stage. Motivated by the observation that body pose is strongly conditioned on action class, we show that: 1) existing state-of-the-art generic person detectors are not adequate for proposing candidate bounding boxes for action classification; 2) due to limited training examples, directly training action-specific person detectors is also inadequate; and 3) using only a small number of labeled action examples, transfer learning is able to adapt an existing detector to propose higher-quality bounding boxes for subsequent action classification. To the best of our knowledge, we are the first to investigate transfer learning for the task of action-specific person detection in still images. We perform extensive experiments on two benchmark data sets: 1) Stanford-40 and 2) PASCAL VOC 2012. For the action detection task (i.e., both person localization and classification of the action performed), our approach outperforms methods based on general person detection by 5.7% mean average precision (mAP) on Stanford-40 and 2.1% mAP on PASCAL VOC 2012. Our approach also significantly outperforms the state of the art, with an mAP of 45.4% on Stanford-40 and 31.4% on PASCAL VOC 2012. We also evaluate our action detection approach for the task of action classification (i.e., recognizing actions without localizing them). For this task, our approach, without using any ground-truth person localization at test time, outperforms state-of-the-art methods on both data sets, even though those methods do use person locations.
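A hypothetical PyTorch/torchvision sketch of the transfer-learning recipe outlined in the abstract: take a generic pre-trained person detector, replace its classification head with one output per action class, freeze the backbone, and fine-tune on the few labeled action examples. The paper predates torchvision's detection models, so every model and parameter choice here is an illustrative assumption rather than the authors' pipeline.

import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def action_specific_detector(num_action_classes, lr=1e-3):
    # Generic pre-trained detector (COCO weights include a person class).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    for p in model.backbone.parameters():        # keep the generic features fixed
        p.requires_grad = False
    in_feats = model.roi_heads.box_predictor.cls_score.in_features
    # New head: one output per action class, plus the background class.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_feats, num_action_classes + 1)
    trainable = [p for p in model.parameters() if p.requires_grad]
    return model, torch.optim.SGD(trainable, lr=lr, momentum=0.9)

# Fine-tuning step on a small batch of action-labeled person boxes:
# model, optimizer = action_specific_detector(num_action_classes=40)
# model.train()
# loss_dict = model(images, targets)     # torchvision returns a dict of losses
# sum(loss_dict.values()).backward()
# optimizer.step(); optimizer.zero_grad()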
|
|