|
Miguel Oliveira, Angel Sappa and V. Santos. 2012. Color Correction for Onboard Multi-camera Systems using 3D Gaussian Mixture Models. IEEE Intelligent Vehicles Symposium. IEEE Xplore, 299–303.
Abstract: The current paper proposes a novel color correction approach for onboard multi-camera systems. It works by segmenting the given images into several regions using a proposed probabilistic segmentation framework based on 3D Gaussian Mixture Models. The regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. An image data set of road scenarios is used to compare the performance of the proposed method with that of seven other well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Additionally, the proposed single-step 3D color-space probabilistic segmentation reduces processing time over similar approaches.
|
|
|
Miguel Oliveira, Angel Sappa and V. Santos. 2012. Color Correction using 3D Gaussian Mixture Models. 9th International Conference on Image Analysis and Recognition. Springer Berlin Heidelberg, 97–106. (LNCS.)
Abstract: The current paper proposes a novel color correction approach based on a probabilistic segmentation framework using 3D Gaussian Mixture Models. The resulting regions are used to compute local color correction functions, which are then combined to obtain the final corrected image. The proposed approach is evaluated using both a recently published metric and two large data sets comprising seventy images. The evaluation compares our algorithm with eight well-known color correction algorithms. Results show that the proposed approach is the highest-scoring color correction method. Additionally, the proposed single-step 3D color-space probabilistic segmentation reduces processing time over similar approaches.
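The single-step color-space segmentation described in the two abstracts above can be sketched with a toy reconstruction. This is illustrative only, not the authors' implementation: the synthetic pixel data, the number of mixture components and the per-region channel-gain correction are all assumptions standing in for the paper's local color correction functions.

```python
# Illustrative sketch: segment pixels with a 3D GMM in color space, then apply
# a local (per-region) correction and recombine into the final image.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic paired "views": the target is the reference with a color cast
reference = rng.uniform(0.0, 1.0, size=(5000, 3))          # N pixels x RGB
target = np.clip(reference * [0.8, 1.0, 1.2], 0.0, 1.0)    # cast on R and B

# Single-step probabilistic segmentation directly in 3D color space
gmm = GaussianMixture(n_components=4, random_state=0).fit(target)
labels = gmm.predict(target)

# Local correction per region (a simple channel gain as stand-in), then combine
corrected = target.copy()
for k in range(gmm.n_components):
    mask = labels == k
    if mask.any():
        gain = reference[mask].mean(axis=0) / (target[mask].mean(axis=0) + 1e-8)
        corrected[mask] = np.clip(target[mask] * gain, 0.0, 1.0)

err_before = np.abs(target - reference).mean()
err_after = np.abs(corrected - reference).mean()
```

Per-region gains remove most of the cast in this toy setting; the papers combine richer local correction functions and evaluate on real road imagery.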
|
|
|
Patricia Suarez, Angel Sappa and Boris X. Vintimilla. 2017. Colorizing Infrared Images through a Triplet Conditional DCGAN Architecture. 19th International Conference on Image Analysis and Processing.
Abstract: This paper focuses on near-infrared (NIR) image colorization using a Conditional Deep Convolutional Generative Adversarial Network (CDCGAN) architecture. The proposed architecture is based on a conditional probabilistic generative model. First, it learns to colorize the given input image using a triplet model architecture that tackles each channel independently. In the proposed model, the final layer of the red channel considers the infrared image to enhance the details, resulting in a sharp RGB image. Then, in a second stage, a discriminative model is used to estimate the probability that the generated image came from the training dataset rather than from the generator. Experimental results on a large set of real images show the validity of the proposed approach. Additionally, the proposed approach is compared with a state-of-the-art approach, showing better results.
Keywords: CNN in Multispectral Imaging; Image Colorization
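The discriminator's role in the abstract above, estimating the probability that a sample came from the training data rather than from the generator, can be illustrated with a deliberately tiny stand-in. Everything below (1-D data, a logistic model trained with binary cross-entropy) is an assumption for illustration; the paper uses a deep convolutional network on images.

```python
# Toy discriminator: a logistic model trained with binary cross-entropy to
# output the probability that a sample is "real" (label 1) vs "generated" (0).
import numpy as np

rng = np.random.default_rng(1)
real = rng.normal(2.0, 1.0, size=500)    # stand-in for training images
fake = rng.normal(-2.0, 1.0, size=500)   # stand-in for generator output
x = np.concatenate([real, fake])
y = np.concatenate([np.ones(500), np.zeros(500)])

w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))    # predicted P(real)
    w -= 0.1 * ((p - y) * x).mean()           # gradient of the BCE loss
    b -= 0.1 * (p - y).mean()

p_real = 1.0 / (1.0 + np.exp(-(w * real + b)))   # high for training samples
p_fake = 1.0 / (1.0 + np.exp(-(w * fake + b)))   # low for generated samples
```

In the adversarial setting the generator is then trained to push `p_fake` upwards, which is what drives the colorization quality.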
|
|
|
Muhammad Anwer Rao, Fahad Shahbaz Khan, Joost Van de Weijer and Jorma Laaksonen. 2016. Combining Holistic and Part-based Deep Representations for Computational Painting Categorization. 6th International Conference on Multimedia Retrieval.
Abstract: Automatic analysis of visual art, such as paintings, is a challenging interdisciplinary research problem. Conventional approaches rely only on global scene characteristics, encoding holistic information for computational painting categorization. We argue that such approaches are sub-optimal and that discriminative common visual structures provide complementary information for painting classification. We present an approach that encodes both the global scene layout and discriminative latent common structures for computational painting categorization. The regions of interest are extracted automatically, without any manual part labeling, by training class-specific deformable part-based models. Both the holistic image and the regions of interest are then described using multi-scale dense convolutional features. These features are pooled separately using Fisher vector encoding and afterwards concatenated into a single image representation. Experiments are performed on a challenging dataset with 91 different painters and 13 diverse painting styles. Our approach outperforms the standard method, which employs only the global scene characteristics. Furthermore, our method achieves state-of-the-art results, outperforming a recent multi-scale deep-features-based approach [11] by 6.4% and 3.8% on artist and style classification, respectively.
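The representation pipeline in the abstract above, encoding holistic and part-based features separately and then concatenating, can be sketched schematically. The descriptors are random stand-ins and mean/std pooling stands in for Fisher vector encoding; only the separate-encode-then-concatenate structure reflects the paper.

```python
# Schematic sketch: pool holistic and region-of-interest descriptors
# separately, then concatenate into a single image representation.
import numpy as np

rng = np.random.default_rng(2)
holistic_desc = rng.normal(size=(200, 64))     # whole-image local descriptors
part_descs = [rng.normal(size=(40, 64)),       # descriptors from two
              rng.normal(size=(30, 64))]       # detected part regions

def encode(descs):
    # Stand-in encoder: pooled mean and std (the paper uses Fisher vectors)
    return np.concatenate([descs.mean(axis=0), descs.std(axis=0)])

holistic_code = encode(holistic_desc)
part_code = encode(np.vstack(part_descs))
image_repr = np.concatenate([holistic_code, part_code])   # final representation
```

A single linear classifier on `image_repr` then sees both the global layout and the part evidence at once, which is the complementarity the paper argues for.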
|
|
|
Xavier Boix and 7 others. 2009. Combining local and global bag-of-word representations for semantic segmentation. Workshop on The PASCAL Visual Object Classes Challenge.
|
|
|
Ernest Valveny, Ricardo Toledo, Ramon Baldrich and Enric Marti. 2002. Combining recognition-based and segmentation-based approaches for graphic symbol recognition using deformable template matching. Proceedings of the Second IASTED International Conference on Visualization, Imaging and Image Processing (VIIP 2002), 502–507.
|
|
|
Arnau Ramisa, Ramon Lopez de Mantaras and Ricardo Toledo. 2007. Comparing Combinations of Feature Regions for Panoramic VSLAM. 4th International Conference on Informatics in Control, Automation and Robotics, 292–297.
|
|
|
Eugenio Alcala and 6 others. 2016. Comparison of two non-linear model-based control strategies for autonomous vehicles. 24th Mediterranean Conference on Control and Automation, 846–851.
Abstract: This paper presents a comparison of two non-linear model-based control strategies for autonomous cars. A control-oriented vehicle model based on a bicycle model is used. Both control strategies follow a model-reference approach, from which the error dynamics model is developed. Both controllers receive as inputs the longitudinal, lateral and orientation errors, and generate as control outputs the steering angle and the velocity of the vehicle. The first approach is a non-linear control law designed by means of the direct Lyapunov approach. The second is a sliding-mode control that defines a set of sliding surfaces over which the error trajectories converge. The main advantage of the sliding-mode technique is its robustness against non-linearities and parametric uncertainties in the model. However, the main drawback of first-order sliding mode is chattering, so a high-order sliding-mode control has been implemented. To test and compare the proposed control strategies, different path-following scenarios are used in simulation.
Keywords: Autonomous Driving; Control
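A minimal sketch of the ideas in the abstract above, under assumed parameters: a kinematic bicycle model tracks the straight reference path y = 0, and the steering law drives a sliding surface combining the lateral and orientation errors to zero (a smooth tanh saturation stands in for the sign function to limit chattering). The gains, wheelbase and speed are illustrative, not the paper's.

```python
# Kinematic bicycle model with a first-order sliding-mode-style steering law.
import numpy as np

L = 2.5        # wheelbase [m] (assumed)
v = 5.0        # constant longitudinal speed [m/s] (assumed)
dt = 0.01      # integration step [s]
x, y, theta = 0.0, 1.0, 0.2   # start 1 m off the path with a heading error

for _ in range(1000):                       # simulate 10 s
    s = y + 1.5 * theta                     # sliding surface: lateral + heading error
    delta = -0.4 * np.tanh(5.0 * s)         # smoothed switching steering law [rad]
    x += v * np.cos(theta) * dt             # bicycle-model kinematics
    y += v * np.sin(theta) * dt
    theta += (v / L) * np.tan(delta) * dt
```

Once the state reaches the surface s = 0 the error dynamics reduce (for small angles) to a stable first-order decay of the lateral error, so both errors converge to zero.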
|
|
|
David Geronimo, Antonio Lopez and Angel Sappa. 2007. Computer Vision Approaches for Pedestrian Detection: Visible Spectrum Survey. In J. Marti et al., ed. 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477, 547–554.
Abstract: Pedestrian detection from images of the visible spectrum is a highly relevant research area given its potential impact on the design of pedestrian protection systems. There are many proposals in the literature, but they lack a comparative viewpoint. Accordingly, in this paper we first propose a common framework into which the different approaches fit, and second we use this framework to provide a comparative view of the details of those approaches, also pointing out the main challenges to be solved in the future. In summary, we expect this survey to be useful for both novel and experienced researchers in the field: in the first case, as a clarifying snapshot of the state of the art; in the second, as a way to unveil trends and to draw conclusions from the comparative study.
Keywords: Pedestrian detection
|
|
|
Miguel Oliveira, L. Seabra Lopes, G. Hyun Lim, S. Hamidreza Kasaei, Angel Sappa and A. Tom. 2015. Concurrent Learning of Visual Codebooks and Object Categories in Open-ended Domains. International Conference on Intelligent Robots and Systems, 2488–2495.
Abstract: In open-ended domains, robots must continuously learn new object categories. When training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will encounter when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This may lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over long periods of time. Results show that the proposed system, with concurrent learning of object categories and codebooks, is capable of learning more categories, requiring fewer examples, and with similar accuracy, when compared to the classical Bag of Words approach using offline-constructed codebooks.
Keywords: Visual Learning; Computer Vision; Autonomous Agents
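The concurrent codebook update in the abstract above can be illustrated with a toy incremental estimator for a single codebook word. A Welford-style running mean/variance on synthetic descriptors is an assumption standing in for the paper's Gaussian Mixture Model updates; only the idea of refining the word online from new object views, without revisiting old data, is taken from the abstract.

```python
# Toy incremental update of one visual word: a running Gaussian estimate
# (diagonal covariance) refined as descriptors from new object views arrive.
import numpy as np

class GaussianWord:
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self._m2 = np.zeros(dim)   # running sum of squared deviations (Welford)

    def update(self, descriptors):
        # Fold in each new descriptor without storing or revisiting old data
        for d in descriptors:
            self.n += 1
            delta = d - self.mean
            self.mean += delta / self.n
            self._m2 += delta * (d - self.mean)

    @property
    def var(self):
        return self._m2 / max(self.n - 1, 1)

rng = np.random.default_rng(3)
word = GaussianWord(dim=8)
word.update(rng.normal(1.0, 0.5, size=(100, 8)))   # first object view
word.update(rng.normal(1.0, 0.5, size=(100, 8)))   # a later view, online
```

After both views the word's mean and variance approximate the underlying descriptor distribution, which is what keeps the codebook discriminative as new categories arrive.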
|
|