|
Jiaolong Xu, Sebastian Ramos, David Vazquez and Antonio Lopez. 2013. DA-DPM Pedestrian Detection. ICCV Workshop on Reconstruction meets Recognition.
Keywords: Domain Adaptation; Pedestrian Detection
|
|
|
Alejandro Gonzalez Alzate, Gabriel Villalonga, German Ros, David Vazquez and Antonio Lopez. 2015. 3D-Guided Multiscale Sliding Window for Pedestrian Detection. Pattern Recognition and Image Analysis, Proceedings of 7th Iberian Conference, IbPRIA 2015, 560–568.
Abstract: The most relevant modules of a pedestrian detector are candidate generation and candidate classification. The former aims at presenting image windows to the latter so that they can be classified as containing a pedestrian or not. Much attention has been paid to the classification module, while candidate generation has mainly relied on the (multiscale) sliding window pyramid. However, candidate generation is critical for achieving real-time performance. In this paper we assume a context of autonomous driving based on stereo vision. Accordingly, we evaluate the effect of taking into account the 3D information (derived from the stereo) in order to prune the hundreds of thousands of windows per image generated by the classical pyramidal sliding window. For our study we use a multimodal (RGB, disparity) and multi-descriptor (HOG, LBP, HOG+LBP) holistic ensemble based on linear SVM. Evaluation on data from the challenging KITTI benchmark suite shows the effectiveness of using 3D information to dramatically reduce the number of candidate windows, even improving the overall pedestrian detection accuracy.
Keywords: Pedestrian Detection
|
|
|
Hanne Kause and 6 others. 2015. Quality Assessment of Optical Flow in Tagging MRI. 5th Dutch Bio-Medical Engineering Conference, BME2015.
|
|
|
M. Cruz, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla, Ricardo Toledo and Angel Sappa. 2015. Cross-spectral image registration and fusion: an evaluation study. 2nd International Conference on Machine Vision and Machine Learning.
Abstract: This paper presents a preliminary study on the registration and fusion of cross-spectral imaging. The objective is to evaluate the validity of widely used computer vision approaches when they are applied at different spectral bands. In particular, we are interested in merging images from the infrared (both long-wave infrared: LWIR, and near infrared: NIR) and visible spectrum (VS). Experimental results with different data sets are presented.
Keywords: multispectral imaging; image registration; data fusion; infrared and visible spectra
|
|
|
Cristhian A. Aguilera-Carrasco, Angel Sappa and Ricardo Toledo. 2015. LGHD: A Feature Descriptor for Matching Across Non-Linear Intensity Variations. 22nd IEEE International Conference on Image Processing, 178–181.
|
|
|
Dennis G. Romero, Anselmo Frizera, Angel Sappa, Boris X. Vintimilla and Teodiano F. Bastos. 2015. A predictive model for human activity recognition by observing actions and context. Advanced Concepts for Intelligent Vision Systems, Proceedings of 16th International Conference, ACIVS 2015. Springer International Publishing, 323–333. (LNCS.)
Abstract: This paper presents a novel model to estimate human activities, where a human activity is defined by a set of human actions. The proposed approach is based on the usage of Recurrent Neural Networks (RNN) and Bayesian inference through the continuous monitoring of human actions and their surrounding environment. In the current work human activities are inferred considering not only visual analysis but also additional resources; external sources of information, such as context information, are incorporated to contribute to the activity estimation. The novelty of the proposed approach lies in the way the information is encoded, so that it can later be associated according to a predefined semantic structure. Hence, a pattern representing a given activity can be defined by a set of actions, plus contextual information or other kinds of information that could be relevant to describing the activity. Experimental results with real data are provided, showing the validity of the proposed approach.
|
|
|
Miguel Oliveira, Victor Santos, Angel Sappa and P. Dias. 2015. Scene Representations for Autonomous Driving: An Approach Based on Polygonal Primitives. 2nd Iberian Robotics Conference, ROBOT2015, 503–515.
Abstract: In this paper, we present a novel methodology to compute a 3D scene representation. The algorithm uses macro-scale polygonal primitives to model the scene. This means that the representation of the scene is given as a list of large-scale polygons that describe the geometric structure of the environment. Results show that the approach is capable of producing accurate descriptions of the scene. In addition, the algorithm is very efficient when compared to other techniques.
Keywords: Scene reconstruction; Point cloud; Autonomous vehicles
|
|
|
J. Poujol, Cristhian A. Aguilera-Carrasco, E. Danos, Boris X. Vintimilla, Ricardo Toledo and Angel Sappa. 2015. Visible-Thermal Fusion based Monocular Visual Odometry. 2nd Iberian Robotics Conference, ROBOT2015. Springer International Publishing, 517–528.
Abstract: The manuscript evaluates the performance of a monocular visual odometry approach when images from different spectra are considered, both independently and fused. The objective behind this evaluation is to analyze whether classical approaches can be improved when the given images, which are from different spectra, are fused and represented in new domains. The images in these new domains should have some of the following properties: i) more robust to noisy data; ii) less sensitive to changes (e.g., lighting); iii) richer in descriptive information, among others. In particular, in the current work two different image fusion strategies are considered. Firstly, images from the visible and thermal spectrum are fused using a Discrete Wavelet Transform (DWT) approach. Secondly, a monochrome threshold strategy is considered. The obtained representations are evaluated under a visual odometry framework, highlighting their advantages and disadvantages, using different urban and semi-urban scenarios. Comparisons with both the monocular visible-spectrum and monocular infrared-spectrum approaches are also provided, showing the validity of the proposed approach.
Keywords: Monocular Visual Odometry; LWIR-RGB Cross-spectral Imaging; Image Fusion
|
|
|
Miguel Oliveira, L. Seabra Lopes, G. Hyun Lim, S. Hamidreza Kasaei, Angel Sappa and A. Tom. 2015. Concurrent Learning of Visual Codebooks and Object Categories in Open-Ended Domains. International Conference on Intelligent Robots and Systems, 2488–2495.
Abstract: In open-ended domains, robots must continuously learn new object categories. When the training sets are created offline, it is not possible to ensure their representativeness with respect to the object categories and features the system will find when operating online. In the Bag of Words model, visual codebooks are constructed from training sets created offline. This might lead to non-discriminative visual words and, as a consequence, to poor recognition performance. This paper proposes a visual object recognition system which concurrently learns, in an incremental and online fashion, both the visual object category representations and the codebook words used to encode them. The codebook is defined using Gaussian Mixture Models, which are updated using new object views. The approach shares similarities with the human visual object recognition system: evidence suggests that the development of recognition capabilities occurs on multiple levels and is sustained over large periods of time. Results show that the proposed system with concurrent learning of object categories and codebooks is capable of learning more categories, requiring fewer examples, and with similar accuracies, when compared to the classical Bag of Words approach using offline constructed codebooks.
Keywords: Visual Learning; Computer Vision; Autonomous Agents
|
|
|
Fahad Shahbaz Khan, Muhammad Anwer Rao, Joost Van de Weijer, Michael Felsberg and J. Laaksonen. 2015. Deep semantic pyramids for human attributes and action recognition. Image Analysis, Proceedings of 19th Scandinavian Conference, SCIA 2015. Springer International Publishing, 341–353.
Abstract: Describing persons and their actions is a challenging problem due to variations in pose, scale and viewpoint in real-world images. Recently, the semantic pyramids approach [1] for pose normalization has been shown to provide excellent results for gender and action recognition. The performance of the semantic pyramids approach relies on robust image description and is therefore limited by the use of shallow local features. In the context of object recognition [2] and object detection [3], convolutional neural networks (CNNs), or deep features, have been shown to improve performance over conventional shallow features.
We propose deep semantic pyramids for human attributes and action recognition. The method works by constructing spatial pyramids based on CNNs of different part locations. These pyramids are then combined to obtain a single semantic representation. We validate our approach on the Berkeley and 27 Human Attributes datasets for attribute classification. For action recognition, we perform experiments on two challenging datasets: Willow and PASCAL VOC 2010. The proposed deep semantic pyramids provide significant gains of 17.2%, 13.9%, 24.3% and 22.6% compared to the standard shallow semantic pyramids on the Berkeley, 27 Human Attributes, Willow and PASCAL VOC 2010 datasets, respectively. Our results also show that deep semantic pyramids outperform conventional CNNs based on the full bounding box of the person. Finally, we compare our approach with state-of-the-art methods and show a gain in performance compared to the best methods in the literature.
Keywords: Action recognition; Human attributes; Semantic pyramids
|
|