Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Nikos Grammalidis. (2005). Survey of 3D Human Body Representations. In Encyclopedia of Information Science and Technology, 1(5): 2696–2701.
Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2004). 3D Gait Estimation from Monoscopic Video.
Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2004). 3D Human Walking Modelling.
Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2004). Unsupervised Motion Classification by means of Efficient Feature Selection and Tracking.
Angel Sappa, Niki Aifanti, Sotiris Malassiotis, & Michael G. Strintzis. (2003). Monocular 3D Human Body Reconstruction Towards Depth Augmentation of Television Sequences. In IEEE International Conference on Image Processing, Barcelona, Spain, September 2003 (pp. 325–328).
Angel Sappa, & M.A. Garcia. (2007). Incremental Integration of Multiresolution Range Images. The Imaging Science Journal, 55(3), 127–139.
Angel Sappa, & M.A. Garcia. (2007). Aprendiendo a recrear la realidad en 3D [Learning to recreate reality in 3D]. UAB Divulga, Revista de Divulgacion Cientifica.
Angel Sappa, & M.A. Garcia. (2007). Coarse-to-Fine Approximation of Range Images with Bounded Error Adaptive Triangular Meshes. Journal of Electronic Imaging, 16(2), 023010 (11 pages).
Angel Sappa, & M.A. Garcia. (2007). Generating compact representations of static scenes by means of 3D object hierarchies. The Visual Computer, 23(2): 143–154.
Angel Sappa, & M.A. Garcia. (2004). Hierarchical Clustering of 3D Objects and its Application to Minimum Distance Computation. In IEEE International Conference on Robotics & Automation (pp. 5287–5292), New Orleans, LA, USA. ISBN: 0-7803-8232-3.
Angel Sappa, Niki Aifanti, Nikos Grammalidis, & Sotiris Malassiotis. (2004). Advances in Vision-Based Human Body Modeling. In N. Sarris & M. G. Strintzis (Eds.), 3D Modeling & Animation: Synthesis and Analysis Techniques for the Human Body (pp. 1–26).
Angel Sappa, & Fadi Dornaika. (2006). An Edge-Based Approach to Motion Detection. In 6th International Conference on Computational Science (ICCS'06), LNCS 3991, pp. 563–570.
Muhammad Anwer Rao, David Vazquez, & Antonio Lopez. (2011). Color Contribution to Part-Based Person Detection in Different Types of Scenarios. In A. Berciano, D. Diaz-Pernil, H. Molina-Abril, P. Real, & W. Kropatsch (Eds.), 14th International Conference on Computer Analysis of Images and Patterns (Vol. 6855, pp. 463–470). Berlin Heidelberg: Springer.
Abstract: Camera-based person detection is of paramount interest due to its potential applications. The task is difficult because of the great variety of backgrounds (scenarios, illumination) in which persons are present, as well as their intra-class variability (pose, clothing, occlusion). In fact, the person class is one of those included in the popular PASCAL visual object classes (VOC) challenge. A breakthrough for this challenge, regarding person detection, is due to Felzenszwalb et al. These authors proposed a part-based detector that relies on histograms of oriented gradients (HOG) and latent support vector machines (LatSVM) to learn a model of the whole human body and its constituent parts, as well as their relative positions. Since the approach of Felzenszwalb et al. appeared, new variants have been proposed, usually giving rise to more complex models. In this paper, we focus on an issue that has not attracted sufficient interest up to now. In particular, we refer to the fact that HOG is usually computed from the RGB color space, but other possibilities exist and deserve the corresponding investigation. In this paper we challenge RGB space with the opponent color space (OPP), which is inspired by the human vision system. We compute the HOG on top of OPP, then train and test the part-based human classifier of Felzenszwalb et al. using the PASCAL VOC challenge protocols and person database. Our experiments demonstrate that OPP outperforms RGB. We also investigate possible differences among types of scenarios: indoor, urban and countryside. Interestingly, our experiments suggest that the benefits of OPP with respect to RGB mainly come from indoor and countryside scenarios, those in which the human visual system was shaped by evolution.
Keywords: Pedestrian Detection; Color
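The opponent color space (OPP) referred to in this abstract is a fixed linear transform of RGB. A minimal sketch of the conversion, using the standard opponent-channel definitions as an assumption (not necessarily the authors' exact implementation):

```python
import numpy as np

def rgb_to_opponent(img):
    """Convert an RGB image (H x W x 3, float) to opponent color space.

    Standard opponent channels:
      O1 = (R - G) / sqrt(2)         (red-green opponency)
      O2 = (R + G - 2B) / sqrt(6)    (yellow-blue opponency)
      O3 = (R + G + B) / sqrt(3)     (intensity)
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    o3 = (r + g + b) / np.sqrt(3.0)
    return np.stack([o1, o2, o3], axis=-1)

# For an achromatic pixel (R = G = B) the two chromatic channels vanish
# and only the intensity channel O3 is non-zero:
gray = np.full((1, 1, 3), 0.5)
opp = rgb_to_opponent(gray)
```

HOG would then be computed per opponent channel, just as it is computed per RGB channel in the baseline detector, so the change is confined to the color preprocessing step.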
Muhammad Anwer Rao, David Vazquez, & Antonio Lopez. (2011). Opponent Colors for Human Detection. In J. Vitria, J.M. Sanches, & M. Hernandez (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (LNCS, Vol. 6669, pp. 363–370). Berlin Heidelberg: Springer.
Abstract: Human detection is a key component in fields such as advanced driving assistance and video surveillance. However, even detecting non-occluded standing humans remains a subject of intensive research. Finding good features to build human models for further detection is probably one of the most important issues to address. Currently, shape, texture and motion features have received extensive attention in the literature. However, color-based features, which are important in other domains (e.g., image categorization), have received much less attention. In fact, the use of the RGB color space has become a de facto default choice. The focus has been put on developing first- and second-order features on top of RGB space (e.g., HOG and co-occurrence matrices, respectively). In this paper we evaluate the opponent colors (OPP) space as a biologically inspired alternative for human detection. In particular, by feeding the OPP space into the baseline framework of Dalal et al. for human detection (based on RGB, HOG and a linear SVM), we obtain better detection performance than by using RGB space. This is a relevant result since, to the best of our knowledge, OPP space has not been previously used for human detection. This suggests that in the future it could be worthwhile to compute co-occurrence matrices, self-similarity features, etc., also on top of OPP space, i.e., as we have done with HOG in this paper.
Keywords: Pedestrian Detection; Color; Part Based Models
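The HOG features at the core of the Dalal et al. baseline reduce, per cell, to a magnitude-weighted histogram of unsigned gradient orientations. A toy single-cell version (NumPy only; a real detector adds block normalization, bin interpolation, and dense sliding windows):

```python
import numpy as np

def cell_hog(patch, n_bins=9):
    """Unsigned gradient-orientation histogram for one cell (2D array)."""
    gy, gx = np.gradient(patch.astype(float))  # gradients along rows, cols
    mag = np.hypot(gx, gy)                     # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned angle in [0, 180)
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())  # magnitude-weighted voting
    return hist

# A vertical step edge has a purely horizontal gradient, so all its
# energy lands in the first (0-degree) orientation bin:
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = cell_hog(patch)
```

Whether HOG is fed RGB or OPP channels, this per-cell histogram is the feature the paper's comparison ultimately rests on.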
German Ros, Laura Sellart, Gabriel Villalonga, Elias Maidanik, Francisco Molero, Marc Garcia, et al. (2017). Semantic Segmentation of Urban Scenes via Domain Adaptation of SYNTHIA. In Gabriela Csurka (Ed.), Domain Adaptation in Computer Vision Applications (Vol. 12, pp. 227–241). Springer.
Abstract: Vision-based semantic segmentation in urban scenarios is a key functionality for autonomous driving. Recent revolutionary results of deep convolutional neural networks (DCNNs) foreshadow the advent of reliable classifiers to perform such visual tasks. However, DCNNs require learning of many parameters from raw images; thus, having a sufficient amount of diverse images with class annotations is needed. These annotations are obtained via cumbersome, human labour which is particularly challenging for semantic segmentation since pixel-level annotations are required. In this chapter, we propose to use a combination of a virtual world to automatically generate realistic synthetic images with pixel-level annotations, and domain adaptation to transfer the models learnt to correctly operate in real scenarios. We address the question of how useful synthetic data can be for semantic segmentation – in particular, when using a DCNN paradigm. In order to answer this question we have generated a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations and object identifiers. We use SYNTHIA in combination with publicly available real-world urban images with manually provided annotations. Then, we conduct experiments with DCNNs that show that combining SYNTHIA with simple domain adaptation techniques in the training stage significantly improves performance on semantic segmentation.
Keywords: SYNTHIA; Virtual worlds; Autonomous Driving
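The "simple domain adaptation techniques in the training stage" amount to injecting scarce real annotated images alongside abundant synthetic ones during training. A schematic batch sampler under assumed names and a fixed mixing ratio (the chapter itself explores several strategies):

```python
import random

def mixed_batch(synthetic, real, batch_size=8, real_fraction=0.25, rng=None):
    """Draw one training batch mixing synthetic (e.g., SYNTHIA) and real samples.

    real_fraction controls how much of the scarce real annotated data is
    injected per batch; the remainder is filled with synthetic samples.
    """
    rng = rng or random.Random(0)
    n_real = max(1, int(round(batch_size * real_fraction)))
    batch = rng.sample(real, n_real) + rng.choices(synthetic, k=batch_size - n_real)
    rng.shuffle(batch)
    return batch

# Hypothetical sample identifiers standing in for annotated images:
synthetic = [f"synthia_{i}" for i in range(100)]
real = [f"real_{i}" for i in range(10)]
batch = mixed_batch(synthetic, real)
```

The design point is that the ratio, not the architecture, is the adaptation knob: the DCNN itself is trained unchanged on the mixed stream.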