|
Marco Pedersoli, Jordi Gonzalez, & Juan J. Villanueva. (2009). High-Speed Human Detection Using a Multiresolution Cascade of Histograms of Oriented Gradients. In 4th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 5524). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents a new method for human detection based on a multiresolution cascade of Histograms of Oriented Gradients (HOG) that can greatly reduce the computational cost of the detection search without affecting accuracy. The method consists of a cascade of sliding-window detectors. Each detector is a Support Vector Machine (SVM) composed of features at a different resolution, from coarse at the first level to fine at the last one.
Because the spatial stride of the sliding-window search is tied to the HOG feature size, and unlike previous methods based on AdaBoost cascades, we can adopt a spatial stride inversely proportional to the feature resolution. As a result, the speed-up of the cascade comes not only from the low number of features that need to be computed in the first levels, but also from the lower number of detection windows that need to be evaluated.
Experimental results show that our method achieves a detection rate comparable with the state of the art, while speeding up the detection search by a factor of 10-20, depending on the cascade configuration.
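As a rough illustration of the stride argument (not the authors' code), the sketch below counts sliding-window positions per cascade level when the stride is tied to the HOG cell size; the image size, window size, and cell sizes are hypothetical values, not taken from the paper.

```python
# Sketch: how a coarse-to-fine stride reduces the number of windows
# each cascade level must evaluate. All sizes below are hypothetical.

def num_windows(img_w, img_h, win_w, win_h, stride):
    """Count sliding-window positions for a given spatial stride."""
    nx = (img_w - win_w) // stride + 1
    ny = (img_h - win_h) // stride + 1
    return nx * ny

# Coarse HOG cells (large stride) at the early levels, fine cells
# (small stride) at the last, on a 640x480 image with a 64x128 window.
for level, cell in enumerate([32, 16, 8], start=1):
    stride = cell  # stride inversely proportional to feature resolution
    print(level, cell, num_windows(640, 480, 64, 128, stride))
```

The early coarse level scans a few hundred windows, while a fixed fine stride would scan thousands at every level; only windows surviving the coarse levels reach the expensive fine-resolution SVM.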
|
|
|
Fadi Dornaika, & Bogdan Raducanu. (2009). Simultaneous 3D face pose and person-specific shape estimation from a single image using a holistic approach. In IEEE Workshop on Applications of Computer Vision.
Abstract: This paper presents a new approach for the simultaneous estimation of the 3D pose and specific shape of a previously unseen face from a single image. The face pose is not limited to a frontal view. We describe a holistic approach based on a deformable 3D model and a learned statistical facial texture model. Rather than obtaining a person-specific facial surface, the goal of this work is to compute person-specific 3D face shape in terms of a few control parameters that are used by many applications. The proposed holistic approach estimates the 3D pose parameters as well as the face shape control parameters by registering the warped texture to a statistical face texture, which is carried out by a stochastic and genetic optimizer. The proposed approach has several features that make it very attractive: (i) it uses a single grey-scale image, (ii) it is person-independent, (iii) it is featureless (no facial feature extraction is required), and (iv) its learning stage is easy. The proposed approach lends itself nicely to 3D face tracking and face gesture recognition in monocular videos. We describe extensive experiments that show the feasibility and robustness of the proposed approach.
|
|
|
Fatemeh Noroozi, Marina Marjanovic, Angelina Njegus, Sergio Escalera, & Gholamreza Anbarjafari. (2019). Audio-Visual Emotion Recognition in Video Clips. TAC - IEEE Transactions on Affective Computing, 10(1), 60–75.
Abstract: This paper presents a multimodal emotion recognition system based on the analysis of audio and visual cues. From the audio channel, Mel-Frequency Cepstral Coefficients, Filter Bank Energies and prosodic features are extracted. For the visual part, two strategies are considered. First, facial landmarks’ geometric relations, i.e. distances and angles, are computed. Second, each emotional video is summarized into a reduced set of key-frames, which are learned to visually discriminate between the emotions; to do so, a convolutional neural network is applied to the key-frames summarizing the videos. Finally, the confidence outputs of all the classifiers from all the modalities are used to define a new feature space to be learned for final emotion label prediction, in a late fusion/stacking fashion. The experiments conducted on the SAVEE, eNTERFACE’05, and RML databases show significant performance improvements by our proposed system in comparison to current alternatives, defining the current state of the art in all three databases.
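The late fusion/stacking step can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: the emotion set, the confidence numbers, and the averaging rule standing in for the learned meta-classifier are all made up.

```python
# Sketch of late fusion / stacking: each modality's classifier emits a
# confidence vector over the emotions; the vectors are concatenated
# into a new feature space on which a final (meta-)classifier is
# trained. Here a simple per-emotion average stands in for that
# learned meta-classifier; all numbers are illustrative.

EMOTIONS = ["anger", "happiness", "sadness"]

def stack_confidences(per_modality_scores):
    """Concatenate confidence vectors from all modality classifiers."""
    stacked = []
    for scores in per_modality_scores:   # e.g. audio, geometry, CNN
        stacked.extend(scores)
    return stacked

def meta_predict(stacked):
    """Stand-in meta-classifier: average each emotion's confidences
    across modalities and pick the argmax."""
    n = len(EMOTIONS)
    avg = [sum(stacked[i::n]) for i in range(n)]
    return EMOTIONS[max(range(n), key=lambda i: avg[i])]

# One clip: audio, facial-geometry, and key-frame-CNN confidences.
audio = [0.2, 0.5, 0.3]
geometry = [0.1, 0.7, 0.2]
cnn = [0.3, 0.4, 0.3]
features = stack_confidences([audio, geometry, cnn])
print(meta_predict(features))  # → happiness
```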
|
|
|
Jorge Charco, Angel Sappa, & Boris X. Vintimilla. (2022). Human Pose Estimation through a Novel Multi-view Scheme. In 17th International Conference on Computer Vision Theory and Applications (VISAPP 2022) (Vol. 5, pp. 855–862).
Abstract: This paper presents a multi-view scheme to tackle the challenging problem of self-occlusion in human pose estimation. The proposed approach first obtains the human body joints from a set of images captured from different views at the same time. Then, it enhances the obtained joints by using a multi-view scheme: the joints from a given view are used to enhance poorly estimated joints from another view, especially in self-occlusion cases. A network architecture initially proposed for the monocular case is adapted to the proposed multi-view scheme. Experimental results and comparisons with state-of-the-art approaches on the Human3.6M dataset are presented, showing improvements in the accuracy of body joint estimation.
Keywords: Multi-view Scheme; Human Pose Estimation; Relative Camera Pose; Monocular Approach
|
|
|
Jon Almazan, Alicia Fornes, & Ernest Valveny. (2011). A Non-Rigid Feature Extraction Method for Shape Recognition. In 11th International Conference on Document Analysis and Recognition (pp. 987–991).
Abstract: This paper presents a methodology for shape recognition that focuses on dealing with the difficult problem of large deformations. The proposed methodology consists of a novel feature extraction technique, which uses a non-rigid representation adaptable to the shape. This technique employs a deformable grid based on the computation of geometrical centroids, following a region partitioning algorithm. A feature vector is then extracted by computing pixel density measures around these geometrical centroids. The result is a shape descriptor that adapts its representation to the given shape and encodes the pixel density distribution. The validity of the method when dealing with large deformations has been experimentally shown over datasets composed of handwritten shapes. It has been applied to signature verification and shape recognition tasks, demonstrating high accuracy and low computational cost.
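The centroid-based adaptive grid can be sketched as follows. This is a hedged simplification, not the authors' code: one recursive split at the foreground centroid with a fraction-of-pixels density measure; the paper's actual partitioning depth and density computation may differ.

```python
# Sketch of a centroid-based deformable grid: a region is split at the
# geometrical centroid of its foreground pixels (not at the geometric
# middle), recursively, and a pixel-density feature is read per cell.
# Split depth and density measure here are illustrative choices.

def centroid(pixels):
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def split(pixels, levels):
    """Recursively partition foreground pixels at their centroid."""
    if levels == 0 or not pixels:
        return [pixels]
    cx, cy = centroid(pixels)
    quads = [[], [], [], []]
    for x, y in pixels:
        quads[(x > cx) + 2 * (y > cy)].append((x, y))
    cells = []
    for q in quads:
        cells.extend(split(q, levels - 1))
    return cells

def density_descriptor(pixels, levels=1):
    """Fraction of the shape's pixels falling in each adaptive cell."""
    total = len(pixels)
    return [len(c) / total for c in split(pixels, levels)]

# A small L-shaped set of foreground pixel coordinates.
shape = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
print(density_descriptor(shape))  # → [0.4, 0.0, 0.2, 0.4]
```

Because the split point follows the mass of the shape, a stretched or deformed instance of the same shape yields similar cell densities, which is the point of the non-rigid representation.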
|
|
|
Xavier Soria, Gonzalo Pomboza-Junez, & Angel Sappa. (2022). LDC: Lightweight Dense CNN for Edge Detection. ACCESS - IEEE Access, 10, 68281–68290.
Abstract: This paper presents a Lightweight Dense Convolutional (LDC) neural network for edge detection. The proposed model is an adaptation of two state-of-the-art approaches, but it requires less than 4% of the parameters of these approaches. The proposed architecture generates thin edge maps and reaches the highest score (i.e., ODS) when compared with lightweight models (models with less than 1 million parameters), and reaches a similar performance when compared with heavy architectures (models with about 35 million parameters). Both quantitative and qualitative results and comparisons with state-of-the-art models, using different edge detection datasets, are provided. The proposed LDC does not use pre-trained weights and requires only straightforward hyper-parameter settings. The source code is released at https://github.com/xavysp/LDC
|
|
|
Pau Riba, Josep Llados, & Alicia Fornes. (2015). Handwritten Word Spotting by Inexact Matching of Grapheme Graphs. In 13th International Conference on Document Analysis and Recognition ICDAR2015 (pp. 781–785).
Abstract: This paper presents a graph-based word spotting method for handwritten documents. Contrary to most word spotting techniques, which use statistical representations, we propose a structural representation that is robust to the inherent deformations of handwriting. Attributed graphs are constructed using a part-based approach. Graphemes extracted from shape convexities are used as stable units of handwriting and are associated to graph nodes. Then, spatial relations between them determine graph edges. Spotting is defined in terms of an error-tolerant graph matching using a bipartite graph matching algorithm. To make the method usable in large datasets, a graph indexing approach that makes use of binary embeddings is used as preprocessing. Historical documents are used as the experimental framework. The approach is comparable to statistical ones in terms of time and memory requirements, especially when dealing with large document collections.
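The error-tolerant matching step can be illustrated with a toy assignment problem. This is a hedged sketch, not the paper's algorithm: a brute-force search over a padded square cost matrix stands in for the efficient bipartite-graph matching, and the substitution costs and penalty are invented numbers.

```python
# Sketch of error-tolerant node assignment between the grapheme nodes
# of a query word and a candidate word. Each query node is either
# substituted by a candidate node (paying a substitution cost) or
# deleted (paying a fixed penalty); unmatched candidate nodes pay an
# insertion penalty. Brute force replaces the bipartite algorithm of
# the paper; all costs are toy values.
from itertools import permutations

def match_cost(cost, penalty):
    """cost[i][j]: substituting query node i by candidate node j."""
    n, m = len(cost), len(cost[0])
    size = n + m
    INF = float("inf")
    # Standard square padding: query i -> its own deletion column,
    # candidate j -> its own insertion row, dummy-dummy pairs cost 0.
    big = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            if i < n and j < m:
                big[i][j] = cost[i][j]            # substitution
            elif i < n:
                big[i][j] = penalty if j - m == i else INF  # deletion
            elif j < m:
                big[i][j] = penalty if i - n == j else INF  # insertion
    return min(sum(big[i][p[i]] for i in range(size))
               for p in permutations(range(size)))

costs = [[0.1, 0.9],
         [0.8, 0.2],
         [0.5, 0.5]]
print(match_cost(costs, penalty=0.3))  # → 0.6
```

Here the best assignment substitutes the first two query nodes (0.1 + 0.2) and deletes the third (0.3); a low total cost signals a likely occurrence of the query word.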
|
|
|
David Geronimo, Joan Serrat, Antonio Lopez, & Ramon Baldrich. (2013). Traffic sign recognition for computer vision project-based learning. T-EDUC - IEEE Transactions on Education, 56(3), 364–371.
Abstract: This paper presents a graduate course project on computer vision. The aim of the project is to detect and recognize traffic signs in video sequences recorded by an on-board vehicle camera. This is a demanding problem, given that traffic sign recognition is one of the most challenging problems for driving assistance systems. Equally, it is motivating for the students given that it is a real-life problem. Furthermore, it gives them the opportunity to appreciate the difficulty of real-world vision problems and to assess the extent to which this problem can be solved by modern computer vision and pattern classification techniques taught in the classroom. The learning objectives of the course are introduced, as are the constraints imposed on its design, such as the diversity of students' background and the amount of time they and their instructors dedicate to the course. The paper also describes the course contents, schedule, and how the project-based learning approach is applied. The outcomes of the course are discussed, including both the students' marks and their personal feedback.
Keywords: traffic signs
|
|
|
Henry Velesaca, Raul Mira, Patricia Suarez, Christian X. Larrea, & Angel Sappa. (2020). Deep Learning Based Corn Kernel Classification. In 1st International Workshop and Prize Challenge on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture.
Abstract: This paper presents a full pipeline to classify sample sets of corn kernels. The proposed approach follows a segmentation-classification scheme. The image segmentation is performed through a well-known deep learning-based approach, the Mask R-CNN architecture, while the classification is performed through a novel lightweight network specially designed for this task; good corn kernel, defective corn kernel and impurity categories are considered. As a second contribution, a carefully annotated multi-touching corn kernel dataset has been generated. This dataset has been used for training the segmentation and the classification modules. Quantitative evaluations have been performed and comparisons with other approaches are provided, showing improvements with the proposed pipeline.
|
|
|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell, & Josep Llados. (2002). Textual Descriptors for browsing people by visual appearance. In 5è. Congrés Català d’Intel·ligència Artificial CCIA.
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, allowing one to search for people wearing specific clothes of a given colour name or texture. Descriptors are automatically computed in three essential steps: a colour-naming labelling from pixel properties; a region segmentation step based on the colour properties of pixels combined with edge information; and a high-level step that models the region arrangements in order to build the clothes structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
Keywords: Image retrieval, textual descriptors, colour naming, colour normalization, graph matching.
|
|
|
Francesc Tous, Agnes Borras, Robert Benavente, Ramon Baldrich, Maria Vanrell, & Josep Llados. (2002). Textual Descriptions for Browsing People by Visual Appearance. In Lecture Notes in Artificial Intelligence (Vol. 2504, pp. 419–429). Springer Verlag.
Abstract: This paper presents a first approach to building colour and structural descriptors for information retrieval on a people database. Queries are formulated in terms of appearance, allowing one to search for people wearing specific clothes of a given colour name or texture. Descriptors are automatically computed in three essential steps: a colour-naming labelling from pixel properties; a region segmentation step based on the colour properties of pixels combined with edge information; and a high-level step that models the region arrangements in order to build the clothes structure. Results are tested on a large set of images from real scenes taken at the entrance desk of a building.
|
|
|
Mohammad Rouhani, Angel Sappa, & E. Boyer. (2015). Implicit B-Spline Surface Reconstruction. TIP - IEEE Transactions on Image Processing, 24(1), 22–32.
Abstract: This paper presents a fast and flexible curve and surface reconstruction technique based on implicit B-splines. This representation does not require any parameterization and it is locally supported. This fact is exploited in this paper to propose a reconstruction technique that solves a sparse system of equations. The method is further accelerated by reducing the dimension of the system to the active control lattice. Moreover, surface smoothness and user interaction are supported for controlling the surface. Finally, a novel weighting technique is introduced in order to blend small patches and smooth them in the overlapping regions. The whole framework is very fast and efficient and can handle large clouds of points with very low computational cost. The experimental results show the flexibility and accuracy of the proposed algorithm in describing objects with complex topologies. Comparisons with other fitting methods highlight the superiority of the proposed approach in the presence of noise and missing data.
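The core idea that the implicit coefficients enter linearly, so fitting reduces to solving a linear system, can be shown on a much simpler implicit model. In this hedged sketch an implicit circle x² + y² + Dx + Ey + F = 0 stands in for the implicit B-spline basis of the paper, and a tiny dense solver replaces the sparse-system machinery.

```python
# Sketch: fitting an implicit curve whose unknowns enter linearly by
# solving the least-squares normal equations. An implicit circle
# stands in for the paper's implicit B-spline representation.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k]
                              for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_implicit_circle(points):
    """Least-squares D, E, F for x^2 + y^2 + D*x + E*y + F = 0."""
    rows = [[x, y, 1.0] for x, y in points]
    rhs = [-(x * x + y * y) for x, y in points]
    # Normal equations: (A^T A) z = A^T b
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(3)]
    return solve3(AtA, Atb)

# Four points on the circle centred at (1, 2) with radius 2.
pts = [(3, 2), (1, 4), (-1, 2), (1, 0)]
D, E, F = fit_implicit_circle(pts)
print(D, E, F)  # → -2.0 -4.0 1.0, i.e. centre (-D/2, -E/2) = (1, 2)
```

With an implicit B-spline the design matrix holds basis-function values instead of [x, y, 1], and its local support is what makes the resulting system sparse.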
|
|
|
Jorge Charco, Angel Sappa, Boris X. Vintimilla, & Henry Velesaca. (2021). Camera pose estimation in multi-view environments: From virtual scenarios to the real world. IVC - Image and Vision Computing, 110, 104182.
Abstract: This paper presents a domain adaptation strategy to efficiently train network architectures for estimating the relative camera pose in multi-view scenarios. The network architectures are fed by a pair of simultaneously acquired images; hence, in order to improve the accuracy of the solutions and to cope with the lack of large datasets with pairs of overlapping images, a domain adaptation strategy is proposed. The domain adaptation strategy consists of transferring the knowledge learned from synthetic images to real-world scenarios. For this, the networks are first trained using pairs of synthetic images, captured at the same time by a pair of cameras in a virtual environment; then, the learned weights of the networks are transferred to the real-world case, where the networks are retrained with a few real images. Different virtual 3D scenarios are generated to evaluate the relationship between the accuracy of the result and the similarity between virtual and real scenarios, in terms of both the geometry of the objects contained in the scene and the relative pose between camera and objects. Experimental results and comparisons are provided showing that the accuracy of all the evaluated networks for estimating the camera pose improves when the proposed domain adaptation strategy is used, highlighting the importance of the similarity between virtual and real scenarios.
|
|
|
Fadi Dornaika, & Angel Sappa. (2009). A Featureless and Stochastic Approach to On-board Stereo Vision System Pose. IMAVIS - Image and Vision Computing, 27(9), 1382–1393.
Abstract: This paper presents a direct and stochastic technique for real-time estimation of an on-board stereo head’s position and orientation. Unlike existing works, which rely on feature extraction either in the image domain or in 3D space, our proposed approach directly estimates the unknown parameters from the stream of stereo pairs’ brightness. The pose parameters are tracked using the particle filtering framework, which implicitly enforces smoothness constraints on the estimated parameters. The proposed technique can be used in driver assistance applications as well as in augmented reality applications. Extended experiments on urban environments with different road geometries are presented, together with comparisons with a 3D data-based approach. Moreover, we provide a performance study evaluating the accuracy of the proposed approach.
Keywords: On-board stereo vision system; Pose estimation; Featureless approach; Particle filtering; Image warping
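The particle filtering machinery referred to above can be sketched on a one-dimensional toy problem. This is a hedged illustration, not the paper's system: the actual state is the 6-DOF stereo-head pose and the likelihood comes from warped image brightness, whereas here a scalar state is tracked from noisy scalar observations with made-up noise levels.

```python
# Minimal 1D particle filter: predict with random-walk dynamics
# (a smoothness prior on the state), weight particles by a Gaussian
# observation likelihood, estimate by the weighted mean, and resample.
import math
import random

def particle_filter(observations, n=500, motion_sigma=0.3, obs_sigma=0.5):
    random.seed(0)                       # deterministic toy run
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for z in observations:
        # Predict: diffuse particles according to the motion model.
        particles = [p + random.gauss(0.0, motion_sigma) for p in particles]
        # Weight: likelihood of the observation given each particle.
        w = [math.exp(-0.5 * ((z - p) / obs_sigma) ** 2) for p in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        # Estimate: weighted mean of the particle set.
        estimates.append(sum(wi * p for wi, p in zip(w, particles)))
        # Resample to concentrate particles on likely states.
        particles = random.choices(particles, weights=w, k=n)
    return estimates

obs = [0.1, 0.4, 0.9, 1.4, 2.0]   # noisy track of a drifting state
est = particle_filter(obs)
print([round(e, 2) for e in est])
```

The estimates lag slightly behind the observations, which is the smoothing effect of the motion prior that the abstract mentions; in the paper each "observation" step is the registration of a full stereo brightness pair.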
|
|
|
Maria Oliver, Gloria Haro, Mariella Dimiccoli, Baptiste Mazin, & Coloma Ballester. (2016). A computational model of amodal completion. In SIAM Conference on Imaging Science.
Abstract: This paper presents a computational model to recover the most likely interpretation of the 3D scene structure from a planar image, where some objects may occlude others. The estimated scene interpretation is obtained by integrating global and local cues and provides both the complete disoccluded objects that form the scene and their ordering according to depth. Our method first computes several distal scenes which are compatible with the proximal planar image. To compute these different hypothesized scenes, we propose a perceptually inspired object disocclusion method, which works by minimizing Euler's elastica as well as by incorporating the relatability of partially occluded contours and the convexity of the disoccluded objects. Then, to estimate the preferred scene we rely on a Bayesian model and define probabilities taking into account the global complexity of the objects in the hypothesized scenes as well as the effort of bringing these objects into their relative positions in the planar image, which is also measured by an Euler's elastica-based quantity. The model is illustrated with numerical experiments on both synthetic and real images, showing the ability of our model to reconstruct the occluded objects and the preferred perceptual order among them. We also present results on images of the Berkeley dataset with provided figure-ground ground-truth labeling.
|
|