Josep M. Gonfaus, Marco Pedersoli, Jordi Gonzalez, Andrea Vedaldi, & Xavier Roca. (2015). Factorized appearances for object detection. CVIU - Computer Vision and Image Understanding, 138, 92–101.
Abstract: Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixtures of deformable models, one per object aspect. A more scalable approach is instead to represent the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances.
A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances that is more general than the tree of Yang and Ramanan [1] and more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables, so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly-supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure.
Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
Keywords: Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts
|
Cristhian A. Aguilera-Carrasco, Cristhian Aguilera, Cristobal A. Navarro, & Angel Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. SENS - Sensors, 20(11), 3249.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphic processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speed, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
Keywords: stereo matching; deep learning; embedded GPU
|
Katerine Diaz, Jesus Martinez del Rincon, Aura Hernandez-Sabate, Marçal Rusiñol, & Francesc J. Ferri. (2018). Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction. JMIV - Journal of Mathematical Imaging and Vision, 60(4), 512–524.
Abstract: This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), a novel extension of the known Discriminative Common Vectors method with kernels. Our method combines the advantages of kernel methods, which model complex data and solve nonlinear problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing, and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT) and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies have been validated on four different image datasets containing faces, objects, and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. Results show better discriminant properties than other generalized approaches, both linear and kernel-based. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
|
Carlo Gatta, Oriol Pujol, Oriol Rodriguez-Leor, J. M. Ferre, & Petia Radeva. (2009). Fast Rigid Registration of Vascular Structures in IVUS Sequences. IEEE Transactions on Information Technology in Biomedicine, 13(6), 1006–1011.
Abstract: Intravascular ultrasound (IVUS) technology permits visualization of high-resolution images of internal vascular structures. IVUS is a unique image-guiding tool to display longitudinal views of the vessels and to estimate the length and size of vascular structures with the goal of accurate diagnosis. Unfortunately, due to pulsatile contraction and expansion of the heart, the captured images are affected by different motion artifacts that make visual inspection difficult. In this paper, we propose an efficient algorithm that aligns vascular structures and strongly reduces the saw-shaped oscillation, simplifying the inspection of longitudinal cuts; it reduces the motion artifacts caused by the displacement of the catheter in the short-axis plane and by the catheter rotation due to vessel tortuosity. The algorithm prototype aligns frames at 3.16 frames/s and clearly outperforms state-of-the-art methods with similar computational cost. The speed of the algorithm is crucial, since it allows inspecting the corrected sequence during patient intervention. Moreover, we improved an indirect methodology for the evaluation of IVUS rigid registration algorithms.
|
Dena Bazazian, Raul Gomez, Anguelos Nicolaou, Lluis Gomez, Dimosthenis Karatzas, & Andrew Bagdanov. (2019). FAST: Facilitated and accurate scene text proposals through FCN guided pruning. PRL - Pattern Recognition Letters, 119, 112–120.
Abstract: Class-specific text proposal algorithms can efficiently reduce the search space for possible text object locations in an image. In this paper we combine the Text Proposals algorithm with Fully Convolutional Networks to efficiently reduce the number of proposals while maintaining the same recall level and thus gaining a significant speed up. Our experiments demonstrate that such text proposal approaches yield significantly higher recall rates than state-of-the-art text localization techniques, while also producing better-quality localizations. Our results on the ICDAR 2015 Robust Reading Competition (Challenge 4) and the COCO-text datasets show that, when combined with strong word classifiers, this recall margin leads to state-of-the-art results in end-to-end scene text recognition.
|
Simone Balocco, O. Camara, E. Vivas, T. Sola, L. Guimaraens, H. A. van Andel, et al. (2010). Feasibility of Estimating Regional Mechanical Properties of Cerebral Aneurysms In Vivo. MEDPHYS - Medical Physics, 37(4), 1689–1706.
Abstract: PURPOSE:
In this article, the authors studied the feasibility of estimating regional mechanical properties in cerebral aneurysms, integrating information extracted from imaging and physiological data with generic computational models of the arterial wall behavior.
METHODS:
A data assimilation framework was developed to incorporate patient-specific geometries into a given biomechanical model, while wall motion estimates, obtained by applying registration techniques to a pair of simulated MR images, guided the mechanical parameter estimation. A simple incompressible linear and isotropic Hookean model coupled with computational fluid dynamics was employed as a first approximation for computational purposes. Additionally, an automatic clustering technique was developed to reduce the number of parameters to assimilate at the optimization stage, which considerably accelerated the convergence of the simulations. Several in silico experiments were designed to assess the influence of aneurysm geometrical characteristics and of the accuracy of wall motion estimates on the mechanical property estimates. Finally, the proposed methodology was applied to six real cerebral aneurysms and tested against a varying number of regions with different elasticity, different mesh discretizations, imaging resolutions, and registration configurations.
RESULTS:
Several in silico experiments were conducted to investigate the feasibility of the proposed workflow, with results suggesting that the estimation of the mechanical properties was mainly influenced by the image spatial resolution and the chosen registration configuration. According to the in silico experiments, the minimal spatial resolution needed to extract wall pulsation measurements accurate enough to guide the proposed data assimilation framework was 0.1 mm.
CONCLUSIONS:
Current routine imaging modalities do not have such a high spatial resolution, and therefore the proposed data assimilation framework cannot currently be used on in vivo data to reliably estimate regional properties in cerebral aneurysms. In addition, it was observed that the incorporation of fluid-structure interaction in a biomechanical model with linear and isotropic material properties did not have a substantial influence on the final results.
|
Katerine Diaz, Jesus Martinez del Rincon, Marçal Rusiñol, & Aura Hernandez-Sabate. (2019). Feature Extraction by Using Dual-Generalized Discriminative Common Vectors. JMIV - Journal of Mathematical Imaging and Vision, 61(3), 331–351.
Abstract: In this paper, a dual online subspace-based learning method called dual-generalized discriminative common vectors (Dual-GDCV) is presented. The method extends incremental GDCV by exploiting simultaneously the concepts of both incremental and decremental learning for supervised feature extraction and classification. Our methodology is able to update the feature representation space without recalculating the full projection or accessing the previously processed training data. It allows both adding information and removing unnecessary data from a knowledge base in an efficient way, while retaining the previously acquired knowledge. The proposed method has been theoretically proven and empirically validated on six standard face recognition and classification datasets, under two scenarios: (1) removing and adding samples of existing classes, and (2) removing and adding new classes to a classification problem. Results show a considerable computational gain without compromising the accuracy of the model, in comparison with both batch methodologies and other state-of-the-art adaptive methods.
Keywords: Online feature extraction; Generalized discriminative common vectors; Dual learning; Incremental learning; Decremental learning
|
P. Ricaurte, C. Chilan, Cristhian A. Aguilera-Carrasco, Boris X. Vintimilla, & Angel Sappa. (2014). Feature Point Descriptors: Infrared and Visible Spectra. SENS - Sensors, 14(2), 3690–3701.
Abstract: This manuscript evaluates the behavior of classical feature point descriptors when they are used in images from the long-wave infrared spectral band, and compares the results with those obtained in the visible spectrum. Robustness to changes in rotation, scaling, blur, and additive noise is analyzed using a state-of-the-art framework. Experimental results on a cross-spectral outdoor image dataset are presented and conclusions from these experiments are given.
|
Jaume Gibert, Ernest Valveny, & Horst Bunke. (2012). Feature Selection on Node Statistics Based Embedding of Graphs. PRL - Pattern Recognition Letters, 33(15), 1980–1990.
Abstract: Representing a graph with a feature vector is a common way of making statistical machine learning algorithms applicable to the domain of graphs. Such a transition from graphs to vectors is known as graph embedding. A key issue in graph embedding is to select a proper set of features in order to make the vectorial representation of graphs as strong and discriminative as possible. In this article, we propose features that are constructed out of frequencies of node label representatives. We first build a large set of features and then select the most discriminative ones according to different ranking criteria and feature transformation algorithms. On different classification tasks, we experimentally show that only a small but significant subset of these features is needed to achieve the same classification rates as competing state-of-the-art methods.
Keywords: Structural pattern recognition; Graph embedding; Feature ranking; PCA; Graph classification
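The core embedding idea described in this abstract can be illustrated in a few lines. This is a minimal sketch under simplifying assumptions: it counts raw node-label frequencies against a fixed vocabulary, omitting the label-representative clustering and the downstream feature-ranking stage that the paper proposes; the function name is illustrative, not the authors' API.

```python
from collections import Counter

def embed_graph(node_labels, vocabulary):
    """Embed a graph as a vector of node-label frequencies.

    node_labels: labels of the graph's nodes (order irrelevant).
    vocabulary:  fixed, ordered set of label representatives that
                 defines the feature dimensions of every graph.
    """
    counts = Counter(node_labels)
    # One feature per vocabulary entry: how often that label occurs.
    return [counts.get(label, 0) for label in vocabulary]
```

Because every graph is mapped against the same vocabulary, graphs of different sizes all become vectors of equal length, which is what makes standard feature-ranking and classification algorithms applicable afterwards.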
|
Arash Akbarinia, & C. Alejandro Parraga. (2018). Feedback and Surround Modulated Boundary Detection. IJCV - International Journal of Computer Vision, 126(12), 1367–1380.
Abstract: Edges are key components of any visual scene, to the extent that we can recognise objects merely by their silhouettes. The human visual system captures edge information through neurons in the visual cortex that are sensitive to both intensity discontinuities and particular orientations. The “classical approach” assumes that these cells are only responsive to the stimulus present within their receptive fields; however, recent studies demonstrate that surrounding regions and inter-areal feedback connections influence their responses significantly. In this work we propose a biologically-inspired edge detection model in which orientation-selective neurons are represented through the first derivative of a Gaussian function, resembling double-opponent cells in the primary visual cortex (V1). In our model we account for four kinds of receptive field surround, i.e. full, far, iso- and orthogonal-orientation, whose contributions are contrast-dependent. The output signal from V1 is pooled in its perpendicular direction by larger V2 neurons employing a contrast-variant centre-surround kernel. We further introduce a feedback connection from higher-level visual areas to the lower ones. The results of our model on three benchmark datasets show a large improvement over the current non-learning and biologically-inspired state-of-the-art algorithms, while being competitive with the learning-based methods.
Keywords: Boundary detection; Surround modulation; Biologically-inspired vision
|
Mohamed Ali Souibgui, Alicia Fornes, Yousri Kessentini, & Beata Megyesi. (2022). Few shots are all you need: A progressive learning approach for low resource handwritten text recognition. PRL - Pattern Recognition Letters, 160, 43–49.
Abstract: Handwritten text recognition in low resource scenarios, such as manuscripts with rare alphabets, is a challenging problem. In this paper, we propose a few-shot learning-based handwriting recognition approach that significantly reduces the human annotation process by requiring only a few images of each alphabet symbol. The method consists of detecting all the symbols of a given alphabet in a textline image and decoding the obtained similarity scores into the final sequence of transcribed symbols. Our model is first pretrained on synthetic line images generated from an alphabet, which could differ from the alphabet of the target domain. A second training step is then applied to reduce the gap between the source and the target data. Since this retraining would require annotation of thousands of handwritten symbols together with their bounding boxes, we propose to avoid such human effort through an unsupervised progressive learning approach that automatically assigns pseudo-labels to the unlabeled data. The evaluation on different datasets shows that our model can lead to competitive results with a significant reduction in human effort. The code will be publicly available in the following repository: https://github.com/dali92002/HTRbyMatching
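The unsupervised progressive learning loop described above can be sketched abstractly. This is a hedged toy version, not the authors' implementation: a model repeatedly predicts on unlabeled line images, keeps only predictions above a confidence threshold as pseudo-labels, and retrains on the growing pseudo-labeled pool; the `predict`/`fit` interface and the threshold value are illustrative assumptions.

```python
def progressive_pseudo_labels(model, unlabeled, threshold=0.9, rounds=3):
    """Progressive pseudo-labeling sketch.

    model:     object with predict(x) -> (label, confidence)
               and fit(labeled_pairs); names are illustrative.
    unlabeled: pool of unlabeled samples.
    Returns the accumulated (sample, pseudo-label) pairs.
    """
    labeled = []
    for _ in range(rounds):
        # Predict on the remaining pool and keep confident predictions.
        scored = [(x, *model.predict(x)) for x in unlabeled]
        new = [(x, y) for x, y, conf in scored if conf >= threshold]
        labeled += new
        accepted = {x for x, _ in new}
        unlabeled = [x for x in unlabeled if x not in accepted]
        if new:
            # Retrain on everything pseudo-labeled so far.
            model.fit(labeled)
    return labeled
```

As the model improves between rounds, previously uncertain samples can cross the threshold, so the pseudo-labeled pool grows progressively without any human annotation.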
|
Mariella Dimiccoli. (2016). Figure-ground segregation: A fully nonlocal approach. VR - Vision Research, 126, 308–317.
Abstract: We present a computational model that computes and integrates in a nonlocal fashion several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and it can be estimated without explicitly relying on object boundaries. The methodology is grounded on two elements: multi-directional linear voting and nonlinear diffusion. A first estimation of the figural status of each pixel is obtained as a result of a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
Keywords: Figure-ground segregation; Nonlocal approach; Directional linear voting; Nonlinear diffusion
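The second stage of the model above, nonlinear diffusion of figural-status estimates, can be illustrated with a one-dimensional toy. This is a simplified analogue under stated assumptions: beliefs diffuse between neighbouring pixels, but conductivity drops where edge strength is high, so estimates stay coherent within homogeneous regions and do not leak across boundaries. The voting stage that would initialise the beliefs is omitted, and all parameter values are illustrative.

```python
def nonlinear_diffusion(belief, edge_strength, steps=50, dt=0.2):
    """Diffuse figural-status beliefs along a 1D pixel row.

    belief:        initial figural-status estimate per pixel.
    edge_strength: strength of the edge between pixels i and i+1
                   (length len(belief) - 1); high values block diffusion.
    """
    b = list(belief)
    n = len(b)
    for _ in range(steps):
        nb = b[:]
        for i in range(1, n - 1):
            # Conductivity between neighbours: low across strong edges.
            c_left = 1.0 / (1.0 + edge_strength[i - 1])
            c_right = 1.0 / (1.0 + edge_strength[i])
            nb[i] = b[i] + dt * (c_right * (b[i + 1] - b[i])
                                 - c_left * (b[i] - b[i - 1]))
        b = nb
    return b
```

With a strong edge in the middle of the row, beliefs equalise on each side of the edge but remain clearly separated across it, which mirrors the paper's goal of enforcing coherence within perceptually homogeneous regions.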
|
Josep Llados, Horst Bunke, & Enric Marti. (1997). Finding rotational symmetries by cyclic string matching. PRL - Pattern Recognition Letters, 18(14), 1435–1442.
Abstract: Symmetry is an important shape feature. In this paper, a simple and fast method to detect perfect and distorted rotational symmetries of 2D objects is described. The boundary of a shape is polygonally approximated and represented as a string. Rotational symmetries are found by cyclic string matching between two identical copies of the shape string. The set of minimum cost edit sequences that transform the shape string into a cyclically shifted version of itself defines the rotational symmetry and its order. Finally, a modification of the algorithm is proposed to detect reflectional symmetries. Some experimental results are presented to show the reliability of the proposed algorithm.
Keywords: Rotational symmetry; Reflectional symmetry; String matching
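The idea of finding rotational symmetry through cyclic matching of the shape string against itself can be sketched compactly. This is a deliberately simplified version of the technique the abstract describes: it uses exact matching of cyclic shifts rather than the paper's minimum-cost edit sequences, so it detects only perfect (not distorted) symmetries; the function name is illustrative.

```python
def rotational_symmetry_order(s):
    """Order of the perfect rotational symmetry of a polygonal
    boundary encoded as a string of segment labels.

    The smallest cyclic shift k > 0 that maps the string onto
    itself yields symmetry order len(s) // k.
    """
    n = len(s)
    doubled = s + s  # every cyclic shift of s is a substring of s+s
    for k in range(1, n + 1):
        if doubled[k:k + n] == s:
            return n // k
    return 1
```

For example, a rectangle whose boundary alternates long and short segments ("LSLS") matches itself under a shift of 2, giving the expected 2-fold symmetry, while a square of four identical segments matches under a shift of 1, giving order 4. Replacing the exact comparison with a cyclic edit distance would recover the paper's tolerance to distorted symmetries.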
|
Julio C. S. Jacques Junior, Yagmur Gucluturk, Marc Perez, Umut Guçlu, Carlos Andujar, Xavier Baro, et al. (2022). First Impressions: A Survey on Vision-Based Apparent Personality Trait Analysis. TAC - IEEE Transactions on Affective Computing, 13(1), 75–95.
Abstract: Personality analysis has been widely studied in psychology, neuropsychology, and signal processing fields, among others. Over the past few years, it has also become an attractive research area in visual computing. From the computational point of view, speech and text have by far been the most considered cues of information for analyzing personality. However, recently there has been an increasing interest from the computer vision community in analyzing personality from visual data. Recent computer vision approaches are able to accurately analyze human faces, body postures, and behaviors, and use this information to infer apparent personality traits. Because of the overwhelming research interest in this topic, and of the potential impact that this sort of method could have on society, we present in this paper an up-to-date review of existing vision-based approaches for apparent personality trait recognition. We describe seminal and cutting-edge works on the subject, discussing and comparing their distinctive features and limitations. Future avenues of research in the field are identified and discussed. Furthermore, aspects of subjectivity in data labeling/evaluation, as well as current datasets and challenges organized to push the research in the field, are reviewed.
Keywords: Personality computing; first impressions; person perception; big-five; subjective bias; computer vision; machine learning; nonverbal signals; facial expression; gesture; speech analysis; multi-modal recognition
|
Ana Garcia Rodriguez, Yael Tudela, Henry Cordova, S. Carballal, I. Ordas, L. Moreira, et al. (2022). First in Vivo Computer-Aided Diagnosis of Colorectal Polyps using White Light Endoscopy. END - Endoscopy, 54.
|