Sounak Dey, Anguelos Nicolaou, Josep Llados, & Umapada Pal. (2019). Evaluation of the Effect of Improper Segmentation on Word Spotting. IJDAR - International Journal on Document Analysis and Recognition, 22, 361–374.
Abstract: Word spotting is an important recognition task in large-scale retrieval from document collections. In most cases, methods are developed and evaluated assuming perfect word segmentation. In this paper, we propose an experimental framework to quantify the effect that word segmentation quality has on the performance achieved by word spotting methods under identical, unbiased conditions. The framework consists of generating systematic distortions of the segmentation and retrieving the original queries from the distorted dataset. We have tested our framework on several established and state-of-the-art methods using the George Washington and Barcelona Marriage datasets. The experiments allow for an estimate of the end-to-end performance of word spotting methods.
|
Arnau Ramisa, David Aldavert, Shrihari Vasudevan, Ricardo Toledo, & Ramon Lopez de Mantaras. (2012). Evaluation of Three Vision Based Object Perception Methods for a Mobile Robot. JIRC - Journal of Intelligent and Robotic Systems, 68(2), 185–208.
Abstract: This paper addresses visual object perception applied to mobile robotics. Being able to perceive household objects in unstructured environments is a key capability for making robots suitable to perform complex tasks in home environments. However, finding a solution for this task is daunting: it requires the ability to handle the variability of image formation from a moving camera under tight time constraints. The paper draws attention to some of the issues with applying three state-of-the-art object recognition and detection methods in a mobile robotics scenario, and proposes methods to deal with windowing/segmentation. This work thus aims at evaluating the state of the art in object perception in an attempt to develop a lightweight solution for mobile robotics use/research in typical indoor settings.
|
Alberto Hidalgo, Ferran Poveda, Enric Marti, Debora Gil, Albert Andaluz, Francesc Carreras, et al. (2012). Evidence of continuous helical structure of the cardiac ventricular anatomy assessed by diffusion tensor imaging magnetic resonance multiresolution tractography. ECR - European Radiology, 3(1), 361–362.
Abstract: A deep understanding of myocardial structure linking the morphology and function of the heart would yield crucial knowledge for medical and surgical clinical procedures and studies. Diffusion tensor MRI provides a discrete measurement of the 3D arrangement of myocardial fibres by observing the local anisotropic diffusion of water molecules in biological tissues. In this work, we present a multi-scale visualisation technique based on DT-MRI streamlining capable of uncovering additional properties of the architectural organisation of the heart. Methods and Materials: We selected the Johns Hopkins University (JHU) Canine Heart Dataset, in which the long-axis cardiac plane is aligned with the scanner's Z-axis. The equipment included a 4-element phased-array coil in a 1.5 T scanner. For DTI acquisition, a 3D-FSE sequence was applied. We used 200 seeds for full-scale tractography, while we applied a MIP mapping technique for simplified tractographic reconstruction. In this case, we reduced each DTI 3D volume's dimensions by two orders of magnitude before streamlining.
Our simplified tractographic reconstruction method keeps the main geometric features of fibres, allowing for an easier identification of their global morphological disposition, including the ventricular basal ring. Moreover, we noticed a clearly visible helical disposition of the myocardial fibres, in line with the helical myocardial band ventricular structure described by Torrent-Guasp. Finally, our simplified visualisation with single tracts identifies the main segments of the helical ventricular architecture.
DT-MRI makes possible the identification of a continuous helical architecture of the myocardial fibres, which validates Torrent-Guasp’s helical myocardial band ventricular anatomical model.
|
Hugo Jair Escalante, Victor Ponce, Sergio Escalera, Xavier Baro, Alicia Morales-Reyes, & Jose Martinez-Carranza. (2017). Evolving weighting schemes for the Bag of Visual Words. Neural Computing and Applications - Neural Computing and Applications, 28(5), 925–939.
Abstract: The Bag of Visual Words (BoVW) is an established representation in computer vision. Taking inspiration from text mining, this representation has proved to be very effective in many domains. However, in most cases, standard term-weighting schemes are adopted (e.g., term frequency or TF-IDF). The question remains open whether alternative weighting schemes could boost the performance of methods based on BoVW. More importantly, it is unknown whether effective weighting schemes can be learned and determined automatically from scratch. This paper sheds some light on both of these unknowns. On the one hand, we report an evaluation of the most common weighting schemes used in text mining but rarely used in computer vision tasks. On the other hand, we propose an evolutionary algorithm capable of automatically learning weighting schemes for computer vision problems. We report empirical results from an extensive study on several computer vision problems. The results show the usefulness of the proposed method.
Keywords: Bag of Visual Words; Bag of features; Genetic programming; Term-weighting schemes; Computer vision
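As a concrete illustration of the standard baseline this paper evaluates alternatives to, a minimal TF-IDF re-weighting of BoVW histograms might look as follows. This is a generic sketch of text-mining-style TF-IDF applied to visual-word counts, not the authors' evolved weighting schemes; the vocabulary size, counts, and smoothing are illustrative.

```python
import numpy as np

def tf_idf_bovw(histograms):
    """Re-weight raw Bag-of-Visual-Words counts with TF-IDF.

    `histograms` is an (n_images, vocab_size) array of visual-word counts.
    TF normalises counts per image; IDF down-weights visual words that
    occur in many images, as in standard text retrieval.
    """
    histograms = np.asarray(histograms, dtype=float)
    n_images = histograms.shape[0]
    # Term frequency: normalise each image's histogram to sum to 1.
    tf = histograms / np.maximum(histograms.sum(axis=1, keepdims=True), 1e-12)
    # Document frequency: in how many images does each visual word appear?
    df = np.count_nonzero(histograms > 0, axis=0)
    idf = np.log((1 + n_images) / (1 + df)) + 1.0  # smoothed IDF
    return tf * idf

# Visual word 0 is rare (one image), word 2 occurs everywhere.
counts = [[3, 0, 2],
          [0, 2, 1],
          [0, 1, 1]]
weighted = tf_idf_bovw(counts)
```

The rare visual word ends up with a larger weight than a ubiquitous one with the same term frequency, which is exactly the behaviour the evolved schemes compete against.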
|
Julio C. S. Jacques Junior, Xavier Baro, & Sergio Escalera. (2018). Exploiting feature representations through similarity learning, post-ranking and ranking aggregation for person re-identification. IMAVIS - Image and Vision Computing, 79, 76–85.
Abstract: Person re-identification has received special attention from the human analysis community in the last few years. To address the challenges in this field, many researchers have proposed different strategies, which basically exploit either cross-view invariant features or cross-view robust metrics. In this work, we propose to exploit a post-ranking approach and combine different feature representations through ranking aggregation. Spatial information, which potentially benefits the person matching, is represented using a 2D body model, from which color and texture information are extracted and combined. We also consider background/foreground information, automatically extracted via a Deep Decompositional Network, and the use of Convolutional Neural Network (CNN) features. To describe the matching between images, we use the polynomial feature map, taking into account both local and global information. The Discriminant Context Information Analysis-based post-ranking approach is used to improve the initial ranking lists. Finally, the Stuart ranking aggregation method is employed to combine complementary ranking lists obtained from different feature representations. Experimental results demonstrate that we improve the state-of-the-art on the VIPeR and PRID450s datasets, achieving 67.21% and 75.64% top-1 rank recognition rates, respectively, as well as obtaining competitive results on the CUHK01 dataset.
|
Ivan Huerta, Ariel Amato, Xavier Roca, & Jordi Gonzalez. (2013). Exploiting Multiple Cues in Motion Segmentation Based on Background Subtraction. NEUCOM - Neurocomputing, 100, 183–196.
Abstract: This paper presents a novel algorithm for mobile-object segmentation from static background scenes, which is both robust and accurate under most of the common problems found in motion segmentation. In our first contribution, a case analysis of motion segmentation errors is presented, taking into account the inaccuracies associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which copes with the main issues observed in the case analysis by fusing the knowledge from the aforementioned three cues with a temporal difference algorithm. On the one hand, we enhance the colour and edge models to solve not only global and local illumination changes (i.e. shadows and highlights) but also camouflage in intensity. In addition, local information is also exploited to solve camouflage in chroma. On the other hand, the intensity cue is applied when the colour and edge cues are not available because their values are beyond the dynamic range. Additionally, a temporal difference scheme is included to segment motion where those three cues cannot be reliably computed, for example in background regions not visible during the training period. Lastly, our approach is extended to handle ghost detection. The proposed method obtains very accurate and robust motion segmentation results in multiple indoor and outdoor scenarios, while outperforming the most frequently cited state-of-the-art approaches.
Keywords: Motion segmentation; Shadow suppression; Colour segmentation; Edge segmentation; Ghost detection; Background subtraction
|
Xialei Liu, Joost Van de Weijer, & Andrew Bagdanov. (2019). Exploiting Unlabeled Data in CNNs by Self-Supervised Learning to Rank. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8), 1862–1878.
Abstract: For many applications the collection of labeled data is expensive and laborious. Exploitation of unlabeled data during training is thus a long-pursued objective of machine learning. Self-supervised learning addresses this by positing an auxiliary task (different from, but related to, the supervised task) for which data is abundantly available. In this paper, we show how ranking can be used as a proxy task for some regression problems. As another contribution, we propose an efficient backpropagation technique for Siamese networks which prevents the redundant computation introduced by the multi-branch network architecture. We apply our framework to two regression problems: Image Quality Assessment (IQA) and Crowd Counting. For both we show how to automatically generate ranked image sets from unlabeled data. Our results show that networks trained to regress to the ground truth targets for labeled data and to simultaneously learn to rank unlabeled data obtain significantly better, state-of-the-art results for both IQA and crowd counting. In addition, we show that measuring network uncertainty on the self-supervised proxy task is a good measure of the informativeness of unlabeled data. This can be used to drive an algorithm for active learning, and we show that this reduces labeling effort by up to 50 percent.
Keywords: Task analysis; Training; Image quality; Visualization; Uncertainty; Labeling; Neural networks; Learning from rankings; Image quality assessment; Crowd counting; Active learning
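The ranking proxy task can be illustrated with the standard pairwise margin ranking loss. This is a generic sketch of that loss, not the authors' efficient Siamese backpropagation scheme, and the scores below are made up.

```python
import numpy as np

def margin_ranking_loss(scores_hi, scores_lo, margin=1.0):
    """Pairwise margin ranking loss for a Siamese-style proxy task.

    `scores_hi[i]` should be ranked above `scores_lo[i]` (e.g. the less
    distorted image in an IQA pair, or the larger crop in crowd counting).
    A pair contributes zero loss once it is separated by at least `margin`.
    """
    scores_hi = np.asarray(scores_hi, dtype=float)
    scores_lo = np.asarray(scores_lo, dtype=float)
    return np.maximum(0.0, margin - (scores_hi - scores_lo)).mean()

# First pair is correctly ordered and well separated (no loss);
# second pair is mis-ordered and gets penalised.
loss = margin_ranking_loss([2.0, 0.1], [0.5, 0.9])
```

Minimising this loss on automatically generated rankings is what lets unlabeled data shape the network alongside the supervised regression target.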
|
Gloria Fernandez Esparrach, Jorge Bernal, Maria Lopez Ceron, Henry Cordova, Cristina Sanchez Montes, Cristina Rodriguez de Miguel, et al. (2016). Exploring the clinical potential of an automatic colonic polyp detection method based on the creation of energy maps. END - Endoscopy, 48(9), 837–842.
Abstract: Background and aims: The polyp miss rate is a drawback of colonoscopy that increases significantly for small polyps. We explored the efficacy of an automatic computer vision method for polyp detection.
Methods: Our method relies on a model that defines polyp boundaries as valleys of image intensity. Valley information is integrated into energy maps which represent the likelihood of polyp presence.
Results: In 24 videos containing polyps from routine colonoscopies, all polyps were detected in at least one frame. Mean values of the energy map maxima were higher in frames with polyps than in frames without (p < 0.001). Performance improved in high-quality frames (AUC = 0.79, 95% CI: 0.70–0.87 vs. 0.75, 95% CI: 0.66–0.83). Using 3.75 as the threshold on the energy map maximum, sensitivity and specificity for polyp detection were 70.4% (95% CI: 60.3–80.8) and 72.4% (95% CI: 61.6–84.6), respectively.
Conclusion: Energy maps showed good performance for colonic polyp detection, indicating potential applicability in clinical practice.
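The reported sensitivity and specificity follow from thresholding a per-frame score. A minimal sketch of that computation, with hypothetical per-frame energy-map maxima; only the 3.75 cut-off echoes the abstract.

```python
def sensitivity_specificity(scores, labels, threshold):
    """Sensitivity/specificity of a frame-level detector that flags a
    polyp when the energy-map maximum exceeds `threshold`.

    `scores` are per-frame energy-map maxima; `labels` are 1 for frames
    containing a polyp, 0 otherwise. The scores here are illustrative,
    not data from the study.
    """
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s > threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s <= threshold)
    tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s <= threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s > threshold)
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity(
    scores=[4.2, 3.1, 5.0, 2.8, 3.9],
    labels=[1, 1, 1, 0, 0],
    threshold=3.75)
```

Sweeping the threshold over the score range traces out the ROC curve summarised by the AUC values above.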
|
Debora Gil, & Petia Radeva. (2005). Extending anisotropic operators to recover smooth shapes. Computer Vision and Image Understanding, 99(1), 110–125.
Abstract: Anisotropic differential operators are widely used in image enhancement processes. Recently, their property of smoothly extending functions to the whole image domain has begun to be exploited. Strong ellipticity of differential operators is a requirement that ensures the existence of a unique solution. This condition is too restrictive for operators designed to extend image level sets: their own functionality implies that they should restrict to some vector field. The diffusion tensor that defines the diffusion operator links anisotropic processes with Riemannian manifolds. In this context, degeneracy implies restricting diffusion to the varieties generated by the vector fields of positive eigenvalues, provided that an integrability condition is satisfied. We use the fact that any smooth vector field fulfills this integrability requirement to design line connection algorithms for contour completion. As an application, we present a segmentation strategy that ensures convergent snakes whatever the geometry of the object to be modelled.
Keywords: Contour completion; Functional extension; Differential operators; Riemannian manifolds; Snake segmentation
|
Fernando Vilariño, Stephan Ameling, Gerard Lacey, Stephen Patchett, & Hugh Mulcahy. (2009). Eye Tracking Search Patterns in Expert and Trainee Colonoscopists: A Novel Method of Assessing Endoscopic Competency? GI - Gastrointestinal Endoscopy, 69(5), 370.
|
Fadi Dornaika, Abdelmalik Moujahid, & Bogdan Raducanu. (2013). Facial expression recognition using tracked facial actions: Classifier performance analysis. EAAI - Engineering Applications of Artificial Intelligence, 26(1), 467–477.
Abstract: In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models temporal signatures associated with facial actions as fixed-length feature vectors (observations), and uses machine learning algorithms to recognize the displayed expression. Experiments quantified the performance of the different schemes. These were carried out on CMU video sequences and home-made video sequences. The results show that the use of dimension reduction techniques on the extracted time series can improve classification performance. Moreover, these experiments show that the best recognition rate can be above 90%.
Keywords: Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction
|
Josep M. Gonfaus, Marco Pedersoli, Jordi Gonzalez, Andrea Vedaldi, & Xavier Roca. (2015). Factorized appearances for object detection. CVIU - Computer Vision and Image Understanding, 138, 92–101.
Abstract: Deformable object models capture variations in an object’s appearance that can be represented as image deformations. Other effects such as out-of-plane rotations, three-dimensional articulations, and self-occlusions are often captured by considering mixture of deformable models, one per object aspect. A more scalable approach is representing instead the variations at the level of the object parts, applying the concept of a mixture locally. Combining a few part variations can in fact cheaply generate a large number of global appearances.
A limited version of this idea was proposed by Yang and Ramanan [1] for human pose detection. In this paper we apply it to the task of generic object category detection and extend it in several ways. First, we propose a model for the relationship between part appearances that is more general than the tree of Yang and Ramanan [1] and more suitable for generic categories. Second, we treat part locations as well as their appearance as latent variables, so that training does not need part annotations but only the object bounding boxes. Third, we modify the weakly supervised learning of Felzenszwalb et al. and Girshick et al. [2], [3] to handle a significantly more complex latent structure.
Our model is evaluated on standard object detection benchmarks and is found to improve over existing approaches, yielding state-of-the-art results for several object categories.
Keywords: Object recognition; Deformable part models; Learning and sharing parts; Discovering discriminative parts
|
Cristhian A. Aguilera-Carrasco, Cristhian Aguilera, Cristobal A. Navarro, & Angel Sappa. (2020). Fast CNN Stereo Depth Estimation through Embedded GPU Devices. SENS - Sensors, 20(11), 3249.
Abstract: Current CNN-based stereo depth estimation models can barely run under real-time constraints on embedded graphics processing unit (GPU) devices. Moreover, state-of-the-art evaluations usually do not consider model optimization techniques, so the actual potential of embedded GPU devices remains unknown. In this work, we evaluate two state-of-the-art models on three different embedded GPU devices, with and without optimization methods, presenting performance results that illustrate the actual capabilities of embedded GPU devices for stereo depth estimation. More importantly, based on our evaluation, we propose the use of a U-Net-like architecture for postprocessing the cost volume, instead of a typical sequence of 3D convolutions, drastically increasing the runtime speed of current models. In our experiments, we achieve real-time inference speeds, in the range of 5–32 ms, for 1216 × 368 input stereo images on the Jetson TX2, Jetson Xavier, and Jetson Nano embedded devices.
Keywords: stereo matching; deep learning; embedded GPU
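For context, the cost volume that the proposed network post-processes generalizes the classical per-disparity matching cost. A minimal hand-crafted (sum of absolute differences) version is sketched below; the evaluated CNN models replace these raw intensity differences with learned features, and the images here are tiny synthetic examples.

```python
import numpy as np

def sad_cost_volume(left, right, max_disp):
    """Build a per-pixel matching cost volume for rectified stereo.

    cost[d, y, x] is the absolute intensity difference between
    left[y, x] and right[y, x - d]. Pixels with no valid match at a
    given disparity keep an infinite cost.
    """
    h, w = left.shape
    cost = np.full((max_disp, h, w), np.inf)
    for d in range(max_disp):
        cost[d, :, d:] = np.abs(left[:, d:] - right[:, :w - d])
    return cost

# One-row toy pair: the scene sits at disparity 1, so right[x] = left[x + 1].
left = np.array([[0.0, 1.0, 2.0, 3.0]])
right = np.array([[1.0, 2.0, 3.0, 0.0]])
cost = sad_cost_volume(left, right, max_disp=2)
disp = cost.argmin(axis=0)  # winner-take-all disparity map
```

In CNN stereo pipelines, this volume (over feature channels rather than raw intensities) is what a sequence of 3D convolutions, or the U-Net-like alternative proposed above, refines before the final disparity regression.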
|
Katerine Diaz, Jesus Martinez del Rincon, Aura Hernandez-Sabate, Marçal Rusiñol, & Francesc J. Ferri. (2018). Fast Kernel Generalized Discriminative Common Vectors for Feature Extraction. JMIV - Journal of Mathematical Imaging and Vision, 60(4), 512–524.
Abstract: This paper presents a supervised subspace learning method called Kernel Generalized Discriminative Common Vectors (KGDCV), as a novel extension of the known Discriminative Common Vectors method with Kernels. Our method combines the advantages of kernel methods to model complex data and solve nonlinear
problems with moderate computational complexity, with the better generalization properties of generalized approaches for high-dimensional data. This attractive combination makes KGDCV especially suited for feature extraction and classification in computer vision, image processing and pattern recognition applications. Two different approaches to this generalization are proposed: a first one based on the kernel trick (KT) and a second one based on the nonlinear projection trick (NPT) for even higher efficiency. Both methodologies
have been validated on four different image datasets containing faces, objects and handwritten digits, and compared against well-known nonlinear state-of-the-art methods. The results show better discriminant properties than other generalized approaches, whether linear or kernel-based. In addition, the KGDCV-NPT approach presents a considerable computational gain without compromising the accuracy of the model.
|
Carlo Gatta, Oriol Pujol, Oriol Rodriguez-Leor, J. M. Ferre, & Petia Radeva. (2009). Fast Rigid Registration of Vascular Structures in IVUS Sequences. IEEE Transactions on Information Technology in Biomedicine, 13(6), 1006–1011.
Abstract: Intravascular ultrasound (IVUS) technology permits visualization of high-resolution images of internal vascular structures. IVUS is a unique image-guiding tool to display longitudinal views of the vessels and estimate the length and size of vascular structures with the goal of accurate diagnosis. Unfortunately, due to the pulsatile contraction and expansion of the heart, the captured images are affected by different motion artifacts that make visual inspection difficult. In this paper, we propose an efficient algorithm that aligns vascular structures and strongly reduces the saw-shaped oscillation, simplifying the inspection of longitudinal cuts; it reduces the motion artifacts caused by the displacement of the catheter in the short-axis plane and by catheter rotation due to vessel tortuosity. The algorithm prototype aligns 3.16 frames/s and clearly outperforms state-of-the-art methods with similar computational cost. The speed of the algorithm is crucial since it allows the corrected sequence to be inspected during patient intervention. Moreover, we improved an indirect methodology for evaluating IVUS rigid registration algorithms.
|