|
Anjan Dutta, Josep Llados, & Umapada Pal. (2013). A symbol spotting approach in graphical documents by hashing serialized graphs. PR - Pattern Recognition, 46(3), 752–768.
Abstract: In this paper we propose a symbol spotting technique in graphical documents. Graphs are used to represent the documents and a (sub)graph matching technique is used to detect the symbols in them. We propose a graph serialization to reduce the usual computational complexity of graph matching. Serialization of graphs is performed by computing acyclic graph paths between each pair of connected nodes. Graph paths are one-dimensional structures of graphs which are less expensive in terms of computation. At the same time they enable robust localization even in the presence of noise and distortion. Indexing in large graph databases involves a computational burden as well. We propose a graph factorization approach to tackle this problem. Factorization is intended to create a unified indexed structure over the database of graphical documents. Once graph paths are extracted, the entire database of graphical documents is indexed in hash tables by locality sensitive hashing (LSH) of shape descriptors of the paths. The hashing data structure aims to execute an approximate k-NN search in sub-linear time. We have performed detailed experiments with various datasets of line drawings and compared our method with state-of-the-art approaches. The results demonstrate the effectiveness and efficiency of our technique.
Keywords: Symbol spotting; Graphics recognition; Graph matching; Graph serialization; Graph factorization; Graph paths; Hashing
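As a rough illustration of the indexing idea, the following is a minimal sketch of random-hyperplane locality sensitive hashing for approximate k-NN search over fixed-length path descriptors. The descriptor dimension, number of hash bits and single-table layout are assumptions for illustration and do not reproduce the hash functions or descriptors used in the paper.

```python
import numpy as np
from collections import defaultdict

class RandomHyperplaneLSH:
    """Minimal random-hyperplane LSH index for approximate k-NN search.

    Illustrative only: the descriptor dimension and parameters are
    assumptions, not the path descriptors or settings used in the paper.
    """

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))  # random hyperplanes
        self.table = defaultdict(list)                    # bucket -> stored items

    def _key(self, x):
        # Each bit records on which side of a hyperplane the vector falls.
        bits = (self.planes @ x) > 0
        return bits.tobytes()

    def add(self, item_id, descriptor):
        self.table[self._key(descriptor)].append((item_id, descriptor))

    def query(self, descriptor, k=5):
        # Approximate k-NN: rank only the candidates in the matching bucket.
        candidates = self.table[self._key(descriptor)]
        candidates.sort(key=lambda c: np.linalg.norm(c[1] - descriptor))
        return [item_id for item_id, _ in candidates[:k]]
```

Because nearby descriptors tend to fall into the same bucket, a query only ranks the candidates in its own bucket, which is how sub-linear query time is obtained in practice.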
|
|
|
Joan Serrat, Felipe Lumbreras, & Antonio Lopez. (2013). Cost estimation of custom hoses from STL files and CAD drawings. COMPUTIND - Computers in Industry, 64(3), 299–309.
Abstract: We present a method for the cost estimation of custom hoses from CAD models. These can come in two formats, which are easy to generate: an STL file or the image of a CAD drawing showing several orthogonal projections. The challenges in either case are, first, to obtain from them a high-level 3D description of the shape, and second, to learn a regression function for the prediction of the manufacturing time, based on geometric features of the reconstructed shape. The chosen description is the 3D line along the medial axis of the tube and the diameter of the circular sections along it. In order to extract it from STL files, we have adapted RANSAC, a robust parametric fitting algorithm. As for CAD drawing images, we propose a new technique for 3D reconstruction from data entered on any number of orthogonal projections. The regression function is a Gaussian process, which does not constrain the function to adopt any specific form and is governed by just two parameters. We assess the accuracy of the manufacturing time estimation by k-fold cross validation on 171 STL file models for which the time is provided by an expert. The results show the feasibility of the method, whereby the relative error for 80% of the testing samples is below 15%.
Keywords: On-line quotation; STL format; Regression; Gaussian process
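For orientation, here is a minimal sketch of Gaussian process regression evaluated by k-fold cross validation using scikit-learn; the feature set and data below are synthetic placeholders, not the geometric features or the 171 expert-labelled models used in the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import KFold, cross_val_score

# Hypothetical geometric features (e.g. tube length, number of bends,
# mean diameter) and manufacturing times; all values are made up.
rng = np.random.default_rng(0)
X = rng.uniform(size=(171, 3))
y = 10.0 + 5.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * rng.standard_normal(171)

# A GP with an RBF kernel leaves the form of the regression function unconstrained.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)

# k-fold cross validation of the prediction error.
folds = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(gp, X, y, cv=folds, scoring="neg_mean_absolute_error")
print("mean absolute error per fold:", -scores)
```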
|
|
|
Fadi Dornaika, Abdelmalik Moujahid, & Bogdan Raducanu. (2013). Facial expression recognition using tracked facial actions: Classifier performance analysis. EAAI - Engineering Applications of Artificial Intelligence, 26(1), 467–477.
Abstract: In this paper, we address the analysis and recognition of facial expressions in continuous videos. More precisely, we study the performance of classifiers that exploit head-pose-independent temporal facial action parameters. These are provided by an appearance-based 3D face tracker that simultaneously estimates the 3D head pose and facial actions. The use of such a tracker makes the recognition pose- and texture-independent. Two different schemes are studied. The first scheme adopts a dynamic time warping technique for recognizing expressions, where training data are given by temporal signatures associated with different universal facial expressions. The second scheme models the temporal signatures associated with facial actions as fixed-length feature vectors (observations) and uses machine learning algorithms to recognize the displayed expression. Experiments carried out on CMU video sequences and home-made video sequences quantified the performance of the different schemes. The results show that the use of dimension reduction techniques on the extracted time series can improve the classification performance. Moreover, these experiments show that the best recognition rate can be above 90%.
Keywords: Visual face tracking; 3D deformable models; Facial actions; Dynamic facial expression recognition; Human–computer interaction
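The first scheme relies on dynamic time warping; below is a minimal sketch of the standard DTW recurrence for two one-dimensional temporal signatures, assuming an absolute-difference local cost. The paper's facial-action signatures are multi-dimensional and its exact alignment cost is not reproduced here.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain dynamic time warping distance between two 1-D temporal signatures."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```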
|
|
|
David Roche, Debora Gil, & Jesus Giraldo. (2013). Multiple active receptor conformation, agonist efficacy and maximum effect of the system: the conformation-based operational model of agonism. DDT - Drug Discovery Today, 18(7-8), 365–371.
Abstract: The operational model of agonism assumes that the maximum effect a particular receptor system can achieve (the Em parameter) is fixed. Em estimates are above but close to the asymptotic maximum effects of endogenous agonists. The concept of Em is contradicted by superagonists and by those positive allosteric modulators that significantly increase the maximum effect of endogenous agonists. An extension of the operational model is proposed that assumes that the Em parameter does not necessarily have a single value for a receptor system but has multiple values associated with multiple active receptor conformations. The model provides a mechanistic link between active receptor conformation and agonist efficacy, which can be useful for the analysis of agonist response under different receptor scenarios.
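For reference, the classical operational model from which this extension departs can be written as below (Black–Leff form with a single, fixed Em and slope factor n); the conformation-dependent extension discussed in the paper replaces the single Em with multiple values and is not reproduced here.

```latex
% Classical operational model of agonism (single, fixed E_m):
%   E      observed effect          E_m   maximum effect of the system
%   [A]    agonist concentration    K_A   agonist dissociation constant
%   \tau   operational efficacy     n     slope factor
E = \frac{E_m \,\tau^{n} [A]^{n}}{([A] + K_A)^{n} + \tau^{n} [A]^{n}}
```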
|
|
|
Ferran Poveda, Debora Gil, Enric Marti, Albert Andaluz, Manel Ballester, & Francesc Carreras Costa. (2013). Helical structure of the cardiac ventricular anatomy assessed by Diffusion Tensor Magnetic Resonance Imaging multi-resolution tractography. REC - Revista Española de Cardiología, 66(10), 782–790.
Abstract: A deep understanding of the myocardial structure linking the morphology and function of the heart would provide crucial knowledge for medical and surgical clinical procedures and studies. Several conceptual models of myocardial fiber organization have been proposed, but the lack of an automatic and objective methodology has prevented agreement. We sought to deepen this knowledge through advanced computer graphic representations of the myocardial fiber architecture obtained by diffusion tensor magnetic resonance imaging (DT-MRI).
We performed automatic tractography reconstruction of unsegmented DT-MRI canine heart datasets from the public database of Johns Hopkins University. Full-scale tractographies were built with 200 seeds and are composed of streamlines computed on the vector field of primary eigenvectors given by the diffusion tensor volumes. We also introduced a novel multi-scale visualization technique to obtain a simplified tractography. This methodology preserves the main geometric features of the fiber tracts, making it easier to decipher the main properties of the architectural organization of the heart.
Analysis of the output of our tractographic representations showed exact correlation with low-level details of the myocardial architecture, but also with the more abstract conceptualization of a continuous helical ventricular myocardial fiber array.
Objective analysis of myocardial architecture by an automated method, including the entire myocardium and using several 3D levels of complexity, reveals a continuous helical myocardial fiber arrangement of both right and left ventricles, supporting the anatomical model of the helical ventricular myocardial band described by Torrent-Guasp.
Keywords: Heart; Diffusion magnetic resonance imaging; Diffusion tractography; Helical heart; Myocardial ventricular band
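As a rough sketch of how streamlines are propagated through a primary-eigenvector field, the following assumes a voxel grid of unit eigenvectors, nearest-neighbour lookup and fixed-step Euler integration; the actual tractography and multi-scale simplification pipeline of the paper is not reproduced.

```python
import numpy as np

def trace_streamline(eigvec_field, seed, step=0.5, n_steps=200):
    """Trace one fibre streamline through a primary-eigenvector field.

    `eigvec_field` is assumed to be an (X, Y, Z, 3) array of unit vectors
    sampled on a voxel grid; this is only an illustrative propagation scheme.
    """
    point = np.asarray(seed, dtype=float)
    path = [point.copy()]
    prev_dir = None
    for _ in range(n_steps):
        idx = tuple(np.clip(np.round(point).astype(int), 0,
                            np.array(eigvec_field.shape[:3]) - 1))
        direction = eigvec_field[idx]
        # Eigenvectors have no intrinsic sign: keep the direction consistent.
        if prev_dir is not None and np.dot(direction, prev_dir) < 0:
            direction = -direction
        point = point + step * direction
        path.append(point.copy())
        prev_dir = direction
    return np.array(path)
```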
|
|
|
Egils Avots, M. Daneshmand, Andres Traumann, Sergio Escalera, & G. Anbarjafari. (2016). Automatic garment retexturing based on infrared information. CG - Computers & Graphics, 59, 28–38.
Abstract: This paper introduces a new automatic technique for garment retexturing using a single static image along with the depth and infrared information obtained using the Microsoft Kinect II as the RGB-D acquisition device. First, the garment is segmented out from the image using either the Breadth-First Search algorithm or the semi-automatic procedure provided by the GrabCut method. Then texture domain coordinates are computed for each pixel belonging to the garment using normalised 3D information. Afterwards, shading is applied to the new colours from the texture image. As the main contribution of the proposed method, the latter information is obtained based on extracting a linear map transforming the colour present on the infrared image to that of the RGB colour channels. One of the most important impacts of this strategy is that the resulting retexturing algorithm is colour-, pattern- and lighting-invariant. The experimental results show that it can be used to produce realistic representations, which is substantiated through implementing it under various experimentation scenarios, involving varying lighting intensities and directions. Successful results are accomplished also on video sequences, as well as on images of subjects taking different poses. Based on the Mean Opinion Score analysis conducted on many randomly chosen users, it has been shown to produce more realistic-looking results compared to the existing state-of-the-art methods suggested in the literature. From a wide perspective, the proposed method can be used for retexturing all sorts of segmented surfaces, although the focus of this study is on garment retexturing, and the investigation of the configurations is steered accordingly, since the experiments target an application in the context of virtual fitting rooms.
Keywords: Garment retexturing; Texture mapping; Infrared images; RGB-D acquisition devices; Shading
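A minimal sketch of the shading step described above: fitting a linear map (here, a per-channel gain and offset) from infrared intensity to RGB by least squares and applying it to new pixels. The affine form and variable names are illustrative assumptions, not the exact transform estimated in the paper.

```python
import numpy as np

def fit_ir_to_rgb_map(ir, rgb):
    """Fit a linear map from infrared intensity to RGB shading by least squares.

    `ir` is an (N,) vector of infrared values and `rgb` an (N, 3) array of the
    corresponding colours sampled on garment pixels (illustrative assumption).
    """
    A = np.column_stack([ir, np.ones_like(ir)])        # design matrix [ir, 1]
    coeffs, *_ = np.linalg.lstsq(A, rgb, rcond=None)   # (2, 3): gain and offset
    return coeffs

def apply_shading(ir, coeffs):
    """Predict per-pixel shading colours from infrared values."""
    return np.column_stack([ir, np.ones_like(ir)]) @ coeffs
```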
|
|
|
Simone Balocco, Maria Zuluaga, Guillaume Zahnd, Su-Lin Lee, & Stefanie Demirci. (2016). Computing and Visualization for Intravascular Imaging and Computer Assisted Stenting. Elsevier.
|
|
|
Pau Riba, Lutz Goldmann, Oriol Ramos Terrades, Diede Rusticus, Alicia Fornes, & Josep Llados. (2022). Table detection in business document images by message passing networks. PR - Pattern Recognition, 127, 108641.
Abstract: Tabular structures in business documents offer a complementary dimension to the raw textual data. For instance, they convey information about the relationships among pieces of information. Nowadays, digital mailroom applications have become a key service for workflow automation; therefore, the detection and interpretation of tables is crucial. With the recent advances in information extraction, table detection and recognition have gained interest in document image analysis, in particular in the absence of rule lines and of known information about rows and columns. However, business documents usually contain sensitive content, which limits the amount of public benchmarking datasets. In this paper, we propose a graph-based approach for detecting tables in document images which does not require the raw content of the document. Hence, the sensitive content can be removed beforehand and, instead of using the raw image or textual content, we adopt a purely structural approach that keeps sensitive data anonymous. Our framework uses graph neural networks (GNNs) to describe the local repetitive structures that constitute a table. In particular, our main application domain is business documents. We have carefully validated our approach on two invoice datasets and a modern document benchmark. Our experiments demonstrate that tables can be detected by purely structural approaches.
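For illustration, here is a generic mean-aggregation message-passing step over a graph of document regions, using only structural (geometric) node features so that no raw text is needed; this is a sketch of the general GNN mechanism, not the specific architecture of the paper.

```python
import numpy as np

def message_passing_layer(node_feats, adjacency, weight, bias):
    """One mean-aggregation message-passing step over a document graph.

    `node_feats` is an (N, F) matrix of per-region features (e.g. bounding-box
    geometry only), `adjacency` an (N, N) 0/1 matrix, `weight` a (2F, H) matrix
    and `bias` an (H,) vector. Purely illustrative layer dimensions.
    """
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    messages = adjacency @ node_feats / degree          # average neighbour features
    updated = np.concatenate([node_feats, messages], axis=1) @ weight + bias
    return np.maximum(updated, 0.0)                     # ReLU non-linearity
```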
|
|
|
Patricia Suarez, Angel Sappa, & Boris X. Vintimilla. (2021). Deep learning-based vegetation index estimation. In A. Solanki, A. Nayyar, & M. Naved (Eds.), Generative Adversarial Networks for Image-to-Image Translation (pp. 205–234). Elsevier.
|
|
|
Juan Borrego-Carazo, Carles Sanchez, David Castells, Jordi Carrabina, & Debora Gil. (2023). BronchoPose: an analysis of data and model configuration for vision-based bronchoscopy pose estimation. CMPB - Computer Methods and Programs in Biomedicine, 228, 107241.
Abstract: Vision-based bronchoscopy (VB) models require the registration of the virtual lung model with the frames from the video bronchoscopy to provide effective guidance during the biopsy. The registration can be achieved by either tracking the position and orientation of the bronchoscopy camera or by calibrating its deviation from the pose (position and orientation) simulated in the virtual lung model. Recent advances in neural networks and temporal image processing have provided new opportunities for guided bronchoscopy. However, such progress has been hindered by the lack of comparative experimental conditions.
In the present paper, we share a novel synthetic dataset allowing for a fair comparison of methods. Moreover, this paper investigates several neural network architectures for the learning of temporal information at different levels of subject personalization. In order to improve orientation measurement, we also present a standardized comparison framework and a novel metric for camera orientation learning. Results on the dataset show that the proposed metric and architectures, as well as the standardized conditions, provide notable improvements to current state-of-the-art camera pose estimation in video bronchoscopy.
Keywords: Videobronchoscopy guiding; Deep learning; Architecture optimization; Datasets; Standardized evaluation framework; Pose estimation
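As a point of reference for camera-orientation evaluation, the sketch below computes the standard geodesic angular distance between an estimated and a ground-truth rotation matrix; the novel orientation metric proposed in the paper is not reproduced here.

```python
import numpy as np

def geodesic_angle(R_est, R_gt):
    """Geodesic angular distance (radians) between two 3x3 rotation matrices.

    A standard way to score camera-orientation errors, shown for illustration.
    """
    R = R_est @ R_gt.T
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)
```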
|
|
|
Mohamed Ali Souibgui, Alicia Fornes, Yousri Kessentini, & Beata Megyesi. (2022). Few shots are all you need: A progressive learning approach for low resource handwritten text recognition. PRL - Pattern Recognition Letters, 160, 43–49.
Abstract: Handwritten text recognition in low resource scenarios, such as manuscripts with rare alphabets, is a challenging problem. In this paper, we propose a few-shot learning-based handwriting recognition approach that significantly reduces the human annotation process by requiring only a few images of each alphabet symbol. The method consists of detecting all the symbols of a given alphabet in a textline image and decoding the obtained similarity scores into the final sequence of transcribed symbols. Our model is first pretrained on synthetic line images generated from an alphabet, which could differ from the alphabet of the target domain. A second training step is then applied to reduce the gap between the source and the target data. Since this retraining would require annotation of thousands of handwritten symbols together with their bounding boxes, we propose to avoid such human effort through an unsupervised progressive learning approach that automatically assigns pseudo-labels to the unlabeled data. The evaluation on different datasets shows that our model can lead to competitive results with a significant reduction in human effort. The code will be publicly available in the following repository: https://github.com/dali92002/HTRbyMatching
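A minimal sketch of the general confidence-thresholded pseudo-labelling loop behind progressive learning; the `model.predict` and `model.fine_tune` calls and the threshold schedule are hypothetical placeholders, not the API of the released repository.

```python
def progressive_pseudo_labeling(model, unlabeled_lines, threshold=0.9, rounds=3):
    """Progressive self-training with confidence-thresholded pseudo-labels.

    Symbols detected with high confidence on unlabeled text lines are treated
    as labels for the next training round. The `predict`/`fine_tune` interface
    is an illustrative assumption.
    """
    for _ in range(rounds):
        pseudo_labeled = []
        for line in unlabeled_lines:
            detections = model.predict(line)                      # symbol detections with scores
            confident = [d for d in detections if d.score >= threshold]
            if confident:
                pseudo_labeled.append((line, confident))
        model.fine_tune(pseudo_labeled)                           # retrain on pseudo-labels
        threshold *= 0.95                                         # progressively relax the threshold
    return model
```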
|
|
|
Yecong Wan, Yuanshuo Cheng, Mingwen Shao, & Jordi Gonzalez. (2022). Image rain removal and illumination enhancement done in one go. KBS - Knowledge-Based Systems, 252, 109244.
Abstract: Rain removal plays an important role in the restoration of degraded images. Recently, CNN-based methods have achieved remarkable success. However, these approaches neglect that the appearance of real-world rain is often accompanied by low-light conditions, which further degrade the image quality and thereby hinder the restoration task. Therefore, it is indispensable to jointly remove the rain and enhance the illumination for real-world rain image restoration. To this end, we propose a novel spatially-adaptive network, dubbed SANet, which can remove the rain and enhance illumination in one go with the guidance of a degradation mask. Meanwhile, to fully utilize negative samples, a contrastive loss is proposed to preserve more natural textures and consistent illumination. In addition, we present a new synthetic dataset, named DarkRain, to boost the development of rain image restoration algorithms in practical scenarios. DarkRain not only contains different degrees of rain but also considers different lighting conditions, and it more realistically simulates real-world rainfall scenarios. SANet is extensively evaluated on the proposed dataset and attains new state-of-the-art performance against other combined methods. Moreover, after a simple transformation, our SANet surpasses existing state-of-the-art algorithms in both rain removal and low-light image enhancement.
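For illustration, here is a generic contrastive regularization term of the pull-towards-positive / push-from-negatives kind, computed in plain L1 image space with PyTorch; the feature space and weighting actually used by SANet are not reproduced.

```python
import torch
import torch.nn.functional as F

def contrastive_restoration_loss(anchor, positive, negatives):
    """Generic contrastive term for image restoration.

    Pulls the restored image (`anchor`) towards the clean target (`positive`)
    and pushes it away from degraded inputs (`negatives`); all tensors share
    the same shape. Illustrative formulation only.
    """
    pos = F.l1_loss(anchor, positive)
    neg = torch.stack([F.l1_loss(anchor, n) for n in negatives]).mean()
    return pos / (neg + 1e-7)
```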
|
|
|
Aymen Azaza. (2018). Context, Motion and Semantic Information for Computational Saliency (Joost Van de Weijer, & Ali Douik, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: The main objective of this thesis is to highlight the salient object in an image or in a video sequence. We address three important, but in our opinion insufficiently investigated, aspects of saliency detection. Firstly, we extend previous research on saliency that explicitly models the information provided by the context, and we show the importance of explicit context modelling for saliency estimation. Several important works in saliency are based on the usage of object proposals; however, these methods focus on the saliency of the object proposal itself and ignore the context. To introduce context into such saliency approaches, we couple every object proposal with its direct context, which allows us to evaluate the importance of the immediate surroundings (context) for its saliency. We propose several saliency features computed from the context proposals, including features based on omni-directional and horizontal context continuity. Secondly, we investigate the usage of top-down methods (high-level semantic information) for the task of saliency prediction, since most computational methods are bottom-up or only include a few semantic classes; we propose to consider a wider group of object classes that represent important semantic information, which we exploit in our saliency prediction approach. Thirdly, we develop a method to detect video saliency by computing saliency from supervoxels and optical flow, and we apply the context features developed in this thesis to video saliency detection; the method combines shape and motion features with our proposed context features. To summarize, we prove that extending object proposals with their direct context improves the task of saliency detection in both image and video data. We also evaluate the importance of semantic information in saliency estimation. Finally, we propose a new motion feature to detect saliency in video data. The three proposed novelties are evaluated on standard saliency benchmark datasets and are shown to improve with respect to the state of the art.
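As a small sketch of what coupling an object proposal with its direct context can look like, the function below enlarges a proposal box by a fixed factor and clips it to the image; the enlargement factor and box format are illustrative assumptions, not the construction used in the thesis.

```python
def context_box(proposal, image_width, image_height, scale=2.0):
    """Compute a 'direct context' box around an object proposal.

    `proposal` is (x, y, w, h); the context is an enlarged box centred on the
    proposal and clipped to the image. The scale factor is illustrative.
    """
    x, y, w, h = proposal
    cx, cy = x + w / 2.0, y + h / 2.0
    cw, ch = w * scale, h * scale
    nx = max(0.0, cx - cw / 2.0)
    ny = max(0.0, cy - ch / 2.0)
    nw = min(image_width, cx + cw / 2.0) - nx
    nh = min(image_height, cy + ch / 2.0) - ny
    return (nx, ny, nw, nh)
```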
|
|
|
Daniel Ponsa. (2007). Model-Based Visual Localisation of Contours and Vehicles (Antonio Lopez, & Xavier Roca, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
|
|
|
Robert Benavente. (2007). A Parametric Model for Computational Colour Naming (Maria Vanrell, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
|
|