Pau Riba, Josep Llados, & Alicia Fornes. (2020). Hierarchical graphs for coarse-to-fine error tolerant matching. PRL - Pattern Recognition Letters, 134, 116–124.
Abstract: In recent years, graph-based representations have seen growing use in visual recognition and retrieval due to their ability to capture both structural and appearance-based information. Thus, they provide greater representational power than classical statistical frameworks. However, graph-based representations lead to high computational complexity, usually dealt with by graph embeddings or approximate matching techniques. Moreover, despite their representational power, they are very sensitive to noise and small variations of the input image. To cope with both the time complexity and the variability present in the generated graphs, in this paper we propose a novel hierarchical graph representation. Graph clustering techniques adapted from social media analysis are used to contract a graph at different abstraction levels while keeping information about its topology. Abstract node attributes summarise information about the contracted graph partition. For the proposed representation, a coarse-to-fine matching technique is defined: small graphs are used as a filter before more accurate matching methods are applied. This approach has been validated in real scenarios such as classification of colour images and retrieval of handwritten words (i.e. word spotting).
Keywords: Hierarchical graph representation; Coarse-to-fine graph matching; Graph-based retrieval
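The abstract's core operation, contracting a graph into a smaller "abstract" graph given a node partition (as produced by a clustering method), can be sketched in a few lines. This is only an illustrative sketch, not the paper's algorithm: the `contract_graph` helper and its choice of partition-size attributes are assumptions for illustration.

```python
from collections import defaultdict

def contract_graph(edges, partition):
    """Contract a graph given a node-to-community mapping.

    edges: iterable of (u, v) pairs; partition: dict node -> community id.
    Returns (super_edges, sizes): edges between communities, weighted by
    how many original edges they summarise, plus the size of each
    contracted partition (a minimal stand-in for the abstract-node
    attributes described in the abstract).
    """
    super_edges = defaultdict(int)
    sizes = defaultdict(int)
    for u, v in edges:
        cu, cv = partition[u], partition[v]
        if cu != cv:
            key = (min(cu, cv), max(cu, cv))
            super_edges[key] += 1
    for node, c in partition.items():
        sizes[c] += 1
    return dict(super_edges), dict(sizes)

# Toy graph: two dense triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
partition = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
super_edges, sizes = contract_graph(edges, partition)
# super_edges == {("A", "B"): 1}, sizes == {"A": 3, "B": 3}
```

The coarse-to-fine idea in the abstract then amounts to matching the small contracted graphs first and descending into the full graphs only for promising candidates.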
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2008). Loss-Weighted Decoding for Error-Correcting Output Coding. In 3rd International Conference on Computer Vision Theory and Applications (Vol. 2, pp. 117–122).
|
Michal Drozdzal, Laura Igual, Petia Radeva, Jordi Vitria, Carolina Malagelada, & Fernando Azpiroz. (2010). Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy. In IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis (117–124).
Abstract: Intestinal motility analysis is an important examination in the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns, taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences will help to find regions of similar intestinal activity and thus provide valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account motion and structure of the intestine. To describe motility in the sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) Histograms of SIFT Flow Directions to describe the flow course, (2) SIFT Descriptors to represent image intestine structure and (3) SIFT Flow Magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and the proposed method allows the analysis of WCE sequences.
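The first two motility descriptors above (direction histograms and flow magnitude) are straightforward to compute once a dense flow field is available. The sketch below is an illustrative NumPy version under that assumption; `flow_direction_histogram` and its binning are not taken from the paper.

```python
import numpy as np

def flow_direction_histogram(flow, bins=8):
    """Quantise dense flow vectors into a direction histogram.

    flow: array of shape (H, W, 2) holding (dx, dy) per pixel.
    Returns an L1-normalised, magnitude-weighted histogram over `bins`
    angular sectors, plus the mean flow magnitude as a simple
    deformation measure.
    """
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    angles = np.arctan2(dy, dx)          # in (-pi, pi]
    magnitude = np.hypot(dx, dy)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitude)
    total = hist.sum()
    if total > 0:
        hist = hist / total
    return hist, magnitude.mean()

# A uniform rightward flow concentrates all histogram mass in one sector.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
hist, mean_mag = flow_direction_histogram(flow)
```

Such per-sequence histograms can then be compared with any standard histogram distance when aligning or ranking sequences.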
|
Nuria Cirera, Alicia Fornes, Volkmar Frinken, & Josep Llados. (2013). Hybrid grammar language model for handwritten historical documents recognition. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 117–124). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we present a hybrid language model for the recognition of handwritten historical documents with a structured syntactical layout. Using a hidden Markov model-based recognition framework, a word-based grammar with a closed dictionary is enhanced by a character sequence recognition method. This allows recognizing out-of-dictionary words in controlled parts of the recognition, while keeping a closed-vocabulary restriction for other parts. Although this is still work in progress, we can report an improvement in terms of character error rate.
|
Mark Philip Philipsen, Jacob Velling Dueholm, Anders Jorgensen, Sergio Escalera, & Thomas B. Moeslund. (2018). Organ Segmentation in Poultry Viscera Using RGB-D. SENS - Sensors, 18(1), 117.
Abstract: We present a pattern recognition framework for semantic segmentation of visual structures, that is, multi-class labelling at pixel level, and apply it to the task of segmenting organs in the eviscerated viscera from slaughtered poultry in RGB-D images. This is a step towards replacing the current strenuous manual inspection at poultry processing plants. Features are extracted from feature maps such as activation maps from a convolutional neural network (CNN). A random forest classifier assigns class probabilities, which are further refined by utilizing context in a conditional random field. The presented method is compatible with both 2D and 3D features, which allows us to explore the value of adding 3D and CNN-derived features. The dataset consists of 604 RGB-D images showing 151 unique sets of eviscerated viscera from four different perspectives. A mean Jaccard index of 78.11% is achieved across the four classes of organs by using features derived from 2D, 3D and a CNN, compared to 74.28% using only basic 2D image features.
Keywords: semantic segmentation; RGB-D; random forest; conditional random field; 2D; 3D; CNN
|
Joan Mas, Gemma Sanchez, & Josep Llados. (2010). SSP: Sketching slide Presentations, a Syntactic Approach. In Graphics Recognition. Achievements, Challenges, and Evolution. 8th International Workshop, GREC 2009. Selected Papers (Vol. 6020, pp. 118–129). LNCS. Springer Berlin Heidelberg.
Abstract: The design of a slide presentation is a creative process. In this process, humans first visualize in their minds what they want to explain. Then, they have to be able to represent this knowledge in an understandable way. A lot of commercial software exists that allows users to create their own slide presentations, but the creativity of the user is rather limited. In this article we present an application that allows the user to create and visualize a slide presentation from a sketch. A slide may be seen as a graphical document or a diagram where its elements are placed in a particular spatial arrangement. To describe and recognize slides, a syntactic approach is proposed. This approach is based on an Adjacency Grammar and a parsing methodology to cope with this kind of grammar. The experimental evaluation shows the performance of our methodology from a qualitative and a quantitative point of view. Six different slides containing different numbers of symbols, from 4 to 7, have been given to the users, and they have drawn them without restrictions on the order of the elements. The quantitative results give an idea of how suitable our methodology is to describe and recognize the different elements in a slide.
|
Pichao Wang, Wanqing Li, Philip Ogunbona, Jun Wan, & Sergio Escalera. (2018). RGB-D-based Human Motion Recognition with Deep Learning: A Survey. CVIU - Computer Vision and Image Understanding, 171, 118–139.
Abstract: Human motion recognition is one of the most important branches of human-centered research activities. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with the development of artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success for image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. In particular, we highlight methods of encoding the spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research.
Keywords: Human motion recognition; RGB-D data; Deep learning; Survey
|
Thanh Nam Le, Muhammad Muzzamil Luqman, Anjan Dutta, Pierre Heroux, Christophe Rigaud, Clement Guerin, et al. (2018). Subgraph spotting in graph representations of comic book images. PRL - Pattern Recognition Letters, 112, 118–124.
Abstract: Graph-based representations are among the most powerful data structures for extracting, representing and preserving the structural information of underlying data. Subgraph spotting is an interesting research problem, especially for studying and investigating structure-based content-based image retrieval (CBIR) and query by example (QBE) in image databases. In this paper we address the lack of freely available ground-truthed datasets for subgraph spotting and present a new dataset for subgraph spotting in graph representations of comic book images (SSGCI), with its ground-truth and evaluation protocol. Experimental results of two state-of-the-art subgraph spotting methods are presented on the new SSGCI dataset.
Keywords: Attributed graph; Region adjacency graph; Graph matching; Graph isomorphism; Subgraph isomorphism; Subgraph spotting; Graph indexing; Graph retrieval; Query by example; Dataset and comic book images
|
Miguel Angel Bautista, Xavier Baro, Oriol Pujol, Petia Radeva, Jordi Vitria, & Sergio Escalera. (2010). Compact Evolutive Design of Error-Correcting Output Codes. In Supervised and Unsupervised Ensemble Methods and their Applications in the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (pp. 119–128).
Abstract: The classification of large numbers of object categories is a challenging trend in the Machine Learning field. In the literature, this is often addressed using an ensemble of classifiers. In this scope, the Error-Correcting Output Codes framework has demonstrated to be a powerful tool for the combination of classifiers. However, most state-of-the-art ECOC approaches use a linear or exponential number of classifiers, making the discrimination of a large number of classes unfeasible. In this paper, we explore and propose a minimal design of ECOC in terms of the number of classifiers. Evolutionary computation is used for tuning the parameters of the classifiers and looking for the best Minimal ECOC code configuration. The results over several public UCI data sets and a challenging multi-class Computer Vision problem show that the proposed methodology obtains comparable and even better results than state-of-the-art ECOC methodologies with far fewer dichotomizers.
Keywords: Ensemble of Dichotomizers; Error-Correcting Output Codes; Evolutionary optimization
|
Neus Salvatella, E Fernandez-Nofrerias, Francesco Ciompi, Oriol Rodriguez-Leor, H. Tizon, Xavier Carrillo, et al. (2010). Radial Artery Volume Changes After Administration Of Two Different Intra-arterial Drug Regimens. Assessment by Intravascular Ultrasound. JACC - Journal of the American College of Cardiology, 56(13s1), B119.
|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). On the Decoding Process in Ternary Error-Correcting Output Codes. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1), 120–134.
Abstract: A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-correcting output codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI machine learning repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
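One simple way to see the zero-symbol issue discussed in the abstract is a decoder that masks out "do not care" positions, so each class is judged only on the dichotomizers that involve it. This sketch is illustrative only, not one of the paper's proposed strategies; `attenuated_hamming_decode` and the toy code matrix are assumptions.

```python
import numpy as np

def attenuated_hamming_decode(code_matrix, predictions):
    """Decode a ternary ECOC prediction, ignoring 'do not care' entries.

    code_matrix: (n_classes, n_classifiers) array over {-1, 0, +1};
    predictions: (n_classifiers,) vector of binary outputs in {-1, +1}.
    Zero entries are masked out of the distance, and each row is
    normalised by its number of non-zero positions so classes with
    sparse codewords are not unfairly favoured.
    """
    mask = code_matrix != 0
    # Hamming-style disagreement, counted only where the code is non-zero.
    disagree = (code_matrix != predictions) & mask
    distances = disagree.sum(axis=1) / np.maximum(mask.sum(axis=1), 1)
    return int(np.argmin(distances))

# Three classes, three one-vs-one dichotomizers (0 = class not involved).
M = np.array([[+1, +1,  0],
              [-1,  0, +1],
              [ 0, -1, -1]])
pred = np.array([-1, +1, +1])  # classifier outputs for one test sample
label = attenuated_hamming_decode(M, pred)
# class 1 agrees on both of its non-zero positions -> label == 1
```

A plain Hamming decoder that treated 0 as a third literal symbol would penalise every class at its masked positions, which is exactly the kind of bias the paper analyses.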
|
Mohammad Rouhani, & Angel Sappa. (2009). A Novel Approach to Geometric Fitting of Implicit Quadrics. In 8th International Conference on Advanced Concepts for Intelligent Vision Systems (Vol. 5807, pp. 121–132). LNCS. Springer Berlin Heidelberg.
Abstract: This paper presents a novel approach for estimating the geometric distance from a given point to the corresponding implicit quadric curve/surface. The proposed estimation is based on the height of a tetrahedron, which is used as a coarse but reliable estimation of the real distance. The estimated distance is then used for finding the best set of quadric parameters by means of the Levenberg-Marquardt algorithm, a common framework in other geometric fitting approaches. Comparisons of the proposed approach with previous ones are provided to show improvements both in CPU time and in the accuracy of the obtained results.
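For context, the classical baseline this paper improves upon fits an implicit quadric by minimising the *algebraic* residual, which can be done in closed form. The sketch below shows that baseline for 2D conics; it is not the paper's method, whose contribution is to replace the algebraic residual with a tetrahedron-height estimate of the geometric distance inside a Levenberg-Marquardt loop.

```python
import numpy as np

def fit_conic_algebraic(points):
    """Fit an implicit conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2D points by linear least squares on the algebraic residual.

    Returns the coefficient vector p = (a, b, c, d, e, f) with ||p|| = 1.
    """
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    # The smallest right singular vector minimises ||D p|| subject to ||p|| = 1.
    _, _, vt = np.linalg.svd(D)
    return vt[-1]

# Points sampled on the unit circle x^2 + y^2 - 1 = 0.
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
p = fit_conic_algebraic(pts)
p = p / p[0]  # normalise so the x^2 coefficient is 1
# p ≈ [1, 0, 1, 0, 0, -1], i.e. the unit circle is recovered
```

The weakness of this baseline, which motivates geometric-distance approaches like the one in the paper, is that the algebraic residual weights points unevenly depending on the local gradient of the implicit function.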
|
Marc Oliu, Ciprian Corneanu, Laszlo A. Jeni, Jeffrey F. Cohn, Takeo Kanade, & Sergio Escalera. (2016). Continuous Supervised Descent Method for Facial Landmark Localisation. In 13th Asian Conference on Computer Vision (Vol. 10112, pp. 121–135). LNCS.
Abstract: Recent methods for facial landmark localisation perform well on close-to-frontal faces but have problems generalising to large head rotations. In order to address this issue we propose a second order linear regression method that is both compact and robust against strong rotations. We provide a closed form solution, making the method fast to train. We test the method’s performance on two challenging datasets. The first has been intensely used by the community. The second has been specially generated from a well known 3D face dataset. It is considerably more challenging, including a high diversity of rotations and more samples than any other existing public dataset. The proposed method is compared against state-of-the-art approaches, including RCPR, CGPRT, LBF, CFSS, and GSDM. Results on both datasets show that the proposed method offers state-of-the-art performance on near frontal view data, improves on state-of-the-art methods for more challenging head rotation problems, and keeps a compact model size.
|
Jaume Garcia, Debora Gil, Francesc Carreras, Sandra Pujades, R. Leta, Xavier Alomar, et al. (2008). Un Model 3D del Ventricle Esquerre Integrant Anatomia i Funcionalitat. In XX Congrés de la Societat Catalana de Cardiologia, Actes del Congres (122). Barcelona.
Abstract: Changes in the dynamics of the Left Ventricle (LV) reflect most cardiovascular diseases. Advances in medical imaging have driven research into models and simulations of 3D LV dynamics. Most existing models only consider the external anatomy of the LV and do not allow an assessment of electromechanical coupling. Since the mechanics of a muscle depend on the orientation of its fibres, a realistic model should include the spatial arrangement of the helical ventricular band (HVB).
We propose to develop a patient-specific LV model that integrates, for the first time, the anatomy of the ventricular band, the external anatomy of the LV and its functionality, for a better determination of the electromechanical activation pattern.
|