|
Jose Antonio Rodriguez, & Florent Perronnin. (2008). Local Gradient Histogram Features for Word Spotting in Unconstrained Handwritten Documents. In International Conference on Frontiers in Handwriting Recognition (pp. 7–12).
|
|
|
Ariel Amato, Mikhail Mozerov, Xavier Roca, & Jordi Gonzalez. (2010). Robust Real-Time Background Subtraction Based on Local Neighborhood Patterns. EURASIPJ - EURASIP Journal on Advances in Signal Processing, Article ID 901205, 7.
Abstract: This paper describes an efficient background subtraction technique for detecting moving objects. The proposed approach is able to overcome difficulties like illumination changes and moving shadows. Our method introduces two discriminative features based on angular and modular patterns, which are formed by similarity measurement between two sets of RGB color vectors: one belonging to the background image and the other to the current image. We show how these patterns are used to improve foreground detection in the presence of moving shadows and in cases where there are strong similarities in color between background and foreground pixels. Experimental results over a collection of public and our own datasets of real image sequences demonstrate that the proposed technique achieves superior performance compared with state-of-the-art methods. Furthermore, both the low computational and space complexities make the presented algorithm feasible for real-time applications.
|
|
|
Michal Drozdzal, Laura Igual, Jordi Vitria, Petia Radeva, C. Malagelada, & Fernando Azpiroz. (2010). SIFT flow-based Sequences Alignment. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 7–8).
|
|
|
Sergio Escalera, David M.J. Tax, Oriol Pujol, Petia Radeva, & Robert P.W. Duin. (2011). Multi-Class Classification in Image Analysis Via Error-Correcting Output Codes. In H. Kwasnicka, & L. Jain (Eds.), Innovations in Intelligent Image Analysis (Vol. 339, pp. 7–29). Berlin: Springer Berlin Heidelberg.
Abstract: A common way to model multi-class classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multi-class problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class for a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. In this paper, we overview the state of the art on ECOC designs and test them in real applications. Results on different multi-class data sets show the benefits of using the ensemble of classifiers when categorizing objects in images.
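The closest-codeword decoding step the abstract describes can be sketched in a few lines. The 3-class, 4-dichotomy codebook below is hypothetical; only the decoding rule (here, by Hamming distance) reflects the ECOC technique itself.

```python
# Hypothetical ECOC design: one codeword per class over four binary problems.
CODEBOOK = {
    "cat":  (1, 1, 0, 0),
    "dog":  (0, 1, 1, 0),
    "bird": (0, 0, 1, 1),
}

def hamming(a, b):
    """Number of positions where two codewords disagree."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_decode(predicted_bits, codebook=CODEBOOK):
    """Assign the label of the class whose codeword is closest to the prediction."""
    return min(codebook, key=lambda label: hamming(codebook[label], predicted_bits))

label = ecoc_decode((1, 1, 0, 1))  # one bit flipped from "cat"'s codeword
```

The error-correcting property is visible even in this toy example: a single flipped bit in the binary classifiers' output still decodes to the right class.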
|
|
|
Anjan Dutta, Josep Llados, Horst Bunke, & Umapada Pal. (2014). A Product Graph Based Method for Dual Subgraph Matching Applied to Symbol Spotting. In Bart Lamiroy, & Jean-Marc Ogier (Eds.), Graphics Recognition. Current Trends and Challenges (Vol. 8746, pp. 7–11). LNCS. Springer Berlin Heidelberg.
Abstract: The product graph has been shown to be an effective tool for matching subgraphs. This paper reports an extension of the product graph methodology for subgraph matching applied to symbol spotting in graphical documents. Here we focus on the two major limitations of the previous version of the algorithm: (1) spurious nodes and edges in the graph representation and (2) inefficient node and edge attributes. To deal with the noisy information of vectorized graphical documents, we consider a dual edge graph representation on top of the original graph representing the graphical information, and the product graph is computed between the dual edge graphs of the pattern graph and the target graph. The dual edge graph with redundant edges allows an efficient and error-tolerant encoding of the structural information of the graphical documents. The adjacency matrix of the product graph locates the pairs of similar edges of the two operand graphs, and exponentiating the adjacency matrix finds similar random walks of greater lengths. Nodes joining similar random walks between the two graphs are found by combining different weighted exponentials of adjacency matrices. An experimental investigation reveals that the recall obtained by this approach is quite encouraging.
Keywords: Product graph; Dual edge graph; Subgraph matching; Random walks; Graph kernel
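The walk-counting machinery behind this family of methods can be illustrated on toy graphs. The sketch below is illustrative only: it builds the product graph's adjacency matrix as the Kronecker product of the two operand adjacency matrices and exponentiates it, so that entry (u, v) of the k-th power counts the pairs of length-k walks the two graphs share. The paper's dual edge construction and attribute weighting are omitted.

```python
def kron(a, b):
    """Kronecker product of two square 0/1 adjacency matrices."""
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def matmul(a, b):
    """Product of two square matrices."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(a, k):
    """k-th power of a square matrix (identity for k = 0)."""
    n = len(a)
    r = [[int(i == j) for j in range(n)] for i in range(n)]
    for _ in range(k):
        r = matmul(r, a)
    return r

# Two toy operand graphs: a 3-node path and a triangle.
A1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
A2 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
AP = kron(A1, A2)            # 9x9 product-graph adjacency matrix
walks2 = matpow(AP, 2)       # entry (u, v) counts common walks of length 2
total = sum(sum(row) for row in walks2)
```

Because (A1 ⊗ A2)^k = A1^k ⊗ A2^k, the total number of common length-2 walks factors into the walk counts of the operand graphs (6 for the path, 12 for the triangle).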
|
|
|
Diego Cheda, Daniel Ponsa, & Antonio Lopez. (2012). Pedestrian Candidates Generation using Monocular Cues. In IEEE Intelligent Vehicles Symposium (pp. 7–12). IEEE Xplore.
Abstract: Common techniques for pedestrian candidate generation (e.g., sliding window approaches) are based on an exhaustive search over the image. This implies that the number of windows produced is huge, which translates into significant time consumption in the classification stage. In this paper, we propose a method that significantly reduces the number of windows to be considered by a classifier. Our method is a monocular one that exploits the geometric and depth information available in single images. Both representations of the world are fused together to generate pedestrian candidates based on an underlying model which focuses only on objects standing vertically on the ground plane and having a certain height, according to their depth in the scene. We evaluate our algorithm on a challenging dataset and demonstrate its application to pedestrian detection, where a considerable reduction in the number of candidate windows is achieved.
Keywords: pedestrian detection
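The geometric filtering idea can be sketched with a pinhole camera model. Everything below (focal length, height range, function names) is an assumption for illustration, not the paper's algorithm: an object of real height H metres at depth Z metres projects to roughly f·H/Z pixels, so windows whose height is inconsistent with a plausible pedestrian at their estimated depth can be discarded before classification.

```python
FOCAL_PX = 700.0          # hypothetical focal length in pixels
H_MIN, H_MAX = 1.4, 2.0   # plausible pedestrian heights in metres (assumed)

def expected_height_px(real_height_m, depth_m, focal_px=FOCAL_PX):
    """Projected pixel height of an object of given real height at a given depth."""
    return focal_px * real_height_m / depth_m

def is_pedestrian_candidate(window_height_px, depth_m):
    """Keep a window only if its height matches a pedestrian at that depth."""
    lo = expected_height_px(H_MIN, depth_m)
    hi = expected_height_px(H_MAX, depth_m)
    return lo <= window_height_px <= hi
```

Under these assumed parameters, a 100-pixel-tall window at 10 m depth corresponds to about 1.43 m of real height and is kept, while the same window at 40 m would imply an implausibly tall pedestrian and is discarded.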
|
|
|
Ivet Rafegas, & Maria Vanrell. (2018). Color encoding in biologically-inspired convolutional neural networks. VR - Vision Research, 151, 7–17.
Abstract: Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which allows us to describe the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single or a double color. We have determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting four main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representations are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.
Keywords: Color coding; Computer vision; Deep learning; Convolutional neural networks
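A per-neuron color selectivity index of the general kind the abstract describes can be illustrated as follows. This is not the authors' exact formula, only a hypothetical index with the right qualitative behaviour: near 1 for a neuron dominated by one hue, near 0 for a hue-indifferent one.

```python
def color_selectivity(activations):
    """(max - mean) / max over a neuron's responses to a set of color stimuli.

    Returns 0.0 for a flat (hue-indifferent) response profile and approaches
    1.0 as a single stimulus dominates the responses.
    """
    peak = max(activations)
    if peak <= 0:
        return 0.0
    mean = sum(activations) / len(activations)
    return (peak - mean) / peak

selective = color_selectivity([0.9, 0.05, 0.1, 0.05])   # one dominant hue
indifferent = color_selectivity([0.5, 0.5, 0.5, 0.5])   # flat response
```

Thresholding such an index is then enough to split neurons into color selective and non-selective populations, which is the kind of classification the paper performs layer by layer.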
|
|
|
Mohamed Ali Souibgui, Pau Torras, Jialuo Chen, & Alicia Fornes. (2023). An Evaluation of Handwritten Text Recognition Methods for Historical Ciphered Manuscripts. In 7th International Workshop on Historical Document Imaging and Processing (pp. 7–12).
Abstract: This paper investigates the effectiveness of different deep learning HTR families, including LSTM, Seq2Seq, and transformer-based approaches with self-supervised pretraining, in recognizing ciphered manuscripts from different historical periods and cultures. The goal is to identify the most suitable method or training techniques for recognizing ciphered manuscripts and to provide insights into the challenges and opportunities in this field of research. We evaluate the performance of these models on several datasets of ciphered manuscripts and discuss their results. This study contributes to the development of more accurate and efficient methods for recognizing historical manuscripts for the preservation and dissemination of our cultural heritage.
|
|
|
Victor Ponce, Mario Gorga, Xavier Baro, Petia Radeva, & Sergio Escalera. (2011). Analisis de la Expresion Oral y Gestual en Proyectos Fin de Carrera Via un Sistema de Vision Artificial (Vol. 4).
Abstract: Oral communication and expression is a competence of special relevance in the European Higher Education Area (EHEA). However, in many higher-education degrees the practice of this competence has been relegated mainly to the presentation of final degree projects. Within a teaching innovation project, a software tool has been developed to extract objective information for the analysis of students' oral and gestural expression. The goal is to give students feedback that allows them to improve the quality of their presentations. The initial prototype presented in this work automatically extracts audiovisual information and analyzes it by means of machine learning techniques. The system has been applied to 15 final degree projects and 15 presentations within a fourth-year course. The results obtained show the viability of the system for suggesting factors that contribute both to the success of the communication and to the evaluation criteria.
|
|
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, Fernando Azpiroz, & Petia Radeva. (2006). Cascade analysis for intestinal contraction detection. In 20th International Congress and exhibition Computer Assisted Radiology and Surgery (pp. 9–10).
Abstract: In this work, we address the study of intestinal contractions with a novel approach based on a machine learning framework to process data from Wireless Capsule Video Endoscopy. Wireless endoscopy represents a unique way to visualize intestinal motility by creating long videos of intestine dynamics. In this paper we argue that, to analyze the huge amount of wireless endoscopy data and define robust methods for contraction detection, the approach should be based on sophisticated machine learning techniques. In particular, we propose a cascade of classifiers in order to filter out different physiological phenomena and obtain the motility pattern of the small intestine. Our results show high specificity and sensitivity rates that highlight the efficiency of the selected approach and support the feasibility of the proposed methodology for the automatic detection and analysis of intestinal contractions.
Keywords: intestine video analysis, anisotropic features, support vector machine, cascade of classifiers
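The cascade structure the abstract proposes can be sketched generically. The stages below are hypothetical stand-ins (the paper uses trained classifiers over anisotropic features, not hand-written rules), but they show the defining property of a cascade: a frame rejected at any stage exits the pipeline early and cheaply, and only frames accepted by every stage are labelled as contractions.

```python
def make_cascade(stages):
    """Build a classifier from an ordered list of predicates (cascade stages)."""
    def classify(frame):
        for stage in stages:
            if not stage(frame):
                return "rejected"      # filtered out early, no later stage runs
        return "contraction"
    return classify

# Hypothetical stages operating on toy feature dictionaries.
cascade = make_cascade([
    lambda f: not f.get("turbid", False),       # discard turbid-content frames
    lambda f: not f.get("wall", False),         # discard wall/tunnel frames
    lambda f: f.get("wrinkle_score", 0) > 0.5,  # keep star-wrinkle patterns
])
```

Ordering the cheapest and most discriminative stages first is what makes the cascade attractive for long capsule-endoscopy videos, where the vast majority of frames contain no contraction.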
|
|
|
Arnau Ramisa, Shrihari Vasudevan, David Aldavert, Ricardo Toledo, & Ramon Lopez de Mantaras. (2009). Evaluation of the SIFT Object Recognition Method in Mobile Robots. In 12th International Conference of the Catalan Association for Artificial Intelligence (Frontiers in Artificial Intelligence and Applications, Vol. 202, pp. 9–18).
Abstract: General object recognition in mobile robots is of primary importance in order to enhance the representation of the environment that robots will use in their reasoning processes. Therefore, we contribute to reducing this gap by evaluating the SIFT Object Recognition method on a challenging dataset, focusing on issues relevant to mobile robotics. The method was found to be resistant to the working conditions of mobile robotics, although mainly for well-textured objects.
|
|
|
Jorge Bernal. (2014). Polyp Localization and Segmentation in Colonoscopy Images by Means of a Model of Appearance for Polyps. ELCVIA - Electronic Letters on Computer Vision and Image Analysis, 13(2), 9–10.
Abstract: Colorectal cancer is the fourth most common cause of cancer death worldwide, and its survival rate depends on the stage at which it is detected, hence the need for early colon screening. There are several screening techniques, but colonoscopy is still nowadays the gold standard, although it has some drawbacks such as the miss rate. Our contribution, in the field of intelligent systems for colonoscopy, aims at providing a polyp localization and a polyp segmentation system based on a model of appearance for polyps. To develop both methods, we define a model of appearance for polyps which describes a polyp as enclosed by intensity valleys. The novelty of our contribution resides in the fact that we include in our model aspects of the image formation, and we also consider the presence of other elements of the endoluminal scene, such as specular highlights and blood vessels, which have an impact on the performance of our methods. In order to develop our polyp localization method, we accumulate valley information to generate energy maps, which are also used to guide the polyp segmentation. Our methods achieve promising results in polyp localization and segmentation. As we want to explore the usability of our methods, we present a comparative analysis between physicians' fixations, obtained via an eye-tracking device, and our polyp localization method. The results show that our method is indistinguishable from novice physicians, although it is still far from expert physicians.
Keywords: Colonoscopy; polyp localization; polyp segmentation; Eye-tracking
|
|
|
Pedro Martins, Paulo Carvalho, & Carlo Gatta. (2016). On the completeness of feature-driven maximally stable extremal regions. PRL - Pattern Recognition Letters, 74, 9–16.
Abstract: By definition, local image features provide a compact representation of the image in which most of the image information is preserved. This capability offered by local features has been overlooked, despite being relevant in many application scenarios. In this paper, we analyze and discuss the performance of feature-driven Maximally Stable Extremal Regions (MSER) in terms of the coverage of informative image parts (completeness). This type of features results from an MSER extraction on saliency maps in which features related to objects boundaries or even symmetry axes are highlighted. These maps are intended to be suitable domains for MSER detection, allowing this detector to provide a better coverage of informative image parts. Our experimental results, which were based on a large-scale evaluation, show that feature-driven MSER have relatively high completeness values and provide more complete sets than a traditional MSER detection even when sets of similar cardinality are considered.
Keywords: Local features; Completeness; Maximally Stable Extremal Regions
|
|
|
Michael Teutsch, Angel Sappa, & Riad I. Hammoud. (2022). Image and Video Enhancement. In Computer Vision in the Infrared Spectrum. Synthesis Lectures on Computer Vision (pp. 9–21). SLCV. Springer.
Abstract: Image and video enhancement aims at improving signal quality relative to imaging artifacts such as noise and blur or atmospheric perturbations such as turbulence and haze. It is usually performed to assist humans in analyzing image and video content, or simply to present humans with visually appealing images and videos. However, image and video enhancement can also be used as a preprocessing technique to ease the task, and thus improve the performance, of subsequent automatic image content analysis algorithms: preceding dehazing can improve object detection, as shown by [23], or explicit turbulence modeling can improve moving object detection, as discussed by [24]. But it remains an open question whether image and video enhancement should be performed explicitly as a preprocessing step or implicitly, for example by feeding affected images directly to a neural network for image content analysis such as object detection [25]. Especially for real-time video processing at low latency, it can be better to handle image perturbations implicitly in order to minimize the processing time of an algorithm. This can be achieved by making algorithms for image content analysis robust, or even invariant, to perturbations such as noise or blur. Additionally, mistakes of an individual preprocessing module can obviously affect the quality of the entire processing pipeline.
|
|
|
Bogdan Raducanu, Alireza Bosaghzadeh, & Fadi Dornaika. (2015). Multi-observation Face Recognition in Videos based on Label Propagation. In 6th Workshop on Analysis and Modeling of Faces and Gestures AMFG2015 (pp. 10–17).
Abstract: In order to deal with the huge amount of content generated by social media, especially for indexing and retrieval purposes, the focus has shifted from single-object recognition to multi-observation object recognition. Of particular interest is the problem of face recognition (used as the primary cue for assessing a person's identity), since it is highly required by popular social media search engines such as Facebook and YouTube. Recently, several approaches for graph-based label propagation have been proposed. However, the associated graphs were constructed in an ad-hoc manner (e.g., using the KNN graph) that cannot cope properly with the rapid and frequent changes in data appearance, a phenomenon intrinsically related to video sequences. In this paper, we propose a novel approach for efficient and adaptive graph construction, based on a two-phase scheme: (i) the first phase is used to adaptively find the neighbors of a sample and also the adequate weights for the minimization function of the second phase; (ii) in the second phase, the selected neighbors along with their corresponding weights are used to locally and collaboratively estimate the sparse affinity matrix weights. Experimental results on the Honda Video Database (HVDB) and a subset of video sequences extracted from the popular TV series 'Friends' show a distinct advantage of the proposed method over existing standard graph construction methods.
|
|