|
Mirko Arnold, Anarta Ghosh, Stephen Ameling, & G Lacey. (2010). Automatic segmentation and inpainting of specular highlights for endoscopic imaging. EURASIP JIVP - EURASIP Journal on Image and Video Processing, 2010(9).
|
|
|
Mario Rojas, David Masip, A. Todorov, & Jordi Vitria. (2010). Automatic Point-based Facial Trait Judgments Evaluation. In 23rd IEEE Conference on Computer Vision and Pattern Recognition (2715–2720).
Abstract: Humans constantly evaluate the personalities of other people using their faces. Facial trait judgments have been studied in the psychological field and have been determined to influence important social outcomes of our lives, such as election outcomes and social relationships. Recent work on textual descriptions of faces has shown that trait judgments are highly correlated. Further, behavioral studies suggest that two orthogonal dimensions, valence and dominance, can describe the basis of human judgments from faces. In this paper, we use a corpus of behavioral data of judgments on different trait dimensions to automatically learn a trait predictor from facial pixel images. We study whether trait evaluations performed by humans can be learned using machine learning classifiers and later used in automatic evaluations of new facial images. The experiments performed using local point-based descriptors show promising results in the evaluation of the main traits.
|
|
|
Carles Fernandez, Jordi Gonzalez, & Xavier Roca. (2010). Automatic Learning of Background Semantics in Generic Surveilled Scenes. In 11th European Conference on Computer Vision (Vol. 6313, 678–692). LNCS. Springer Berlin Heidelberg.
Abstract: Advanced surveillance systems for behavior recognition in outdoor traffic scenes depend strongly on the particular configuration of the scenario. Scene-independent trajectory analysis techniques statistically infer semantics in locations where motion occurs, but such inferences are typically limited to abnormality detection. It is therefore interesting to design contributions that automatically categorize more specific semantic regions. State-of-the-art approaches for unsupervised scene labeling exploit trajectory data to segment areas like sources, sinks, or waiting zones. Our method, in addition, incorporates scene-independent knowledge to assign more meaningful labels like crosswalks, sidewalks, or parking spaces. First, a spatiotemporal scene model is obtained from trajectory analysis. Subsequently, a so-called GI-MRF inference process reinforces spatial coherence and incorporates taxonomy-guided smoothness constraints. Our method achieves automatic and effective labeling of conceptual regions in urban scenarios and is robust to tracking errors. Experimental validation on 5 surveillance databases has been conducted to assess the generality and accuracy of the segmentations. The resulting scene models are used for model-based behavior analysis.
|
|
|
Wenjuan Gong, Andrew Bagdanov, Xavier Roca, & Jordi Gonzalez. (2010). Automatic Key Pose Selection for 3D Human Action Recognition. In 6th International Conference on Articulated Motion and Deformable Objects (Vol. 6169, 290–299). Springer Verlag.
Abstract: This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as histograms of key-pose occurrences over the course of a video sequence. Actions are first represented as 3D poses using a sequence of 36 direction cosines corresponding to the angles that 12 joints form with the world coordinate frame in an articulated human body model. These pose representations are then projected to three-dimensional, action-specific principal eigenspaces, which we refer to as aSpaces. We introduce a method for key-pose selection based on a local-motion energy optimization criterion, and we show that this method is more stable and more resistant to noisy data than other key-pose selection criteria for action recognition.
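The "bag of poses" histogram described in this abstract can be sketched in a few lines. This is an illustrative reconstruction only: the function names and the nearest-key-pose assignment by Euclidean distance are assumptions, not necessarily the authors' exact choices.

```python
import numpy as np

def bag_of_poses(frames, key_poses):
    """Histogram of key-pose occurrences over a video sequence.

    frames: (T, D) array of per-frame pose vectors (e.g. D = 36
    direction cosines); key_poses: (K, D) array of selected key poses.
    Returns an L1-normalized histogram of length K.
    """
    # Assign each frame to its nearest key pose (Euclidean distance).
    dists = np.linalg.norm(frames[:, None, :] - key_poses[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Count occurrences of each key pose and normalize.
    hist = np.bincount(labels, minlength=len(key_poses)).astype(float)
    return hist / hist.sum()
```

Two action sequences can then be compared by any histogram distance over their bag-of-poses vectors.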
|
|
|
Sergio Escalera, Oriol Pujol, Petia Radeva, Jordi Vitria, & Maria Teresa Anguera. (2010). Automatic Detection of Dominance and Expected Interest. EURASIPJ - EURASIP Journal on Advances in Signal Processing, 2010, Article ID 491819, 12 pages.
Abstract: Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict both the interest of observers looking at face-to-face interactions and the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the perceived observer interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual rankings made by the observers in both the dominance and interest detection problems.
|
|
|
David Rotger, Petia Radeva, & N. Bruining. (2010). Automatic Detection of Bioabsorbable Coronary Stents in IVUS Images using a Cascade of Classifiers. TITB - IEEE Transactions on Information Technology in Biomedicine, 14(2), 535–537.
Abstract: Bioabsorbable drug-eluting coronary stents are a very promising improvement over the common metallic ones, solving one of the most important problems of stent implantation: late restenosis. These stents, made of poly-L-lactic acid, cast a much subtler acoustic shadow than metallic stents, which makes automatic detection and measurement in images difficult. In this paper, we propose a novel approach based on a cascade of GentleBoost classifiers that detects the stent struts using structural features to encode the information of the different subregions of the struts. A stochastic gradient descent method is applied to optimize the overall performance of the detector. Validation results of strut detection are very encouraging, with an average F-measure of 81%.
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2010). An Iterative Multiresolution Scheme for SFM with Missing Data: single and multiple object scenes. IMAVIS - Image and Vision Computing, 28(1), 164–176.
Abstract: Most of the techniques proposed for tackling the Structure from Motion (SFM) problem cannot deal with high percentages of missing data in the matrix of trajectories. Furthermore, an additional problem must be faced when working with multiple-object scenes: the rank of the matrix of trajectories has to be estimated. This paper presents an iterative multiresolution scheme for SFM with missing data that can be used in both the single- and multiple-object cases. The proposed scheme aims at recovering missing entries in the original input matrix. The objective is to improve the results by applying a factorization technique to the partially or totally filled-in matrix instead of to the original input one. Experimental results obtained with synthetic and real data sequences, containing single and multiple objects, are presented to show the viability of the proposed approach.
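The fill-in idea in this abstract, recovering missing entries so that a factorization can be applied to a completed matrix, can be illustrated with a simple iterative low-rank imputation. This sketch uses plain SVD truncation on the full matrix and is an assumption for illustration, not the paper's multiresolution scheme.

```python
import numpy as np

def iterative_fill(W, mask, rank, n_iter=100):
    """Fill the missing entries of a trajectory matrix W.

    mask: boolean array, True where W is observed. Repeatedly projects
    the current estimate onto the best rank-`rank` approximation while
    keeping the observed entries fixed."""
    # Initialize the gaps with the mean of the known entries.
    X = np.where(mask, W, W[mask].mean())
    for _ in range(n_iter):
        # Best rank-r approximation via truncated SVD...
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # ...then restore the observed entries.
        X = np.where(mask, W, low)
    return X
```

On exactly low-rank data with few missing entries this converges to the true completion; the completed matrix can then be handed to any standard factorization method.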
|
|
|
Anjan Dutta, Umapada Pal, Alicia Fornes, & Josep Llados. (2010). An Efficient Staff Removal Technique from Printed Musical Documents. In 20th International Conference on Pattern Recognition (1965–1968).
Abstract: Staff removal is an important preprocessing step in Optical Music Recognition (OMR). The process aims to remove the stafflines from a musical document and retain only the musical symbols; these symbols are later used to identify the music information. This paper proposes a simple but robust method to remove stafflines from printed musical scores. In the proposed methodology, a staffline segment is considered as a horizontal linkage of vertical black runs of uniform height, and the neighbouring properties of a staffline segment are used to validate it as a true segment. For evaluation purposes we have considered the dataset along with its described deformations, and experimentation has yielded encouraging results.
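The core idea, treating a staffline segment as a horizontal linkage of vertical black runs of uniform height validated by its neighbours, can be sketched as follows. This is an illustrative reconstruction: the height threshold and the exact linkage test are assumptions, not the authors' rules.

```python
import numpy as np

def remove_stafflines(img, line_height):
    """img: 2D array, nonzero = black pixel. Removes vertical black
    runs no taller than `line_height` that are horizontally linked to a
    similar run in an adjacent column."""
    runs = []  # (column, top row, height) of every vertical black run
    for c in range(img.shape[1]):
        col = img[:, c]
        r = 0
        while r < len(col):
            if col[r]:
                top = r
                while r < len(col) and col[r]:
                    r += 1
                runs.append((c, top, r - top))
            else:
                r += 1
    # Candidate staffline segments: runs of (at most) staffline height.
    short = {(c, t): h for c, t, h in runs if h <= line_height}
    out = img.copy()
    for (c, t), h in short.items():
        # Validate: a neighbouring column must hold a short run starting
        # at roughly the same row (the horizontal-linkage property).
        linked = any((c + dc, t + dt) in short
                     for dc in (-1, 1) for dt in (-1, 0, 1))
        if linked:
            out[t:t + h, c] = 0
    return out
```

Runs taller than the staffline height (note stems, beams) survive, so symbols overlapping the staff are not erased.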
|
|
|
Michal Drozdzal, Laura Igual, Petia Radeva, Jordi Vitria, Carolina Malagelada, & Fernando Azpiroz. (2010). Aligning Endoluminal Scene Sequences in Wireless Capsule Endoscopy. In IEEE Computer Society Workshop on Mathematical Methods in Biomedical Image Analysis (117–124).
Abstract: Intestinal motility analysis is an important examination for the detection of various intestinal malfunctions. One of the big challenges of automatic motility analysis is how to compare sequences of images and extract dynamic patterns, taking into account the high deformability of the intestine wall as well as the capsule motion. From a clinical point of view, the ability to align endoluminal scene sequences helps to find regions of similar intestinal activity and in this way provides valuable information on intestinal motility problems. This work, for the first time, addresses the problem of aligning endoluminal sequences taking into account the motion and structure of the intestine. To describe motility in the sequence, we propose different descriptors based on the SIFT Flow algorithm, namely: (1) histograms of SIFT Flow directions to describe the flow course, (2) SIFT descriptors to represent image intestine structure, and (3) SIFT Flow magnitude to quantify intestine deformation. We show that merging all three descriptors provides robust information for sequence description in terms of motility. Moreover, we develop a novel methodology to rank the intestinal sequences based on expert feedback about the relevance of the results. The experimental results show that the selected descriptors are useful for alignment and similarity description, and the proposed method allows the analysis of WCE sequences.
|
|
|
Monica Piñol. (2010). Adaptative Vocabulary Tree for Image Classification using Reinforcement Learning (Vol. 162). Master's thesis.
|
|
|
C. Alejandro Parraga, Ramon Baldrich, & Maria Vanrell. (2010). Accurate Mapping of Natural Scenes Radiance to Cone Activation Space: A New Image Dataset. In 5th European Conference on Colour in Graphics, Imaging and Vision and 12th International Symposium on Multispectral Colour Science (50–57).
Abstract: The characterization of trichromatic cameras is usually done in terms of a device-independent color space, such as the CIE 1931 XYZ space. This is indeed convenient since it allows the testing of results against colorimetric measures. We have characterized our camera to represent human cone activation by mapping the camera sensor's (RGB) responses to human (LMS) through a polynomial transformation, which can be “customized” according to the types of scenes we want to represent. Here we present a method to test the accuracy of the camera measures and a study on how the choice of training reflectances for the polynomial may alter the results.
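The polynomial characterization described here, mapping camera RGB responses to cone (LMS) activations, can be sketched with a least-squares fit. The second-order feature set and function names below are assumptions for illustration, not necessarily the authors' exact polynomial.

```python
import numpy as np

def poly_features(rgb):
    """Second-order polynomial expansion of (N, 3) RGB samples."""
    r, g, b = rgb.T
    # Terms: 1, R, G, B, R^2, G^2, B^2, RG, RB, GB
    return np.stack([np.ones_like(r), r, g, b,
                     r * r, g * g, b * b, r * g, r * b, g * b], axis=1)

def fit_rgb_to_lms(rgb, lms):
    """Least-squares fit of the polynomial mapping on matched samples.

    rgb, lms: (N, 3) arrays of camera responses and target cone
    activations. Returns a (10, 3) coefficient matrix."""
    coeffs, *_ = np.linalg.lstsq(poly_features(rgb), lms, rcond=None)
    return coeffs

def apply_mapping(rgb, coeffs):
    """Predict LMS activations for new RGB samples."""
    return poly_features(rgb) @ coeffs
```

As the abstract notes, the fit can be "customized" simply by choosing which training reflectances populate the `rgb`/`lms` pairs.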
|
|
|
Aura Hernandez-Sabate, Monica Mitiko, Sergio Shiguemi, & Debora Gil. (2010). A validation protocol for assessing cardiac phase retrieval in IntraVascular UltraSound. In Computing in Cardiology (Vol. 37, pp. 899–902). IEEE.
Abstract: A good, reliable approach to cardiac triggering is of utmost importance for obtaining accurate quantitative results on atherosclerotic plaque burden from the analysis of IntraVascular UltraSound. Although research on methods for retrospective gating has increased in recent years, there is no general consensus on a validation protocol. Many methods are based on quality assessment of the appearance of longitudinal cuts, and those reporting quantitative numbers do not follow a standard protocol. Such heterogeneity in validation protocols makes faithful comparison across methods a difficult task. We propose a validation protocol based on the variability of the retrieved cardiac phase and explore the capability of several quality measures to quantify such variability. An ideal detector, suitable for application in clinical practice, should produce stable phases; that is, it should always sample the same fraction of the cardiac cycle. In this context, one should measure the variability (variance) of a candidate sampling with respect to a ground-truth (reference) sampling, since the variance indicates how spread out our aim at the target is. In order to quantify the deviation between the sampling and the ground truth, we have considered two quality scores reported in the literature: the signed distance to the closest reference sample and the distance to the right of each reference sample. We have also considered the residuals of the regression line of the reference against the candidate sampling. The performance of the measures has been explored on a set of synthetic samplings covering different cardiac cycle fractions and variabilities. From our simulations, we conclude that the distance-based metrics are sensitive to the shift considered, while the residuals are robust against fraction and variability as long as a pair-wise correspondence between candidate and reference can be established. We will further investigate the impact of false positive and false negative detections in experimental data.
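The residual-based measure discussed in this abstract can be sketched as follows (illustrative only; the function name and the use of a first-order fit are assumptions). A stable gating detector should give near-zero residuals around the regression line of candidate against reference samplings, even under a constant phase shift.

```python
import numpy as np

def phase_variability(reference, candidate):
    """Residual-based stability score for a cardiac gating detector.

    reference, candidate: paired sample positions (e.g. frame indices)
    for the same heartbeats. Fits candidate ~ a*reference + b and
    returns the standard deviation of the residuals: a detector that
    always samples the same cardiac-cycle fraction yields near-zero
    residuals regardless of any constant phase shift."""
    a, b = np.polyfit(reference, candidate, 1)
    residuals = candidate - (a * reference + b)
    return residuals.std()
```

Unlike raw distance-to-reference scores, this measure is invariant to a constant offset between the two samplings, which matches the abstract's finding that distance metrics are shift-sensitive while residuals are not.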
|
|
|
Sebastien Mace, Herve Locteau, Ernest Valveny, & Salvatore Tabbone. (2010). A system to detect rooms in architectural floor plan images. In 9th IAPR International Workshop on Document Analysis Systems (167–174).
Abstract: In this article, a system to detect rooms in architectural floor plan images is described. We first present a primitive extraction algorithm for line detection, based on an original coupling of the classical Hough transform with image vectorization in order to perform robust and efficient line detection. We show how lines that satisfy certain graphical arrangements are combined into walls, and we also present the way we detect door hypotheses from the extraction of arcs. Walls and door hypotheses are then used by our room segmentation strategy, which consists of recursively decomposing the image until nearly convex regions are obtained. The notion of convexity is difficult to quantify, and the selection of separation lines between regions can also be rough, so we take advantage of knowledge associated with architectural floor plans in order to obtain mostly rectangular rooms. Qualitative and quantitative evaluations performed on a corpus of real documents show promising results.
|
|
|
Joan Mas. (2010). A Syntactic Pattern Recognition Approach based on a Distribution Tolerant Adjacency Grammar and a Spatial Indexed Parser. Application to Sketched Document Recognition (Gemma Sanchez, & Josep Llados, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Sketch recognition is a discipline that has gained increasing interest over the last 20 years, owing to the appearance of new devices such as PDAs, Tablet PCs and digital pen & paper protocols. From the wide range of sketched documents, we focus on those that represent structured documents, such as architectural floor plans, engineering drawings and UML diagrams. To recognize and understand these kinds of documents, we first have to recognize the different compounding symbols and then identify the relations between these elements. Depending on how a sketch is captured, there are two categories: on-line and off-line. On-line input modes refer to drawing directly on a PDA or Tablet PC, while off-line input modes refer to scanning a previously drawn sketch.
This thesis lies at the overlap of three areas of computer science: pattern recognition, document analysis and human-computer interaction. Its aim is to interpret sketched documents independently of whether they are captured on-line or off-line. The proposed approach should therefore have the following features. First, since we are working with sketches, the elements present in our input contain distortions. Second, since we work in both on-line and off-line input modes, the order of the input primitives is indifferent. Finally, for the method to be applicable in real scenarios, its response time must be low.
To interpret a sketched document we propose a syntactic approach, composed of two correlated components: a grammar and a parser. The grammar describes the different elements of the document as well as their relations. The parser, given a document, checks whether it belongs to the language generated by the grammar. Thus, the grammar should be able to cope with the distortions appearing in the instances of the elements, and it should be possible to define a symbol independently of the order of its primitives. Correspondingly, the parser does not assume an order of the primitives when analyzing 2D sentences: at each new primitive in the input, the parser searches among the previously analyzed symbols for candidates to produce a valid reduction.
Taking these features into account, we have proposed a grammar based on Adjacency Grammars. This kind of grammar defines its productions as a multiset of symbols rather than a list, which allows describing a symbol without an order on its components. To cope with distortion we have proposed a distortion model: an attribute estimated over the constraints of the grammar and propagated through the productions, giving a measure of how far a symbol is from its ideal model. Besides distortion of the constraints, other distortions appear when working with sketches: overtracing, overlapping, gaps and spurious strokes. Grammatical productions have been defined to cope with these errors. Concerning recognition, we have proposed an incremental parser with an indexing mechanism. Incremental parsers analyze the input symbol by symbol, giving a response to the user as each primitive is analyzed, which makes them suitable for both on-line and off-line input modes. The parser has been augmented with an indexing mechanism based on a spatial division, which places the primitives in space and reduces the search to a neighbourhood.
A third contribution is a grammatical inference algorithm which, given a set of symbols, captures the production describing them. In the field of formal languages different approaches have been proposed, but in the graphical domain little work has been done on this problem. The proposed method is able to capture the production from a set of symbols even when they are drawn in different orders. A matching step based on the Hausdorff distance and the Hungarian method has been proposed to match the primitives of the different symbols. In addition, the proposed approach is able to capture the variability in the parameters of the constraints.
From the experimental results, we may conclude that we have proposed a robust approach to describe and recognize sketches. Moreover, the addition of new symbols to the alphabet is not restricted to an expert. Finally, the proposed approach has been used in two real scenarios, obtaining good performance.
|
|
|
Joan Mas, Josep Llados, Gemma Sanchez, & J.A. Jorge. (2010). A syntactic approach based on distortion-tolerant Adjacency Grammars and a spatial-directed parser to interpret sketched diagrams. PR - Pattern Recognition, 43(12), 4148–4164.
Abstract: This paper presents a syntactic approach based on Adjacency Grammars (AG) for sketch diagram modeling and understanding. Diagrams are a combination of graphical symbols arranged according to a set of spatial rules defined by a visual language. AG describe visual shapes by productions defined in terms of terminal and non-terminal symbols (graphical primitives and subshapes), and a set of functions describing the spatial arrangements between symbols. Our approach to sketch diagram understanding provides three main contributions. First, since AG are linear grammars, shapes and relations that are inherently bidimensional must be defined using a sequential formalism. Second, our parsing approach uses an indexing structure based on a spatial tessellation, which serves to reduce the search space when finding candidates to produce a valid reduction; this allows order-free parsing of 2D visual sentences while keeping combinatorial explosion in check. Third, working with sketches requires a distortion model to cope with the natural variations of hand-drawn strokes. To this end we extended the basic grammar with a distortion measure modeled on the allowable variation of the spatial constraints associated with grammar productions. Finally, the paper reports on an experimental framework: an interactive system for sketch analysis. User tests performed on two real scenarios show that our approach is usable in interactive settings.
Keywords: Syntactic Pattern Recognition; Symbol recognition; Diagram understanding; Sketched diagrams; Adjacency Grammars; Incremental parsing; Spatial directed parsing
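The multiset (order-free) view of Adjacency Grammar productions mentioned in the abstract can be illustrated minimally. This is a toy sketch, not the paper's parser: spatial constraints and the distortion measure are omitted, and the label names are invented for illustration.

```python
from collections import Counter

def matches_production(production, primitives):
    """Order-free matching of an adjacency-grammar production.

    `production` and `primitives` are sequences of primitive labels.
    Because productions are defined as multisets rather than lists, a
    group of primitives matches if it contains the same labels with the
    same multiplicities, in any drawing order."""
    return Counter(production) == Counter(primitives)
```

A real parser would additionally check the spatial constraints among the matched primitives before accepting the reduction; this sketch only captures why stroke order does not matter.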
|
|