|
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2009). Separability of Ternary Codes for Sparse Designs of Error-Correcting Output Codes. PRL - Pattern Recognition Letters, 30(3), 285–297.
Abstract: Error Correcting Output Codes (ECOC) represent a successful framework for dealing with multi-class categorization problems by combining binary classifiers. In this paper, we present a new formulation of the ternary ECOC distance and of the error-correcting capabilities in the ternary ECOC framework. Based on the new measure, we stress how to design coding matrices that prevent codification ambiguity, and propose a new Sparse Random coding matrix with ternary distance maximization. The results on the UCI Repository and on a real speed traffic categorization problem show that when the coding design satisfies the new ternary measures, significant performance improvement is obtained independently of the decoding strategy applied.
|
|
|
C. Alejandro Parraga, Robert Benavente, Maria Vanrell, & Ramon Baldrich. (2009). Psychophysical measurements to model inter-colour regions of colour-naming space. Journal of Imaging Science and Technology, 53(3), 031106 (8 pages).
Abstract: In this paper, we present a fuzzy set of parametric functions which segment the CIE Lab space into eleven regions corresponding to the group of common universal categories present in all evolved languages, as identified by anthropologists and linguists. The set of functions is intended to model a color-name assignment task by humans and differs from other models in its emphasis on the inter-color boundary regions, which were explicitly measured by means of a psychophysics experiment. In our particular implementation, the CIE Lab space was segmented into eleven color categories using a Triple Sigmoid as the fuzzy-set basis, whose parameters are included in this paper. The model's parameters were adjusted according to the psychophysical results of a yes/no discrimination paradigm where observers had to choose (English) names for isoluminant colors belonging to regions in between neighboring categories. These colors were presented on a calibrated CRT monitor (14-bit x 3 precision). The experimental results show that inter-color boundary regions are much less defined than expected, and that color samples other than those near the most representative ones are needed to define the position and shape of the boundaries between categories. The extended set of model parameters is given as a table.
Keywords: image processing; analysis
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, Joan Serrat, & Antonio Lopez. (2009). An iterative multiresolution scheme for SFM with missing data. JMIV - Journal of Mathematical Imaging and Vision, 34(3), 240–258.
Abstract: Several techniques have been proposed for tackling the Structure from Motion problem through factorization in the case of missing data. However, when the percentage of unknown data is high, most of them may not perform as well as expected. Focusing on this problem, we propose an iterative multiresolution scheme that aims at recovering missing entries in the originally given input matrix. Information recovered following a coarse-to-fine strategy is used to fill in the missing entries. The objective is to recover as much missing data as possible, so that when a factorization technique is applied to the partially or totally filled-in matrix, instead of to the original input one, better results are obtained. An evaluation study of the robustness to missing and noisy data is reported. Experimental results obtained with synthetic and real video sequences show the viability of the proposed approach.
|
|
|
Marçal Rusiñol, Josep Llados, & Gemma Sanchez. (2010). Symbol Spotting in Vectorized Technical Drawings Through a Lookup Table of Region Strings. PAA - Pattern Analysis and Applications, 13(3), 321–331.
Abstract: In this paper, we address the problem of symbol spotting in technical document images, applied to scanned and vectorized line drawings. Like any information spotting architecture, our approach has two components: first, symbols are decomposed into primitives which are compactly represented, and second, a primitive indexing structure aims to efficiently retrieve similar primitives. Primitives are encoded in terms of attributed strings representing closed regions. Similar strings are clustered in a lookup table so that the set median strings act as indexing keys. A voting scheme formulates hypotheses at locations of the line-drawing image where there is a high presence of regions similar to the queried ones, and therefore a high probability of finding the queried graphical symbol. The proposed approach is illustrated in a framework consisting of spotting furniture symbols in architectural drawings. It has been shown to work even in the presence of noise and distortion introduced by the scanning and raster-to-vector processes.
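To make the set-median indexing idea above concrete, here is a minimal sketch (illustrative code, not the authors' implementation): a plain Levenshtein edit distance between strings, and selection of the set median string, i.e. the member of a cluster minimizing the summed edit distance to all others.

```python
# Hedged sketch of string matching for region-string clusters.
# Plain Levenshtein distance; the paper uses attributed strings
# with richer edit costs, which this toy version does not model.
def edit_distance(s, t):
    """Number of insertions, deletions, and substitutions turning s into t."""
    n, m = len(s), len(t)
    prev = list(range(m + 1))  # distances for the empty prefix of s
    for i in range(1, n + 1):
        curr = [i] + [0] * m
        for j in range(1, m + 1):
            sub = prev[j - 1] + (s[i - 1] != t[j - 1])  # substitution (or match)
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, sub)
        prev = curr
    return prev[m]

def set_median(strings):
    """Member of the set minimizing the summed edit distance to all others."""
    return min(strings, key=lambda s: sum(edit_distance(s, t) for t in strings))
```

The set median (unlike the generalized median) is restricted to strings already in the cluster, which makes it cheap to compute and directly usable as a lookup-table key.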
|
|
|
Javier Vazquez, C. Alejandro Parraga, Maria Vanrell, & Ramon Baldrich. (2009). Color Constancy Algorithms: Psychophysical Evaluation on a New Dataset. Journal of Imaging Science and Technology, 53(3), 031105 (9 pages).
Abstract: The estimation of the illuminant of a scene from a digital image has been the goal of a large amount of research in computer vision. Color constancy algorithms have dealt with this problem by defining different heuristics to select a unique solution from within the feasible set. The performance of these algorithms has shown that there is still a long way to go to globally solve this problem as a preliminary step in computer vision. In general, performance evaluation has been done by comparing the angular error between the estimated chromaticity and the chromaticity of a canonical illuminant, which is highly dependent on the image dataset. Recently, some authors have used high-level constraints to estimate illuminants; in this case selection is based on increasing the performance of the subsequent steps of the system. In this paper we propose a new performance measure, the perceptual angular error. It evaluates the performance of a color constancy algorithm according to the perceptual preferences of humans, or naturalness (instead of the actual optimal solution), and is independent of the visual task. We show the results of a new psychophysical experiment comparing solutions from three different color constancy algorithms. Our results show that in more than half of the judgments the preferred solution is not the one closest to the optimal solution. Our experiments were performed on a new dataset of images acquired with a calibrated camera with an attached neutral grey sphere, which better copes with the illuminant variations of the scene.
|
|
|
Marçal Rusiñol, Agnes Borras, & Josep Llados. (2010). Relational Indexing of Vectorial Primitives for Symbol Spotting in Line-Drawing Images. PRL - Pattern Recognition Letters, 31(3), 188–201.
Abstract: This paper presents a symbol spotting approach for indexing a database of line-drawing images by content. Since line drawings are digital-born documents designed with vector graphics software, we present a spotting method based on vector primitives rather than a pixel-based approach. Graphical symbols are represented by a set of vectorial primitives which are described by an off-the-shelf shape descriptor. A relational indexing strategy aims to retrieve symbol locations within the target documents by using a combined numerical-relational description of 2D structures. The zones which are likely to contain the queried symbol are validated by a Hough-like voting scheme. In addition, a performance evaluation framework for symbol spotting in graphical documents is proposed. The presented methodology has been evaluated on a benchmarking set of architectural documents, achieving good performance results.
Keywords: Document image analysis and recognition, Graphics recognition, Symbol spotting, Vectorial representations, Line-drawings
|
|
|
Eduard Vazquez, Theo Gevers, M. Lucassen, Joost Van de Weijer, & Ramon Baldrich. (2010). Saliency of Color Image Derivatives: A Comparison between Computational Models and Human Perception. JOSA A - Journal of the Optical Society of America A, 27(3), 613–621.
Abstract: In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The computational methods are evaluated on bottom-up saliency in a psychophysical experiment, and on the more complex task of salient object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model, and the proposed methods are significantly better at predicting color saliency (with a human-method correspondence of up to 74.75% and an observer agreement of 86.8%) than state-of-the-art models. Furthermore, results from salient object detection confirm that an early fusion of color and contrast provides accurate performance for computing visual saliency, with a hit rate of up to 95.2%.
|
|
|
O. Fors, J. Nuñez, Xavier Otazu, A. Prades, & Robert D. Cardinal. (2010). Improving the Ability of Image Sensors to Detect Faint Stars and Moving Objects Using Image Deconvolution Techniques. SENS - Sensors, 10(3), 1743–1752.
Abstract: In this paper we show how image deconvolution techniques can increase the ability of image sensors (for example, CCD imagers) to detect faint stars or faint orbital objects (small satellites and space debris). In the case of faint stars, we show that this benefit is equivalent to doubling the quantum efficiency of the image sensor or to increasing the effective telescope aperture by more than 30%, without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal-to-noise ratio of the image, which helps to discover and control dangerous objects such as space debris or lost satellites. The benefits obtained using CCD detectors can be extrapolated to any kind of image sensor.
Keywords: image processing; image deconvolution; faint stars; space debris; wavelet transform
|
|
|
Alicia Fornes, Josep Llados, Gemma Sanchez, & Dimosthenis Karatzas. (2010). Rotation Invariant Hand-Drawn Symbol Recognition based on a Dynamic Time Warping Model. IJDAR - International Journal on Document Analysis and Recognition, 13(3), 229–241.
Abstract: One of the major difficulties of handwriting symbol recognition is the high variability among symbols because of the different writer styles. In this paper, we introduce a robust approach for describing and recognizing hand-drawn symbols tolerant to these writer style differences. This method, which is invariant to scale and rotation, is based on the dynamic time warping (DTW) algorithm. The symbols are described by vector sequences, a variation of the DTW distance is used for computing the matching distance, and K-Nearest Neighbor is used to classify them. Our approach has been evaluated in two benchmarking scenarios consisting of hand-drawn symbols. Compared with state-of-the-art methods for symbol recognition, our method shows higher tolerance to the irregular deformations induced by hand-drawn strokes.
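The DTW matching referred to above can be sketched as follows. This is the generic textbook dynamic-time-warping recurrence on 1-D sequences, for illustration only; the paper's variant operates on vector sequences describing symbol strokes and adds rotation invariance.

```python
# Hedged sketch: classic DTW distance between two sequences.
# The cost function `dist` and the 1-D inputs are illustrative choices;
# the paper applies a DTW variant to vector sequences.
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Minimum cumulative alignment cost between sequences a and b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # Extend the cheapest of the three predecessor alignments.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warping path may stretch or compress either sequence locally, DTW tolerates the irregular deformations of hand-drawn strokes better than a rigid point-to-point distance; a K-Nearest Neighbor classifier can then use this distance directly.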
|
|
|
Mathieu Nicolas Delalandre, Ernest Valveny, Tony Pridmore, & Dimosthenis Karatzas. (2010). Generation of Synthetic Documents for Performance Evaluation of Symbol Recognition & Spotting Systems. IJDAR - International Journal on Document Analysis and Recognition, 13(3), 187–207.
Abstract: This paper deals with the topic of performance evaluation of symbol recognition & spotting systems. We propose a new approach to the generation of synthetic graphics documents containing non-isolated symbols in a real context. The approach is based on the definition of a set of constraints that permit us to place symbols on a pre-defined background according to the properties of a particular domain (architecture, electronics, engineering, etc.). In this way, we can obtain a large number of images resembling real documents by simply defining the set of constraints and providing a few pre-defined backgrounds. As the documents are synthetically generated, the ground truth (the location and label of every symbol) becomes automatically available. We have applied this approach to the generation of a large database of architectural drawings and electronic diagrams, which shows the flexibility of the system. Performance evaluation experiments on a symbol localization system show that our approach permits generating documents with different features that are reflected in variations of the localization results.
|
|
|
Antonio Lopez, Joan Serrat, Cristina Cañero, Felipe Lumbreras, & T. Graf. (2010). Robust lane markings detection and road geometry computation. IJAT - International Journal of Automotive Technology, 11(3), 395–407.
Abstract: Detection of lane markings based on a camera sensor can be a low-cost solution to lane departure and curve-over-speed warnings. A number of methods and implementations have been reported in the literature. However, reliable detection is still an issue owing, for example, to cast shadows, worn or occluded markings, and variable ambient lighting conditions. We focus on increasing detection reliability in two ways. First, we employed an image feature other than the commonly used edges: ridges, which we claim addresses this problem better. Second, we adapted RANSAC, a generic robust estimation method, to fit a parametric model of a pair of lane lines to the image features, based on both ridgeness and ridge orientation. In addition, the model was fitted for the left and right lane lines simultaneously to enforce a consistent result. Four measures of interest for driver assistance applications were computed directly from the fitted parametric model at each frame: lane width, lane curvature, vehicle yaw angle, and lateral offset with regard to the lane medial axis. We qualitatively assessed our method on video sequences captured on several road types and under very different lighting conditions. We also quantitatively assessed it on synthetic but realistic video sequences for which the road geometry and vehicle trajectory ground truth are known.
Keywords: lane markings
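The RANSAC adaptation mentioned above follows the standard hypothesize-and-verify loop. A minimal sketch of that loop, fitting a single line y = m*x + c to noisy 2-D points (all names illustrative; the paper fits a richer parametric model of a lane-line pair to ridge features, not raw points):

```python
import random

# Hedged RANSAC sketch: repeatedly fit a line to a minimal sample of
# 2 points and keep the hypothesis with the largest inlier consensus.
def ransac_line(points, n_iters=200, thresh=0.5, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    best_model, best_inliers = None, []
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)  # minimal sample
        if x1 == x2:
            continue  # skip vertical candidates in this simple slope form
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        # Consensus set: points within `thresh` of the hypothesized line.
        inliers = [(x, y) for x, y in points if abs(y - (m * x + c)) < thresh]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, c), inliers
    return best_model, best_inliers
```

The key property, useful for worn or partially occluded markings, is that the estimate is driven by the consensus set, so gross outliers never contaminate the fit as they would in least squares.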
|
|
|
Jaume Garcia, Debora Gil, Luis Badiella, Aura Hernandez-Sabate, Francesc Carreras, Sandra Pujades, et al. (2010). A Normalized Framework for the Design of Feature Spaces Assessing the Left Ventricular Function. TMI - IEEE Transactions on Medical Imaging, 29(3), 733–745.
Abstract: A thorough description of left ventricular function requires combining complementary regional scores. A main limitation is the lack of multiparametric normality models oriented to the assessment of regional wall motion abnormalities (RWMA). This paper covers two main topics involved in RWMA assessment. We propose a general framework allowing the fusion and comparison of different regional scores across subjects. Our framework is used to explore which combination of regional scores (including 2-D motion and strains) is better suited for RWMA detection. Our statistical analysis indicates that, for a proper (within interobserver variability) identification of RWMA, models should consider motion and extreme strains.
|
|
|
Debora Gil, Jose Maria-Carazo, & Roberto Marabini. (2006). On the nature of 2D crystal unbending. Journal of Structural Biology, 156(3), 546–555.
Abstract: Crystal unbending, the process that aims to recover a perfect crystal from experimental data, is one of the more important steps in electron crystallography image processing. The unbending process involves three steps: estimation of the unit cell displacements from their ideal positions, extension of the deformation field to the whole image, and transformation of the image in order to recover an ideal crystal. In this work, we present a systematic analysis of the second step oriented to address two issues. First, whether the unit cells remain undistorted and only the distances between them should change (rigid case), or whether the cells should be modified with the same deformation suffered by the whole crystal (elastic case). Second, the performance of different extension algorithms (interpolation versus approximation) is explored. Our experiments show that there is no difference between the elastic and rigid cases or among the extension algorithms, which implies that the deformation fields are constant over large areas. Furthermore, our results indicate that the main source of error is the transformation of the crystal image.
Keywords: Electron microscopy
|
|
|
Debora Gil, & Petia Radeva. (2004). Shape Restoration via a Regularized Curvature Flow. Journal of Mathematical Imaging and Vision, 21(3), 205–223.
Abstract: Any image filtering operator designed for automatic shape restoration should be robust, whatever the nature and degree of the noise, and should exhibit non-trivial smooth asymptotic behavior. Moreover, a stopping criterion should be determined by characteristics of the evolved image rather than by the number of iterations. Among the several PDE-based techniques, curvature flows appear to be highly reliable for strongly noisy images compared to image diffusion processes.
In the present paper, we introduce a regularized curvature flow (RCF) that admits non-trivial steady states. It is based on a measure of local curve smoothness that takes into account the regularity of the curve curvature and serves as a stopping term in the mean curvature flow. We prove that this measure decreases over the orbits of RCF, which endows the method with a natural stopping criterion in terms of the magnitude of this measure. Further, in its discrete version it produces steady states consisting of piecewise regular curves. Numerical experiments made on synthetic shapes corrupted with different kinds of noise show the abilities and limitations of each of the current geometric flows and the benefits of RCF. Finally, we present results on real images that illustrate the usefulness of the present approach in practical applications.
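For context, the mean curvature flow that RCF builds on moves each point of a curve along its normal at a speed given by the curvature. Schematically, in standard notation (this is the textbook form plus a generic stopping term; the exact formulation of RCF is defined in the paper):

```latex
% Classical mean curvature flow of a curve C(u,t):
\frac{\partial C}{\partial t} = \kappa \, \mathbf{N}
% A regularized flow, schematically, modulates this speed by a stopping
% term g driven by a local smoothness measure r of the curve:
\frac{\partial C}{\partial t} = g(r)\,\kappa\,\mathbf{N},
\qquad g(r) \to 0 \ \text{on locally regular curves,}
```

so that, unlike plain mean curvature flow (which shrinks every closed curve to a point), the regularized evolution can halt at non-trivial piecewise regular steady states.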
|
|
|
Josep Llados, Jaime Lopez-Krahe, & Enric Marti. (1997). A system to understand hand-drawn floor plans using subgraph isomorphism and Hough transform. Machine Vision and Applications, 10, 150–158.
Abstract: Presently, man-machine interface development is a widespread research activity. A system to understand hand-drawn architectural drawings in a CAD environment is presented in this paper. To understand a document, we have to identify its building elements and their structural properties. An attributed graph structure is chosen as a symbolic representation of the input document and of the patterns to recognize in it. An inexact subgraph isomorphism procedure using relaxation labeling techniques is performed. In this paper we focus on how to speed up the matching. There is a building element, the walls, characterized by a hatching pattern. Using a straight-line Hough transform (SLHT)-based method, we recognize this pattern, characterized by parallel straight lines, and remove from the input graph the edges belonging to it. The isomorphism is then applied to the remainder of the input graph. When all the building elements have been recognized, the document is redrawn, correcting the inaccurate strokes obtained from the hand-drawn input.
Keywords: Line drawings – Hough transform – Graph matching – CAD systems – Graphics recognition
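The SLHT step above exploits the fact that in the rho-theta parameterization, parallel lines produce accumulator peaks sharing the same theta. A minimal sketch of the voting stage (toy code over point coordinates; a real implementation works on edge pixels with tuned bin sizes):

```python
import math
from collections import defaultdict

# Hedged sketch of a straight-line Hough transform accumulator.
# Each point votes for every (rho, theta) line passing through it;
# collinear points pile their votes into the same cell.
def hough_lines(points, n_theta=180, rho_step=1.0):
    acc = defaultdict(int)
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(rho / rho_step), t)] += 1
    return acc

# Toy hatching pattern: two parallel horizontal lines, y = 0 and y = 5.
pts = [(x, 0) for x in range(20)] + [(x, 5) for x in range(20)]
acc = hough_lines(pts)
```

In this toy setup the cells (0, 90) and (5, 90) both reach the maximum vote count of 20: two peaks at distinct rho values but the same theta index, which is precisely the signature of a parallel-line hatching pattern.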
|
|