Marçal Rusiñol, J. Chazalon, & Jean-Marc Ogier. (2014). Combining Focus Measure Operators to Predict OCR Accuracy in Mobile-Captured Document Images. In 11th IAPR International Workshop on Document Analysis Systems (pp. 181–185).
Abstract: Mobile document image acquisition is a new trend raising serious issues in business document processing workflows. Such a digitization procedure is unreliable and introduces many distortions, which must be detected as soon as possible, on the mobile device, to avoid paying data transmission fees and losing information due to the inability to re-capture a document that is only temporarily available. In this context, out-of-focus blur is a major issue: users have no direct control over it, and it seriously degrades OCR recognition. In this paper, we concentrate on the estimation of focus quality, to ensure sufficient legibility of a document image for OCR processing. We propose two contributions to improve OCR accuracy prediction for mobile-captured document images. First, we present 24 focus measures, never tested on document images, which are fast to compute and require no training. Second, we show that a combination of those measures achieves state-of-the-art performance in terms of correlation with OCR accuracy. The resulting approach is fast, robust, and easy to implement on a mobile device. Experiments are performed on a public dataset, and precise details about the image processing are given.
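The abstract does not enumerate the 24 focus measures; as an illustration of the family of operators involved, the sketch below computes one classical no-reference focus measure, the variance of the Laplacian. The function name and the synthetic test images are ours, not the paper's:

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Focus measure: variance of the discrete Laplacian.

    Sharp (in-focus) images have strong edges, hence a high-variance
    Laplacian response; blurred images score low.
    """
    img = img.astype(np.float64)
    # 4-neighbour discrete Laplacian, evaluated on the image interior
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] +
           img[1:-1, :-2] + img[1:-1, 2:] -
           4.0 * img[1:-1, 1:-1])
    return float(lap.var())

# A sharp step edge should score higher than a blurred (ramp) counterpart.
sharp = np.zeros((32, 32))
sharp[:, 16:] = 255.0
blurred = np.cumsum(sharp, axis=1)          # linear ramp = heavily blurred edge
blurred = 255.0 * blurred / blurred.max()
assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

The paper's contribution is combining many such operators; a simple combination would be a (learned or fixed) weighted sum of several normalized measure outputs.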
|
Javier Vazquez, C. Alejandro Parraga, & Maria Vanrell. (2009). Ordinal pairwise method for natural images comparison. PER - Perception, 38, 180.
Abstract: We developed a new psychophysical method to compare different colour appearance models when applied to natural scenes. The method was as follows: two images (processed by different algorithms) were displayed on a CRT monitor and observers were asked to select the most natural of them. The original images were gathered by means of a calibrated trichromatic digital camera and presented one on top of the other on a calibrated screen. The selection was made by pressing on a 6-button IR box, which allowed observers not only to select the most natural image but also to rate their selection. The rating system allowed observers to register how much more natural their chosen image was (eg much more, definitely more, slightly more), which gave us valuable extra information on the selection process. The results were analysed considering the selection both as a binary choice (using Thurstone's law of comparative judgement) and with the Bradley–Terry method for ordinal comparison. Our results show a significant difference in the rating scales obtained. Although this method has been used in colour constancy algorithm comparisons, its uses are much wider, eg to compare algorithms for image compression, rendering, recolouring, etc.
|
C. Alejandro Parraga, Javier Vazquez, & Maria Vanrell. (2009). A new cone activation-based natural images dataset. PER - Perception, 38, 180.
Abstract: We generated a new dataset of digital natural images where each colour plane corresponds to the human LMS (long-, medium-, short-wavelength) cone activations. The images were chosen to represent five different visual environments (eg forest, seaside, mountain snow, urban, motorways) and were taken under natural illumination at different times of day. At the bottom-left corner of each picture there was a matte grey ball of approximately constant spectral reflectance (across the camera's response spectrum) and nearly Lambertian reflective properties, which makes it possible to compute (and remove, if necessary) the illuminant's colour and intensity. The camera (Sigma Foveon SD10) was calibrated by measuring its sensor's spectral responses using a set of 31 spectrally narrowband interference filters. This allowed conversion of the final camera-dependent RGB colour space into the Smith and Pokorny (1975) cone activation space by means of a polynomial transformation, optimised for a set of 1269 Munsell chip reflectances. This new method is an improvement over the usual 3 × 3 matrix transformation, which is only accurate for spectrally narrowband colours. The camera-to-LMS transformation can be recalculated to consider other non-human visual systems. The dataset is available to download from our website.
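The polynomial camera-to-LMS calibration described above can be sketched as a least-squares fit over polynomial features of the RGB values. The second-order feature set, the synthetic data, and all names below are our assumptions for illustration; the paper's actual polynomial and calibration data are not given in the abstract:

```python
import numpy as np

def poly_features(rgb: np.ndarray) -> np.ndarray:
    """Second-order polynomial expansion of RGB triplets:
    [1, R, G, B, RG, RB, GB, R^2, G^2, B^2]."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([np.ones_like(r), r, g, b,
                     r * g, r * b, g * b, r**2, g**2, b**2], axis=1)

rng = np.random.default_rng(0)
rgb = rng.uniform(0.0, 1.0, size=(1269, 3))    # stand-in for the Munsell set
true_map = rng.uniform(0.0, 1.0, size=(3, 3))  # hypothetical camera-to-LMS map
lms = rgb @ true_map.T                          # "measured" cone activations

# Fit one set of polynomial coefficients per L/M/S channel by least squares
coeffs, *_ = np.linalg.lstsq(poly_features(rgb), lms, rcond=None)
pred = poly_features(rgb) @ coeffs
assert np.allclose(pred, lms, atol=1e-6)
```

With real calibration data the relation is not exactly linear, which is precisely why the higher-order terms improve on a plain 3 × 3 matrix.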
|
Neus Salvatella, E. Fernandez-Nofrerias, Francesco Ciompi, O. Rodriguez-Leor, Xavier Carrillo, R. Hemetsberger, et al. (2010). Canvis de volum a la arteria radial despres de la administracio de dos tractaments vasodilatadors. Avaluacio mitjançant ecografia intravascular [Volume changes in the radial artery after the administration of two vasodilator treatments: assessment by intravascular ultrasound]. In 22nd Congres Societat Catalana de Cardiologia, (179).
|
Eduardo Tusa, Arash Akbarinia, Raquel Gil Rodriguez, & Corina Barbalata. (2015). Real-Time Face Detection and Tracking Utilising OpenMP and ROS. In 3rd Asia-Pacific Conference on Computer Aided System Engineering (pp. 179–184).
Abstract: The first requisite for a robot to succeed in social interactions is accurate human localisation, i.e. subject detection and tracking. Later, it is estimated whether an interaction partner seeks attention, for example by interpreting the position and orientation of the body. In computer vision, these cues are usually obtained from colour images, whose quality degrades in poorly illuminated social scenes. In these scenarios depth sensors offer a richer representation; therefore, it is important to combine colour and depth information. The second aspect that plays a fundamental role in the acceptance of social robots is their real-time capability. Processing colour and depth images is computationally demanding. To overcome this we propose a parallelisation strategy for face detection and tracking based on two different architectures: message passing and shared memory. Our results demonstrate high accuracy at low computational cost, processing nine times more frames in the parallel implementation, which enables real-time social robot interaction.
Keywords: RGB-D; Kinect; Human Detection and Tracking; ROS; OpenMP
|
Fernando Vilariño, Panagiota Spyridonos, Jordi Vitria, C. Malagelada, & Petia Radeva. (2006). Linear Radial Patterns Characterization for Automatic Detection of Tonic Intestinal Contractions. In J.F. Martínez-Trinidad et al. (Eds.), 11th Iberoamerican Congress on Pattern Recognition (Vol. 4225, pp. 178–187). LNCS. Berlin Heidelberg: Springer Verlag.
Abstract: This work tackles the categorization of general linear radial patterns by means of valley and ridge detection and the use of descriptors of directional information, provided by steerable filters in different regions of the image. We successfully apply our proposal to the specific case of automatic detection of tonic contractions in video capsule endoscopy, which represent a paradigmatic example of linear radial patterns.
|
Yainuvis Socarras, David Vazquez, Antonio Lopez, David Geronimo, & Theo Gevers. (2012). Improving HOG with Image Segmentation: Application to Human Detection. In J. Blanc-Talon et al. (Eds.), 11th International Conference on Advanced Concepts for Intelligent Vision Systems (Vol. 7517, pp. 178–189). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we improve the histogram of oriented gradients (HOG), a core descriptor of state-of-the-art object detection, by using higher-level information coming from image segmentation. The idea is to re-weight the descriptor while computing it, without increasing its size. The benefits of the proposal are two-fold: (i) to improve the performance of the detector by enriching the descriptor information and (ii) to take advantage of the information from image segmentation, which in fact is likely to be used in other stages of the detection system, such as candidate generation or refinement.
We test our technique on the INRIA person dataset, which was originally developed to test HOG, embedding it in a human detection system. The well-known mean-shift segmentation method (from smaller to larger super-pixels) and different methods to re-weight the original descriptor (constant, region-luminance, color- or texture-dependent) have been evaluated. We achieve performance improvements of 4.47% in detection rate through the use of differences of color between contour pixel neighborhoods as the re-weighting function.
Keywords: Segmentation; Pedestrian Detection
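The re-weighting idea from the abstract above, scaling each pixel's gradient vote by a segmentation-derived weight while keeping the descriptor size fixed, can be sketched for a single HOG cell. The function, the weight maps, and the cell size are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def weighted_hog_cell(cell: np.ndarray, weights: np.ndarray,
                      bins: int = 9) -> np.ndarray:
    """Orientation histogram of one HOG cell, with each pixel's gradient
    magnitude scaled by a segmentation-derived weight before voting.
    The descriptor size (``bins``) is unchanged; only the votes change."""
    gy, gx = np.gradient(cell.astype(np.float64))
    mag = np.hypot(gx, gy) * weights                      # re-weighting step
    ang = np.mod(np.degrees(np.arctan2(gy, gx)), 180.0)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

rng = np.random.default_rng(1)
cell = rng.uniform(0, 255, size=(8, 8))
uniform = weighted_hog_cell(cell, np.ones((8, 8)))            # plain HOG cell
boosted = weighted_hog_cell(cell, rng.uniform(0.5, 1.5, (8, 8)))
assert uniform.shape == boosted.shape == (9,)
```

In the paper's setting the weight map would come from mean-shift super-pixels (e.g. contour-pixel color differences) rather than from random values as here.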
|
Cristhian A. Aguilera-Carrasco, Angel Sappa, & Ricardo Toledo. (2015). LGHD: a Feature Descriptor for Matching Across Non-Linear Intensity Variations. In 22nd IEEE International Conference on Image Processing (pp. 178–181).
|
Patricia Suarez, Dario Carpio, & Angel Sappa. (2021). Non-homogeneous Haze Removal Through a Multiple Attention Module Architecture. In 16th International Symposium on Visual Computing (Vol. 13018, pp. 178–190). LNCS.
Abstract: This paper presents a novel attention-based architecture to remove non-homogeneous haze. The proposed model focuses on obtaining the most representative characteristics of the image at each learning cycle, by means of adaptive attention modules coupled with a residual learning convolutional network, the latter based on the Res2Net model. The proposed architecture is trained with just a small set of images. Its performance is evaluated on a public benchmark (images from the non-homogeneous haze NTIRE 2021 challenge) and compared with state-of-the-art approaches, reaching the best result.
|
Esmitt Ramirez, Carles Sanchez, Agnes Borras, Marta Diez-Ferrer, Antoni Rosell, & Debora Gil. (2018). BronchoX: bronchoscopy exploration software for biopsy intervention planning. HTL - Healthcare Technology Letters, 177–182.
Abstract: Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). VB software involves locating a PL and calculating a route, starting from the trachea, to reach it. Selecting a VB software package can be a complex process, and there is no consensus in the medical software community on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), a VB software package for planning biopsy interventions that generates physician-readable instructions to reach the PLs. The solution is open source, multiplatform, and extensible for future functionality, designed by their multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported results focus on testing the effectiveness of the proposal as exploration software and on measuring its accuracy as a guidance system to reach PLs. To this end, 40 different virtual planning paths were created to guide physicians to distal bronchioles. These results show that BronchoX is a functional software tool and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea.
|
Thanh Ha Do, Oriol Ramos Terrades, & Salvatore Tabbone. (2019). DSD: document sparse-based denoising algorithm. PAA - Pattern Analysis and Applications, 22(1), 177–186.
Abstract: In this paper, we present a sparse-based denoising algorithm for scanned documents. This method can be applied to any kind of scanned document with satisfactory results. Unlike other approaches, the proposed approach encodes noisy documents through sparse representation and visual dictionary learning techniques, without any prior noise model. Moreover, we propose a precision parameter estimator. Experiments on several datasets demonstrate the robustness of the proposed approach compared to state-of-the-art methods for document denoising.
Keywords: Document denoising; Sparse representations; Sparse dictionary learning; Document degradation models
|
Isabelle Guyon, Lisheng Sun Hosoya, Marc Boulle, Hugo Jair Escalante, Sergio Escalera, Zhengying Liu, et al. (2019). Analysis of the AutoML Challenge Series 2015-2018. In Automated Machine Learning (pp. 177–219). SSCML. Springer.
Abstract: The ChaLearn AutoML Challenge (the authors are listed in alphabetical order of last name, except the first author, who did most of the writing, and the second author, who produced most of the numerical analyses and plots) (NIPS 2015 – ICML 2016) consisted of six rounds of a machine learning competition of progressive difficulty, subject to limited computational resources. It was followed by a one-round AutoML challenge (PAKDD 2018). The AutoML setting differs from former model selection/hyper-parameter selection challenges, such as the one we previously organized for NIPS 2006: the participants aim to develop fully automated and computationally efficient systems, capable of being trained and tested without human intervention, with code submission. This chapter analyzes the results of these competitions and provides details about the datasets, which were not revealed to the participants. The solutions of the winners are systematically benchmarked over all datasets of all rounds and compared with canonical machine learning algorithms available in scikit-learn. All materials discussed in this chapter (data and code) have been made publicly available at http://automl.chalearn.org/.
|
Joan Mas, J.A. Jorge, Gemma Sanchez, & Josep Llados. (2008). Representing and Parsing Sketched Symbols using Adjacency Grammars and a Grid-Directed Parser. In W. Liu, J. Lladós, & J.M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 176–187). LNCS.
|
Petia Radeva, Michal Drozdzal, Santiago Segui, Laura Igual, Carolina Malagelada, Fernando Azpiroz, et al. (2012). Active labeling: Application to wireless endoscopy analysis. In High Performance Computing and Simulation, International Conference on (pp. 174–181).
Abstract: Today, a robust learner trained in a real supervised machine learning application requires a rich collection of positive and negative examples. Although in many applications it is not difficult to obtain huge amounts of data, labeling those data can be a very expensive process, especially when dealing with data of high variability and complexity. A good example of such cases is data from medical imaging applications, where annotating anomalies like tumors, polyps, atherosclerotic plaque or informative frames in wireless endoscopy requires highly trained experts. Building a representative set of training data from medical videos (e.g. Wireless Capsule Endoscopy) means that thousands of frames must be labeled by an expert. Moreover, data in new videos often differ from, and thus are not represented by, the training set. In this paper, we review the main approaches to active learning and illustrate how active learning can help to reduce expert effort in constructing training sets. We show that by applying active learning criteria, the number of human interventions can be significantly reduced. The proposed system allows the annotation of informative/non-informative frames of Wireless Capsule Endoscopy videos, each containing more than 30,000 frames, with fewer than 100 expert "clicks" per video.
|
Jaume Garcia, David Rotger, Francesc Carreras, R. Leta, & Petia Radeva. (2003). Contrast echography segmentation and tracking by trained deformable models. In Proc. Computers in Cardiology (Vol. 30, pp. 173–176).
Abstract: The objective of this work is to segment the human left ventricle myocardium (LVM) in contrast echocardiography imaging and to track it along a cardiac cycle in order to extract quantitative data about heart function. Ultrasound images are hard to work with due to their speckle appearance. To overcome this, we combine active contour models (ACM), or snakes, with active shape models (ASM). The ability of the ACM to produce closed and smooth curves, together with the power of the ASM to produce shapes similar to those learned, yields a robust algorithm: while the snake is attracted towards the image's main features, the ASM acts as a correction factor. The algorithm was tested independently on 180 frames and satisfactory results were obtained: in 95% of cases the maximum difference between the automatic and the experts' segmentations was less than 12 pixels.
|