Marçal Rusiñol, R. Roset, Josep Llados, & C. Montaner. (2011). Automatic Index Generation of Digitized Map Series by Coordinate Extraction and Interpretation. ePER - e-Perimetron, 219–229.
Abstract: Scanned map images are processed with computer vision algorithms in order to extract relevant geographic information from printed coordinate pairs. The meaningful information is then transformed into georeferencing information for each single map sheet, and the complete set is compiled to produce a graphical index sheet for the map series along with relevant metadata. The whole process is fully automated and trained to attain maximum effectiveness and throughput.
|
Miguel Reyes, Jose Ramirez Moreno, Juan R Revilla, Petia Radeva, & Sergio Escalera. (2011). ADiBAS: Sistema Multisensor de Adquisicion Automatica de Datos Corporales Objetivos, Robustos y Fiables para el Analisis de la Postura y el Movimiento. In 6th Congreso Iberoamericano de Tecnologia de Apoyo a la Discapacidad (pp. 939–944).
Abstract: Analysis of posture and range of motion is fundamental to understanding gesture optimization and thereby to improving performance and detecting possible injuries. This quantification is especially interesting for athletes or for patients with a neurological or musculoskeletal injury, since it makes it possible to follow the patients' evolution, evaluate the efficacy of the applied therapy and, if necessary, propose a modification of the treatment protocol.
In this work we present an automatic system that, using non-invasive technology, automatically captures LED markers placed on the patient and subsequently analyses them in order to present the specialist with objective data that provide better diagnostic support. We also describe a markerless body-posture analysis system whose execution during dynamic sequences gives the patient a high degree of naturalness when performing functional exercises.
|
Antonio Hernandez, Carlo Gatta, Sergio Escalera, Laura Igual, Victoria Martin Yuste, & Petia Radeva. (2011). Accurate and Robust Fully-Automatic QCA: Method and Numerical Validation. In 14th International Conference on Medical Image Computing and Computer Assisted Intervention (Vol. 14, pp. 496–503). Springer.
Abstract: Quantitative Coronary Angiography (QCA) is a methodology used to evaluate arterial diseases and, in particular, the degree of stenosis. In this paper we propose AQCA, a fully automatic method for vessel segmentation based on graph cut theory. Vesselness, geodesic paths and a new multi-scale edgeness map are used to compute a globally optimal artery segmentation. We evaluate the method's performance in a rigorous numerical way on two datasets. The method can detect an artery with a precision of 92.9 ± 5% and a sensitivity of 94.2 ± 6%. The average absolute distance error between the detected and ground truth centerlines is 1.13 ± 0.11 pixels (about 0.27 ± 0.025 mm) and the absolute relative error in the vessel caliber estimation is 2.93% with almost no bias. Moreover, the method can discriminate between arteries and the catheter with an accuracy of 96.4%.
|
Eloi Puertas, Sergio Escalera, & Oriol Pujol. (2011). Multi-Class Multi-Scale Stacked Sequential Learning. In Carlo Sansone, Josef Kittler, & Fabio Roli (Eds.), 10th International Conference on Multiple Classifier Systems (Vol. 6713, pp. 197–206). Springer.
|
Oscar Amoros, Sergio Escalera, & Anna Puig. (2011). Adaboost GPU-based Classifier for Direct Volume Rendering. In International Conference on Computer Graphics Theory and Applications (pp. 215–219).
Abstract: In volume visualization, voxel visibility and materials are defined through interactive editing of the transfer function. In this paper, we present a two-level GPU-based labeling method that computes, at rendering time, a set of labeled structures using the Adaboost machine learning classifier. In a pre-processing step, Adaboost trains a binary classifier from a pre-labeled dataset, taking into account a set of features for each sample. This binary classifier is a weighted combination of weak classifiers, each of which can be expressed as a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied to the features of a set of unlabeled samples. We propose an alternative representation of these classifiers that allows a GPU-based parallelized testing stage embedded into the visualization pipeline. The empirical results confirm that OpenCL-based classification of biomedical datasets is a challenging problem offering opportunities for further research.
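The weighted combination of weak classifiers described in the abstract can be sketched as follows. This is a minimal illustration of a boosted ensemble of decision stumps, where each weak classifier is applied independently (which is what makes the testing stage easy to parallelize on a GPU); the weights, thresholds and features are hypothetical, not the paper's trained model.

```python
import numpy as np

def stump(feature_idx, threshold, polarity):
    """Weak classifier: a simple decision function on a single feature value."""
    def h(X):
        return polarity * np.sign(X[:, feature_idx] - threshold)
    return h

# Hypothetical trained ensemble: (weight alpha_t, weak classifier h_t) pairs.
ensemble = [
    (0.9, stump(0, 0.5, +1)),
    (0.6, stump(1, 0.2, -1)),
    (0.4, stump(0, 0.8, +1)),
]

def adaboost_predict(X, ensemble):
    """Strong classifier: sign of the weighted sum of weak responses.
    Each weak classifier runs independently over all samples."""
    scores = sum(alpha * h(X) for alpha, h in ensemble)
    return np.sign(scores)

# Two unlabeled samples, each described by two feature values.
X = np.array([[0.7, 0.1], [0.3, 0.9]])
labels = adaboost_predict(X, ensemble)
```

In a GPU implementation, each (sample, weak classifier) pair maps naturally to an independent work item, with the weighted reduction done afterwards.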
|
Joan M. Nuñez. (2011). Computer vision techniques for characterization of finger joints in X-ray images (Fernando Vilariño, & Debora Gil, Eds.) (Vol. 165). Master's thesis.
Abstract: Rheumatoid arthritis (RA) is an autoimmune inflammatory type of arthritis which mainly affects the hands in its first stages. Though it is a chronic disease with no cure, treatments require an accurate assessment of the illness's evolution. Such assessment is based on the evaluation of hand X-ray images using one of several available semi-quantitative methods, a task that requires highly trained medical personnel. Automating the assessment would therefore allow professionals to save time and effort. Two stages are involved in this task: first, joint detection; afterwards, joint characterization. Unlike the scarce previous work, this contribution clearly separates these two stages and sets the foundations of a modular assessment system focused on the characterization stage. A hand joint dataset is created and a thorough data analysis is carried out in order to identify relevant features. Since sclerosis and the lower bone were found to be the most important features, different computer vision techniques were used to develop a detector for both of them. Joint space width measures are provided and their correlation with the Sharp-van der Heijde score is verified.
Keywords: Rheumatoid arthritis, X-ray, Sharp Van der Heijde, joint characterization, sclerosis detection, bone detection, edge, ridge
|
Mohammad Rouhani, & Angel Sappa. (2011). Implicit B-Spline Fitting Using the 3L Algorithm. In 18th IEEE International Conference on Image Processing (pp. 893–896).
|
Javier Vazquez. (2011). Colour Constancy in Natural Images Through Colour Naming and Sensor Sharpening (Maria Vanrell, & Graham D. Finlayson, Eds.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Colour is derived from three physical properties: incident light, object reflectance and sensor sensitivities. Incident light varies under natural conditions; hence, recovering the scene illuminant is an important issue in computational colour. One way to deal with this problem under calibrated conditions is by following three steps: 1) building a narrow-band sensor basis to accomplish the diagonal model, 2) building a feasible set of illuminants, and 3) defining criteria to select the best illuminant. In this work we focus on colour constancy for natural images by introducing perceptual criteria in the first and third stages.
To deal with the illuminant selection step, we hypothesise that basic colour categories can be used as anchor categories to recover the best illuminant. These colour names are related to the way the human visual system has evolved to encode relevant natural colour statistics. Therefore the recovered image provides the best representation of the scene labelled with the basic colour terms. We demonstrate with several experiments how this selection criterion achieves current state-of-the-art results in computational colour constancy. In addition, we psychophysically show that the usual angular error used in colour constancy does not correlate with human preferences, and we propose a new perceptual colour constancy evaluation.
The implementation of this selection criterion strongly relies on the use of a diagonal model for illuminant change. Consequently, the second contribution focuses on building an appropriate narrow-band sensor basis to represent natural images. We propose to use the spectral sharpening technique to compute a unique narrow-band basis optimised to represent a large set of natural reflectances under natural illuminants, given in the basis of the human cones. The proposed sensors allow predicting unique hues and the World Colour Survey data independently of the illuminant by using a compact singularity function. Additionally, we studied different families of sharp sensors to minimise different perceptual measures. This study led us to extend the spherical sampling procedure from 3D to 6D.
Several research lines remain open. One natural extension would be to measure the effects of using the computed sharp sensors on the category hypothesis, while another might be to incorporate spatial contextual information to improve the category hypothesis. Finally, much work still needs to be done to explore how individual sensors can be adjusted to the colours in a scene.
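The diagonal model of illuminant change on which the thesis's selection criterion relies can be illustrated with a minimal sketch of a von Kries-style per-channel correction; the sensor responses and illuminant estimates below are hypothetical toy values, not data from the thesis.

```python
import numpy as np

# Sensor responses (one RGB triplet per pixel) under an unknown illuminant.
image = np.array([[0.8, 0.4, 0.2],
                  [0.6, 0.5, 0.3]])

# Under a diagonal model, an illuminant change amounts to scaling each
# channel independently. The gains map the estimated illuminant to a
# canonical (white) one; narrow-band, sharpened sensors are what make
# this diagonal approximation accurate in practice.
estimated_illuminant = np.array([1.0, 0.8, 0.5])
canonical_illuminant = np.array([1.0, 1.0, 1.0])
gains = canonical_illuminant / estimated_illuminant

corrected = image * gains  # per-channel diagonal correction
```

The illuminant-selection step then amounts to choosing the gains whose corrected image best satisfies a criterion, such as the colour-naming anchors described above.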
|
Ferran Diego. (2011). Probabilistic Alignment of Video Sequences Recorded by Moving Cameras (Joan Serrat, Ed.). Ph.D. thesis, Ediciones Graficas Rey.
Abstract: Video alignment consists of integrating multiple video sequences recorded independently into a single video sequence. This means registering them both in time (frame synchronization) and space (image registration) so that the two video sequences can be fused or compared pixel-wise. In spite of being relatively unknown, many applications today may benefit from the availability of robust and efficient video alignment methods. For instance, video surveillance requires integrating video sequences of the same scene recorded at different times in order to detect changes. The problem of aligning videos has been addressed before, but in the relatively simple cases of fixed or rigidly attached cameras and simultaneous acquisition. In addition, most works rely on restrictive assumptions which reduce the problem's difficulty, such as linear time correspondence or knowledge of the complete trajectories of corresponding scene points in the images; to some extent, these assumptions limit the practical applicability of the solutions developed so far. In this thesis, we focus on the challenging problem of aligning sequences recorded at different times from independent moving cameras following similar but not coincident trajectories. More precisely, this thesis covers four studies that advance the state of the art in video alignment. First, we analyze and develop a probabilistic framework for video alignment, that is, a principled way to integrate multiple observations and prior information. Two different approaches are presented that exploit the combination of several purely visual features (image intensities, visual words and a dense motion field descriptor) with global positioning system (GPS) information. Second, we reformulate the problem into a single alignment framework, since previous works on video alignment adopt a divide-and-conquer strategy, i.e., first solve the synchronization and then register corresponding frames. This also generalizes the 'classic' case of a fixed geometric transform and linear time mapping. Third, we exploit the time domain of the video sequences directly in order to avoid exhaustive cross-frame search; this provides relevant information for learning the temporal mapping between pairs of video sequences. Finally, we adapt these methods to the on-line setting for road detection and vehicle geolocation. The qualitative and quantitative results presented in this thesis on a variety of real-world pairs of video sequences show that the proposed method is robust to varying imaging conditions, different image content (e.g., incoming and outgoing vehicles), variations in camera velocity, and different scenarios (indoor and outdoor), going beyond the state of the art. Moreover, the on-line video alignment has been successfully applied to road detection and vehicle geolocation, achieving promising results.
|
Xavier Carrillo, E Fernandez-Nofrerias, Francesco Ciompi, Oriol Rodriguez-Leor, Petia Radeva, Neus Salvatella, et al. (2011). Changes in Radial Artery Volume Assessed Using Intravascular Ultrasound: A Comparison of Two Vasodilator Regimens in Transradial Coronary Intervention. JOIC - Journal of Invasive Cardiology, 23(10), 401–404.
Abstract: OBJECTIVES:
This study used intravascular ultrasound (IVUS) to evaluate radial artery volume changes after intra-arterial administration of nitroglycerin and/or verapamil.
BACKGROUND:
Radial artery spasm, which is associated with radial artery size, is the main limitation of the transradial approach in percutaneous coronary interventions (PCI).
METHODS:
This prospective, randomized study compared the effect of two intra-arterial vasodilator regimens on radial artery volume: 0.2 mg of nitroglycerin plus 2.5 mg of verapamil (Group 1; n = 15) versus 2.5 mg of verapamil alone (Group 2; n = 15). Radial artery lumen volume was assessed using IVUS at two time points: at baseline (5 minutes after sheath insertion) and post-vasodilator (1 minute after drug administration). The luminal volume of the radial artery was computed using ECOC Random Fields (ECOC-RF), a technique used for automatic segmentation of luminal borders in longitudinal cut images from IVUS sequences.
RESULTS:
There was a significant increase in arterial lumen volume in both groups, with an increase from 451 ± 177 mm³ to 508 ± 192 mm³ (p = 0.001) in Group 1 and from 456 ± 188 mm³ to 509 ± 170 mm³ (p = 0.001) in Group 2. There were no significant differences between the groups in terms of absolute volume increase (58 mm³ versus 53 mm³, respectively; p = 0.65) or in relative volume increase (14% versus 20%, respectively; p = 0.69).
CONCLUSIONS:
Administration of nitroglycerin plus verapamil or verapamil alone to the radial artery resulted in similar increases in arterial lumen volume according to ECOC-RF IVUS measurements.
Keywords: radial; vasodilator treatment; percutaneous coronary intervention; IVUS; volumetric IVUS analysis
|
Francesco Ciompi, A. Palaioroutas, M. Loeve, Oriol Pujol, Petia Radeva, H. Tiddens, et al. (2011). Lung Tissue Classification in Severe Advanced Cystic Fibrosis from CT Scans. In MICCAI 2011 4th International Workshop on Pulmonary Image Analysis.
|
Wenjuan Gong, Jürgen Brauer, Michael Arens, & Jordi Gonzalez. (2011). Modeling vs. Learning Approaches for Monocular 3D Human Pose Estimation. In 1st IEEE International Workshop on Performance Evaluation on Recognition of Human Actions and Pose Estimation Methods.
|
Jordi Gonzalez, Josep M. Gonfaus, Carles Fernandez, & Xavier Roca. (2011). Exploiting Natural-Language Interaction in Video Surveillance Systems. In V&L Net Workshop on Vision and Language.
|
Lluis Pere de las Heras, Joan Mas, Gemma Sanchez, & Ernest Valveny. (2011). Descriptor-based Svm Wall Detector. In 9th International Workshop on Graphic Recognition.
Abstract: Architectural floorplans exhibit a large variability in notation. Therefore, segmenting and identifying the elements of any kind of plan becomes a challenging task for approaches based on grouping structural primitives obtained by vectorization. Recently, a patch-based segmentation method working at pixel level and relying on the construction of a visual vocabulary has been proposed; it adapts to different notations by automatically learning the visual appearance of the elements in each notation. In this paper we describe an evolution of this approach in two directions: firstly, we evaluate different features for describing every patch; secondly, we train an SVM classifier to obtain the category of every patch instead of constructing a visual vocabulary. These modifications have been tested for wall detection on two datasets of architectural floorplans with different notations and compared with the results obtained with the original approach.
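The second direction in the abstract, classifying every patch descriptor directly with an SVM rather than through a visual vocabulary, can be sketched as below. This is a toy illustration: the synthetic Gaussian "patch descriptors", labels and kernel choice are hypothetical stand-ins, not the paper's features or trained detector.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training data: one feature descriptor per image patch,
# labeled 1 for "wall" patches and 0 for background.
rng = np.random.default_rng(0)
wall_patches = rng.normal(loc=1.0, scale=0.2, size=(50, 8))
other_patches = rng.normal(loc=0.0, scale=0.2, size=(50, 8))
X = np.vstack([wall_patches, other_patches])
y = np.array([1] * 50 + [0] * 50)

# Train an SVM to assign a category to every patch directly,
# instead of first building a visual vocabulary.
clf = SVC(kernel="rbf").fit(X, y)

# Classify descriptors of two unseen patches (one wall-like, one not).
pred = clf.predict(np.array([[1.0] * 8, [0.0] * 8]))
```

At detection time, every pixel's surrounding patch would be described and classified this way, yielding a wall/non-wall label map over the floorplan.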
|
Marçal Rusiñol, V. Poulain d'Andecy, Dimosthenis Karatzas, & Josep Llados. (2011). Classification of Administrative Document Images by Logo Identification. In Proceedings of the 9th IAPR Workshop on Graphic Recognition.
Abstract: This paper is focused on the categorization of administrative document images (such as invoices) based on the recognition of the supplier's graphical logo. Two different methods are proposed: the first uses a bag-of-visual-words model, whereas the second tries to locate logo images, described by the blurred shape model descriptor, within documents using a sliding-window technique. Preliminary results are reported on a dataset of real administrative documents.
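The bag-of-visual-words model used in the first method can be sketched as follows: quantize each local descriptor to its nearest visual word and histogram the word counts. The toy 2-D descriptors and the three-word vocabulary below are hypothetical; in the paper the descriptors would come from the document image and the vocabulary from clustering a training set.

```python
import numpy as np

def bag_of_visual_words(descriptors, vocabulary):
    """Quantize each local descriptor to its nearest visual word and
    return a normalized histogram of word counts (the BoVW vector)."""
    # Distance from every descriptor to every vocabulary word.
    dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)  # nearest visual word per descriptor
    hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
    return hist / hist.sum()

# Toy 2-D local descriptors from a document image, and a 3-word vocabulary.
descriptors = np.array([[0.1, 0.1], [0.9, 0.8], [0.1, 0.2], [0.5, 0.5]])
vocabulary = np.array([[0.0, 0.0], [1.0, 1.0], [0.5, 0.5]])

bovw = bag_of_visual_words(descriptors, vocabulary)
```

The resulting fixed-length histogram can then be fed to any standard classifier to categorize the document by its logo.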
|