Jaime Moreno, Xavier Otazu, & Maria Vanrell. (2010). Contribution of CIWaM in JPEG2000 Quantization for Color Images. In Proceedings of The CREATE 2010 Conference (pp. 132–136).
Abstract: The aim of this work is to explain how to apply perceptual concepts to define a perceptual pre-quantizer and to improve the JPEG2000 compressor. The approach consists of quantizing wavelet transform coefficients using some properties of human visual system behavior. Noise is fatal to image compression performance, because it is both annoying for the observer and consumes excessive bandwidth when the imagery is transmitted. Perceptual pre-quantization removes unperceivable details and thus improves both visual impression and transmission properties. The comparison between JPEG2000 without and with perceptual pre-quantization shows that the latter is not favorable in terms of PSNR, but the recovered image is more compressed at the same or even better visual quality, measured with a weighted PSNR. Perceptual criteria were taken from CIWaM (Chromatic Induction Wavelet Model).
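The pre-quantization idea can be sketched as uniform quantization of wavelet coefficients with a perceptually weighted step. This is only an illustration of the principle; the weights below are placeholder values, not the ones CIWaM would produce.

```python
# Illustrative perceptual pre-quantizer: quantize-and-dequantize wavelet
# coefficients with the step enlarged for subbands deemed less perceptually
# relevant. The weight is a placeholder, not a CIWaM value.

def prequantize(coeffs, base_step, perceptual_weight):
    """Quantize and dequantize coefficients with step base_step * weight.

    A weight > 1 discards more (less visible) detail in that subband.
    """
    step = base_step * perceptual_weight
    return [round(c / step) * step for c in coeffs]

# One high-frequency subband: small coefficients vanish, large ones survive.
subband = [0.4, -0.3, 2.6, -5.1, 0.2]
quantized = prequantize(subband, base_step=1.0, perceptual_weight=2.0)
print(quantized)  # small values collapse to 0.0
```

Larger perceptual weights remove more detail, trading PSNR for bitrate, which mirrors the PSNR-versus-compression trade-off the abstract reports.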
|
Joaquin Salas, Wendy Avalos, Rafael Castañeda, & Mario Maya. (2006). A machine-vision system to measure the parameters describing the performance of a Foucault pendulum. Machine Vision and Applications, 133–138.
|
Jaume Garcia, Debora Gil, Francesc Carreras, Sandra Pujades, & R. Leta. (2007). Modelització 4-Dimensional de la Funció Sistòlica del Ventricle Esquerre. In XIX Congrés de la Societat Catalana de Cardiologia de Barcelona (pp. 133–134). Barcelona (Spain).
Abstract: Technological advances in medical image processing make it possible, with the appropriate software, to reconstruct three-dimensional images of cardiovascular structures and endow them with motion. The resulting 4D images facilitate the study of the pathophysiology of heart failure in terms of disorders of ventricular electromechanical activation, which may be of interest when selecting candidate patients for resynchronization therapies. We present preliminary results of the 4D reconstruction of the left ventricle (LV) from myocardial tagging sequences of the LV.
|
Debora Gil, Petia Radeva, & Josefina Mauri. (2002). IVUS Segmentation via a Regularized Curvature Flow. In X Congreso Anual de la Sociedad Española de Ingeniería Biomédica CASEIB 2002 (pp. 133–136). Zaragoza, Spain.
Abstract: Cardiac diseases are diagnosed and treated through a study of the morphology and dynamics of cardiac arteries. Intravascular Ultrasound (IVUS) imaging is of high interest to physicians since it provides both kinds of information. At the current state of the art in image segmentation, a robust detection of the arterial lumen in IVUS demands manual intervention or ECG-gating. Manual intervention is a tedious and time-consuming task that requires experienced observers, whereas ECG-gating is an acquisition technique not available in all clinical centers. We introduce a parametric algorithm that detects the arterial luminal border in in vivo sequences. The method consists of smoothing the sequences' level surfaces under a regularized mean curvature flow that admits non-trivial steady states. The flow is based on a measure of local surface smoothness that takes into account the regularity of the surface curvature.
|
Francisco Alvaro, Francisco Cruz, Joan Andreu Sanchez, Oriol Ramos Terrades, & Jose Miguel Benedi. (2013). Page Segmentation of Structured Documents Using 2D Stochastic Context-Free Grammars. In 6th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 7887, pp. 133–140). LNCS. Springer Berlin Heidelberg.
Abstract: In this paper we define a two-dimensional extension of Stochastic Context-Free Grammars for page segmentation of structured documents. Two sets of text classification features are used to perform an initial classification of each zone of the page. Then, the page segmentation is obtained as the most likely hypothesis according to a grammar. This approach is compared to Conditional Random Fields, and results show significant improvements in several cases. Furthermore, grammars provide a detailed segmentation that allows a semantic evaluation, which also validates this model.
|
Patricia Suarez, & Angel Sappa. (2023). Toward a Thermal Image-Like Representation. In Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (pp. 133–140).
Abstract: This paper proposes a novel model to obtain thermal image-like representations to be used as input in any thermal image compressive sensing approach (e.g., thermal image filtering, enhancement, super-resolution). Thermal images offer interesting information about the objects in the scene in addition to their temperature. Unfortunately, in most cases thermal cameras acquire low-resolution/quality images. Hence, in order to improve these images, several state-of-the-art approaches exploit complementary information from a low-cost channel (visible image) to increase the image quality of an expensive channel (infrared image). In these approaches, visible images are fused at different levels without considering that the images acquire information at different bands of the spectrum. In this paper a novel approach is proposed to generate thermal image-like representations from low-cost visible images by means of a contrastive cycled GAN network. The obtained representations (synthetic thermal images) can later be used to improve the low-quality thermal image of the same scene. Experimental results on different datasets are presented.
|
Jorge Bernal, F. Javier Sanchez, & Fernando Vilariño. (2011). A Region Segmentation Method for Colonoscopy Images Using a Model of Polyp Appearance. In J. Vitrià, J.M. Sanches, & M. Hernández (Eds.), 5th Iberian Conference on Pattern Recognition and Image Analysis (Vol. 6669, pp. 134–143). LNCS.
Abstract: This work aims at the segmentation of colonoscopy images into a minimum number of informative regions. Our method performs in such a way that, if a polyp is present in the image, it will be exclusively and totally contained in a single region. This result can be used in later stages to classify regions as polyp-containing candidates. The output of the algorithm also defines which regions can be considered as non-informative. The algorithm starts with a high number of initial regions and merges them taking into account the model of polyp appearance obtained from available data. The results show that our segmentations of polyp regions are more accurate than those of state-of-the-art methods.
Keywords: Colonoscopy, Polyp Detection, Region Merging, Region Segmentation.
|
Mathieu Nicolas Delalandre, Jean-Marc Ogier, & Josep Llados. (2008). A Fast CBIR System of Old Ornamental Letter. In W. Liu, J. Llados, & J.M. Ogier (Eds.), Graphics Recognition: Recent Advances and New Opportunities (Vol. 5046, pp. 135–144). LNCS.
|
Marc Castello, Jordi Gonzalez, Ariel Amato, Pau Baiget, Carles Fernandez, Josep M. Gonfaus, et al. (2013). Exploiting Multimodal Interaction Techniques for Video-Surveillance. In Multimodal Interaction in Image and Video Applications Intelligent Systems Reference Library (Vol. 48, pp. 135–151). Springer Berlin Heidelberg.
Abstract: In this paper we present an example of a video surveillance application that exploits Multimodal Interactive (MI) technologies. The main objective of the so-called VID-Hum prototype was to develop a cognitive artificial system for both the detection and description of a particular set of human behaviours arising from real-world events. The main procedure of the prototype described in this chapter entails: (i) adaptation, since the system adapts itself to the most common behaviours (qualitative data) inferred from tracking (quantitative data), thus being able to recognize abnormal behaviours; (ii) feedback, since an advanced interface based on Natural Language understanding allows end-users to communicate with the prototype by means of conceptual sentences; and (iii) multimodality, since a virtual avatar has been designed to describe what is happening in the scene, based on the textual interpretations generated by the prototype. Thus, the MI methodology has provided an adequate framework for all these cooperating processes.
|
Jaume Gibert, Ernest Valveny, Horst Bunke, & Alicia Fornes. (2012). On the Correlation of Graph Edit Distance and L1 Distance in the Attribute Statistics Embedding Space. In Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshop (Vol. 7626, pp. 135–143). LNCS. Springer-Verlag, Berlin.
Abstract: Graph embeddings in vector spaces aim at assigning a pattern vector to every graph so that the problems of graph classification and clustering can be solved by using data processing algorithms originally developed for statistical feature vectors. An important requirement that graph features should fulfil is that they reproduce, as much as possible, the properties among objects in the graph domain. In particular, it is usually desired that distances between pairs of graphs in the graph domain closely resemble those between their corresponding vectorial representations. In this work, we analyse relations between the edit distance in the graph domain and the L1 distance of the attribute statistics based embedding, for which good classification performance has been reported on various datasets. We show that there is actually a high correlation between the two kinds of distances, provided that the corresponding parameter values that account for balancing the weight between node and edge based features are properly selected.
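The kind of correlation analysis the abstract describes can be illustrated with a toy computation: given, for the same graph pairs, a graph edit distance and an L1 distance between embedding vectors, the Pearson coefficient measures how closely the embedding preserves the graph-domain distances. The embeddings and edit distances below are made-up numbers, not data from the paper.

```python
import math

def l1_distance(u, v):
    """L1 (Manhattan) distance between two embedding vectors."""
    return sum(abs(a - b) for a, b in zip(u, v))

def pearson(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical attribute-statistics embeddings for four graphs.
emb = [[1.0, 0.0, 2.0], [1.5, 0.5, 2.0], [4.0, 1.0, 0.0], [3.5, 1.5, 0.5]]
# Hypothetical graph edit distances for the same six pairs, in the same order.
ged = [1.0, 6.0, 6.0, 5.0, 5.0, 2.0]

pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
l1 = [l1_distance(emb[i], emb[j]) for i, j in pairs]
print(round(pearson(ged, l1), 3))  # close to 1: distances are well preserved
```

A coefficient near 1 over many graph pairs is what the paper reports when the node/edge balancing parameters are chosen well.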
|
Lluis Pere de las Heras, David Fernandez, Alicia Fornes, Ernest Valveny, Gemma Sanchez, & Josep Llados. (2014). Runlength Histogram Image Signature for Perceptual Retrieval of Architectural Floor Plans. In Graphics Recognition. Current Trends and Challenges (Vol. 8746, pp. 135–146). LNCS. Springer Berlin Heidelberg.
Abstract: This paper proposes a runlength histogram signature as a perceptual descriptor of architectural plans in a retrieval scenario. The style of an architectural drawing is characterized by the perception of lines, shapes and texture. Such visual stimuli are the basis for defining semantic concepts such as space properties, symmetry, density, etc. We propose runlength histograms extracted in vertical, horizontal and diagonal directions as a characterization of line and space properties in floorplans, so the signature can be roughly associated with a description of walls and room structure. A retrieval application illustrates the performance of the proposed approach: given a plan as a query, similar ones are retrieved from a database. A ground truth based on human observation has been constructed to validate the hypothesis. Additional retrieval results on sketched building facades are reported qualitatively. The descriptor's good performance and its adaptability to two different sketch drawings, despite its simplicity, show the interest of the proposed approach and open a challenging research line in graphics recognition.
Keywords: Graphics recognition; Graphics retrieval; Image classification
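The signature above can be sketched on a binary image: for each scan direction, count maximal runs of foreground pixels by length and accumulate the counts in a histogram. This is a simplified illustration of the idea, not the paper's implementation; only horizontal and vertical scans are shown, and the tiny "plan" is invented.

```python
def runs(line):
    """Lengths of maximal runs of 1s in a binary sequence."""
    lengths, current = [], 0
    for v in line:
        if v:
            current += 1
        elif current:
            lengths.append(current)
            current = 0
    if current:
        lengths.append(current)
    return lengths

def runlength_histogram(image, max_len):
    """Histogram of horizontal and vertical foreground run lengths.

    Index k counts runs of length k; runs longer than max_len share
    the last bin. The paper also scans the two diagonal directions.
    """
    hist = [0] * (max_len + 1)
    rows = image
    cols = [list(col) for col in zip(*image)]
    for line in rows + cols:
        for r in runs(line):
            hist[min(r, max_len)] += 1
    return hist

# A tiny 4x4 binary "floor plan": a solid wall on top, a short stub below.
plan = [
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 0],
]
print(runlength_histogram(plan, max_len=4))  # → [0, 6, 1, 0, 1]
```

Long runs (the full-width wall) and short runs (the stub, the vertical single-pixel crossings) land in different bins, which is what lets the histogram separate wall-heavy plans from open ones.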
|
Naveen Onkarappa, & Angel Sappa. (2014). Speed and Texture: An Empirical Study on Optical-Flow Accuracy in ADAS Scenarios. TITS - IEEE Transactions on Intelligent Transportation Systems, 15(1), 136–147.
IF: 3.064
Abstract: Increasing mobility in everyday life has led to the concern for the safety of automotives and human life. Computer vision has become a valuable tool for developing driver assistance applications that target such a concern. Many such vision-based assisting systems rely on motion estimation, where optical flow has shown its potential. A variational formulation of optical flow that achieves a dense flow field involves a data term and regularization terms. Depending on the image sequence, the regularization has to appropriately be weighted for better accuracy of the flow field. Because a vehicle can be driven in different kinds of environments, roads, and speeds, optical-flow estimation has to be accurately computed in all such scenarios. In this paper, we first present the polar representation of optical flow, which is quite suitable for driving scenarios due to the possibility that it offers to independently update regularization factors in different directional components. Then, we study the influence of vehicle speed and scene texture on optical-flow accuracy. Furthermore, we analyze the relationships of these specific characteristics on a driving scenario (vehicle speed and road texture) with the regularization weights in optical flow for better accuracy. As required by the work in this paper, we have generated several synthetic sequences along with ground-truth flow fields.
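The polar representation mentioned in the abstract can be sketched directly: each Cartesian flow vector (u, v) becomes a magnitude and an orientation, which is what allows the smoothness of the two components to be weighted independently. This is only the change of coordinates, not the paper's full variational formulation.

```python
import math

def flow_to_polar(u, v):
    """Convert a Cartesian flow vector (u, v) to (magnitude, angle).

    Magnitude is the pixel displacement; the angle is in radians in
    (-pi, pi]. In a polar-regularized optical-flow energy, the smoothness
    terms on magnitude and angle can then carry independent weights.
    """
    return math.hypot(u, v), math.atan2(v, u)

# A purely horizontal displacement of 3 pixels.
mag, ang = flow_to_polar(3.0, 0.0)
print(mag, ang)  # 3.0 0.0
```

On a straight road, for example, orientations vary slowly while magnitudes grow with vehicle speed, so decoupling the two regularization weights matches the scene structure.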
|
Miguel Angel Bautista, Antonio Hernandez, Sergio Escalera, Laura Igual, Oriol Pujol, Josep Moya, et al. (2016). A Gesture Recognition System for Detecting Behavioral Patterns of ADHD. TSMCB - IEEE Transactions on Systems, Man, and Cybernetics, Part B, 46(1), 136–147.
Abstract: We present an application of gesture recognition using an extension of Dynamic Time Warping (DTW) to recognize behavioural patterns of Attention Deficit Hyperactivity Disorder (ADHD). We propose an extension of DTW using one-class classifiers in order to encode the variability of a gesture category and thus perform an alignment between a gesture sample and a gesture class. We model the set of gesture samples of a certain gesture category using either GMMs or an approximation of Convex Hulls. Thus, we add a theoretical contribution to the classical warping path in DTW by including local modeling of intra-class gesture variability. This methodology is applied in a clinical context, detecting a group of ADHD behavioural patterns defined by experts in psychology/psychiatry, to provide support to clinicians in the diagnosis procedure. The proposed methodology is tested on a novel multi-modal dataset (RGB plus Depth) of recordings of children with ADHD behavioural patterns. We obtain satisfying results when compared to standard state-of-the-art approaches in the DTW context.
Keywords: Gesture Recognition; ADHD; Gaussian Mixture Models; Convex Hulls; Dynamic Time Warping; Multi-modal RGB-Depth data
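The warping-path computation that the paper extends is standard dynamic time warping. A minimal version over 1-D sequences, without the one-class-classifier extension described in the abstract, looks like this:

```python
def dtw(a, b):
    """Classical DTW cost between two 1-D sequences.

    The paper's extension replaces the pointwise cost |a[i] - b[j]| with a
    one-class model (GMM or convex hull) of the gesture class; here we keep
    the plain absolute difference for illustration.
    """
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Identical sequences align at zero cost; time-shifted ones cost more.
print(dtw([0, 1, 2, 1], [0, 1, 2, 1]))  # 0.0
print(dtw([0, 1, 2, 1], [1, 2, 3, 2]))
```

Swapping the scalar cost `d` for the log-likelihood of a class model is what turns this sample-to-sample alignment into the sample-to-class alignment the paper proposes.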
|
Asma Bensalah, Antonio Parziale, Giuseppe De Gregorio, Angelo Marcelli, Alicia Fornes, & Josep Llados. (2023). I Can’t Believe It’s Not Better: In-air Movement for Alzheimer Handwriting Synthetic Generation. In 21st International Graphonomics Conference (pp. 136–148).
Abstract: In recent years, there has been a boom in the use of deep learning for handwriting analysis and recognition. One main application of handwriting analysis is early detection and diagnosis in the health field. Unfortunately, most real-world problems still suffer from a scarcity of data, which makes it difficult to use deep learning-based models. To alleviate this problem, some works resort to synthetic data generation. Lately, more works are directed towards guided synthetic data generation, which uses domain and data knowledge to generate realistic data that can be useful to train deep learning models. In this work, we combine domain knowledge about Alzheimer's disease and handwriting and use it for more guided data generation. Concretely, we have explored the use of in-air movements for synthetic data generation.
|
Javier Marin, David Vazquez, David Geronimo, & Antonio Lopez. (2010). Learning Appearance in Virtual Scenarios for Pedestrian Detection. In 23rd IEEE Conference on Computer Vision and Pattern Recognition (pp. 137–144).
Abstract: Detecting pedestrians in images is a key functionality to avoid vehicle-to-pedestrian collisions. The most promising detectors rely on appearance-based pedestrian classifiers trained with labelled samples. This paper addresses the following question: can a pedestrian appearance model learnt in virtual scenarios work successfully for pedestrian detection in real images? (Fig. 1). Our experiments suggest a positive answer, which is a new and relevant conclusion for research in pedestrian detection. More specifically, we record training sequences in virtual scenarios and then learn appearance-based pedestrian classifiers using HOG and linear SVM. We test such classifiers on a publicly available dataset provided by Daimler AG for pedestrian detection benchmarking. This dataset contains real-world images acquired from a moving car. The obtained result is compared with the one given by a classifier learnt using samples coming from real images. The comparison reveals that, although virtual samples were not specially selected, both virtual- and real-based training give rise to classifiers of similar performance.
Keywords: Pedestrian Detection; Domain Adaptation
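As a toy illustration of the appearance features named above, here is a single HOG-style cell: gradient orientations of a grayscale patch, binned into a small histogram weighted by gradient magnitude. This is a bare sketch of one cell, not the full HOG descriptor (no blocks, normalization, or overlap), and the patch values are made up.

```python
import math

def cell_orientation_histogram(patch, n_bins=9):
    """Magnitude-weighted histogram of gradient orientations for one cell.

    Orientations are unsigned (0..180 degrees), as in standard HOG.
    Central differences are taken over the interior pixels only.
    """
    h, w = len(patch), len(patch[0])
    hist = [0.0] * n_bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0
            hist[min(int(ang / 180.0 * n_bins), n_bins - 1)] += mag
    return hist

# A vertical edge: all gradient energy falls into the first orientation bin.
patch = [[0, 0, 10, 10]] * 4
hist = cell_orientation_histogram(patch)
print(hist)
```

Concatenating such per-cell histograms over a detection window, then feeding them to a linear SVM, is the classifier pipeline the abstract refers to.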
|