Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). Traffic sign recognition system with β-correction. MVA - Machine Vision and Applications, 21(2), 99–111.
Abstract: Traffic sign classification represents a classical application of multi-object recognition in uncontrolled, adverse environments. Lack of visibility, illumination changes, and partial occlusions are just a few of the problems. In this paper, we introduce a novel system for multi-class classification of traffic signs based on error-correcting output codes (ECOC). ECOC is based on an ensemble of binary classifiers that are trained on bi-partitions of the classes. We classify a wide set of traffic sign types using robust error-correcting codings. Moreover, we introduce the novel β-correction decoding strategy, which outperforms state-of-the-art decoding techniques, classifying a high number of classes with great success.
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). On the Decoding Process in Ternary Error-Correcting Output Codes. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1), 120–134.
Abstract: A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-correcting output codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require a redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
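As an illustration of the bias the zero symbol introduces at the decoding step, the sketch below contrasts classic Hamming decoding with a zero-aware variant on a toy ternary coding matrix. The matrix, predictions, and function names are illustrative only; this is not the paper's proposed decoding measure.

```python
import numpy as np

def hamming_decode(M, x):
    """Classic Hamming decoding: distance between the codeword of each
    class (rows of M) and the vector of binary predictions x. Zero
    entries of a ternary matrix still contribute 0.5 each, biasing
    classes whose codewords contain many zeros."""
    return np.array([np.sum(np.abs(row - x)) / 2.0 for row in M])

def zero_aware_decode(M, x):
    """Masked decoding: positions coded 0 (class ignored by that
    dichotomizer) are excluded from the distance."""
    dists = []
    for row in M:
        mask = row != 0
        dists.append(np.sum(np.abs(row[mask] - x[mask])) / 2.0)
    return np.array(dists)

# Toy ternary coding matrix: 3 classes (rows) x 3 binary problems (columns).
M = np.array([[ 1,  1,  0],
              [-1,  0,  1],
              [ 0, -1, -1]])
x = np.array([1, 1, -1])          # outputs of the 3 dichotomizers
print(zero_aware_decode(M, x))    # → [0. 2. 1.]; class 0 wins
```

Note how `hamming_decode` assigns class 0 a nonzero distance (0.5) purely because of its zero position, even though both of its trained dichotomizers agree with the predictions.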
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). Error-Correcting Output Codes Library. JMLR - Journal of Machine Learning Research, 11, 661–664.
Abstract: In this paper, we present an open source Error-Correcting Output Codes (ECOC) library. The ECOC framework is a powerful tool to deal with multi-class categorization problems. This library contains both state-of-the-art coding designs (one-versus-one, one-versus-all, dense random, sparse random, DECOC, forest-ECOC, and ECOC-ONE) and decoding designs (Hamming, Euclidean, inverse Hamming, Laplacian, β-density, attenuated, loss-based, probabilistic kernel-based, and loss-weighted) with the parameters defined by the authors, as well as the option to include your own coding, decoding, and base classifier.
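To fix ideas, the two classical codings named above can be sketched as coding-matrix constructors. This is a minimal Python illustration, not the library's own code; function names are mine.

```python
import numpy as np
from itertools import combinations

def one_vs_all(n_classes):
    """Dense one-versus-all coding: one dichotomizer per class,
    that class coded +1 against all others coded -1."""
    M = -np.ones((n_classes, n_classes), dtype=int)
    np.fill_diagonal(M, 1)
    return M

def one_vs_one(n_classes):
    """Ternary one-versus-one coding: one dichotomizer per class pair;
    classes outside the pair receive the zero ('do not care') symbol."""
    pairs = list(combinations(range(n_classes), 2))
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for j, (a, b) in enumerate(pairs):
        M[a, j], M[b, j] = 1, -1
    return M

print(one_vs_one(3))   # 3 classes, 3 pairwise dichotomizers
```

Decoding then reduces to comparing the vector of dichotomizer outputs against each row of the matrix under one of the distance measures the library lists.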
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). Re-coding ECOCs without retraining. PRL - Pattern Recognition Letters, 31(7), 555–562.
Abstract: A standard way to deal with multi-class categorization problems is the combination of binary classifiers in a pairwise voting procedure. Recently, this classical approach has been formalized in the Error-Correcting Output Codes (ECOC) framework. In the ECOC framework, the one-versus-one coding has been shown to achieve higher performance than the other coding designs. The binary problems trained in the one-versus-one strategy are significantly smaller than in the other designs and usually easier to learn, given the smaller overlap between classes. However, a high percentage of the coding-matrix positions are coded by zero, which implies a high degree of sparseness and means they codify no meta-class membership information. In this paper, we show that, using the training data, we can redefine the one-versus-one coding matrix without re-training, in a problem-dependent way, so that the newly coded information helps the system increase its generalization capability. Moreover, the new re-coding strategy is generalized to apply to any binary code. The results over several UCI Machine Learning Repository data sets and two real multi-class problems show that performance improvements can be obtained by re-coding the classical one-versus-one and sparse random designs compared to different state-of-the-art ECOC configurations.
Sergio Escalera, Oriol Pujol, Petia Radeva, Jordi Vitria, & Maria Teresa Anguera. (2010). Automatic Detection of Dominance and Expected Interest. EURASIPJ - EURASIP Journal on Advances in Signal Processing, 12.
Abstract: (Article ID 491819) Social Signal Processing is an emergent area of research that focuses on the analysis of social constructs. Dominance and interest are two of these social constructs. Dominance refers to the level of influence a person has in a conversation. Interest, when referred to in terms of group interactions, can be defined as the degree of engagement that the members of a group collectively display during their interaction. In this paper, we argue that, using only behavioral motion information, we are able to predict the interest of observers when looking at face-to-face interactions, as well as the dominant people. First, we propose a simple set of movement-based features from body, face, and mouth activity in order to define a higher-level set of interaction indicators. The considered indicators are manually annotated by observers. Based on the opinions obtained, we define an automatic binary dominance detection problem and a multiclass interest quantification problem. The Error-Correcting Output Codes framework is used to learn to rank the perceived observers' interest in face-to-face interactions, while AdaBoost is used to solve the dominance detection problem. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers in both the dominance and interest detection problems.
Sergio Escalera, Petia Radeva, Jordi Vitria, Xavier Baro, & Bogdan Raducanu. (2010). Modelling and Analyzing Multimodal Dyadic Interactions Using Social Networks. In 12th International Conference on Multimodal Interfaces and 7th Workshop on Machine Learning for Multimodal Interaction.
Abstract: Social network analysis has become a common technique used to model and quantify the properties of social interactions. In this paper, we propose an integrated framework to explore the characteristics of a social network extracted from multimodal dyadic interactions. First, speech detection is performed through an audio/visual fusion scheme based on stacked sequential learning. In the audio domain, speech is detected through clustering of audio features; clusters are modelled by means of a one-state Hidden Markov Model containing a diagonal-covariance Gaussian Mixture Model. In the visual domain, speech detection is performed through differential-based feature extraction from the segmented mouth region and a dynamic programming matching procedure. Second, in order to model the dyadic interactions, we employ the Influence Model, whose states encode the previously integrated audio/visual data. Third, the social network is extracted based on the estimated influences. For our study, we used a set of videos belonging to the New York Times' Blogging Heads opinion blog. The results are reported both in terms of the accuracy of the audio/visual data fusion and the centrality measures used to characterize the social network.
Keywords: Social interaction; Multimodal fusion; Influence model; Social network analysis
Sergio Escalera, R. M. Martinez, Jordi Vitria, Petia Radeva, & Maria Teresa Anguera. (2010). Detección automática de la dominancia en conversaciones diádicas [Automatic detection of dominance in dyadic conversations]. EP - Escritos de Psicologia, 3(2), 41–45.
Abstract: Dominance refers to the level of influence that a person has in a conversation. Dominance is an important research area in social psychology, but the problem of its automatic estimation is a very recent topic in the contexts of social and wearable computing. In this paper, we focus on dominance detection from visual cues. We estimate the correlation among observers categorizing the dominant people in a set of face-to-face conversations. Different dominance indicators from gestural communication are defined, manually annotated, and compared to the observers' opinions. Moreover, these indicators are automatically extracted from video sequences and learnt by using binary classifiers. Results from the three analyses showed a high correlation and allow the categorization of dominant people in public discussion video sequences.
Keywords: Dominance detection; Non-verbal communication; Visual features
Sergio Vera. (2010). Finger joint modelling from hand X-ray images for assessing rheumatoid arthritis (Vol. 164). Master's thesis, Bellaterra 08193, Barcelona, Spain.
Abstract: Rheumatoid arthritis is an autoimmune, systemic, inflammatory disorder that mainly affects bone joints. While there is no cure for this disease, continuous advances in palliative treatments require frequent verification of the patient's illness evolution. Such evolution is measured through several available semi-quantitative methods that require the evaluation of hand and foot X-ray images. Accurate assessment is a time-consuming task that requires highly trained personnel. This hinders generalized use in clinical practice for early diagnosis and disease follow-up. In the context of the automation of such evaluation methods, we present a method for the detection and characterization of finger joints in hand radiography images. Several measures for assessing the reduction of joint space width are proposed. We compare for the first time such measures to the Van der Heijde score, the gold-standard method for rheumatoid arthritis assessment. The proposed method outperforms existing strategies with a detection rate above 95%. Our comparison to the Van der Heijde index shows a promising correlation that encourages further research.
Keywords: Rheumatoid arthritis; joint detection; X-ray; Van der Heijde score
Shida Beigpour, & Joost Van de Weijer. (2010). Photo-Realistic Color Alteration for Architecture and Design. In Proceedings of The CREATE 2010 Conference (84–88).
Abstract: As color is a strong stimulus we receive from the exterior world, choosing the right color can prove crucial in creating the desired architecture and design. We propose a framework to apply realistic color changes to both objects and their illuminating lights in snapshots of architectural designs, in order to visualize and choose the right color before actually applying the change in the real world. The proposed framework is based on the laws of physics in order to achieve realistic and physically plausible results.
Simone Balocco, Carlo Gatta, Oriol Pujol, J. Mauri, & Petia Radeva. (2010). SRBF: Speckle Reducing Bilateral Filtering. UMB - Ultrasound in Medicine and Biology, 36(8), 1353–1363.
Abstract: Speckle noise negatively affects medical ultrasound image shape interpretation and boundary detection. Speckle removal filters are widely used to selectively remove speckle noise without destroying important image features, to enhance object boundaries. In this article, a fully automatic bilateral filter tailored to ultrasound images is proposed. The edge-preservation property is obtained by embedding noise statistics in the filter framework. Consequently, the filter is able to tackle the multiplicative behavior by modulating the smoothing strength with respect to local statistics. The in silico experiments clearly showed that the speckle reducing bilateral filter (SRBF) has superior performance to most state-of-the-art filtering methods. The filter is tested on 50 in vivo US images and its influence on a segmentation task is quantified. The results using SRBF-filtered data sets show superior performance compared to using oriented-anisotropic-diffusion-filtered images. This improvement is due to the adaptive support of SRBF and the embedded noise statistics, yielding a more homogeneous smoothing. SRBF results in a fully automatic, fast and flexible algorithm potentially suitable for wide ranges of speckle noise sizes and for different medical applications (IVUS, B-mode, 3-D matrix array US).
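For context on how a bilateral filter preserves edges while smoothing, here is a minimal generic implementation. It is not the published SRBF: the paper's contribution is driving the range scale from local speckle statistics, which this sketch only notes in a comment; parameter names and defaults are illustrative.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Generic bilateral filter: the spatial kernel weighs pixels by
    distance, the range kernel by intensity difference, so averaging
    happens mostly among similar pixels and edges survive. SRBF would
    replace the fixed sigma_r with a value derived from local speckle
    statistics (not reproduced here)."""
    H, W = img.shape
    pad = np.pad(img, radius, mode='reflect')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(img, dtype=float)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out
```

On a step edge, the range kernel assigns near-zero weight to pixels across the discontinuity, so both sides of the edge keep their levels while flat regions are smoothed.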
Simone Balocco, O. Basset, G. Courbebaisse, E. Boni, Alejandro F. Frangi, P. Tortoli, et al. (2010). Estimation Of Viscoelastic Properties Of Vessel Walls Using a Computational Model and Doppler Ultrasound. PMB - Physics in Medicine and Biology, 55(12), 3557–3575.
Abstract: Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error was about 27% even when a poor signal-to-noise ratio was simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a parameter mean estimation error of 25%.
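For reference, the Zener (standard linear solid) model mentioned above is commonly written with the constitutive relation below. This is one common parameterization, given here as background; the paper's exact formulation may differ.

```latex
% Zener / standard linear solid constitutive relation (one common form):
% E_R is the relaxed modulus; \tau_\varepsilon and \tau_\sigma are the
% relaxation times at constant strain and constant stress, respectively.
\sigma(t) + \tau_\varepsilon \, \dot{\sigma}(t)
  = E_R \left( \varepsilon(t) + \tau_\sigma \, \dot{\varepsilon}(t) \right),
\qquad \tau_\sigma \ge \tau_\varepsilon .
```

Because this relation is first-order in both stress and strain, discretizing it links the sampled stress and strain sequences through an ARMA model, which is what makes a noniterative ARMA-based fit of the kind described in the abstract possible.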
Simone Balocco, O. Camara, E. Vivas, T. Sola, L. Guimaraens, H. A. van Andel, et al. (2010). Feasibility of Estimating Regional Mechanical Properties of Cerebral Aneurysms In Vivo. MEDPHYS - Medical Physics, 37(4), 1689–1706.
Abstract: PURPOSE:
In this article, the authors studied the feasibility of estimating regional mechanical properties in cerebral aneurysms, integrating information extracted from imaging and physiological data with generic computational models of the arterial wall behavior.
METHODS:
A data assimilation framework was developed to incorporate patient-specific geometries into a given biomechanical model, whereas wall motion estimates were obtained by applying registration techniques to a pair of simulated MR images and used to guide the mechanical parameter estimation. A simple incompressible, linear, and isotropic Hookean model coupled with computational fluid dynamics was employed as a first approximation for computational purposes. Additionally, an automatic clustering technique was developed to reduce the number of parameters to assimilate at the optimization stage, and it considerably accelerated the convergence of the simulations. Several in silico experiments were designed to assess the influence of aneurysm geometrical characteristics and the accuracy of wall motion estimates on the mechanical property estimates. The proposed methodology was then applied to six real cerebral aneurysms and tested against a varying number of regions with different elasticity, different mesh discretizations, imaging resolutions, and registration configurations.
RESULTS:
Several in silico experiments were conducted to investigate the feasibility of the proposed workflow, with results suggesting that the estimation of the mechanical properties was mainly influenced by the image spatial resolution and the chosen registration configuration. According to the in silico experiments, the minimal spatial resolution needed to extract wall pulsation measurements with enough accuracy to guide the proposed data assimilation framework was 0.1 mm.
CONCLUSIONS:
Current routine imaging modalities do not have such a high spatial resolution, and therefore the proposed data assimilation framework cannot currently be used on in vivo data to reliably estimate regional properties in cerebral aneurysms. In addition, it was observed that the incorporation of fluid-structure interaction in a biomechanical model with linear and isotropic material properties did not have a substantial influence on the final results.
Sophie Wuerger, Kaida Xiao, Chenyang Fu, & Dimosthenis Karatzas. (2010). Colour-opponent mechanisms are not affected by age-related chromatic sensitivity changes. OPO - Ophthalmic and Physiological Optics, 30(5), 635–659.
Abstract: The purpose of this study was to assess whether age-related chromatic sensitivity changes are associated with corresponding changes in hue perception in a large sample of colour-normal observers over a wide age range (n = 185; age range: 18-75 years). In these observers we determined both the sensitivity along the protan, deutan, and tritan lines and the settings for the four unique hues, from which the characteristics of the higher-order colour mechanisms can be derived. We found a significant decrease in chromatic sensitivity due to ageing, in particular along the tritan line. From the unique hue settings we derived the cone weightings associated with the colour mechanisms that are at equilibrium for the four unique hues. We found that the relative cone weightings (w(L)/w(M) and w(L)/w(S)) associated with the unique hues were independent of age. Our results are consistent with previous findings that the unique hues are rather constant with age while chromatic sensitivity declines. They also provide evidence in favour of the hypothesis that higher-order colour mechanisms are equipped with flexible cone weightings, as opposed to fixed weights. The mechanism underlying this compensation is still poorly understood.
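In the standard cone-opponent formulation (an assumption here; the study may parameterize the mechanisms differently), each unique hue is modelled as the equilibrium point of a higher-order colour mechanism whose weighted cone inputs cancel:

```latex
% Equilibrium condition of a higher-order colour-opponent mechanism:
% a hue setting is 'unique' when the weighted cone signals cancel.
w_L \, L + w_M \, M + w_S \, S = 0
% The quantities compared across ages are the relative weightings
% w_L / w_M and w_L / w_S derived from the observers' hue settings.
```

Under this reading, the finding that the ratios are age-independent while tritan sensitivity declines is what motivates the flexible-weighting hypothesis.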
Susana Alvarez, Anna Salvatella, Maria Vanrell, & Xavier Otazu. (2010). 3D Texton Spaces for color-texture retrieval. In A.C. Campilho and M.S. Kamel (Eds.), 7th International Conference on Image Analysis and Recognition (Vol. 6111, 354–363). LNCS. Springer Berlin Heidelberg.
Abstract: Color and texture are visual cues of different nature; their integration into a useful visual descriptor is not an easy problem. One way to combine both features is to compute spatial texture descriptors independently on each color channel. Another way is to do the integration at the descriptor level; in this case, the problem of normalizing both cues arises. In this paper we solve the latter problem by fusing color and texture through distances in texton spaces. Textons are the attributes of image blobs, and they are responsible for texture discrimination, as defined in Julesz's texton theory. We describe them in two low-dimensional and uniform spaces, namely shape and color. The dissimilarity between color-texture images is computed by combining the distances in these two spaces. Following this approach, we propose our TCD descriptor, which outperforms current state-of-the-art methods in the two different approaches mentioned above: early combination with LBP and late combination with MPEG-7. This is done on an image retrieval experiment over a highly diverse texture dataset from Corel.
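The fusion-by-distances idea can be sketched as follows. The abstract does not specify the combination rule, so this uses an illustrative convex combination of Euclidean distances; the function name and the `alpha` weight are assumptions, not the TCD descriptor itself.

```python
import numpy as np

def texton_distance(shape1, color1, shape2, color2, alpha=0.5):
    """Dissimilarity between two color-texture images as a convex
    combination of distances in the shape and color texton spaces.
    Because both spaces are low-dimensional and uniform, plain
    Euclidean distances are comparable without per-cue normalization,
    which is the point of fusing at the texton-space level."""
    d_shape = np.linalg.norm(np.asarray(shape1) - np.asarray(shape2))
    d_color = np.linalg.norm(np.asarray(color1) - np.asarray(color2))
    return alpha * d_shape + (1 - alpha) * d_color
```

Retrieval then amounts to ranking database images by this dissimilarity against the query's texton-space coordinates.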
Susana Alvarez, Anna Salvatella, Maria Vanrell, & Xavier Otazu. (2010). Perceptual color texture codebooks for retrieving in highly diverse texture datasets. In 20th International Conference on Pattern Recognition (866–869).
Abstract: Color and texture are visual cues of different nature; their integration into a useful visual descriptor is not an obvious step. One way to combine both features is to compute texture descriptors independently on each color channel. A second way is to integrate the features at the descriptor level; in this case, the problem of normalizing both cues arises. Significant progress in object recognition in recent years has provided the bag-of-words framework, which again deals with the problem of feature combination through the definition of vocabularies of visual words. Inspired by this framework, here we present perceptual textons, which allow us to fuse color and texture at the level of p-blobs, which is our feature detection step. Feature representation is based on two uniform spaces representing the attributes of the p-blobs. The low dimensionality of these texton spaces allows us to bypass the usual problems of previous approaches: firstly, there is no need for normalization between cues; and secondly, vocabularies are directly obtained from the perceptual properties of the texton spaces without any learning step. Our proposal improves on the current state of the art of color-texture descriptors in an image retrieval experiment over a highly diverse texture dataset from Corel.