J. Oliver, Ricardo Toledo, J. Pujol, J. Sorribes, & E. Valderrama. (2009). Un ABP basado en la robótica para las ingenierías informáticas [A robotics-based problem-based learning approach for computer engineering degrees].
Oscar Camara, Estanislao Oubel, Gemma Piella, Simone Balocco, Mathieu De Craene, & Alejandro F. Frangi. (2009). Multi-sequence Registration of Cine, Tagged and Delay-Enhancement MRI with Shift Correction and Steerable Pyramid-Based Detagging. In 5th International Conference on Functional Imaging and Modeling of the Heart (Vol. 5528, pp. 330–338). LNCS. Springer Berlin Heidelberg.
Abstract: In this work, we present a registration framework for cardiac cine MRI (cMRI), tagged (tMRI) and delay-enhancement MRI (deMRI), where the two main issues in finding an accurate alignment between these images have been taken into account: the presence of tags in tMRI and respiration artifacts in all sequences. A steerable pyramid image decomposition has been used for detagging purposes since it is suitable for extracting high-order oriented structures by directional adaptive filtering. Shift correction of cMRI is achieved by first maximizing the similarity between the Long Axis and Short Axis cMRI. Subsequently, these shift-corrected images are used as target images in a rigid registration procedure with their corresponding tMRI/deMRI in order to correct their shift. The proposed registration framework has been evaluated with 840 registration tests, considerably improving the alignment of the MR images (mean RMS error of 2.04mm vs. 5.44mm).
Fadi Dornaika, & Bogdan Raducanu. (2009). Simultaneous 3D face pose and person-specific shape estimation from a single image using a holistic approach. In IEEE Workshop on Applications of Computer Vision.
Abstract: This paper presents a new approach for the simultaneous estimation of the 3D pose and specific shape of a previously unseen face from a single image. The face pose is not limited to a frontal view. We describe a holistic approach based on a deformable 3D model and a learned statistical facial texture model. Rather than obtaining a person-specific facial surface, the goal of this work is to compute person-specific 3D face shape in terms of a few control parameters that are used by many applications. The proposed holistic approach estimates the 3D pose parameters as well as the face shape control parameters by registering the warped texture to a statistical face texture, which is carried out by a stochastic and genetic optimizer. The proposed approach has several features that make it very attractive: (i) it uses a single grey-scale image, (ii) it is person-independent, (iii) it is featureless (no facial feature extraction is required), and (iv) its learning stage is easy. The proposed approach lends itself nicely to 3D face tracking and face gesture recognition in monocular videos. We describe extensive experiments that show the feasibility and robustness of the proposed approach.
Bogdan Raducanu, & Fadi Dornaika. (2009). Natural Facial Expression Recognition Using Dynamic and Static Schemes. In 5th International Symposium on Visual Computing (Vol. 5875, pp. 730–739). LNCS. Springer Berlin Heidelberg.
Abstract: Affective computing is at the core of a new paradigm in HCI and AI represented by human-centered computing. Within this paradigm, it is expected that machines will be endowed with perceiving capabilities, making them aware of the user's affective state. The current paper addresses the problem of facial expression recognition from monocular video sequences. We propose a dynamic facial expression recognition scheme, which is proven to be very efficient. Furthermore, it is conveniently compared with several static-based systems adopting different magnitudes of facial expression. We provide evaluations of performance using Linear Discriminant Analysis (LDA), Nonparametric Discriminant Analysis (NDA), and Support Vector Machines (SVM). We also provide performance evaluations using arbitrary test video sequences.
Sergio Escalera, Oriol Pujol, J. Mauri, & Petia Radeva. (2009). Intravascular Ultrasound Tissue Characterization with Sub-class Error-Correcting Output Codes. Journal of Signal Processing Systems, 55(1-3), 35–47.
Abstract: Intravascular ultrasound (IVUS) represents a powerful imaging technique to explore coronary vessels and to study their morphology and histologic properties. In this paper, we characterize different tissues based on radial frequency, texture-based, and combined features. To deal with the classification of multiple tissues, we require the use of robust multi-class learning techniques. In this sense, error-correcting output codes (ECOC) have been shown to robustly combine binary classifiers to solve multi-class problems. In this context, we propose a strategy to model multi-class classification tasks using sub-class information in the ECOC framework. The new strategy splits the classes into different subsets according to the applied base classifier. Complex IVUS data sets containing overlapping data are learnt by splitting the original set of classes into sub-classes, and embedding the binary problems in a problem-dependent ECOC design. The method automatically characterizes different tissues, showing performance improvements over the state-of-the-art ECOC techniques for different base classifiers. Furthermore, the combination of RF and texture-based features also shows improvements over the state-of-the-art approaches.
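The ECOC framework this entry builds on can be illustrated with a minimal sketch: a ternary one-vs-one coding matrix (+1/-1 mark the two classes of each binary problem, 0 means the class is ignored) and Hamming decoding over the active positions. This is only the generic framework, not the paper's sub-class splitting strategy; the function names are illustrative.

```python
import numpy as np

def one_vs_one_matrix(n_classes):
    """Build a ternary ECOC coding matrix with one column per class pair.
    +1/-1 mark the two classes of each binary problem; 0 = class ignored."""
    pairs = [(i, j) for i in range(n_classes) for j in range(i + 1, n_classes)]
    M = np.zeros((n_classes, len(pairs)), dtype=int)
    for col, (i, j) in enumerate(pairs):
        M[i, col] = 1
        M[j, col] = -1
    return M

def hamming_decode(codeword, M):
    """Assign the class whose row is closest to the predicted codeword,
    counting only the positions where the row is non-zero."""
    dists = [np.sum((row != 0) & (row != codeword)) for row in M]
    return int(np.argmin(dists))
```

For 3 classes this yields a 3x3 matrix; a test codeword agreeing with class 0 on both of its active binary problems decodes to class 0.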
Anjan Dutta, & Zeynep Akata. (2019). Semantically Tied Paired Cycle Consistency for Zero-Shot Sketch-based Image Retrieval. In 32nd IEEE Conference on Computer Vision and Pattern Recognition (pp. 5089–5098).
Abstract: Zero-shot sketch-based image retrieval (SBIR) is an emerging task in computer vision, which aims to retrieve natural images relevant to sketch queries that might not have been seen in the training phase. Existing works either require aligned sketch-image pairs or an inefficient memory fusion layer for mapping the visual information to a semantic space. In this work, we propose a semantically aligned paired cycle-consistent generative (SEM-PCYC) model for zero-shot SBIR, where each branch maps the visual information to a common semantic space via adversarial training. Each of these branches maintains a cycle consistency that only requires supervision at the category level, and avoids the need for costly aligned sketch-image pairs. A classification criterion on the generators' outputs ensures that the visual-to-semantic mapping is discriminative. Furthermore, we propose to combine textual and hierarchical side information via a feature selection auto-encoder that selects discriminating side information within the same end-to-end model. Our results demonstrate a significant boost in zero-shot SBIR performance over the state-of-the-art on the challenging Sketchy and TU-Berlin datasets.
Oriol Pujol, Eloi Puertas, & Carlo Gatta. (2009). Multi-scale Stacked Sequential Learning. In 8th International Workshop on Multiple Classifier Systems (Vol. 5519, pp. 262–271). LNCS. Springer Berlin Heidelberg.
Abstract: One of the most widely used assumptions in supervised learning is that data is independent and identically distributed. This assumption does not hold true in many real cases. Sequential learning is the discipline of machine learning that deals with dependent data, where neighboring examples exhibit some kind of relationship. In the literature, there are different approaches that try to capture and exploit this correlation by means of different methodologies. In this paper we focus on meta-learning strategies and, in particular, the stacked sequential learning approach. The main contribution of this work is two-fold: first, we generalize the stacked sequential learning scheme. This generalization reflects the key role of modeling neighboring interactions. Second, we propose an effective and efficient way of capturing and exploiting sequential correlations that takes into account long-range interactions by means of a multi-scale pyramidal decomposition of the predicted labels. Additionally, this new method subsumes the standard stacked sequential learning approach. We tested the proposed method on two different classification tasks: text line classification in a FAQ data set and image classification. Results on these tasks clearly show that our approach outperforms standard stacked sequential learning. Moreover, we show that the proposed method makes it possible to control the trade-off between the detail and the desired range of the interactions.
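The multi-scale idea in this entry can be sketched as follows: the base classifier's predicted labels (or confidences) over a sequence are re-encoded at several scales and appended as extra features for the second-stage classifier. A minimal sketch, assuming windowed averaging as a simple stand-in for the pyramidal decomposition; names and the choice of scales are illustrative, not the authors' code.

```python
import numpy as np

def multiscale_label_features(pred, scales=(1, 2, 4)):
    """Given a 1-D sequence of predicted labels/confidences from a base
    classifier, build per-position context features at several scales:
    larger windows summarize longer-range neighborhoods."""
    feats = []
    for s in scales:
        # average over a window of width s (coarser context as s grows)
        kernel = np.ones(s) / s
        feats.append(np.convolve(pred, kernel, mode="same"))
    # one column per scale; fed to the second-stage classifier
    return np.stack(feats, axis=1)
```

At scale 1 the feature is the prediction itself; coarser columns capture the long-range interactions the abstract refers to.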
Daniel Ponsa, & Antonio Lopez. (2009). Seguimiento Visual de Contornos Computerizado [Computerized Visual Contour Tracking].
Ferran Diego, Daniel Ponsa, Joan Serrat, & Antonio Lopez. (2009). Video alignment for automotive applications.
Keywords: video alignment
Jose Manuel Alvarez, & Antonio Lopez. (2009). Model-based road detection using shadowless features and on-line learning.
Xavier Boix, Josep M. Gonfaus, Fahad Shahbaz Khan, Joost Van de Weijer, Andrew Bagdanov, Marco Pedersoli, et al. (2009). Combining local and global bag-of-word representations for semantic segmentation. In Workshop on The PASCAL Visual Object Classes Challenge.
Arjan Gijsenij, Theo Gevers, & Joost Van de Weijer. (2010). Generalized Gamut Mapping using Image Derivative Structures for Color Constancy. IJCV - International Journal of Computer Vision, 86(2-3), 127–139.
Abstract: The gamut mapping algorithm is one of the most promising methods to achieve computational color constancy. However, so far, gamut mapping algorithms are restricted to the use of pixel values to estimate the illuminant. Therefore, in this paper, gamut mapping is extended to incorporate the statistical nature of images. It is analytically shown that the proposed gamut mapping framework is able to include any linear filter output. The main focus is on the local n-jet describing the derivative structure of an image. It is shown that derivatives have the advantage over pixel values of being invariant to disturbing effects (i.e. deviations of the diagonal model) such as saturated colors and diffuse light. Further, as the n-jet based gamut mapping has the ability to use more information than pixel values alone, the combination of these algorithms is more stable than the regular gamut mapping algorithm. Different methods of combining are proposed. Based on theoretical and experimental results conducted on large scale data sets of hyperspectral, laboratory and real-world scenes, it can be derived that (1) in case of deviations of the diagonal model, the derivative-based approach outperforms the pixel-based gamut mapping, (2) state-of-the-art algorithms are outperformed by the n-jet based gamut mapping, and (3) the combination of the different n-jet based gamut mappings is more stable than the regular gamut mapping algorithm.
Eduard Vazquez, Theo Gevers, M. Lucassen, Joost Van de Weijer, & Ramon Baldrich. (2010). Saliency of Color Image Derivatives: A Comparison between Computational Models and Human Perception. JOSA A - Journal of the Optical Society of America A, 27(3), 613–621.
Abstract: In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The computational methods are evaluated on bottom-up saliency in a psychophysical experiment, and on a more complex task of salient object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model and shows that the proposed methods are significantly better at predicting color saliency (with a human-method correspondence up to 74.75% and an observer agreement of 86.8%) than state-of-the-art models. Furthermore, results from salient object detection confirm that an early fusion of color and contrast provides accurate performance in computing visual saliency, with a hit rate up to 95.2%.
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). Traffic sign recognition system with β-correction. MVA - Machine Vision and Applications, 21(2), 99–111.
Abstract: Traffic sign classification represents a classical application of multi-object recognition processing in uncontrolled adverse environments. Lack of visibility, illumination changes, and partial occlusions are just a few problems. In this paper, we introduce a novel system for multi-class classification of traffic signs based on error-correcting output codes (ECOC). ECOC is based on an ensemble of binary classifiers that are trained on bi-partitions of classes. We classify a wide set of traffic sign types using robust error-correcting codings. Moreover, we introduce the novel β-correction decoding strategy that outperforms the state-of-the-art decoding techniques, classifying a high number of classes with great success.
Sergio Escalera, Oriol Pujol, & Petia Radeva. (2010). On the Decoding Process in Ternary Error-Correcting Output Codes. TPAMI - IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(1), 120–134.
Abstract: A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-correcting output codes (ECOC) represent a successful framework to deal with this type of problem. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a “do not care” symbol that allows a given classifier to ignore some classes. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI machine learning repository data sets and on a real traffic sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
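The zero-symbol bias this entry analyzes can be seen in a toy comparison: a naive Hamming distance counts every matrix position, so a class row with many zeros accumulates spurious distance, while a zero-aware measure restricts the comparison to the positions where the class actually participates. These two measures are only an illustration of the bias, not the decoding strategies proposed in the paper.

```python
import numpy as np

def naive_hamming(codeword, row):
    """Counts every position, including zeros, so classes covered by few
    binary problems look artificially far from any codeword."""
    return int(np.sum(row != codeword))

def zero_aware_distance(codeword, row):
    """Only positions where the class participates (row != 0) count,
    normalized so rows with different numbers of zeros are comparable."""
    active = row != 0
    return float(np.sum(row[active] != codeword[active])) / max(int(active.sum()), 1)
```

With codeword [1, 1, 1], the row [1, 0, 0] agrees on its single active position, yet the naive measure assigns it distance 2 purely because of its zeros; the zero-aware measure assigns it distance 0.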