|
David Rotger, Petia Radeva, E. Fernandez-Nofrerias, & J. Mauri. (2007). Blood Detection In IVUS Longitudinal Cuts Using AdaBoost With a Novel Feature Stability Criterion. In Artificial Intelligence Research and Development, Proceedings of the 10th International Conference of the ACIA (Vol. 163, pp. 197–204).
|
|
|
Alex Goldhoorn, Arnau Ramisa, Ramon Lopez de Mantaras, & Ricardo Toledo. (2007). Using the Average Landmark Vector Method for Robot Homing. In Artificial Intelligence Research and Development, Proceedings of the 10th International Conference of the ACIA (Vol. 163, pp. 331–338).
|
|
|
Sergio Escalera, Alicia Fornes, Oriol Pujol, Josep Llados, & Petia Radeva. (2007). Multi-class Binary Object Categorization using Blurred Shape Models. In Progress in Pattern Recognition, Image Analysis and Applications, 12th Iberoamerican Congress on Pattern Recognition (Vol. 4756, pp. 773–782). LNCS.
|
|
|
Carles Fernandez, Pau Baiget, Xavier Roca, & Jordi Gonzalez. (2007). Natural Language Descriptions of Human Behavior from Video Sequences. In Advances in Artificial Intelligence, 30th Annual German Conference on Artificial Intelligence (Vol. 4667, pp. 279–292). LNCS.
|
|
|
Carme Julia, Angel Sappa, Felipe Lumbreras, & Antonio Lopez. (2008). Recovery of Surface Normals and Reflectance from Different Lighting Conditions. In 5th International Conference on Image Analysis and Recognition (Vol. 5112, pp. 315–325). LNCS.
|
|
|
Sergio Escalera, Oriol Pujol, J. Mauri, & Petia Radeva. (2008). IVUS Tissue Characterization with Sub-class Error-correcting Output Codes. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPR Workshops 2008) (pp. 1–8), 23–28 June 2008.
Abstract: Intravascular ultrasound (IVUS) represents a powerful imaging technique to explore coronary vessels and to study their morphology and histologic properties. In this paper, we characterize different tissues based on Radio Frequency, texture-based, slope-based, and combined features. To deal with the classification of multiple tissues, we require the use of robust multi-class learning techniques. In this context, we propose a strategy to model multi-class classification tasks using sub-classes information in the ECOC framework. The new strategy splits the classes into different subsets according to the applied base classifier. Complex IVUS data sets containing overlapping data are learnt by splitting the original set of classes into sub-classes, and embedding the binary problems in a problem-dependent ECOC design. The method automatically characterizes different tissues, showing performance improvements over the state-of-the-art ECOC techniques for different base classifiers and feature sets.
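A minimal sketch of the sub-class strategy described above, assuming scikit-learn and synthetic data rather than IVUS features: each original class is split into sub-classes by k-means, an ECOC classifier is trained over the enlarged label set, and sub-class predictions are mapped back to the original classes. It illustrates the general idea only, not the authors' problem-dependent ECOC design.

```python
# Illustrative sketch, not the authors' code: sub-class splitting inside an
# ECOC multi-class scheme. The toy data, the number of sub-classes per class
# and the base classifier are assumptions made for the example.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=3, n_clusters_per_class=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Split each original class into sub-classes (here: 2 per class via k-means).
sub_labels = np.empty_like(y_tr)
sub_to_class, next_id = {}, 0
for c in np.unique(y_tr):
    idx = np.where(y_tr == c)[0]
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_tr[idx])
    for k in np.unique(clusters):
        sub_labels[idx[clusters == k]] = next_id
        sub_to_class[next_id] = c
        next_id += 1

# 2) Learn an ECOC classifier over the enlarged set of sub-classes.
ecoc = OutputCodeClassifier(LinearSVC(), code_size=2.0, random_state=0)
ecoc.fit(X_tr, sub_labels)

# 3) Map sub-class predictions back to the original classes.
y_pred = np.array([sub_to_class[s] for s in ecoc.predict(X_te)])
print("accuracy:", (y_pred == y_te).mean())
```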
|
|
|
Agata Lapedriza, David Masip, & Jordi Vitria. (2008). On the Use of Independent Tasks for Face Recognition. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 1–6).
|
|
|
Fadi Dornaika, & Angel Sappa. (2007). Improving Appearance-Based 3D Face Tracking Using Sparse Stereo Data. In H. Araujo, J. Jorge, A. Ranchordas, & J. Braz (Eds.), Advances in Computer Graphics and Computer Vision (pp. 354–366). Springer Verlag.
|
|
|
Partha Pratim Roy, Umapada Pal, & Josep Llados. (2008). Recognition of Multi-oriented Touching Characters in Graphical Documents. In Sixth Indian Conference on Computer Vision, Graphics & Image Processing, 2008 (Vol. 16, pp. 297–304).
|
|
|
Muhammad Muzzamil Luqman, Jean-Yves Ramel, & Josep Llados. (2012). Improving Fuzzy Multilevel Graph Embedding through Feature Selection Technique. In Structural, Syntactic, and Statistical Pattern Recognition, Joint IAPR International Workshop (Vol. 7626, pp. 243–253). LNCS. Springer Berlin Heidelberg.
Abstract: Graphs are among the most powerful, expressive and convenient data structures, but there is a lack of efficient computational tools and algorithms for processing them. Embedding graphs into numeric vector spaces gives them access to state-of-the-art, computationally efficient statistical models and tools. In this paper we take forward our work on explicit graph embedding and present an improvement to our earlier proposed method, named “fuzzy multilevel graph embedding – FMGE”, through a feature selection technique. FMGE achieves the embedding of attributed graphs into low-dimensional vector spaces by performing a multilevel analysis of graphs and extracting a set of global, structural and elementary level features. Feature selection permits FMGE to select the subset of most discriminating features and to discard the confusing ones for the underlying graph dataset. Experimental results for graph classification on the IAM letter, GREC and fingerprint graph databases show an improvement in the performance of FMGE.
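The pipeline sketched in the abstract (embed each attributed graph into a fixed-length vector, then keep only the most discriminating dimensions) can be illustrated with a toy stand-in for FMGE. The histogram-based embedding, the random graphs and the SelectKBest step below are assumptions made for the example, not the actual FMGE features or the paper's feature selection technique.

```python
# Illustrative sketch only: a toy graph-to-vector embedding (attribute
# histograms plus graph order/size) followed by a standard feature selection
# step over the resulting vectors.
import numpy as np
import networkx as nx
from sklearn.feature_selection import SelectKBest, f_classif

def embed(g, bins=8):
    """Fixed-length vector: order, size, and histograms of node/edge attributes."""
    node_vals = [d.get("w", 0.0) for _, d in g.nodes(data=True)]
    edge_vals = [d.get("w", 0.0) for _, _, d in g.edges(data=True)]
    h_nodes, _ = np.histogram(node_vals, bins=bins, range=(0, 1))
    h_edges, _ = np.histogram(edge_vals, bins=bins, range=(0, 1))
    return np.hstack([[g.number_of_nodes(), g.number_of_edges()], h_nodes, h_edges])

# Toy dataset: random attributed graphs from two "classes" of different density.
rng = np.random.default_rng(0)
graphs, labels = [], []
for label, p in [(0, 0.2), (1, 0.5)]:
    for _ in range(30):
        g = nx.gnp_random_graph(int(rng.integers(15, 25)), p,
                                seed=int(rng.integers(1_000_000)))
        nx.set_node_attributes(g, {n: float(rng.random()) for n in g}, "w")
        nx.set_edge_attributes(g, {e: float(rng.random()) for e in g.edges}, "w")
        graphs.append(g)
        labels.append(label)

X = np.vstack([embed(g) for g in graphs])
y = np.array(labels)

# Keep only the most discriminating dimensions of the embedding.
X_sel = SelectKBest(f_classif, k=6).fit_transform(X, y)
print(X.shape, "->", X_sel.shape)
```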
|
|
|
Muhammad Muzzamil Luqman, Thierry Brouard, Jean-Yves Ramel, & Josep Llados. (2012). Recherche de sous-graphes par encapsulation floue des cliques d'ordre 2: Application à la localisation de contenu dans les images de documents graphiques [Subgraph retrieval through fuzzy encapsulation of order-2 cliques: application to content localization in graphical document images]. In Colloque International Francophone sur l'Écrit et le Document (pp. 149–162).
|
|
|
Mehdi Mirza-Mohammadi, Sergio Escalera, & Petia Radeva. (2009). Contextual-Guided Bag-of-Visual-Words Model for Multi-class Object Categorization. In 13th International Conference on Computer Analysis of Images and Patterns (Vol. 5702, pp. 748–756). LNCS. Springer Berlin Heidelberg.
Abstract: The bag-of-words model (BOW) is inspired by the text classification problem, where a document is represented by an unsorted set of the words it contains. Analogously, in the object categorization problem, an image is represented by an unsorted set of discrete visual words (BOVW). In these models, relations among visual words are only considered after dictionary construction. However, nearby object regions can have distant descriptions in the feature space and thus be grouped as different visual words. In this paper, we present a method for considering geometrical information of visual words in the dictionary construction step. Object interest regions are obtained by means of the Harris-Affine detector and then described using the SIFT descriptor. Afterward, a contextual space and a feature space are defined, and a merging process is used to fuse feature words based on their proximity in the contextual space. Moreover, we use the Error Correcting Output Codes framework to learn the new dictionary in order to perform multi-class classification. Results show significant classification improvements when spatial information is taken into account in the dictionary construction step.
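As a rough illustration of fusing visual words by contextual proximity, the sketch below builds an initial dictionary with k-means and then merges the pair of words whose member regions lie closest together, on average, in image coordinates. The synthetic descriptors and the single merge step are stand-ins for the Harris-Affine/SIFT features and the full merging process of the paper.

```python
# Illustrative sketch only: feature-space clustering followed by one
# contextual-space merge of the two spatially closest visual words.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.random((500, 128))   # stand-in for SIFT descriptors
positions = rng.random((500, 2))       # (x, y) of each region in the image

# Feature-space clustering: the initial visual dictionary.
k = 20
words = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(descriptors)

# Contextual-space merging: fuse the pair of words whose members are, on
# average, closest to each other in image coordinates.
best, best_d = None, np.inf
for a in range(k):
    for b in range(a + 1, k):
        pa, pb = positions[words == a], positions[words == b]
        d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1).mean()
        if d < best_d:
            best, best_d = (a, b), d
merged = np.where(words == best[1], best[0], words)

# Bag-of-visual-words histogram for this (single, toy) image.
hist = np.bincount(merged, minlength=k).astype(float)
hist /= hist.sum()
print("merged words:", best, "histogram:", hist.round(3))
```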
|
|
|
Wenjuan Gong, Andrew Bagdanov, Xavier Roca, & Jordi Gonzalez. (2010). Automatic Key Pose Selection for 3D Human Action Recognition. In 6th International Conference on Articulated Motion and Deformable Objects (Vol. 6169, pp. 290–299). Springer Verlag.
Abstract: This article describes a novel approach to the modeling of human actions in 3D. The method we propose is based on a “bag of poses” model that represents human actions as histograms of key-pose occurrences over the course of a video sequence. Actions are first represented as 3D poses using a sequence of 36 direction cosines corresponding to the angles that 12 joints form with the world coordinate frame in an articulated human body model. These pose representations are then projected onto three-dimensional, action-specific principal eigenspaces which we refer to as aSpaces. We introduce a method for key-pose selection based on a local-motion energy optimization criterion, and we show that this method is more stable and more resistant to noisy data than other key-pose selection criteria for action recognition.
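A small sketch of the pose representation described above: each body segment contributes the cosines of the angles it forms with the world axes, and the resulting 36-dimensional pose vectors are projected onto a low-dimensional principal eigenspace. The random joint coordinates are made up, and PCA stands in for the action-specific aSpaces.

```python
# Illustrative sketch only: direction-cosine pose vectors projected onto a
# 3-D principal eigenspace. Joint positions are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA

def direction_cosines(p_from, p_to):
    """Cosines of the angles a body segment forms with the world x, y, z axes."""
    v = np.asarray(p_to, float) - np.asarray(p_from, float)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
n_frames, n_segments = 200, 12          # 12 segments -> 36 cosines per pose
poses = np.zeros((n_frames, n_segments * 3))
for t in range(n_frames):
    for s in range(n_segments):
        a, b = rng.normal(size=3), rng.normal(size=3)   # segment end points
        poses[t, 3 * s:3 * s + 3] = direction_cosines(a, b)

# Action-specific eigenspace ("aSpace"): keep the first 3 principal components.
aspace = PCA(n_components=3).fit(poses)
projected = aspace.transform(poses)
print(projected.shape)   # (200, 3)
```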
|
|
|
Susana Alvarez, Anna Salvatella, Maria Vanrell, & Xavier Otazu. (2010). 3D Texton Spaces for color-texture retrieval. In A.C. Campilho and M.S. Kamel (Eds.), 7th International Conference on Image Analysis and Recognition (Vol. 6111, pp. 354–363). LNCS. Springer Berlin Heidelberg.
Abstract: Color and texture are visual cues of a different nature, and their integration into a useful visual descriptor is not an easy problem. One way to combine both features is to compute spatial texture descriptors independently on each color channel. Another way is to do the integration at the descriptor level. In this case the problem of normalizing both cues arises. In this paper we solve the latter problem by fusing color and texture through distances in texton spaces. Textons are the attributes of image blobs and are responsible for texture discrimination, as defined in Julesz’s Texton theory. We describe them in two low-dimensional and uniform spaces, namely shape and color. The dissimilarity between color texture images is computed by combining the distances in these two spaces. Following this approach, we propose our TCD descriptor, which outperforms current state-of-the-art methods in the two approaches mentioned above, early combination with LBP and late combination with MPEG-7. This is done on an image retrieval experiment over a highly diverse texture dataset from Corel.
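The late-fusion idea of combining distances computed in separate shape and color spaces can be sketched as follows; the descriptors, the normalization and the weighting below are illustrative assumptions, not the TCD descriptor itself.

```python
# Illustrative sketch only: fuse color and texture by combining normalized
# distances measured in two separate low-dimensional spaces.
import numpy as np

def combined_dissimilarity(shape_a, shape_b, color_a, color_b, w=0.5):
    """Weighted sum of normalized distances in a shape space and a color space."""
    d_shape = np.linalg.norm(np.asarray(shape_a) - np.asarray(shape_b))
    d_color = np.linalg.norm(np.asarray(color_a) - np.asarray(color_b))
    # Normalize each distance by the dimensionality of its space before fusing,
    # so that neither cue dominates purely because of its range.
    d_shape /= np.sqrt(len(shape_a))
    d_color /= np.sqrt(len(color_a))
    return w * d_shape + (1 - w) * d_color

# Two images described by points in a 2-D shape space and a 3-D color space.
print(combined_dissimilarity([0.2, 0.7], [0.4, 0.1], [0.9, 0.1, 0.3], [0.8, 0.2, 0.5]))
```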
|
|
|
R. de Nijs, Sebastian Ramos, Gemma Roig, Xavier Boix, Luc Van Gool, & K. Kühnlenz. (2012). On-line Semantic Perception Using Uncertainty. In International Conference on Intelligent Robots and Systems (pp. 4185–4191).
Abstract: Visual perception capabilities are still highly unreliable in unconstrained settings, and solutions might not be accurate in all regions of an image. Awareness of the uncertainty of perception is a fundamental requirement for proper high-level decision making in a robotic system. Yet, the uncertainty measure is often sacrificed to account for dependencies between object/region classifiers. This is the case for Conditional Random Fields (CRFs), whose success stems from their ability to infer the most likely world configuration, but which do not directly allow estimating the uncertainty of the solution. In this paper, we consider the setting of assigning semantic labels to the pixels of an image sequence. Instead of using a CRF, we employ a Perturb-and-MAP Random Field, a recently introduced probabilistic model that allows fast approximate sampling from its probability density function. This makes it possible to effectively compute the uncertainty of the solution, indicating the reliability of the most likely labeling in each region of the image. We report results on the CamVid dataset, a standard benchmark for semantic labeling of urban image sequences. In our experiments, we show the benefits of exploiting the uncertainty by putting more computational effort on the regions of the image that are less reliable and using more efficient techniques for the other regions, with little decrease in performance.
Keywords: Semantic Segmentation
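A simplified sketch of Perturb-and-MAP style sampling for per-pixel labels, assuming synthetic unary scores and no pairwise terms (so MAP reduces to a per-pixel argmax over Gumbel-perturbed scores); the spread of the repeated samples yields a per-pixel uncertainty (entropy) map in the spirit of the paper.

```python
# Illustrative sketch only: Perturb-and-MAP style sampling without pairwise
# terms. Image size, class count and the unary scores are synthetic.
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 60, 80, 5                      # image height, width, number of classes
unary = rng.normal(size=(H, W, C))       # stand-in for per-pixel classifier scores

def perturb_and_map(unary, rng):
    """One sample: add i.i.d. Gumbel noise to the scores and take the argmax."""
    gumbel = -np.log(-np.log(rng.random(unary.shape)))
    return np.argmax(unary + gumbel, axis=-1)

samples = np.stack([perturb_and_map(unary, rng) for _ in range(30)])

# Per-pixel uncertainty: entropy of the empirical label distribution.
freq = np.stack([(samples == c).mean(axis=0) for c in range(C)], axis=-1)
entropy = -np.sum(freq * np.log(np.clip(freq, 1e-12, None)), axis=-1)
print("most uncertain pixel entropy:", float(entropy.max()))
```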
|
|