David Aldavert, Ricardo Toledo, Arnau Ramisa, & Ramon Lopez de Mantaras. (2009). Efficient Object Pixel-Level Categorization using Bag of Features. In Advances in Visual Computing: 5th International Symposium on Visual Computing (Vol. 5875, pp. 44–55). Springer Berlin Heidelberg.
Abstract: In this paper we present a pixel-level object categorization method suitable for application under real-time constraints. Since pixels are categorized using a bag-of-features scheme, the major bottleneck of such an approach is the pooling of features into local histograms of visual words. We therefore propose to bypass this time-consuming step and obtain the classification score directly from a linear Support Vector Machine classifier. This is achieved by creating an integral image of the components of the SVM, from which the classification score for any image sub-window can be obtained with only 10 additions and 2 products, regardless of its size. In addition, we evaluate the performance of two efficient feature quantization methods: Hierarchical K-Means and Extremely Randomized Forests. All experiments were done on the Graz-02 database, showing results comparable to, or even better than, related work at a lower computational cost.
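The constant-time sub-window scoring mentioned in the abstract builds on the standard integral-image trick: with a linear SVM, each pixel contributes the weight of its quantized visual word, so the window score is a rectangle sum over a precomputed cumulative table. The following is a minimal sketch of that general idea, not the authors' exact 10-addition formulation; the 3-word codebook and weight values are hypothetical.

```python
import numpy as np

def integral_image(score_map):
    """Cumulative 2D sum with a zero row/column prepended,
    so any rectangle sum needs only 4 lookups."""
    ii = np.zeros((score_map.shape[0] + 1, score_map.shape[1] + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(score_map, axis=0), axis=1)
    return ii

def window_score(ii, y0, x0, y1, x1):
    """Sum of per-pixel SVM contributions inside [y0, y1) x [x0, x1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

# Per-pixel contributions: each pixel adds the linear-SVM weight of
# its quantized visual word (hypothetical 3-word codebook).
w = np.array([0.5, -1.0, 2.0])          # SVM weight per visual word
words = np.array([[0, 1, 2],
                  [2, 2, 0],
                  [1, 0, 1]])           # visual-word index per pixel
ii = integral_image(w[words])
print(window_score(ii, 0, 0, 3, 3))     # score of the full 3x3 window
```

Once the table is built, the cost of scoring a window is independent of its size, which is what makes dense multi-scale evaluation feasible in real time.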
|
Pierluigi Casale, Oriol Pujol, & Petia Radeva. (2010). Embedding Random Projections in Regularized Gradient Boosting Machines. In Supervised and Unsupervised Ensemble Methods and their Applications, workshop at the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (pp. 44–53).
|
Sergio Escalera, Oriol Pujol, Petia Radeva, & Jordi Vitria. (2009). Measuring Interest of Human Dyadic Interactions. In 12th International Conference of the Catalan Association for Artificial Intelligence (Vol. 202, pp. 45–54).
Abstract: In this paper, we argue that, using only behavioural motion information, we are able to predict the interest of observers when looking at face-to-face interactions. We propose a set of movement-related features from body, face, and mouth activity in order to define a set of higher-level interaction features, such as stress, activity, speaking engagement, and corporal engagement. An Error-Correcting Output Codes framework with an AdaBoost base classifier is used to learn to rank the observers' perceived interest in face-to-face interactions. The automatic system shows good correlation between the automatic categorization results and the manual ranking made by the observers. In particular, the learning system shows that stress features have a high predictive power for ranking the interest of observers when looking at face-to-face interactions.
|
Jose Manuel Alvarez, Theo Gevers, & Antonio Lopez. (2010). Learning photometric invariance for object detection. IJCV - International Journal of Computer Vision, 90(1), 45–61.
Abstract:
Color is a powerful visual cue in many computer vision applications such as image segmentation and object recognition. However, most of the existing color models depend on the imaging conditions that negatively affect the performance of the task at hand. Often, a reflection model (e.g., Lambertian or dichromatic reflectance) is used to derive color invariant models. However, this approach may be too restricted to model real-world scenes in which different reflectance mechanisms can hold simultaneously.
Therefore, in this paper, we aim to derive color invariance by learning from color models to obtain diversified color invariant ensembles. First, a photometrically orthogonal and non-redundant color model set is computed, composed of both color variants and invariants. Then, the proposed method combines these color models to arrive at a diversified color ensemble yielding a proper balance between invariance (repeatability) and discriminative power (distinctiveness). To achieve this, our fusion method uses a multi-view approach to minimize the estimation error. In this way, the proposed method is robust to data uncertainty and produces properly diversified color invariant ensembles. Further, the proposed method is extended to deal with temporal data by predicting the evolution of observations over time.
Experiments are conducted on three different image datasets to validate the proposed method. Both the theoretical and experimental results show that the method is robust against severe variations in imaging conditions. The method is not restricted to a certain reflection model or parameter tuning, and outperforms state-of-the-art detection techniques in the fields of object, skin and road recognition. Considering sequential data, the proposed method (extended to deal with future observations) outperforms the other methods.
Keywords: road detection
|
Santiago Segui, Michal Drozdzal, Petia Radeva, & Jordi Vitria. (2010). Severe Motility Diagnosis using WCE. In Medical Image Computing in Catalunya: Graduate Student Workshop (pp. 45–46).
|
Miquel Ferrer, I. Bardaji, Ernest Valveny, Dimosthenis Karatzas, & Horst Bunke. (2013). Median Graph Computation by Means of Graph Embedding into Vector Spaces. In Yun Fu, & Yungian Ma (Eds.), Graph Embedding for Pattern Analysis (pp. 45–72). Springer New York.
Abstract: In pattern recognition [8, 14], a key issue to be addressed when designing a system is how to represent input patterns. Feature vectors are a common option. That is, a set of numerical features describing relevant properties of the pattern is computed and arranged in vector form. The main advantages of this kind of representation are computational simplicity and a sound mathematical foundation. Thus, a large number of operations are available to work with vectors, and a large repository of algorithms for pattern analysis and classification exists. However, the simple structure of feature vectors might not be the best option for complex patterns, where non-numerical features or relations between different parts of the pattern become relevant.
|
Marc Bolaños, Maite Garolera, & Petia Radeva. (2013). Active labeling application applied to food-related object recognition. In 5th International Workshop on Multimedia for Cooking & Eating Activities (pp. 45–50).
Abstract: Every day, lifelogging devices, available for recording different aspects of our daily life, increase in number, quality and functions, just like the multiple applications that we give to them. Applying wearable devices to analyse the nutritional habits of people is a challenging application based on acquiring and analyzing life records over long periods of time. However, to extract the information of interest related to the eating patterns of people, we need automatic methods to process large amounts of lifelogging data (e.g. recognition of food-related objects). Creating a rich set of manually labeled samples to train the algorithms is slow, tedious and subjective. To address this problem, we propose a novel method in the framework of Active Labeling for constructing a training set of thousands of images. Inspired by the hierarchical sampling method for active learning [6], we propose an Active Forest that organizes the data hierarchically for easy and fast labeling. Moreover, introducing a classifier into the hierarchical structures, as well as transforming the feature space for better data clustering, additionally improves the algorithm. Our method is successfully tested to label 89,700 food-related objects and achieves a significant reduction in expert labelling time.
|
Katerine Diaz, Jesus Martinez del Rincon, & Aura Hernandez-Sabate. (2017). Decremental generalized discriminative common vectors applied to images classification. KBS - Knowledge-Based Systems, 131, 46–57.
Abstract: In this paper, a novel decremental subspace-based learning method called Decremental Generalized Discriminative Common Vectors (DGDCV) is presented. The method makes use of the concept of decremental learning, which we introduce in the field of supervised feature extraction and classification. By efficiently removing unnecessary data and/or classes from a knowledge base, our methodology is able to update the model without recalculating the full projection or accessing the previously processed training data, while retaining the previously acquired knowledge. The proposed method has been validated on 6 standard face recognition datasets, showing a considerable computational gain without compromising the accuracy of the model.
Keywords: Decremental learning; Generalized Discriminative Common Vectors; Feature extraction; Linear subspace methods; Classification
|
Daniel Ponsa, & Antonio Lopez. (2007). Feature Selection Based on a New Formulation of the Minimal-Redundancy-Maximal-Relevance Criterion. In 3rd Iberian Conference on Pattern Recognition and Image Analysis, LNCS 4477 (pp. 47–54).
|
J.A. Perez, Enric Marti, & Juan J. Villanueva. (1992). Interfase de Usuario de Entrada de Datos 3D en un CAD de Cartografía Urbana a partir de Pares Estereoscópicos [User Interface for 3D Data Entry in an Urban Cartography CAD System from Stereoscopic Pairs]. In II Congreso Español de Informática Gráfica (pp. 47–60).
|
Abel Gonzalez-Garcia, Robert Benavente, Olivier Penacchio, Javier Vazquez, Maria Vanrell, & C. Alejandro Parraga. (2013). Coloresia: An Interactive Colour Perception Device for the Visually Impaired. In Multimodal Interaction in Image and Video Applications (Vol. 48, pp. 47–66). Springer Berlin Heidelberg.
Abstract: A significant percentage of the human population suffers from impairments in their capacity to distinguish or even see colours. For them, everyday tasks like navigating a train or metro network map become demanding. We present a novel technique for extracting colour information from everyday natural stimuli and presenting it to visually impaired users as pleasant, non-invasive sound. This technique was implemented inside a Personal Digital Assistant (PDA) portable device. In this implementation, colour information is extracted from the input image and categorised according to how human observers segment the colour space. This information is subsequently converted into sound and sent to the user via speakers or headphones. In the original design, it is possible for the user to send their feedback to reconfigure the system; however, several features such as these were not implemented because the current technology is limited. We are confident that the full implementation will be possible in the near future as PDA technology improves.
|
Mirko Arnold, Anarta Ghosh, Gerard Lacey, Stephen Patchett, & Hugh Mulcahy. (2009). Indistinct frame detection in colonoscopy videos. In Machine Vision and Image Processing Conference (pp. 47–52).
|
L. Rothacker, Marçal Rusiñol, Josep Llados, & G.A. Fink. (2014). A Two-stage Approach to Segmentation-Free Query-by-example Word Spotting. Manuscript Cultures, 47–58.
Abstract: With the ongoing progress in digitization, huge document collections and archives have become available to a broad audience. Scanned document images can be transmitted electronically and studied simultaneously throughout the world. While this is very beneficial, it is often impossible to perform automated searches on these document collections. Optical character recognition usually fails when it comes to handwritten or historic documents. In order to address the need for exploring document collections rapidly, researchers are working on word spotting. In query-by-example word spotting scenarios, the user selects an exemplary occurrence of the query word in a document image. The word spotting system then retrieves all regions in the collection that are visually similar to the given example of the query word. The best matching regions are presented to the user and no actual transcription is required.
An important property of a word spotting system is the computational speed with which queries can be executed. In our previous work, we presented a relatively slow but high-precision method. In the present work, we will extend this baseline system to an integrated two-stage approach. In a coarse-grained first stage, we will filter document images efficiently in order to identify regions that are likely to contain the query word. In the fine-grained second stage, these regions will be analyzed with our previously presented high-precision method. Finally, we will report recognition results and query times for the well-known George Washington
benchmark in our evaluation. We achieve state-of-the-art recognition results while reducing query times to 50% of those of our baseline.
|
Ester Fornells, Manuel De Armas, Maria Teresa Anguera, Sergio Escalera, Marcos Antonio Catalán, & Josep Moya. (2018). Desarrollo del proyecto del Consell Comarcal del Baix Llobregat “Buen Trato a las personas mayores y aquellas en situación de fragilidad con sufrimiento emocional: Hacia un envejecimiento saludable” [Development of the Consell Comarcal del Baix Llobregat project “Good Treatment of the Elderly and of People in Situations of Fragility with Emotional Suffering: Towards Healthy Ageing”]. Informaciones Psiquiatricas, 47–59.
|
Naveen Onkarappa, & Angel Sappa. (2013). A Novel Space Variant Image Representation. JMIV - Journal of Mathematical Imaging and Vision, 47(1-2), 48–59.
Abstract: Traditionally, in machine vision, images are represented using cartesian coordinates with uniform sampling along the axes. On the contrary, biological vision systems represent images using polar coordinates with non-uniform sampling. Owing to the various advantages provided by space-variant representations, many researchers are interested in space-variant computer vision. In this direction, the current work proposes a novel and simple space-variant representation of images. The proposed representation is compared with the classical log-polar mapping. The log-polar representation is motivated by biological vision, having the characteristic of higher resolution at the fovea and reduced resolution at the periphery. In contrast to the log-polar mapping, the proposed new representation has higher resolution at the periphery and lower resolution at the fovea. Our proposal is shown to be a better representation in navigational scenarios such as driver assistance systems and robotics. The experimental results involve analysis of optical flow fields computed on both the proposed and log-polar representations. Additionally, an egomotion estimation application is shown as an illustrative example. The experimental analysis comprises results from synthetic as well as real sequences.
Keywords: Space-variant representation; Log-polar mapping; Onboard vision applications
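The classical log-polar mapping that this entry compares against is standard and easy to state: a pixel's offset from the fovea is converted to (rho, theta), with rho growing logarithmically with eccentricity, so sampling is dense near the fovea and sparse at the periphery. A minimal sketch of that baseline mapping follows; the authors' proposed representation (dense at the periphery instead) is not reproduced here.

```python
import numpy as np

def to_log_polar(x, y, cx, cy):
    """Classical log-polar mapping: cartesian offsets from the fovea
    (cx, cy) to (rho, theta). rho = log(r) compresses the periphery,
    giving higher effective resolution near the fovea."""
    dx, dy = x - cx, y - cy
    r = np.hypot(dx, dy)
    rho = np.log(np.maximum(r, 1e-6))   # avoid log(0) at the fovea itself
    theta = np.arctan2(dy, dx)
    return rho, theta

rho, theta = to_log_polar(4.0, 3.0, 0.0, 0.0)
print(rho, theta)   # r = 5, so rho = log(5); theta = atan2(3, 4)
```

Resampling an image onto a uniform (rho, theta) grid via this mapping is what yields the familiar foveated layout; the paper's proposal inverts that resolution profile for onboard navigation scenarios.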
|