Records | |||||
---|---|---|---|---|---|
Author | Jaume Amores | ||||
Title | Multiple Instance Classification: review, taxonomy and comparative study | Type | Journal Article | ||
Year | 2013 | Publication | Artificial Intelligence | Abbreviated Journal | AI |
Volume | 201 | Issue | Pages | 81-105 | |
Keywords | Multi-instance learning; Codebook; Bag-of-Words | ||||
Abstract | Multiple Instance Learning (MIL) has become an important topic in the pattern recognition community, and many solutions to this problem have been proposed to date. Despite this fact, there is a lack of comparative studies that shed light on the characteristics and behavior of the different methods. In this work we provide such an analysis focused on the classification task (i.e., leaving out other learning tasks such as regression). In order to perform our study, we implemented fourteen methods grouped into three different families. We analyze the performance of the approaches across a variety of well-known databases, and we also study their behavior in synthetic scenarios in order to highlight their characteristics. As a result of this analysis, we conclude that methods that extract global bag-level information show a clearly superior performance in general. In this sense, the analysis permits us to understand why some types of methods are more successful than others, and it permits us to establish guidelines in the design of new MIL methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier Science Publishers Ltd. Essex, UK | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0004-3702 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ADAS; 601.042; 600.057 | Approved | no | ||
Call Number | Admin @ si @ Amo2013 | Serial | 2273 | ||
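The review above concludes that methods extracting global bag-level information perform best. A minimal sketch of that idea: pool each bag's instance features into one fixed-length vector, which any standard classifier can then consume. The pooling choice (mean/min/max) and all names here are illustrative, not taken from the paper.

```python
import numpy as np

def bag_embedding(bag):
    """Map a bag (n_instances x n_features) to one fixed-length vector
    by pooling per-feature statistics: mean, min and max."""
    bag = np.asarray(bag, dtype=float)
    return np.concatenate([bag.mean(axis=0), bag.min(axis=0), bag.max(axis=0)])

# Toy bags: the positive bag contains one instance with a large first
# feature; the negative bag does not.
pos_bag = [[5.0, 0.1], [0.2, 0.3]]
neg_bag = [[0.1, 0.2], [0.3, 0.1]]

v_pos = bag_embedding(pos_bag)
v_neg = bag_embedding(neg_bag)
# The max-pooled first feature (index 4) separates the two bags.
```

With such embeddings, each bag becomes a single point, so an off-the-shelf SVM or boosting classifier can be trained directly at the bag level.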
Author | Enric Marti; Ferran Poveda; Antoni Gurgui; Jaume Rocarias; Debora Gil; Aura Hernandez-Sabate | ||||
Title | An experience on the structure, operation and assessment of a computer graphics course using a project-based learning methodology | Type | Miscellaneous | |
Year | 2013 | Publication | IV Congreso Internacional UNIVEST | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | IV Congreso Internacional UNIVEST | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | UNIVEST | ||
Notes | IAM; ADAS | Approved | no | ||
Call Number | Admin @ si @ MPG2013b | Serial | 2384 | ||
Author | Sezer Karaoglu; Jan van Gemert; Theo Gevers | ||||
Title | Con-text: text detection using background connectivity for fine-grained object classification | Type | Conference Article | ||
Year | 2013 | Publication | 21ST ACM International Conference on Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 757-760 | ||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ACM-MM | ||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ KGG2013 | Serial | 2369 | ||
Author | Jasper Uijlings; Koen E.A. van de Sande; Theo Gevers; Arnold Smeulders | ||||
Title | Selective Search for Object Recognition | Type | Journal Article | ||
Year | 2013 | Publication | International Journal of Computer Vision | Abbreviated Journal | IJCV |
Volume | 104 | Issue | 2 | Pages | 154-171 |
Keywords | |||||
Abstract | This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 % recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html). | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0920-5691 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ USG2013 | Serial | 2362 | ||
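The core of the selective search described in the abstract above is a greedy hierarchical grouping: repeatedly merge the most similar pair of regions and keep every box seen along the way as an object proposal. The sketch below treats every pair as neighbours and uses a single histogram cue, both simplifications of the paper's multi-cue, adjacency-aware procedure; all names are illustrative.

```python
import numpy as np

def merge_box(a, b):
    # Union of two (x1, y1, x2, y2) boxes.
    return (min(a[0], b[0]), min(a[1], b[1]), max(a[2], b[2]), max(a[3], b[3]))

def similarity(h1, h2):
    # Histogram intersection, as used for the colour/texture cues.
    return np.minimum(h1, h2).sum()

def selective_search_sketch(regions):
    """regions: list of (box, normalised feature histogram).
    Greedily merges the most similar pair, collecting every box
    produced along the way as a class-independent proposal."""
    proposals = [box for box, _ in regions]
    regions = list(regions)
    while len(regions) > 1:
        best = max(((i, j) for i in range(len(regions))
                    for j in range(i + 1, len(regions))),
                   key=lambda ij: similarity(regions[ij[0]][1], regions[ij[1]][1]))
        i, j = best
        (ba, ha), (bb, hb) = regions[i], regions[j]
        merged = (merge_box(ba, bb), (ha + hb) / 2.0)
        regions = [r for k, r in enumerate(regions) if k not in (i, j)] + [merged]
        proposals.append(merged[0])
    return proposals

# Three toy initial regions; the two visually similar ones merge first.
regions = [((0, 0, 10, 10), np.array([0.9, 0.1])),
           ((10, 0, 20, 10), np.array([0.8, 0.2])),
           ((0, 10, 20, 20), np.array([0.1, 0.9]))]
props = selective_search_sketch(regions)
# The final proposal covers the full image, as in the paper's hierarchy.
```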
Author | Zeynep Yucel; Albert Ali Salah; Çetin Meriçli; Tekin Meriçli; Roberto Valenti; Theo Gevers | ||||
Title | Joint Attention by Gaze Interpolation and Saliency | Type | Journal | ||
Year | 2013 | Publication | IEEE Transactions on Cybernetics | Abbreviated Journal | T-CIBER
Volume | 43 | Issue | 3 | Pages | 829-842 |
Keywords | |||||
Abstract | Joint attention, which is the ability of coordination of a common point of reference with the communicating party, emerges as a key factor in various interaction scenarios. This paper presents an image-based method for establishing joint attention between an experimenter and a robot. The precise analysis of the experimenter's eye region requires stability and high-resolution image acquisition, which is not always available. We investigate regression-based interpolation of the gaze direction from the head pose of the experimenter, which is easier to track. Gaussian process regression and neural networks are contrasted to interpolate the gaze direction. Then, we combine gaze interpolation with image-based saliency to improve the target point estimates and test three different saliency schemes. We demonstrate the proposed method on a human-robot interaction scenario. Cross-subject evaluations, as well as experiments under adverse conditions (such as dimmed or artificial illumination or motion blur), show that our method generalizes well and achieves rapid gaze estimation for establishing joint attention. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 2168-2267 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | ALTRES;ISE | Approved | no | ||
Call Number | Admin @ si @ YSM2013 | Serial | 2363 | ||
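The abstract above contrasts Gaussian process regression and neural networks for interpolating gaze direction from head pose. A minimal GP regressor with an RBF kernel, in the spirit of that interpolation step, looks like this (the 1-D toy mapping and all parameter values are illustrative assumptions):

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, noise=1e-2):
    """Gaussian process regression posterior mean with an RBF kernel:
    here it interpolates gaze angles (y) from head-pose angles (X)."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))  # noisy Gram matrix
    alpha = np.linalg.solve(K, y_train)
    return rbf(X_test, X_train) @ alpha

# Toy 1-D mapping: gaze angle roughly twice the head-pose angle.
X = np.linspace(-1, 1, 9)[:, None]
y = 2.0 * X[:, 0]
pred = gp_predict(X, y, np.array([[0.5]]))
```

Once the gaze direction is interpolated this way, the paper combines it with an image-saliency map to sharpen the estimate of the attended target point.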
Author | Sergio Escalera | ||||
Title | Multi-Modal Human Behaviour Analysis from Visual Data Sources | Type | Journal | ||
Year | 2013 | Publication | ERCIM News journal | Abbreviated Journal | ERCIM |
Volume | 95 | Issue | Pages | 21-22 | |
Keywords | |||||
Abstract | The Human Pose Recovery and Behaviour Analysis group (HuPBA), University of Barcelona, is developing a line of research on multi-modal analysis of humans in visual data. The novel technology is being applied in several scenarios with high social impact, including sign language recognition, assistive technology and supported diagnosis for the elderly and people with mental/physical disabilities, fitness conditioning, and Human Computer Interaction. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0926-4981 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | HuPBA;MILAB | Approved | no | ||
Call Number | Admin @ si @ Esc2013 | Serial | 2361 | ||
Author | Ivan Huerta; Ariel Amato; Xavier Roca; Jordi Gonzalez | ||||
Title | Exploiting Multiple Cues in Motion Segmentation Based on Background Subtraction | Type | Journal Article | ||
Year | 2013 | Publication | Neurocomputing | Abbreviated Journal | NEUCOM |
Volume | 100 | Issue | Pages | 183–196 | |
Keywords | Motion segmentation; Shadow suppression; Colour segmentation; Edge segmentation; Ghost detection; Background subtraction | ||||
Abstract | This paper presents a novel algorithm for mobile-object segmentation from static background scenes, which is both robust and accurate under most of the common problems found in motion segmentation. In our first contribution, a case analysis of motion segmentation errors is presented taking into account the inaccuracies associated with different cues, namely colour, edge and intensity. Our second contribution is a hybrid architecture which copes with the main issues observed in the case analysis by fusing the knowledge from the aforementioned three cues and a temporal difference algorithm. On one hand, we enhance the colour and edge models to solve not only global and local illumination changes (i.e. shadows and highlights) but also camouflage in intensity. In addition, local information is also exploited to solve camouflage in chroma. On the other hand, the intensity cue is applied when colour and edge cues are not available because their values are beyond the dynamic range. Additionally, a temporal difference scheme is included to segment motion where those three cues cannot be reliably computed, for example in background regions not visible during the training period. Lastly, our approach is extended to handle ghost detection. The proposed method obtains very accurate and robust motion segmentation results in multiple indoor and outdoor scenarios, while outperforming the most widely cited state-of-the-art approaches. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ HAR2013 | Serial | 1808 | ||
Author | Bhaskar Chakraborty; Andrew Bagdanov; Jordi Gonzalez; Xavier Roca | ||||
Title | Human Action Recognition Using an Ensemble of Body-Part Detectors | Type | Journal Article | ||
Year | 2013 | Publication | Expert Systems | Abbreviated Journal | EXSY |
Volume | 30 | Issue | 2 | Pages | 101-114 |
Keywords | Human action recognition; body-part detection; hidden Markov model | ||||
Abstract | This paper describes an approach to human action recognition based on a probabilistic optimization model of body parts using hidden Markov model (HMM). Our method is able to distinguish between similar actions by only considering the body parts having major contribution to the actions, for example, legs for walking, jogging and running; arms for boxing, waving and clapping. We apply HMMs to model the stochastic movement of the body parts for action recognition. The HMM construction uses an ensemble of body-part detectors, followed by grouping of part detections, to perform human identification. Three example-based body-part detectors are trained to detect three components of the human body: the head, legs and arms. These detectors cope with viewpoint changes and self-occlusions through the use of ten sub-classifiers that detect body parts over a specific range of viewpoints. Each sub-classifier is a support vector machine trained on features selected for the discriminative power for each particular part/viewpoint combination. Grouping of these detections is performed using a simple geometric constraint model that yields a viewpoint-invariant human detector. We test our approach on three publicly available action datasets: the KTH dataset, Weizmann dataset and HumanEva dataset. Our results illustrate that with a simple and compact representation we can achieve robust recognition of human actions comparable to the most complex, state-of-the-art methods. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | ISE | Approved | no | ||
Call Number | Admin @ si @ CBG2013 | Serial | 1809 | ||
Author | Kaida Xiao; Chenyang Fu; D. Mylonas; Dimosthenis Karatzas; S. Wuerger | ||||
Title | Unique Hue Data for Colour Appearance Models. Part II: Chromatic Adaptation Transform | Type | Journal Article | ||
Year | 2013 | Publication | Color Research & Application | Abbreviated Journal | CRA |
Volume | 38 | Issue | 1 | Pages | 22-29 |
Keywords | |||||
Abstract | Unique hue settings of 185 observers under three room-lighting conditions were used to evaluate the accuracy of full and mixed chromatic adaptation transform models of CIECAM02 in terms of unique hue reproduction. Perceptual hue shifts in CIECAM02 were evaluated for both models with no clear difference using the current Commission Internationale de l'Éclairage (CIE) recommendation for mixed chromatic adaptation ratio. Using our large dataset of unique hue data as a benchmark, an optimised parameter is proposed for chromatic adaptation under mixed illumination conditions that produces more accurate results in unique hue reproduction. © 2011 Wiley Periodicals, Inc. Col Res Appl, 2013 | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ XFM2013 | Serial | 1822 | ||
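The chromatic adaptation transform evaluated above operates in CIECAM02's CAT02 space, with a degree-of-adaptation parameter D that the paper optimises for mixed illumination. The sketch below is a simplified von Kries blend: CIECAM02's exact mixed-adaptation formula also involves the white luminance, which is omitted here. The CAT02 matrix values are from the CIECAM02 specification; everything else is illustrative.

```python
import numpy as np

# CAT02 matrix from CIECAM02 (XYZ -> sharpened cone responses).
M_CAT02 = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def cat02_adapt(xyz, white_src, white_dst, D=1.0):
    """Von Kries chromatic adaptation in CAT02 space.
    D in [0, 1] is the degree of adaptation; D < 1 models the mixed
    (incomplete) adaptation whose ratio the paper tunes."""
    rgb = M_CAT02 @ np.asarray(xyz, float)
    rgb_ws = M_CAT02 @ np.asarray(white_src, float)
    rgb_wd = M_CAT02 @ np.asarray(white_dst, float)
    # Blend a full white-point rescale with no adaptation at all.
    gain = D * (rgb_wd / rgb_ws) + (1.0 - D)
    return np.linalg.solve(M_CAT02, gain * rgb)

# With full adaptation the source white maps onto the destination white.
w_d65 = [95.047, 100.0, 108.883]   # illuminant D65
w_a   = [109.85, 100.0, 35.585]    # illuminant A
out = cat02_adapt(w_d65, w_d65, w_a, D=1.0)
```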
Author | S. Grau; Ana Puig; Sergio Escalera; Maria Salamo | ||||
Title | Intelligent Interactive Volume Classification | Type | Conference Article | ||
Year | 2013 | Publication | Pacific Graphics | Abbreviated Journal | |
Volume | 32 | Issue | 7 | Pages | 23-28 |
Keywords | |||||
Abstract | This paper defines an intelligent and interactive framework to classify multiple regions of interest from the original data on demand, without requiring any preprocessing or previous segmentation. The proposed approach is divided into three stages: visualization, training and testing. First, users visualize and label some samples directly on slices of the volume. Training and testing are based on a framework of Error Correcting Output Codes and Adaboost classifiers that learn to classify each region the user has painted. Later, at the testing stage, each classifier is directly applied to the rest of the samples and combined to perform multi-class labelling, which is used in the final rendering. We also parallelized the training stage using a GPU-based implementation to obtain rapid interaction and classification. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-3-905674-50-7 | Medium | ||
Area | Expedition | Conference | PG | ||
Notes | HuPBA; 600.046;MILAB | Approved | no | ||
Call Number | Admin @ si @ GPE2013b | Serial | 2355 | ||
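The Error Correcting Output Codes framework mentioned in the abstract above combines binary classifiers (here, the Adaboost learners) into a multi-class decision by distance decoding. A minimal decoder, with a one-vs-all code matrix as an illustrative design choice:

```python
import numpy as np

def ecoc_decode(code_matrix, outputs):
    """ECOC decoding: each row of code_matrix is a class codeword over
    the binary classifiers (entries +1/-1); outputs holds the signed
    predictions of those classifiers for one sample. The predicted
    class minimises the Hamming-style distance to its codeword."""
    dists = ((1 - np.sign(outputs)[None, :] * code_matrix) / 2).sum(axis=1)
    return int(np.argmin(dists))

# One-vs-all coding for 3 tissue classes (a common ECOC design).
M = np.array([[ 1, -1, -1],
              [-1,  1, -1],
              [-1, -1,  1]])
# Classifier 1 fires positive, the others negative -> class 1.
pred = ecoc_decode(M, np.array([-0.4, 0.9, -0.7]))
```

In the paper's pipeline, this per-sample decision is what assigns each voxel to one of the user-painted regions before rendering.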
Author | Jorge Bernal; David Vazquez (eds) | ||||
Title | Computer Vision Trends and Challenges | Type | Book Whole | ||
Year | 2013 | Publication | Computer Vision Trends and Challenges | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | CVCRD; Computer Vision | ||||
Abstract | This book contains the papers presented at the Eighth CVC Workshop on Computer Vision Trends and Challenges (CVCR&D'2013). The workshop was held at the Computer Vision Center (Universitat Autònoma de Barcelona) on October 25th, 2013. The CVC workshops provide an excellent opportunity for young researchers and project engineers to share new ideas and knowledge about the progress of their work, and also to discuss challenges and future perspectives. In addition, the workshop is the welcome event for people who have recently joined the institute. The program of CVCR&D is organized as a single-track, single-day workshop. It comprises several sessions dedicated to specific topics. For each session, a doctor working on the topic introduces the general research lines. The PhD students expose their specific research. A poster session will be held for open questions. Session topics cover the current research lines and development projects of the CVC: Medical Imaging, Color & Texture Analysis, Object Recognition, Image Sequence Evaluation, Advanced Driver Assistance Systems, Machine Vision, Document Analysis, Pattern Recognition and Applications. We want to thank all paper authors and Program Committee members. Their contribution shows that the CVC has a dynamic, active, and promising scientific community. We hope you all enjoy this Eighth workshop and we are looking forward to meeting you and new people next year in the Ninth CVCR&D. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | Jorge Bernal; David Vazquez | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | 978-84-940902-2-6 | Medium | ||
Area | Expedition | Conference | |||
Notes | Approved | no | |||
Call Number | ADAS @ adas @ BeV2013 | Serial | 2339 | ||
Author | Ernest Valveny; Oriol Ramos Terrades; Joan Mas; Marçal Rusiñol | ||||
Title | Interactive Document Retrieval and Classification. | Type | Book Chapter | ||
Year | 2013 | Publication | Multimodal Interaction in Image and Video Applications | Abbreviated Journal | |
Volume | 48 | Issue | Pages | 17-30 | |
Keywords | |||||
Abstract | In this chapter we describe a system for document retrieval and classification following the interactive-predictive framework. In particular, the system addresses two different scenarios of document analysis: document classification based on visual appearance and logo detection. These two classical problems of document analysis are formulated following the interactive-predictive model, taking user interaction into account to make the process of annotating and labelling documents easier. A system implementing this model in a real scenario is presented and analyzed. This system also takes advantage of active learning techniques to speed up the task of labelling the documents. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | Angel Sappa; Jordi Vitria | |
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 1868-4394 | ISBN | 978-3-642-35931-6 | Medium | |
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ VRM2013 | Serial | 2341 | ||
Author | Lluis Pere de las Heras; Joan Mas; Gemma Sanchez; Ernest Valveny | ||||
Title | Notation-invariant patch-based wall detector in architectural floor plans | Type | Book Chapter | ||
Year | 2013 | Publication | Graphics Recognition. New Trends and Challenges | Abbreviated Journal | |
Volume | 7423 | Issue | Pages | 79-88 | |
Keywords | |||||
Abstract | Architectural floor plans exhibit a large variability in notation. Therefore, segmenting and identifying the elements of any kind of plan becomes a challenging task for approaches based on grouping structural primitives obtained by vectorization. Recently, a patch-based segmentation method working at pixel level and relying on the construction of a visual vocabulary has been proposed in [1], showing its adaptability to different notations by automatically learning the visual appearance of the elements in each notation. This paper presents an evolution of that previous work, after analyzing and testing several alternatives for each of the different steps of the method: Firstly, an automatic plan-size normalization process is performed. Secondly, we evaluate different features to obtain the description of every patch. Thirdly, we train an SVM classifier to obtain the category of every patch instead of constructing a visual vocabulary. These variations of the method have been tested for wall detection on two datasets of architectural floor plans with different notations. After studying each step of the process pipeline in depth, we are able to find the best system configuration, which substantially outperforms the wall-segmentation results of the original paper. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Springer Berlin Heidelberg | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | 0302-9743 | ISBN | 978-3-642-36823-3 | Medium | |
Area | Expedition | Conference | |||
Notes | DAG; 600.045; 600.056; 605.203 | Approved | no | ||
Call Number | Admin @ si @ HMS2013 | Serial | 2322 | ||
Author | Albert Gordo; Florent Perronnin; Ernest Valveny | ||||
Title | Large-scale document image retrieval and classification with runlength histograms and binary embeddings | Type | Journal Article | ||
Year | 2013 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 46 | Issue | 7 | Pages | 1898-1905 |
Keywords | visual document descriptor; compression; large-scale; retrieval; classification | ||||
Abstract | We present a new document image descriptor based on multi-scale runlength histograms. This descriptor does not rely on layout analysis and can be computed efficiently. We show how this descriptor can achieve state-of-the-art results on two very different public datasets in classification and retrieval tasks. Moreover, we show how we can compress and binarize these descriptors to make them suitable for large-scale applications. We can achieve state-of-the-art results in classification using binary descriptors of as few as 16 to 64 bits. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Elsevier | Place of Publication | Editor | ||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG; 600.042; 600.045; 605.203 | Approved | no | ||
Call Number | Admin @ si @ GPV2013 | Serial | 2306 | ||
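The building block of the descriptor above is a histogram of run lengths in a binarised document image. A single-scale, single-direction sketch (the paper aggregates several scales and directions; the binning scheme here is an illustrative assumption):

```python
import numpy as np

def runlength_histogram(img, max_run=8):
    """Histogram of horizontal run lengths of foreground (1) pixels in
    a binarised image; runs longer than max_run share the last bin."""
    hist = np.zeros(max_run, dtype=int)
    for row in img:
        run = 0
        for px in list(row) + [0]:        # sentinel 0 closes a trailing run
            if px:
                run += 1
            elif run:
                hist[min(run, max_run) - 1] += 1
                run = 0
    return hist

img = np.array([[1, 1, 0, 1, 0, 0],     # runs of length 2 and 1
                [0, 1, 1, 1, 0, 1]])    # runs of length 3 and 1
h = runlength_histogram(img)
```

Because only pixel runs are counted, no layout analysis is needed, which is what makes the descriptor cheap enough for the large-scale setting the paper targets.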
Author | Albert Gordo; Alicia Fornes; Ernest Valveny | ||||
Title | Writer identification in handwritten musical scores with bags of notes | Type | Journal Article | ||
Year | 2013 | Publication | Pattern Recognition | Abbreviated Journal | PR |
Volume | 46 | Issue | 5 | Pages | 1337-1345 |
Keywords | |||||
Abstract | Writer Identification is an important task for the automatic processing of documents. However, the identification of the writer in graphical documents is still challenging. In this work, we adapt the Bag of Visual Words framework to the task of writer identification in handwritten musical scores. A vanilla implementation of this method already performs comparably to the state-of-the-art. Furthermore, we analyze the effect of two improvements of the representation: a Bhattacharyya embedding, which improves the results at virtually no extra cost, and a Fisher Vector representation that very significantly improves the results at the cost of a more complex and costly representation. Experimental evaluation shows results more than 20 points above the state-of-the-art in a new, challenging dataset. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | 0031-3203 | ISBN | Medium | ||
Area | Expedition | Conference | |||
Notes | DAG | Approved | no | ||
Call Number | Admin @ si @ GFV2013 | Serial | 2307 | ||
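The Bag of Visual Words representation and the Bhattacharyya embedding mentioned in the abstract above can be sketched as follows. Here descriptors are assigned to a tiny fixed codebook (in the paper the words are learned from note-level descriptors); the square-root embedding makes a dot product compute the Bhattacharyya coefficient. All data values are illustrative.

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    count occurrences (the Bag of Visual Words representation)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook)).astype(float)

def bhattacharyya_embed(hist):
    """Square root of the L1-normalised histogram, so that a dot
    product between embeddings is the Bhattacharyya coefficient."""
    return np.sqrt(hist / hist.sum())

codebook = np.array([[0.0, 0.0], [1.0, 1.0]])          # two visual words
desc = np.array([[0.1, -0.1], [0.9, 1.2], [1.1, 0.8], [0.0, 0.2]])
h = bow_histogram(desc, codebook)
emb = bhattacharyya_embed(h)
```

As the abstract notes, this embedding improves results at virtually no extra cost, since it only rescales the existing histogram.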