|
Records |
Links |
|
Author |
Fahad Shahbaz Khan; Joost Van de Weijer; Sadiq Ali; Michael Felsberg |
|
|
Title |
Evaluating the impact of color on texture recognition |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
8047 |
Issue |
|
Pages |
154-162 |
|
|
Keywords |
Color; Texture; image representation |
|
|
Abstract |
State-of-the-art texture descriptors typically operate on grey-scale images while ignoring color information. A common way to obtain a joint color-texture representation is to combine the two visual cues at the pixel level. However, such an approach provides sub-optimal results for the texture categorisation task.
In this paper we investigate how to optimally exploit color information for texture recognition. We evaluate a variety of color descriptors, popular in image classification, for texture categorisation. In addition, we analyze different fusion approaches to combine color and texture cues. Experiments are conducted on the challenging scene and 10-class texture datasets. Our experiments clearly suggest that color names provide the best performance in all cases, and that late fusion is the best strategy to combine color and texture. Selecting the best color descriptor with the optimal fusion strategy provides a gain of 5% to 8% over texture alone on the scene and texture datasets. |
|
|
Address |
York; UK; August 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-40260-9 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
CIC; 600.048 |
Approved |
no |
|
|
Call Number |
Admin @ si @ KWA2013 |
Serial |
2263 |
|
Permanent link to this record |
|
|
|
|
Author |
Naveen Onkarappa; Angel Sappa |
|
|
Title |
Laplacian Derivative based Regularization for Optical Flow Estimation in Driving Scenario |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
8048 |
Issue |
|
Pages |
483-490 |
|
|
Keywords |
Optical flow; regularization; Driver Assistance Systems; Performance Evaluation |
|
|
Abstract |
Existing state-of-the-art optical flow approaches, which are evaluated on standard datasets such as Middlebury, do not necessarily perform similarly when evaluated on driving scenarios. This drop in performance is due to several challenges arising in real scenarios during driving. In this direction, in this paper we propose a modification to the regularization term in a variational optical flow formulation that notably improves the results, especially in driving scenarios. The proposed modification consists in using the Laplacian derivatives of the flow components in the regularization term instead of their gradients. We show the improvements in results on a standard real-image-sequence dataset (KITTI). |
|
|
Address |
York; UK; August 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-40245-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS; 600.055; 601.215 |
Approved |
no |
|
|
Call Number |
Admin @ si @ OnS2013b |
Serial |
2244 |
|
Permanent link to this record |
|
|
|
|
Author |
Marcelo D. Pistarelli; Angel Sappa; Ricardo Toledo |
|
|
Title |
Multispectral Stereo Image Correspondence |
Type |
Conference Article |
|
Year |
2013 |
Publication |
15th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
8048 |
Issue |
|
Pages |
217-224 |
|
|
Keywords |
|
|
|
Abstract |
This paper presents a novel multispectral stereo image correspondence approach. It is evaluated using a stereo rig constructed with a visible-spectrum camera and a long-wave infrared-spectrum camera. The novelty of the proposed approach lies in the use of the Hough space as a correspondence search domain. In this way it avoids searching for correspondences in the original multispectral image domains, where information is poorly correlated, and a common domain is used instead. The proposed approach is intended for outdoor urban scenarios, where images contain a large number of edges. These edges are used as distinctive characteristics for matching in the Hough space. Experimental results are provided showing the validity of the proposed approach. |
|
|
Address |
York; UK; August 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Springer Berlin Heidelberg |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0302-9743 |
ISBN |
978-3-642-40245-6 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
ADAS; 600.055 |
Approved |
no |
|
|
Call Number |
Admin @ si @ PST2013 |
Serial |
2561 |
|
Permanent link to this record |
|
|
|
|
Author |
Eduardo Aguilar; Petia Radeva |
|
|
Title |
Class-Conditional Data Augmentation Applied to Image Classification |
Type |
Conference Article |
|
Year |
2019 |
Publication |
18th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
11679 |
Issue |
|
Pages |
182-192 |
|
|
Keywords |
CNNs; Data augmentation; Deep learning; Epistemic uncertainty; Image classification; Food recognition |
|
|
Abstract |
Image classification is widely researched in the literature, where models based on Convolutional Neural Networks (CNNs) have provided the best results. When data are insufficient, CNN models tend to overfit. To deal with this, traditional data augmentation techniques are often applied, such as affine transformations and color-balance adjustments. However, we argue that some data augmentation techniques may be more appropriate for some of the classes. In order to select the techniques that work best for a particular class, we propose to explore the epistemic uncertainty of the samples within each class. From our experiments, we observe that when data augmentation is applied class-conditionally, we improve the results in terms of accuracy and also reduce the overall epistemic uncertainty. To summarize, in this paper we propose a class-conditional data augmentation procedure that allows us to obtain better results and improve the robustness of the classification in the face of model uncertainty. |
|
|
Address |
Salerno; Italy; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ AgR2019 |
Serial |
3366 |
|
Permanent link to this record |
|
|
|
|
Author |
Estefania Talavera; Nicolai Petkov; Petia Radeva |
|
|
Title |
Unsupervised Routine Discovery in Egocentric Photo-Streams |
Type |
Conference Article |
|
Year |
2019 |
Publication |
18th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
11678 |
Issue |
|
Pages |
576-588 |
|
|
Keywords |
Routine discovery; Lifestyle; Egocentric vision; Behaviour analysis |
|
|
Abstract |
The routine of a person is defined by the occurrence of activities throughout different days, and can directly affect the person's health. In this work, we address the recognition of routine-related days. To do so, we rely on egocentric images, which are recorded by a wearable camera and allow the life of the user to be monitored from a first-person perspective. We propose an unsupervised model that identifies routine-related days, following an outlier detection approach. We test the proposed framework over a total of 72 days in the form of photo-streams, covering around 2 weeks of the life of 5 different camera wearers. Our model achieves an average of 76% accuracy and 68% weighted F-score over all the users. Thus, we show that our framework is able to recognise routine-related days and opens the door to the understanding of people's behaviour. |
|
|
Address |
Salerno; Italy; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
MILAB; no proj |
Approved |
no |
|
|
Call Number |
Admin @ si @ TPR2019a |
Serial |
3367 |
|
Permanent link to this record |
|
|
|
|
Author |
Javad Zolfaghari Bengar; Bogdan Raducanu; Joost Van de Weijer |
|
|
Title |
When Deep Learners Change Their Mind: Learning Dynamics for Active Learning |
Type |
Conference Article |
|
Year |
2021 |
Publication |
19th International Conference on Computer Analysis of Images and Patterns |
Abbreviated Journal |
|
|
|
Volume |
13052 |
Issue |
1 |
Pages |
403-413 |
|
|
Keywords |
|
|
|
Abstract |
Active learning aims to select for annotation the samples that yield the largest performance improvement for the learning algorithm. Many methods approach this problem by measuring the informativeness of samples based on the certainty of the network predictions for those samples. However, it is well known that neural networks are overly confident about their predictions and are therefore an untrustworthy source for assessing sample informativeness. In this paper, we propose a new informativeness-based active learning method. Our measure is derived from the learning dynamics of a neural network. More precisely, we track the label assignment of the unlabeled data pool during the training of the algorithm. We capture the learning dynamics with a metric called label-dispersion, which is low when the network consistently assigns the same label to a sample during training and high when the assigned label changes frequently. We show that label-dispersion is a promising predictor of the uncertainty of the network, and show on two benchmark datasets that an active learning algorithm based on label-dispersion obtains excellent results. |
|
|
Address |
September 2021 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CAIP |
|
|
Notes |
LAMP; OR |
Approved |
no |
|
|
Call Number |
Admin @ si @ ZRV2021 |
Serial |
3673 |
|
Permanent link to this record |
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria |
|
|
Title |
Incremental Subspace Learning for Cognitive Visual Processes |
Type |
Conference Article |
|
Year |
2007 |
Publication |
Advances in Brain, Vision and Artificial Intelligence, 2nd International Symposium |
Abbreviated Journal |
|
|
|
Volume |
4729 |
Issue |
|
Pages |
214–223 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Naples (Italy) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BVAI’07 |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaV2007b |
Serial |
901 |
|
Permanent link to this record |
|
|
|
|
Author |
Jaume Amores; N. Sebe; Petia Radeva |
|
|
Title |
Class-Specific Binary Correlograms for Object Recognition |
Type |
Conference Article |
|
Year |
2007 |
Publication |
British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Warwick (UK) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC’07 |
|
|
Notes |
ADAS;MILAB |
Approved |
no |
|
|
Call Number |
ADAS @ adas @ ASR2007a |
Serial |
923 |
|
Permanent link to this record |
|
|
|
|
Author |
Marçal Rusiñol; Lluis Gomez; A. Landman; M. Silva Constenla; Dimosthenis Karatzas |
|
|
Title |
Automatic Structured Text Reading for License Plates and Utility Meters |
Type |
Conference Article |
|
Year |
2019 |
Publication |
BMVC Workshop on Visual Artificial Intelligence and Entrepreneurship |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Reading text in images has attracted interest from computer vision researchers for many years. Our technology focuses on the extraction of structured text – such as serial numbers, machine readings, product codes, etc. – so that it is able to center its attention on just the relevant textual elements. It is conceived to work in an end-to-end fashion, bypassing any explicit text segmentation stage. In this paper we present two different industrial use cases where we have applied our automatic structured text reading technology. In the first one, we demonstrate outstanding performance when reading license plates compared to the current state of the art. In the second one, we present results of our solution for reading utility meters. The technology is commercialized by a recently created spin-off company, and both solutions are at different stages of integration with final clients. |
|
|
Address |
Cardiff; UK; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC-VAIE19 |
|
|
Notes |
DAG; 600.129 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGL2019 |
Serial |
3283 |
|
Permanent link to this record |
|
|
|
|
Author |
German Ros; J. Guerrero; Angel Sappa; Daniel Ponsa; Antonio Lopez |
|
|
Title |
Fast and Robust l1-averaging-based Pose Estimation for Driving Scenarios |
Type |
Conference Article |
|
Year |
2013 |
Publication |
24th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
SLAM |
|
|
Abstract |
Robust visual pose estimation is at the core of many computer vision applications, being fundamental for Visual SLAM and Visual Odometry problems. During the last decades, many approaches have been proposed to solve these problems, RANSAC being one of the most accepted and used. However, with the arrival of new challenges, such as large driving scenarios for autonomous vehicles, along with improvements in data-gathering frameworks, new issues must be considered. One of these issues is the capability of a technique to deal with very large amounts of data while meeting the real-time constraint. With this purpose in mind, we present a novel technique for the problem of robust camera-pose estimation that is more suitable for dealing with large amounts of data and, additionally, helps improve the results. The method is based on a combination of a very fast coarse-evaluation function and a robust ℓ1-averaging procedure. Such a scheme leads to high-quality results while taking considerably less time than RANSAC. Experimental results on the challenging KITTI Vision Benchmark Suite are provided, showing the validity of the proposed approach. |
|
|
Address |
Bristol; UK; September 2013 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
ADAS |
Approved |
no |
|
|
Call Number |
Admin @ si @ RGS2013b; ADAS @ adas @ |
Serial |
2274 |
|
Permanent link to this record |
|
|
|
|
Author |
Jon Almazan; Albert Gordo; Alicia Fornes; Ernest Valveny |
|
|
Title |
Efficient Exemplar Word Spotting |
Type |
Conference Article |
|
Year |
2012 |
Publication |
23rd British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
67.1-67.11 |
|
|
Keywords |
|
|
|
Abstract |
In this paper we propose an unsupervised, segmentation-free method for word spotting in document images. Documents are represented with a grid of HOG descriptors, and a sliding-window approach is used to locate the document regions most similar to the query. We use the exemplar SVM framework to produce a better representation of the query in an unsupervised way. Finally, the document descriptors are precomputed and compressed with Product Quantization. This offers two advantages: first, a large number of documents can be kept in RAM at the same time; second, the sliding window becomes significantly faster since distances between quantized HOG descriptors can be precomputed. Our results significantly outperform other segmentation-free methods in the literature, in accuracy as well as in speed and memory usage. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
1-901725-46-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
DAG |
Approved |
no |
|
|
Call Number |
DAG @ dag @ AGF2012 |
Serial |
1984 |
|
Permanent link to this record |
|
|
|
|
Author |
Naila Murray; Luca Marchesotti; Florent Perronnin |
|
|
Title |
Learning to Rank Images using Semantic and Aesthetic Labels |
Type |
Conference Article |
|
Year |
2012 |
Publication |
23rd British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
110.1-110.10 |
|
|
Keywords |
|
|
|
Abstract |
Most works on image retrieval from text queries have addressed the problem of retrieving semantically relevant images. However, the ability to assess the aesthetic quality of an image is an increasingly important differentiating factor for search engines. In this work, given a semantic query, we are interested in retrieving images which are semantically relevant and score highly in terms of aesthetics/visual quality. We use large-margin classifiers and rankers to learn statistical models capable of ordering images based on the aesthetic and semantic information. In particular, we compare two families of approaches: while the first one attempts to learn a single ranker which takes into account both semantic and aesthetic information, the second one learns separate semantic and aesthetic models. We carry out a quantitative and qualitative evaluation on a recently-published large-scale dataset and we show that the second family of techniques significantly outperforms the first one. |
|
|
Address |
Guildford; UK |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
1-901725-46-4 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
CIC |
Approved |
no |
|
|
Call Number |
Admin @ si @ MMP2012b |
Serial |
2027 |
|
Permanent link to this record |
|
|
|
|
Author |
Pedro Martins; Paulo Carvalho; Carlo Gatta |
|
|
Title |
Context Aware Keypoint Extraction for Robust Image Representation |
Type |
Conference Article |
|
Year |
2012 |
Publication |
23rd British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
100.1-100.12 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
MILAB |
Approved |
no |
|
|
Call Number |
Admin @ si @ MCG2012a |
Serial |
2140 |
|
Permanent link to this record |
|
|
|
|
Author |
A. Ruiz; Joost Van de Weijer; Xavier Binefa |
|
|
Title |
Regularized Multi-Concept MIL for weakly-supervised facial behavior categorization |
Type |
Conference Article |
|
Year |
2014 |
Publication |
25th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
We address the problem of estimating high-level semantic labels for videos of recorded people by means of analysing their facial expressions. This problem, which we refer to as facial behavior categorization, is a weakly-supervised learning problem where we do not have access to frame-by-frame facial gesture annotations; only weak labels at the video level are available. Therefore, the goal is to learn a set of discriminative expressions and how they determine the video weak-labels. Facial behavior categorization can be posed as a Multi-Instance-Learning (MIL) problem, and we propose a novel MIL method called Regularized Multi-Concept MIL (RMC-MIL) to solve it. In contrast to previous approaches applied in facial behavior analysis, RMC-MIL follows a Multi-Concept assumption which allows different facial expressions (concepts) to contribute differently to the video label. Moreover, to handle the high-dimensional nature of facial descriptors, RMC-MIL uses a discriminative approach to model the concepts and structured sparsity regularization to discard non-informative features. RMC-MIL is posed as a convex-constrained optimization problem where all the parameters are jointly learned using the Projected-Quasi-Newton method. In our experiments, we use two public datasets to show the advantages of the Regularized Multi-Concept approach and its improvement compared to existing MIL methods. RMC-MIL outperforms state-of-the-art results on the UNBC dataset for pain detection. |
|
|
Address |
Nottingham; UK; September 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
LAMP; CIC; 600.074; 600.079 |
Approved |
no |
|
|
Call Number |
Admin @ si @ RWB2014 |
Serial |
2508 |
|
Permanent link to this record |
|
|
|
|
Author |
Jiaolong Xu; Sebastian Ramos; David Vazquez; Antonio Lopez |
|
|
Title |
Incremental Domain Adaptation of Deformable Part-based Models |
Type |
Conference Article |
|
Year |
2014 |
Publication |
25th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Pedestrian Detection; Part-based models; Domain Adaptation |
|
|
Abstract |
Nowadays, classifiers play a core role in many computer vision tasks. The underlying assumption for learning classifiers is that the training set and the deployment environment (testing) follow the same probability distribution regarding the features used by the classifiers. However, in practice, there are different reasons that can break this constancy assumption. Accordingly, reusing existing classifiers by adapting them from the previous training environment (source domain) to the new testing one (target domain) is an approach with increasing acceptance in the computer vision community. In this paper we focus on the domain adaptation of deformable part-based models (DPMs) for object detection. In particular, we focus on a relatively unexplored scenario, i.e. incremental domain adaptation for object detection assuming weak labeling. Therefore, our algorithm is ready to improve existing source-oriented DPM-based detectors as soon as a small amount of labeled target-domain training data is available, and keeps improving as more such data arrives in a continuous fashion. To achieve this, we follow a multiple-instance-learning (MIL) paradigm that operates on an incremental per-image basis. As a proof of concept, we address the challenging scenario of adapting a DPM-based pedestrian detector trained with synthetic pedestrians to operate in real-world scenarios. The obtained results show that our incremental adaptive models obtain accuracy equal to that of the batch-learned models, while being more flexible for handling continuously arriving target-domain data. |
|
|
Address |
Nottingham; UK; September 2014 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
BMVA Press |
Place of Publication |
|
Editor |
Valstar, Michel and French, Andrew and Pridmore, Tony |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
ADAS; 600.057; 600.054; 600.076 |
Approved |
no |
|
|
Call Number |
XRV2014c; ADAS @ adas @ xrv2014c |
Serial |
2455 |
|
Permanent link to this record |