|
Records |
Links |
|
Author |
C. Alejandro Parraga |
|
|
Title |
Colours and Colour Vision: An Introductory Survey |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Perception |
Abbreviated Journal |
PER |
|
|
Volume |
46 |
Issue |
5 |
Pages |
640-641 |
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
NEUROBIT; not mentioned |
Approved |
no |
|
|
Call Number |
Par2017 |
Serial |
3101 |
|
Permanent link to this record |
|
|
|
|
Author |
Boris N. Oreshkin; Pau Rodriguez; Alexandre Lacoste |
|
|
Title |
TADAM: Task dependent adaptive metric for improved few-shot learning |
Type |
Conference Article |
|
Year |
2018 |
Publication |
32nd Annual Conference on Neural Information Processing Systems |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements of up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper, based on CIFAR100. |
|
|
Address |
Montreal; Canada; December 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
NIPS |
|
|
Notes |
ISE; 600.098; 600.119 |
Approved |
no |
|
|
Call Number |
Admin @ si @ ORL2018 |
Serial |
3140 |
|
|
|
|
|
Author |
Bonifaz Stuhr; Jurgen Brauer; Bernhard Schick; Jordi Gonzalez |
|
|
Title |
Masked Discriminators for Content-Consistent Unpaired Image-to-Image Translation |
Type |
Miscellaneous |
|
Year |
2023 |
Publication |
Arxiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
A common goal of unpaired image-to-image translation is to preserve content consistency between source images and translated images while mimicking the style of the target domain. Due to biases between the datasets of both domains, many methods suffer from inconsistencies caused by the translation process. Most approaches introduced to mitigate these inconsistencies do not constrain the discriminator, leading to an even more ill-posed training setup. Moreover, none of these approaches is designed for larger crop sizes. In this work, we show that masking the inputs of a global discriminator for both domains with a content-based mask is sufficient to reduce content inconsistencies significantly. However, this strategy leads to artifacts that can be traced back to the masking process. To reduce these artifacts, we introduce a local discriminator that operates on pairs of small crops selected with a similarity sampling strategy. Furthermore, we apply this sampling strategy to sample global input crops from the source and target dataset. In addition, we propose feature-attentive denormalization to selectively incorporate content-based statistics into the generator stream. In our experiments, we show that our method achieves state-of-the-art performance in photorealistic sim-to-real translation and weather translation and also performs well in day-to-night translation. Additionally, we propose the cKVD metric, which builds on the sKVD metric and enables the examination of translation quality at the class or category level. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ SBS2023 |
Serial |
3863 |
|
|
|
|
|
Author |
Bonifaz Stuhr |
|
|
Title |
Towards Unsupervised Representation Learning: Learning, Evaluating and Transferring Visual Representations |
Type |
Book Whole |
|
Year |
2023 |
Publication |
PhD Thesis, Universitat Autonoma de Barcelona-CVC |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Unsupervised representation learning aims at finding methods that learn representations from data without annotation-based signals. Abstaining from annotations not only leads to economic benefits but may – and to some extent already does – result in advantages regarding the representation’s structure, robustness, and generalizability to different tasks. In the long run, unsupervised methods are expected to surpass their supervised counterparts due to the reduction of human intervention and the inherently more general setup that does not bias the optimization towards an objective originating from specific annotation-based signals. While major advantages of unsupervised representation learning have recently been observed in natural language processing, supervised methods still dominate in vision domains for most tasks. In this dissertation, we contribute to the field of unsupervised (visual) representation learning from three perspectives: (i) Learning representations: We design unsupervised, backpropagation-free Convolutional Self-Organizing Neural Networks (CSNNs) that utilize self-organization- and Hebbian-based learning rules to learn convolutional kernels and masks to achieve deeper backpropagation-free models. Thereby, we observe that backpropagation-based and -free methods can suffer from an objective function mismatch between the unsupervised pretext task and the target task, which can decrease performance on the target task. (ii) Evaluating representations: We build upon the widely used (non-)linear evaluation protocol to define pretext- and target-objective-independent metrics for measuring the objective function mismatch. With these metrics, we evaluate various pretext and target tasks and disclose dependencies of the objective function mismatch with respect to different parts of the training and model setup. (iii) Transferring representations: We contribute CARLANE, the first 3-way sim-to-real domain adaptation benchmark for 2D lane detection. We adopt several well-known unsupervised domain adaptation methods as baselines and propose a method based on prototypical cross-domain self-supervised learning. Finally, we focus on pixel-based unsupervised domain adaptation and contribute a content-consistent unpaired image-to-image translation method that utilizes masks, global and local discriminators, and similarity sampling to mitigate content inconsistencies, as well as feature-attentive denormalization to fuse content-based statistics into the generator stream. In addition, we propose the cKVD metric to incorporate class-specific content inconsistencies into perceptual metrics for measuring translation quality. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
Ph.D. thesis |
|
|
Publisher |
IMPRIA |
Place of Publication |
|
Editor |
Jordi Gonzalez; Jurgen Brauer |
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
978-84-126409-6-0 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
ISE |
Approved |
no |
|
|
Call Number |
Admin @ si @ Stu2023 |
Serial |
3966 |
|
|
|
|
|
Author |
Bojana Gajic; Ramon Baldrich |
|
|
Title |
Cross-domain fashion image retrieval |
Type |
Conference Article |
|
Year |
2018 |
Publication |
CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
19500-19502 |
|
|
Keywords |
|
|
|
Abstract |
Cross-domain image retrieval is a challenging task that involves matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval: matching an image of a fashion item taken by a user to images of the same item taken under controlled conditions, usually by a professional photographer. In this setting, the products seen at training and test time differ, and we use a triplet loss to train the network. We stress the importance of properly training a simple architecture, as well as adapting general models to the specific task. |
|
|
Address |
Salt Lake City, USA; 22 June 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
CIC; 600.087 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3709 |
|
|
|
|
|
Author |
Bojana Gajic; Eduard Vazquez; Ramon Baldrich |
|
|
Title |
Evaluation of Deep Image Descriptors for Texture Retrieval |
Type |
Conference Article |
|
Year |
2017 |
Publication |
Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
251-257 |
|
|
Keywords |
Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation |
|
|
Abstract |
The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification, and the topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best at describing a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results suggest that, whereas the last convolutional layer is a good choice for a specific classification task, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers perform best, combining basic filters, as in the primary visual cortex, with a degree of higher-level information for describing more complex textures. |
|
|
Address |
Porto, Portugal; 27 February – 1 March 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISIGRAPP |
|
|
Notes |
CIC; 600.087 |
Approved |
no |
|
|
Call Number |
Admin @ si @ |
Serial |
3710 |
|
|
|
|
|
Author |
Bojana Gajic; Ariel Amato; Ramon Baldrich; Joost Van de Weijer; Carlo Gatta |
|
|
Title |
Area Under the ROC Curve Maximization for Metric Learning |
Type |
Conference Article |
|
Year |
2022 |
Publication |
CVPR 2022 Workshop on Efficient Deep Learning for Computer Vision (ECV 2022, 5th Edition) |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
Training; Computer vision; Conferences; Area measurement; Benchmark testing; Pattern recognition |
|
|
Abstract |
Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (a typical performance measure of recognition systems) can induce an implicit ranking suitable for retrieval problems. This hypothesis is supported by previous work proving that a curve dominates in ROC space if and only if it dominates in Precision-Recall space. To test this hypothesis, we design and maximize an approximated, differentiable relaxation of the area under the ROC curve. The proposed AUC loss achieves state-of-the-art results on two large-scale retrieval benchmark datasets (Stanford Online Products and DeepFashion In-Shop). Moreover, the AUC loss achieves performance comparable to more complex, domain-specific, state-of-the-art methods for vehicle re-identification. |
|
|
Address |
New Orleans, USA; 20 June 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CVPRW |
|
|
Notes |
CIC; LAMP |
Approved |
no |
|
|
Call Number |
Admin @ si @ GAB2022 |
Serial |
3700 |
|
|
|
|
|
Author |
Bojana Gajic; Ariel Amato; Ramon Baldrich; Carlo Gatta |
|
|
Title |
Bag of Negatives for Siamese Architectures |
Type |
Conference Article |
|
Year |
2019 |
Publication |
30th British Machine Vision Conference |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
Training a Siamese architecture for re-identification with a large number of identities is a challenging task due to the difficulty of finding relevant negative samples efficiently. In this work we present Bag of Negatives (BoN), a method for accelerated and improved training of Siamese networks that scales well on datasets with a very large number of identities. BoN is an efficient and loss-independent method, able to select a bag of high quality negatives, based on a novel online hashing strategy. |
|
|
Address |
Cardiff; United Kingdom; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
BMVC |
|
|
Notes |
CIC; 600.140; 600.118 |
Approved |
no |
|
|
Call Number |
Admin @ si @ GAB2019b |
Serial |
3263 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria; D. Gatica-Perez |
|
|
Title |
You are Fired! Nonverbal Role Analysis in Competitive Meetings |
Type |
Conference Article |
|
Year |
2009 |
Publication |
IEEE International Conference on Audio, Speech and Signal Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
1949–1952 |
|
|
Keywords |
|
|
|
Abstract |
This paper addresses the problem of social interaction analysis in competitive meetings using nonverbal cues. For our study, we used “The Apprentice” reality TV show, which features a competition for a real, highly paid corporate job. Our analysis is centered on two tasks regarding a person's role in a meeting: predicting the person with the highest status and predicting the fired candidates. The current study was carried out using nonverbal audio cues. Results obtained from the analysis of a full season of the show, representing around 90 minutes of audio data, are very promising (up to 85.7% accuracy in the first case and up to 92.8% in the second). Our approach is based only on the nonverbal interaction dynamics during the meeting, without relying on the spoken words. |
|
|
Address |
Taipei, Taiwan |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
1520-6149 |
ISBN |
978-1-4244-2353-8 |
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICASSP |
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RVG2009 |
Serial |
1154 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria; Ales Leonardis |
|
|
Title |
Online pattern recognition and machine learning techniques for computer-vision: Theory and applications |
Type |
Journal Article |
|
Year |
2010 |
Publication |
Image and Vision Computing |
Abbreviated Journal |
IMAVIS |
|
|
Volume |
28 |
Issue |
7 |
Pages |
1063–1064 |
|
|
Keywords |
|
|
|
Abstract |
(Editorial for the Special Issue on online pattern recognition and machine learning techniques)
In real life, visual learning is supposed to be a continuous process. This paradigm has also found its way into artificial vision systems. There is an increasing trend in pattern recognition represented by online learning approaches, which aim at continuously updating the data representation as new information arrives. Starting with a minimal dataset, the initial knowledge is expanded by incorporating incoming instances, which may not have been previously available or foreseen at the system’s design stage. An interesting characteristic of this strategy is that the training and testing phases take place simultaneously. Given the increasing interest in this subject, the aim of this special issue is to be a landmark event in the development of online learning techniques and their applications, with the hope that it will capture the interest of a wider audience and attract even more researchers. We received 19 contributions, of which 9 were accepted for publication after being subjected to the usual peer-review process. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
Elsevier |
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
0262-8856 |
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RVL2010 |
Serial |
1280 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria |
|
|
Title |
Real-Time Face Tracking for Context-Aware Computing |
Type |
Miscellaneous |
|
Year |
2005 |
Publication |
8th Catalan Conference on Artificial Intelligence (CCIA 2005) (published in |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaV2005a |
Serial |
560 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria |
|
|
Title |
Real-Time Face Tracking for Context-Aware Computing |
Type |
Book Chapter |
|
Year |
2005 |
Publication |
Artificial Intelligence Research and Development, IOS Press, 91–98 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Amsterdam |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaV2005b |
Serial |
616 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria |
|
|
Title |
A Robust Particle Filter-based Face Tracker Using a Combination of Color and Geometric Information |
Type |
Report |
|
Year |
2005 |
Publication |
CVC Technical Report #90 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
CVC (UAB) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaV2005c |
Serial |
617 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria |
|
|
Title |
Aprendiendo a Aprender: de Maquinas Listas a Maquinas Inteligentes |
Type |
Miscellaneous |
|
Year |
2006 |
Publication |
Campus Multidisciplinario en Percepcion e Inteligencia (Antonio Fernandez-Caballero et al., eds.), 1: 34–45 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
Albacete (Spain) |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaV2006b |
Serial |
714 |
|
|
|
|
|
Author |
Bogdan Raducanu; Jordi Vitria |
|
|
Title |
A Robust Particle Filter-Based Face Tracker Using Combination of Color and Geometric Information |
Type |
Book Chapter |
|
Year |
2006 |
Publication |
International Conference on Image Analysis and Recognition (ICIAR'06), LNCS 4141 (A. Campilho et al., eds.), 1: 922–933 |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
|
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
OR;MV |
Approved |
no |
|
|
Call Number |
BCNPCL @ bcnpcl @ RaV2006c |
Serial |
715 |
|