Records | |||||
Author | Arnau Baro; Carles Badal; Pau Torras; Alicia Fornes | ||||
Title | Handwritten Historical Music Recognition through Sequence-to-Sequence with Attention Mechanism | Type | Conference Article | ||
Year | 2022 | Publication | 3rd International Workshop on Reading Music Systems (WoRMS2021) | Abbreviated Journal | |
Volume | Issue | Pages | 55-59 | ||
Keywords | Optical Music Recognition; Digits; Image Classification | ||||
Abstract | Despite decades of research in Optical Music Recognition (OMR), the recognition of old handwritten music scores remains a challenge because of the variabilities in the handwriting styles, paper degradation, lack of standard notation, etc. Therefore, the research in OMR systems adapted to the particularities of old manuscripts is crucial to accelerate the conversion of music scores existing in archives into digital libraries, fostering the dissemination and preservation of our music heritage. In this paper we explore the adaptation of sequence-to-sequence models with attention mechanism (used in translation and handwritten text recognition) and the generation of specific synthetic data for recognizing old music scores. The experimental validation demonstrates that our approach is promising, especially when compared with long short-term memory neural networks. | ||||
Address | July 23, 2021, Alicante (Spain) | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WoRMS | ||
Notes | DAG; 600.121; 600.162; 602.230; 600.140 | Approved | no | ||
Call Number | Admin @ si @ BBT2022 | Serial | 3734 | ||
Permanent link to this record | |||||
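The record above adapts a sequence-to-sequence model with an attention mechanism. As a minimal, generic sketch of additive (Bahdanau-style) attention over encoder states (not the authors' implementation; the weight matrices here are random and purely illustrative):

```python
import numpy as np

def attention_weights(decoder_state, encoder_states, Wq, Wk, v):
    """Additive attention: score each encoder state against the current
    decoder state, then normalise the scores with a softmax."""
    # scores[i] = v . tanh(Wq @ decoder_state + Wk @ encoder_states[i])
    scores = np.array([
        v @ np.tanh(Wq @ decoder_state + Wk @ h) for h in encoder_states
    ])
    e = np.exp(scores - scores.max())      # numerically stable softmax
    weights = e / e.sum()
    context = weights @ encoder_states     # weighted sum of encoder states
    return weights, context

rng = np.random.default_rng(0)
d = 4
enc = rng.standard_normal((6, d))          # 6 encoder time steps
dec = rng.standard_normal(d)               # current decoder hidden state
Wq, Wk = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)
w, ctx = attention_weights(dec, enc, Wq, Wk, v)
```

The context vector is the attention-weighted sum of encoder states that the decoder consumes at each output step.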
Author | Giuseppe De Gregorio; Sanket Biswas; Mohamed Ali Souibgui; Asma Bensalah; Josep Llados; Alicia Fornes; Angelo Marcelli | ||||
Title | A Few Shot Multi-representation Approach for N-Gram Spotting in Historical Manuscripts | Type | Conference Article | ||
Year | 2022 | Publication | Frontiers in Handwriting Recognition. International Conference on Frontiers in Handwriting Recognition (ICFHR2022) | Abbreviated Journal | |
Volume | 13639 | Issue | Pages | 3-12 | |
Keywords | N-gram spotting; Few-shot learning; Multimodal understanding; Historical handwritten collections | ||||
Abstract | Despite recent advances in automatic text recognition, performance remains moderate when it comes to historical manuscripts. This is mainly because of the scarcity of labelled data available to train the data-hungry Handwritten Text Recognition (HTR) models. Keyword Spotting Systems (KWS) provide a valid alternative to HTR due to their reduced error rates, but they are usually limited to a closed reference vocabulary. In this paper, we propose a few-shot learning paradigm for spotting sequences of a few characters (N-grams) that requires only a small amount of labelled training data. We show that recognition of important n-grams can reduce the system’s dependency on vocabulary: an out-of-vocabulary (OOV) word in an input handwritten line image can be represented as a sequence of n-grams that belong to the lexicon. An extensive experimental evaluation of our proposed multi-representation approach was carried out on a subset of Bentham’s historical manuscript collections, obtaining promising results in this direction. | ||||
Address | December 04 – 07, 2022; Hyderabad, India | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICFHR | ||
Notes | DAG; 600.121; 600.162; 602.230; 600.140 | Approved | no | ||
Call Number | Admin @ si @ GBS2022 | Serial | 3733 | ||
Permanent link to this record | |||||
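The abstract above spots sequences of a few characters (n-grams) so that out-of-vocabulary words can still be handled through in-lexicon n-grams. A tiny, generic illustration of that decomposition (the lexicon below is hypothetical, not from the paper):

```python
def char_ngrams(word, n):
    """All contiguous character n-grams of a word."""
    return [word[i:i + n] for i in range(len(word) - n + 1)]

# An out-of-vocabulary word can still be covered by known n-grams:
lexicon_ngrams = {"tion", "hand", "writ", "ing"}
word = "handwriting"
hits = [g for g in char_ngrams(word, 4) if g in lexicon_ngrams]
```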
Author | Giacomo Magnifico; Beata Megyesi; Mohamed Ali Souibgui; Jialuo Chen; Alicia Fornes | ||||
Title | Lost in Transcription of Graphic Signs in Ciphers | Type | Conference Article | ||
Year | 2022 | Publication | International Conference on Historical Cryptology (HistoCrypt 2022) | Abbreviated Journal | |
Volume | Issue | Pages | 153-158 | ||
Keywords | transcription of ciphers; hand-written text recognition of symbols; graphic signs | ||||
Abstract | Hand-written Text Recognition techniques, which aim to automatically identify and transcribe hand-written text, have been applied to historical sources, including ciphers. In this paper, we compare the performance of two machine learning architectures: an unsupervised method based on clustering and a deep learning method with few-shot learning. Both models are tested on seen and unseen data from historical ciphers with different symbol sets consisting of various types of graphic signs. We compare the models and highlight their differences in performance, along with their advantages and shortcomings. | ||||
Address | Amsterdam, Netherlands, June 20-22, 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | HistoCrypt | |
Notes | DAG; 600.121; 600.162; 602.230; 600.140 | Approved | no | ||
Call Number | Admin @ si @ MBS2022 | Serial | 3731 | ||
Permanent link to this record | |||||
Author | Mohamed Ali Souibgui; Sanket Biswas; Sana Khamekhem Jemni; Yousri Kessentini; Alicia Fornes; Josep Llados; Umapada Pal | ||||
Title | DocEnTr: An End-to-End Document Image Enhancement Transformer | Type | Conference Article | ||
Year | 2022 | Publication | 26th International Conference on Pattern Recognition | Abbreviated Journal | |
Volume | Issue | Pages | 1699-1705 | ||
Keywords | Degradation; Head; Optical character recognition; Self-supervised learning; Benchmark testing; Transformers; Magnetic heads | ||||
Abstract | Document images can be affected by many degradation scenarios, which cause recognition and processing difficulties. In this age of digitization, it is important to denoise them for proper usage. To address this challenge, we present a new encoder-decoder architecture based on vision transformers to enhance both machine-printed and handwritten document images, in an end-to-end fashion. The encoder operates directly on the pixel patches with their positional information without the use of any convolutional layers, while the decoder reconstructs a clean image from the encoded patches. Conducted experiments show the superiority of the proposed model compared to state-of-the-art methods on several DIBCO benchmarks. Code and models will be publicly available at: https://github.com/dali92002/DocEnTR | ||||
Address | August 21-25, 2022 , Montréal Québec | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPR | ||
Notes | DAG; 600.121; 600.162; 602.230; 600.140 | Approved | no | ||
Call Number | Admin @ si @ SBJ2022 | Serial | 3730 | ||
Permanent link to this record | |||||
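The DocEnTr encoder described above operates directly on pixel patches with positional information. A minimal sketch of that patchify step for a single-channel image (a simplification; the released code at the repository above is the authoritative version):

```python
import numpy as np

def patchify(img, p):
    """Split an HxW image into non-overlapping p x p patches, flattened
    row-major, plus their positional indices (as a ViT encoder expects)."""
    H, W = img.shape
    assert H % p == 0 and W % p == 0
    patches = (img.reshape(H // p, p, W // p, p)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, p * p))
    positions = np.arange(patches.shape[0])
    return patches, positions

img = np.arange(16.0).reshape(4, 4)    # toy 4x4 "document" image
patches, pos = patchify(img, 2)
```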
Author | Jose Elias Yauri; Aura Hernandez-Sabate; Pau Folch; Debora Gil | ||||
Title | Mental Workload Detection Based on EEG Analysis | Type | Conference Article | ||
Year | 2021 | Publication | Artificial Intelligence Research and Development. Proceedings of the 23rd International Conference of the Catalan Association for Artificial Intelligence. | Abbreviated Journal |
Volume | 339 | Issue | Pages | 268-277 | |
Keywords | Cognitive states; Mental workload; EEG analysis; Neural Networks. | ||||
Abstract | The study of mental workload is essential for human work efficiency and health, and for avoiding accidents, since workload compromises both performance and awareness. Although workload has been widely studied using several physiological measures, minimising the sensor network as much as possible remains both a challenge and a requirement. Electroencephalogram (EEG) signals have shown a high correlation with specific cognitive and mental states such as workload. However, there is not enough evidence in the literature to validate how well models generalize to new subjects performing tasks of a workload similar to those included during the model’s training. In this paper we propose a binary neural network to classify EEG features across different mental workloads. Two workloads, low and medium, are induced using two variants of the N-Back Test. The proposed model was validated on a dataset collected from 16 subjects and showed a high level of generalization capability: the model reported an average recall of 81.81% in a leave-one-subject-out evaluation. | ||||
Address | Virtual; October 20-22 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CCIA | ||
Notes | IAM; 600.139; 600.118; 600.145 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3723 | ||
Permanent link to this record | |||||
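The abstract above reports a leave-one-subject-out evaluation over 16 subjects. A generic sketch of that splitting protocol (not tied to the paper's EEG pipeline):

```python
def leave_one_subject_out(subject_ids):
    """Yield (held_out, train_idx, test_idx) triples, holding out all
    trials of one subject per fold."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# Two EEG trials per subject, three subjects:
ids = ["s1", "s1", "s2", "s2", "s3", "s3"]
folds = list(leave_one_subject_out(ids))
```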
Author | Bojana Gajic; Eduard Vazquez; Ramon Baldrich | ||||
Title | Evaluation of Deep Image Descriptors for Texture Retrieval | Type | Conference Article | ||
Year | 2017 | Publication | Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2017) | Abbreviated Journal | |
Volume | Issue | Pages | 251-257 | ||
Keywords | Texture Representation; Texture Retrieval; Convolutional Neural Networks; Psychophysical Evaluation | ||||
Abstract | The increasing complexity learnt in the layers of a Convolutional Neural Network has proven to be of great help for the task of classification. The topic has received great attention in recently published literature. Nonetheless, just a handful of works study low-level representations, commonly associated with the lower layers. In this paper, we explore recent findings which conclude, counterintuitively, that the last layer of the VGG convolutional network is the best at describing a low-level property such as texture. To shed some light on this issue, we propose a psychophysical experiment to evaluate the adequacy of different layers of the VGG network for texture retrieval. The results obtained suggest that, whereas the last convolutional layer is a good choice for the specific task of classification, it might not be the best choice as a texture descriptor, showing very poor performance on texture retrieval. Intermediate layers show the best performance, combining basic filters, as in the primary visual cortex, with a degree of higher-level information that describes more complex textures. | ||||
Address | Porto, Portugal; 27 February – 1 March 2017 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | CIC; 600.087 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3710 | ||
Permanent link to this record | |||||
Author | Bojana Gajic; Ramon Baldrich | ||||
Title | Cross-domain fashion image retrieval | Type | Conference Article | ||
Year | 2018 | Publication | CVPR 2018 Workshop on Women in Computer Vision (WiCV 2018, 4th Edition) | Abbreviated Journal | |
Volume | Issue | Pages | 19500-19502 | ||
Keywords | |||||
Abstract | Cross-domain image retrieval is a challenging task that implies matching images from one domain to their pairs from another domain. In this paper we focus on fashion image retrieval, which involves matching an image of a fashion item taken by a user to images of the same item taken under controlled conditions, usually by a professional photographer. In this setting, the products seen at training and test time differ, and we use a triplet loss to train the network. We stress the importance of properly training a simple architecture, as well as of adapting general models to the specific task. | ||||
Address | Salt Lake City, USA; 22 June 2018 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | CIC; 600.087 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3709 | ||
Permanent link to this record | |||||
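The record above trains the retrieval network with a triplet loss. A minimal sketch of the standard triplet margin loss on embedding vectors (the margin value is illustrative, not the paper's setting):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances:
    pull the positive closer than the negative by at least `margin`."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])   # anchor: user photo embedding
p = np.array([0.1, 0.0])   # same product, catalogue photo
n = np.array([1.0, 0.0])   # different product
loss = triplet_loss(a, p, n)
```

Once the negative is further from the anchor than the positive by more than the margin, the loss is zero and that triplet no longer drives training.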
Author | Gemma Rotger; Francesc Moreno-Noguer; Felipe Lumbreras; Antonio Agudo | ||||
Title | Single view facial hair 3D reconstruction | Type | Conference Article | ||
Year | 2019 | Publication | 9th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 11867 | Issue | Pages | 423-436 | |
Keywords | 3D Vision; Shape Reconstruction; Facial Hair Modeling | ||||
Abstract | In this work, we introduce a novel energy-based framework that addresses the challenging problem of 3D reconstruction of facial hair from a single RGB image. To this end, we identify hair pixels over the image via texture analysis and then determine individual hair fibers that are modeled by means of a parametric hair model based on 3D helixes. We propose to minimize an energy composed of several terms, in order to adapt the hair parameters that better fit the image detections. The final hairs respond to the resulting fibers after a post-processing step where we encourage further realism. The resulting approach generates realistic facial hair fibers from solely an RGB image without assuming any training data nor user interaction. We provide an experimental evaluation on real-world pictures where several facial hair styles and image conditions are observed, showing consistent results and establishing a comparison with respect to competing approaches. | ||||
Address | Madrid; July 2019 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IbPRIA | ||
Notes | MSIAU; 600.086; 600.130; 600.122 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3707 | ||
Permanent link to this record | |||||
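The abstract above models hair fibers with a parametric model based on 3D helixes. A minimal sketch of such a parametrization (the parameters are illustrative; the paper fits them by energy minimization):

```python
import numpy as np

def helix_points(radius, pitch, turns, n=100):
    """Sample a 3D helix: x = r cos(t), y = r sin(t), z = pitch * t / 2pi,
    so z advances by `pitch` per full turn."""
    t = np.linspace(0.0, 2 * np.pi * turns, n)
    return np.stack([radius * np.cos(t),
                     radius * np.sin(t),
                     pitch * t / (2 * np.pi)], axis=1)

pts = helix_points(radius=1.0, pitch=0.5, turns=2, n=5)
```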
Author | Alex Gomez-Villa; Bartlomiej Twardowski; Lu Yu; Andrew Bagdanov; Joost Van de Weijer | ||||
Title | Continually Learning Self-Supervised Representations With Projected Functional Regularization | Type | Conference Article | ||
Year | 2022 | Publication | CVPR 2022 Workshop on Continual Learning (CLVision, 3rd Edition) | Abbreviated Journal | |
Volume | Issue | Pages | 3866-3876 | ||
Keywords | Computer vision; Conferences; Self-supervised learning; Image representation; Pattern recognition | ||||
Abstract | Recent self-supervised learning methods are able to learn high-quality image representations and are closing the gap with supervised approaches. However, these methods are unable to acquire new knowledge incrementally – they are, in fact, mostly used only as a pre-training phase over IID data. In this work we investigate self-supervised methods in continual learning regimes without any replay mechanism. We show that naive functional regularization, also known as feature distillation, leads to lower plasticity and limits continual learning performance. Instead, we propose Projected Functional Regularization, in which a separate temporal projection network ensures that the newly learned feature space preserves information of the previous one, while at the same time allowing for the learning of new features. This prevents forgetting while maintaining the plasticity of the learner. Comparison with other incremental learning approaches applied to self-supervision demonstrates that our method obtains competitive performance in different scenarios and on multiple datasets. | ||||
Address | New Orleans, USA; 20 June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | LAMP; 600.147; 600.120 | Approved | no ||
Call Number | Admin @ si @ GTY2022 | Serial | 3704 | ||
Permanent link to this record | |||||
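Projected Functional Regularization, as described above, passes new features through a temporal projection network before comparing them to the frozen old features. A minimal sketch with a linear projector (the paper's projector architecture and loss details may differ):

```python
import numpy as np

def projected_feature_distillation(feat_new, feat_old, projector):
    """Mean squared error between projected new features and (frozen)
    old features; the learnable projector g absorbs representation drift,
    so plasticity is preserved while information is retained."""
    projected = feat_new @ projector      # linear projector g
    return float(np.mean((projected - feat_old) ** 2))

rng = np.random.default_rng(1)
feat_old = rng.standard_normal((8, 4))   # features from the old model
feat_new = feat_old.copy()               # no drift yet
g = np.eye(4)                            # identity projector
loss = projected_feature_distillation(feat_new, feat_old, g)
```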
Author | Javad Zolfaghari Bengar; Joost Van de Weijer; Laura Lopez-Fuentes; Bogdan Raducanu | ||||
Title | Class-Balanced Active Learning for Image Classification | Type | Conference Article | ||
Year | 2022 | Publication | Winter Conference on Applications of Computer Vision | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Active learning aims to reduce the labeling effort required to train algorithms by learning an acquisition function that selects the most relevant data for which a label should be requested from a large unlabeled data pool. Active learning is generally studied on balanced datasets, where an equal number of images per class is available. However, real-world datasets suffer from severely imbalanced classes, the so-called long-tail distribution. We argue that this further complicates the active learning process, since the imbalanced data pool can result in suboptimal classifiers. To address this problem in the context of active learning, we proposed a general optimization framework that explicitly takes class-balancing into account. Results on three datasets showed that the method is general (it can be combined with most existing active learning algorithms) and can be effectively applied to boost the performance of both informative and representative-based active learning methods. In addition, we showed that our method also generally results in a performance gain on balanced datasets. | ||||
Address | Virtual; Waikoloa; Hawai; USA; January 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | WACV | ||
Notes | LAMP; 602.200; 600.147; 600.120 | Approved | no | ||
Call Number | Admin @ si @ ZWL2022 | Serial | 3703 | ||
Permanent link to this record | |||||
Author | Bojana Gajic; Ariel Amato; Ramon Baldrich; Joost Van de Weijer; Carlo Gatta | ||||
Title | Area Under the ROC Curve Maximization for Metric Learning | Type | Conference Article | ||
Year | 2022 | Publication | CVPR 2022 Workshop on Efficient Deep Learning for Computer Vision (ECV 2022, 5th Edition) | Abbreviated Journal |
Volume | Issue | Pages | |||
Keywords | Training; Computer vision; Conferences; Area measurement; Benchmark testing; Pattern recognition | ||||
Abstract | Most popular metric learning losses have no direct relation with the evaluation metrics that are subsequently applied to evaluate their performance. We hypothesize that training a metric learning model by maximizing the area under the ROC curve (which is a typical performance measure of recognition systems) can induce an implicit ranking suitable for retrieval problems. This hypothesis is supported by previous work that proved that a curve dominates in ROC space if and only if it dominates in Precision-Recall space. To test this hypothesis, we design and maximize an approximated, differentiable relaxation of the area under the ROC curve. The proposed AUC loss achieves state-of-the-art results on two large-scale retrieval benchmark datasets (Stanford Online Products and DeepFashion In-Shop). Moreover, the AUC loss achieves comparable performance to more complex, domain-specific, state-of-the-art methods for vehicle re-identification. | ||||
Address | New Orleans, USA; 20 June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | CIC; LAMP | Approved | no ||
Call Number | Admin @ si @ GAB2022 | Serial | 3700 | ||
Permanent link to this record | |||||
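The AUC loss described above maximizes a differentiable relaxation of the area under the ROC curve. A minimal sketch in which the pairwise step function is relaxed to a sigmoid (the paper's exact relaxation may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smooth_auc(pos_scores, neg_scores, tau=0.1):
    """Differentiable AUC surrogate: the fraction of (positive, negative)
    pairs ranked correctly, with the 0/1 step relaxed to a sigmoid whose
    sharpness is controlled by temperature tau."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return float(sigmoid(diff / tau).mean())

pos = np.array([2.0, 1.5])    # scores of matching pairs
neg = np.array([-1.0, -0.5])  # scores of non-matching pairs
auc = smooth_auc(pos, neg)
```

Maximizing this surrogate by gradient descent pushes every positive score above every negative score, inducing the ranking used at retrieval time.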
Author | Josep Brugues Pujolras; Lluis Gomez; Dimosthenis Karatzas | ||||
Title | A Multilingual Approach to Scene Text Visual Question Answering | Type | Conference Article | ||
Year | 2022 | Publication | Document Analysis Systems. 15th IAPR International Workshop (DAS2022) | Abbreviated Journal |
Volume | Issue | Pages | 65-79 | ||
Keywords | Scene text; Visual question answering; Multilingual word embeddings; Vision and language; Deep learning | ||||
Abstract | Scene Text Visual Question Answering (ST-VQA) has recently emerged as a hot research topic in Computer Vision. Current ST-VQA models have great potential for many types of applications but lack the ability to perform well on more than one language at a time due to the lack of multilingual data, as well as the use of monolingual word embeddings for training. In this work, we explore the possibility of obtaining bilingual and multilingual VQA models. In that regard, we use an already established VQA model that uses monolingual word embeddings as part of its pipeline and substitute them with FastText and BPEmb multilingual word embeddings that have been aligned to English. Our experiments demonstrate that it is possible to obtain bilingual and multilingual VQA models with a minimal loss in performance in languages not used during training, as well as a multilingual model trained on multiple languages that matches the performance of the respective monolingual baselines. | ||||
Address | La Rochelle, France; May 22–25, 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 611.004; 600.155; 601.002 | Approved | no | ||
Call Number | Admin @ si @ BGK2022b | Serial | 3695 | ||
Permanent link to this record | |||||
Author | Adria Molina; Lluis Gomez; Oriol Ramos Terrades; Josep Llados | ||||
Title | A Generic Image Retrieval Method for Date Estimation of Historical Document Collections | Type | Conference Article | ||
Year | 2022 | Publication | Document Analysis Systems. 15th IAPR International Workshop (DAS2022) | Abbreviated Journal |
Volume | 13237 | Issue | Pages | 583–597 | |
Keywords | Date estimation; Document retrieval; Image retrieval; Ranking loss; Smooth-nDCG | ||||
Abstract | Date estimation of historical document images is a challenging problem, with several contributions in the literature that lack the ability to generalize from one dataset to others. This paper presents a robust date estimation system based on a retrieval approach that generalizes well across heterogeneous collections. We use a ranking loss function named smooth-nDCG to train a Convolutional Neural Network that learns an ordering of documents for each problem. One of the main uses of the presented approach is as a tool for historical contextual retrieval: scholars can perform comparative analysis of historical images from large datasets in terms of the period in which they were produced. We provide experimental evaluation on different types of documents from real datasets of manuscript and newspaper images. | ||||
Address | La Rochelle, France; May 22–25, 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DAS | ||
Notes | DAG; 600.140; 600.121 | Approved | no | ||
Call Number | Admin @ si @ MGR2022 | Serial | 3694 | ||
Permanent link to this record | |||||
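The approach above trains with a smooth (differentiable) relaxation of nDCG. As background, a minimal implementation of the hard nDCG metric itself (the smooth variant used in the paper replaces the discrete ranks with differentiable approximations):

```python
import numpy as np

def dcg(relevances):
    """Discounted cumulative gain: sum of rel_i / log2(i + 1) over
    1-indexed ranks, so early positions count the most."""
    ranks = np.arange(1, len(relevances) + 1)
    return float(np.sum(relevances / np.log2(ranks + 1)))

def ndcg(relevances):
    """DCG of the given ordering, normalised by the ideal ordering."""
    ideal = dcg(np.sort(relevances)[::-1])
    return dcg(np.array(relevances, dtype=float)) / ideal

perfect = ndcg([3.0, 2.0, 1.0, 0.0])   # already ideally ordered
swapped = ndcg([0.0, 2.0, 1.0, 3.0])   # best item demoted to last
```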
Author | Mohamed Ramzy Ibrahim; Robert Benavente; Felipe Lumbreras; Daniel Ponsa | ||||
Title | 3DRRDB: Super Resolution of Multiple Remote Sensing Images using 3D Residual in Residual Dense Blocks | Type | Conference Article | ||
Year | 2022 | Publication | CVPR 2022 Workshop on IEEE Perception Beyond the Visible Spectrum workshop series (PBVS, 18th Edition) | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | Training; Solid modeling; Three-dimensional displays; PSNR; Convolution; Superresolution; Pattern recognition | ||||
Abstract | The rapid advancement of Deep Convolutional Neural Networks has helped in solving many remote sensing problems, especially problems of super-resolution. However, most state-of-the-art methods focus on Single Image Super-Resolution, neglecting Multi-Image Super-Resolution. In this work, the proposed 3D Residual in Residual Dense Blocks model (3DRRDB) focuses on remote sensing Multi-Image Super-Resolution for two different single spectral bands. The proposed 3DRRDB model explores the idea of 3D convolution layers in deeply connected Dense Blocks and the effect of local and global residual connections with residual scaling in Multi-Image Super-Resolution. Tested on the Proba-V challenge dataset, the model shows a significant improvement over the current state-of-the-art models, scoring a Corrected Peak Signal to Noise Ratio (cPSNR) of 48.79 dB and 50.83 dB for the Near Infrared (NIR) and RED bands, respectively. Moreover, the proposed 3DRRDB model scores a Corrected Structural Similarity Index Measure (cSSIM) of 0.9865 and 0.9909 for the NIR and RED bands, respectively. | ||||
Address | New Orleans, USA; 19 June 2022 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | CVPRW | ||
Notes | MSIAU; 600.130 | Approved | no | ||
Call Number | Admin @ si @ IBL2022 | Serial | 3693 | ||
Permanent link to this record | |||||
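The cPSNR metric reported above is the Proba-V challenge's brightness-bias-corrected PSNR. A simplified sketch that applies only the constant-offset correction (the full challenge metric also searches over sub-pixel shifts and masks unclear pixels):

```python
import numpy as np

def cpsnr(sr, hr):
    """PSNR after removing the best constant brightness offset b between
    the super-resolved (sr) and high-resolution (hr) images."""
    b = float(np.mean(hr - sr))                 # optimal constant offset
    mse = float(np.mean((hr - sr - b) ** 2))
    return -10.0 * np.log10(mse)

rng = np.random.default_rng(0)
hr = rng.random((16, 16))
sr = hr + 0.25 + 0.01 * rng.standard_normal((16, 16))  # offset + noise
plain_psnr = -10.0 * np.log10(float(np.mean((hr - sr) ** 2)))
corrected = cpsnr(sr, hr)
```

The correction prevents a mere global brightness shift, which carries no structural error, from dominating the score.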
Author | Shiqi Yang; Yaxing Wang; Joost Van de Weijer; Luis Herranz; Shangling Jui | ||||
Title | Exploiting the Intrinsic Neighborhood Structure for Source-free Domain Adaptation | Type | Conference Article | ||
Year | 2021 | Publication | Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021) | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Domain adaptation (DA) aims to alleviate the domain shift between source domain and target domain. Most DA methods require access to the source data, but often that is not possible (e.g. due to data privacy or intellectual property). In this paper, we address the challenging source-free domain adaptation (SFDA) problem, where the source pretrained model is adapted to the target domain in the absence of source data. Our method is based on the observation that target data, which might no longer align with the source domain classifier, still forms clear clusters. We capture this intrinsic structure by defining local affinity of the target data, and encourage label consistency among data with high local affinity. We observe that higher affinity should be assigned to reciprocal neighbors, and propose a self regularization loss to decrease the negative impact of noisy neighbors. Furthermore, to aggregate information with more context, we consider expanded neighborhoods with small affinity values. In the experimental results we verify that the inherent structure of the target features is an important source of information for domain adaptation. We demonstrate that this local structure can be efficiently captured by considering the local neighbors, the reciprocal neighbors, and the expanded neighborhood. Finally, we achieve state-of-the-art performance on several 2D image and 3D point cloud recognition datasets. Code is available in https://github.com/Albert0147/SFDA_neighbors. | ||||
Address | Online; December 7-10, 2021 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NIPS | ||
Notes | LAMP; 600.147; 600.141 | Approved | no | ||
Call Number | Admin @ si @ | Serial | 3691 | ||
Permanent link to this record |
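The method above assigns higher affinity to reciprocal neighbors. A minimal sketch of computing reciprocal k-nearest-neighbor pairs from target features (illustrative only; the paper builds its affinity and consistency losses on top of this structure):

```python
import numpy as np

def knn(feats, k):
    """Indices of the k nearest neighbours (Euclidean) of each row,
    excluding the point itself."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def reciprocal_mask(neighbors):
    """mask[i, j] is True iff j is in i's k-NN AND i is in j's k-NN."""
    n, _ = neighbors.shape
    member = np.zeros((n, n), dtype=bool)
    member[np.arange(n)[:, None], neighbors] = True
    return member & member.T

# Two tight clusters: reciprocity holds within, not across, clusters.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
nn = knn(feats, k=1)
recip = reciprocal_mask(nn)
```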