
Author Victor Ponce
  Title Evolutionary Bags of Space-Time Features for Human Analysis Type Book Whole
  Year 2016 Publication PhD Thesis, Universitat de Barcelona, UOC and CVC
  Keywords Computer algorithms; Digital image processing; Digital video; Analysis of variance; Dynamic programming; Evolutionary computation; Gesture  
  Abstract Representation (or feature) learning has been an emerging concept in recent years, since it collects a set of techniques that are present in any theoretical or practical methodology referring to artificial intelligence. In computer vision, a very common representation has adopted the form of the well-known Bag of Visual Words (BoVW). This representation appears implicitly in most approaches where images are described, and is also present in a huge number of areas and domains: image content retrieval, pedestrian detection, human-computer interaction, surveillance, e-health, and social computing, amongst others. The early stages of this dissertation provide an approach for learning visual representations inside evolutionary algorithms, which consists of evolving weighting schemes to improve the BoVW representations for the task of recognizing categories of videos and images. Thus, we demonstrate the applicability of the most common weighting schemes, which are often used in text mining but are less frequently found in computer vision tasks. Beyond learning these visual representations, we provide an approach based on fusion strategies for learning spatiotemporal representations from multimodal data obtained by depth sensors. In addition, we focus especially on evolutionary and dynamic modelling, where the temporal factor is present in the nature of the data, such as video sequences of gestures and actions. Indeed, we explore the effects of probabilistic modelling for those approaches based on dynamic programming, so as to handle the temporal deformation and variance amongst video sequences of different categories. Finally, we integrate dynamic programming and generative models into an evolutionary computation framework, with the aim of learning Bags of SubGestures (BoSG) representations and hence improving the generalization capability of standard gesture recognition approaches. The results obtained in the experimentation demonstrate, first, that evolutionary algorithms are useful for improving the representation of BoVW approaches in several datasets for recognizing categories in still images and video sequences. Our experimentation further reveals that both the use of dynamic programming and generative models to align video sequences, and the representations obtained by applying fusion strategies to multimodal data, improve performance when recognizing some gesture categories. Furthermore, combining evolutionary algorithms with models based on dynamic programming and generative approaches yields a considerable improvement over standard gesture and action recognition approaches when classifying video categories on large video datasets. Finally, we demonstrate the applications of these representations in several domains for human analysis: classification of images where humans may be present, action and gesture recognition for general applications, and in particular for conversational settings within the field of restorative justice.
  Address June 2016  
  Thesis Ph.D. thesis
  Publisher Ediciones Graficas Rey Editor Sergio Escalera; Xavier Baro; Hugo Jair Escalante
  Notes HuPBA Approved no  
  Call Number Pon2016 Serial 2814  
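The thesis evolves text-mining weighting schemes over Bag-of-Visual-Words histograms. As a concrete reference point, here is a minimal sketch of the best-known such scheme, tf-idf weighting, applied to BoVW counts; the function and toy data are illustrative, not taken from the thesis.

```python
import numpy as np

def tfidf_weight_bovw(histograms):
    """Apply tf-idf weighting to a matrix of raw BoVW histograms.

    histograms: (n_images, n_visual_words) array of visual-word counts.
    Returns L2-normalised tf-idf descriptors.
    """
    counts = np.asarray(histograms, dtype=np.float64)
    # Term frequency: normalise each histogram by its total word count.
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    # Document frequency: in how many images each visual word occurs.
    df = np.maximum((counts > 0).sum(axis=0), 1)
    idf = np.log(counts.shape[0] / df)
    weighted = tf * idf
    # L2 normalisation keeps descriptors comparable across images.
    norms = np.linalg.norm(weighted, axis=1, keepdims=True)
    return weighted / np.maximum(norms, 1e-12)

# Toy usage: 4 images quantised against a 6-word visual vocabulary.
hists = np.random.default_rng(0).integers(0, 10, size=(4, 6))
print(tfidf_weight_bovw(hists))
```

In the thesis the weights themselves are evolved rather than fixed; a scheme like the one above is the kind of starting point such an evolutionary search generalises.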
 

 
Author Victor M. Campello; Polyxeni Gkontra; Cristian Izquierdo; Carlos Martin-Isla; Alireza Sojoudi; Peter M. Full; Klaus Maier-Hein; Yao Zhang; Zhiqiang He; Jun Ma; Mario Parreno; Alberto Albiol; Fanwei Kong; Shawn C. Shadden; Jorge Corral Acero; Vaanathi Sundaresan; Mina Saber; Mustafa Elattar; Hongwei Li; Bjoern Menze; Firas Khader; Christoph Haarburger; Cian M. Scannell; Mitko Veta; Adam Carscadden; Kumaradevan Punithakumar; Xiao Liu; Sotirios A. Tsaftaris; Xiaoqiong Huang; Xin Yang; Lei Li; Xiahai Zhuang; David Vilades; Martin L. Descalzo; Andrea Guala; Lucia La Mura; Matthias G. Friedrich; Ria Garg; Julie Lebel; Filipe Henriques; Mahir Karakas; Ersin Cavus; Steffen E. Petersen; Sergio Escalera; Santiago Segui; Jose F. Rodriguez Palomares; Karim Lekadir
  Title Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation: The M&Ms Challenge Type Journal Article
  Year 2021 Publication IEEE Transactions on Medical Imaging Abbreviated Journal TMI  
  Volume 40 Issue 12 Pages 3543-3554  
  Abstract The emergence of deep learning has considerably advanced the state-of-the-art in cardiac magnetic resonance (CMR) segmentation. Many techniques have been proposed over the last few years, bringing the accuracy of automated segmentation close to human performance. However, these models have all too often been trained and validated using cardiac imaging samples from single clinical centres or homogeneous imaging protocols. This has prevented the development and validation of models that are generalizable across different clinical centres, imaging conditions or scanner vendors. To promote further research and scientific benchmarking in the field of generalizable deep learning for cardiac segmentation, this paper presents the results of the Multi-Centre, Multi-Vendor and Multi-Disease Cardiac Segmentation (M&Ms) Challenge, which was recently organized as part of the MICCAI 2020 Conference. A total of 14 teams submitted different solutions to the problem, combining various baseline models, data augmentation strategies, and domain adaptation techniques. The obtained results indicate the importance of intensity-driven data augmentation, as well as the need for further research to improve generalizability towards unseen scanner vendors or new imaging protocols. Furthermore, we present a new resource of 375 heterogeneous CMR datasets acquired with scanners from four different vendors in six hospitals across three countries (Spain, Canada and Germany), which we provide as open access to the community to enable future research in the field.
  Notes HUPBA; no proj Approved no  
  Call Number Admin @ si @ CGI2021 Serial 3653  
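The results above single out intensity-driven data augmentation as important for generalising across scanner vendors. Below is a minimal sketch of that idea, assuming simple gamma and brightness jitter on a [0, 1]-normalised image; the jitter ranges are our assumptions, not any participant's settings.

```python
import numpy as np

def intensity_augment(image, rng):
    """Random gamma and linear intensity jitter for a [0, 1] image."""
    gamma = rng.uniform(0.7, 1.5)   # non-linear contrast change
    scale = rng.uniform(0.9, 1.1)   # multiplicative brightness jitter
    shift = rng.uniform(-0.1, 0.1)  # additive offset
    out = np.clip(image, 0.0, 1.0) ** gamma
    return np.clip(out * scale + shift, 0.0, 1.0)

rng = np.random.default_rng(42)
cmr_slice = rng.random((256, 256))  # stand-in for a normalised CMR slice
augmented = intensity_augment(cmr_slice, rng)
```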
 

 
Author Victor M. Campello; Carlos Martin-Isla; Cristian Izquierdo; Andrea Guala; Jose F. Rodriguez Palomares; David Vilades; Martin L. Descalzo; Mahir Karakas; Ersin Cavus; Zahra Raisi-Estabragh; Steffen E. Petersen; Sergio Escalera; Santiago Segui; Karim Lekadir
  Title Minimising multi-centre radiomics variability through image normalisation: a pilot study Type Journal Article
  Year 2022 Publication Scientific Reports Abbreviated Journal ScR  
  Volume 12 Issue 1 Pages 12532  
  Abstract Radiomics is an emerging technique for the quantification of imaging data that has recently shown great promise for deeper phenotyping of cardiovascular disease. Thus far, the technique has been mostly applied in single-centre studies. However, one of the main difficulties in multi-centre imaging studies is the inherent variability of image characteristics due to centre differences. In this paper, a comprehensive analysis of radiomics variability under several image- and feature-based normalisation techniques was conducted using a multi-centre cardiovascular magnetic resonance dataset. 218 subjects divided into healthy (n = 112) and hypertrophic cardiomyopathy (n = 106, HCM) groups from five different centres were considered. First and second order texture radiomic features were extracted from three regions of interest, namely the left and right ventricular cavities and the left ventricular myocardium. Two methods were used to assess features' variability. First, feature distributions were compared across centres to obtain a distribution similarity index. Second, two classification tasks were proposed to assess: (1) the amount of centre-related information encoded in normalised features (centre identification) and (2) the generalisation ability of a classification model when trained on these features (healthy versus HCM classification). The results showed that the feature-based harmonisation technique ComBat is able to remove the variability introduced by centre information from radiomic features, at the expense of slightly degrading classification performance. Piecewise linear histogram matching normalisation gave features with greater generalisation ability for classification (balanced accuracy between 0.78 ± 0.08 and 0.79 ± 0.09). Models trained with features from images without normalisation showed the worst performance overall (balanced accuracy between 0.45 ± 0.28 and 0.60 ± 0.22). In conclusion, centre-related information removal did not imply good generalisation ability for classification.
  Address 2022/07/22  
  Publisher Springer Nature
  Notes HuPBA Approved no  
  Call Number Admin @ si @ CMI2022 Serial 3749  
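Piecewise linear histogram matching, the best-generalising normalisation in the study above, can be sketched as a landmark-based percentile mapping. The percentile set and toy data below are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

PCTS = (1, 10, 25, 50, 75, 90, 99)

def piecewise_linear_match(image, ref_landmarks, percentiles=PCTS):
    """Map image intensities so its percentile landmarks align with a reference.

    ref_landmarks: reference intensities at `percentiles`; assumed increasing.
    np.interp performs the piecewise linear mapping between landmark pairs.
    """
    src_landmarks = np.percentile(image, percentiles)
    return np.interp(image, src_landmarks, ref_landmarks)

rng = np.random.default_rng(0)
centre_a = rng.normal(100, 20, (128, 128))  # stand-in scans from two centres
centre_b = rng.normal(140, 35, (128, 128))
reference = np.percentile(centre_a, PCTS)
matched_b = piecewise_linear_match(centre_b, reference)
```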
 

 
Author Victor Campmany; Sergio Silva; Juan Carlos Moure; Toni Espinosa; David Vazquez; Antonio Lopez
  Title GPU-based pedestrian detection for autonomous driving Type Conference Article
  Year 2016 Publication GPU Technology Conference
  Keywords Pedestrian Detection; GPU  
  Abstract Pedestrian detection for autonomous driving is one of the hardest tasks within computer vision, and involves huge computational costs. Obtaining acceptable real-time performance, measured in frames per second (fps), for the most advanced algorithms is nowadays a hard challenge. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system that includes LBP and HOG as feature descriptors and SVM and Random Forest as classifiers. We introduce significant algorithmic adjustments and optimizations to adapt the problem to the NVIDIA GPU architecture. The aim is to deploy a real-time system providing reliable results.
  Address Silicon Valley; San Francisco; USA; April 2016  
  Conference GTC
  Notes ADAS; 600.085; 600.082; 600.076 Approved no  
  Call Number ADAS @ adas @ CSM2016 Serial 2737  
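For orientation, this is the kind of HOG-plus-linear-SVM pipeline the record above ports to CUDA, shown here as a plain CPU reference built from OpenCV's stock people detector; it is not the authors' implementation, and the input file name is hypothetical.

```python
import cv2

# CPU reference: OpenCV's bundled HOG descriptor and pre-trained linear SVM
# for people. The paper accelerates this kind of pipeline on NVIDIA GPUs and
# additionally considers LBP features and Random Forest classifiers.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("street.jpg")  # hypothetical input frame
boxes, scores = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in boxes:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```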
 

 
Author Victor Campmany; Sergio Silva; Juan Carlos Moure; Antoni Espinosa; David Vazquez; Antonio Lopez
  Title GPU-based pedestrian detection for autonomous driving Type Abstract
  Year 2015 Publication Programming and Tuning Massively Parallel Systems Abbreviated Journal PUMPS
  Keywords Autonomous Driving; ADAS; CUDA; Pedestrian Detection  
  Abstract Pedestrian detection for autonomous driving has gained a lot of prominence during the last few years. Besides the fact that it is one of the hardest tasks within computer vision, it involves huge computational costs. The real-time constraints in the field are tight, and regular processors are not able to handle the workload while achieving an acceptable rate of frames per second (fps). Moreover, multiple cameras are required to obtain accurate results, so the need to speed up the process is even higher. Taking the work in [1] as our baseline, we propose a CUDA implementation of a pedestrian detection system. Further, we introduce significant algorithmic adjustments and optimizations to adapt the problem to the GPU architecture. The aim is to provide a system capable of running in real-time and obtaining reliable results.
  Address Barcelona; Spain  
  Abbreviated Series Title PUMPS
  Conference PUMPS
  Notes ADAS; 600.076; 600.082; 600.085 Approved no  
  Call Number ADAS @ adas @ CSM2015 Serial 2644  
 

 
Author Victor Campmany; Sergio Silva; Antonio Espinosa; Juan Carlos Moure; David Vazquez; Antonio Lopez
  Title GPU-based pedestrian detection for autonomous driving Type Conference Article
  Year 2016 Publication 16th International Conference on Computational Science
  Volume 80 Pages 2377-2381
  Keywords Pedestrian detection; Autonomous Driving; CUDA  
  Abstract We propose a real-time pedestrian detection system for the embedded NVIDIA Tegra X1 GPU-CPU hybrid platform. The pipeline is composed of the following state-of-the-art algorithms: Histograms of Local Binary Patterns (LBP) and Histograms of Oriented Gradients (HOG) features extracted from the input image; a pyramidal sliding-window technique for foreground segmentation; and a Support Vector Machine (SVM) for classification. Results show an 8x speedup on the target Tegra X1 platform and a better performance-per-watt ratio than the desktop CUDA platforms under study.
  Address San Diego; CA; USA; June 2016  
  Conference ICCS
  Notes ADAS; 600.085; 600.082; 600.076 Approved no  
  Call Number ADAS @ adas @ CSE2016 Serial 2741  
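The pipeline above generates candidates with a pyramidal sliding window. A minimal sketch of that scan pattern follows; window size, stride, and scale step are illustrative assumptions, and feature extraction plus SVM scoring are left as a placeholder.

```python
import cv2

def sliding_window_pyramid(image, win=(64, 128), stride=16, scale=1.2):
    """Yield (level, x, y, crop) windows over an image pyramid.

    win is (width, height). In the paper's pipeline each crop would be
    described with HOG/LBP features and scored by an SVM.
    """
    level, current = 1.0, image
    while current.shape[0] >= win[1] and current.shape[1] >= win[0]:
        h, w = current.shape[:2]
        for y in range(0, h - win[1] + 1, stride):
            for x in range(0, w - win[0] + 1, stride):
                yield level, x, y, current[y:y + win[1], x:x + win[0]]
        level *= scale
        current = cv2.resize(image, (int(image.shape[1] / level),
                                     int(image.shape[0] / level)))

frame = cv2.imread("street.jpg")  # hypothetical input frame
for level, x, y, crop in sliding_window_pyramid(frame):
    pass  # extract features and classify each crop here
```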
 

 
Author Victor Borjas; Jordi Vitria; Petia Radeva
  Title Gradient Histogram Background Modeling for People Detection in Stationary Camera Environments Type Conference Article
  Year 2013 Publication 13th IAPR Conference on Machine Vision Applications
  Abstract (Best Poster Award) One of the big challenges for today's person detectors is reducing the false positive rate. In this paper, we propose a novel framework to customize person detectors in static camera scenarios in order to reduce this rate. The scheme includes background modeling for subtraction based on gradient histograms and Mean-Shift clustering. Our experiments show that detection improved compared to using only the output of the pedestrian detector, removing 87% of the false positives and thus significantly increasing the overall precision of the detection.
  Address Kyoto; Japan; May 2013  
  Conference MVA
  Notes OR; MILAB;MV Approved no  
  Call Number BVR2013 Serial 2238  
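A minimal sketch of the gradient-histogram background modeling idea from the record above: keep a running per-cell histogram of gradient orientations and flag cells that diverge from it. Cell size, update rate, and threshold are illustrative assumptions, not the paper's settings, and the Mean-Shift clustering stage is omitted.

```python
import numpy as np

def cell_gradient_histograms(gray, cell=16, bins=8):
    """Per-cell histograms of gradient orientation, weighted by magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    h, w = gray.shape
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hists = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            sl = (slice(i * cell, (i + 1) * cell),
                  slice(j * cell, (j + 1) * cell))
            hists[i, j] = np.bincount(bin_idx[sl].ravel(),
                                      weights=mag[sl].ravel(), minlength=bins)
    return hists

class GradientBackgroundModel:
    """Running-average gradient-histogram background; flags changed cells."""

    def __init__(self, alpha=0.05, thresh=0.5):
        self.alpha, self.thresh, self.model = alpha, thresh, None

    def update(self, gray):
        h = cell_gradient_histograms(gray)
        h /= np.maximum(h.sum(axis=2, keepdims=True), 1e-9)  # normalise cells
        if self.model is None:
            self.model = h
        foreground = np.abs(h - self.model).sum(axis=2) > self.thresh
        self.model = (1 - self.alpha) * self.model + self.alpha * h
        return foreground  # boolean per-cell mask

model = GradientBackgroundModel()
mask = model.update(np.random.default_rng(0).random((240, 320)))
```

Detections from a stock pedestrian detector would then be kept only where they overlap foreground cells, which is where the false positive reduction arises.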
 

 
Author Veronica Romero; Emilio Granell; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez
  Title Information Extraction in Handwritten Marriage Licenses Books Type Conference Article
  Year 2019 Publication 5th International Workshop on Historical Document Imaging and Processing
  Pages 66-71
  Abstract Handwritten marriage licenses books are characterized by a simple structure of the text in the records, with an evolutionary vocabulary mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. Previous works have shown that the use of category-based language models and a Grammatical Inference technique known as MGGI can improve the accuracy of these tasks. However, the application of the MGGI algorithm requires a priori knowledge to label the words of the training strings, which is not always easy to obtain. In this paper we study how to automatically obtain the information required by the MGGI algorithm using a technique based on Confusion Networks. Using the resulting language model, full handwritten text recognition and information extraction experiments have been carried out, with results supporting the proposed approach.
  Address Sydney; Australia; September 2019  
  Conference HIP
  Notes DAG; 600.140; 600.121 Approved no  
  Call Number Admin @ si @ RGF2019 Serial 3352  
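A minimal sketch of the category-based language modeling idea behind the record above: the model factors into a category bigram and per-category word emissions. In the paper the category labels are derived automatically from Confusion Networks and refined with MGGI; the hand-tagged toy data below is purely illustrative.

```python
from collections import Counter, defaultdict

def train_category_bigram(tagged_sentences):
    """P(w_i, c_i | c_{i-1}) = P(c_i | c_{i-1}) * P(w_i | c_i).

    tagged_sentences: lists of (word, category) pairs.
    Returns a probability function over (word, category, previous category).
    """
    cat_bigrams, cat_counts = Counter(), Counter()
    emissions = defaultdict(Counter)
    for sentence in tagged_sentences:
        prev = "<s>"
        for word, cat in sentence:
            cat_bigrams[(prev, cat)] += 1
            cat_counts[prev] += 1
            emissions[cat][word] += 1
            prev = cat

    def prob(word, cat, prev_cat):
        p_cat = cat_bigrams[(prev_cat, cat)] / max(cat_counts[prev_cat], 1)
        p_word = emissions[cat][word] / max(sum(emissions[cat].values()), 1)
        return p_cat * p_word

    return prob

toy = [[("Joan", "NAME"), ("Prats", "SURNAME"), ("pages", "OCCUPATION")]]
prob = train_category_bigram(toy)
print(prob("Joan", "NAME", "<s>"))  # 1.0 on this toy corpus
```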
 

 
Author Veronica Romero; Alicia Fornes; Nicolas Serrano; Joan Andreu Sanchez; A.H. Toselli; Volkmar Frinken; E. Vidal; Josep Llados
  Title The ESPOSALLES database: An ancient marriage license corpus for off-line handwriting recognition Type Journal Article
  Year 2013 Publication Pattern Recognition Abbreviated Journal PR  
  Volume 46 Issue 6 Pages 1658-1669  
  Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demography studies and genealogical research. Automatic processing of historical documents, however, has mostly been focused on single works of literature and less on social records, which tend to have a distinct layout, structure, and vocabulary. Such information is usually collected by expert demographers that devote a lot of time to manually transcribe them. This paper presents a new database, compiled from a marriage license books collection, to support research in automatic handwriting recognition for historical documents containing social records. Marriage license books are documents that were used for centuries by ecclesiastical institutions to register marriage licenses. Books from this collection are handwritten and span nearly half a millennium until the beginning of the 20th century. In addition, a study is presented about the capability of state-of-the-art handwritten text recognition systems, when applied to the presented database. Baseline results are reported for reference in future studies.  
  Publisher Elsevier Science Inc. Place of Publication New York, NY, USA
  ISSN 0031-3203
  Notes DAG; 600.045; 602.006; 605.203 Approved no  
  Call Number Admin @ si @ RFS2013 Serial 2298  
 

 
Author Veronica Romero; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez
  Title Information Extraction in Handwritten Marriage Licenses Books Using the MGGI Methodology Type Conference Article
  Year 2017 Publication 8th Iberian Conference on Pattern Recognition and Image Analysis
  Volume 10255 Pages 287-294
  Keywords Handwritten Text Recognition; Information extraction; Language modeling; MGGI; Categories-based language model  
  Abstract Historical records of daily activities provide intriguing insights into the life of our ancestors, useful for demographic and genealogical research. For example, marriage license books have been used for centuries by ecclesiastical and secular institutions to register marriages. These books follow a simple structure of the text in the records, with an evolutionary vocabulary mainly composed of proper names that change over time. This distinct vocabulary makes automatic transcription and semantic information extraction difficult tasks. In previous works we studied the use of category-based language models and how a Grammatical Inference technique known as MGGI could improve the accuracy of these tasks. In this work we analyze the main causes of the semantic errors observed in previous results and apply a better implementation of the MGGI technique to solve these problems. Using the resulting language model, transcription and information extraction experiments have been carried out, and the results support our proposed approach.
  Address Faro; Portugal; June 2017  
  Editor L.A. Alexandre; J. Salvador Sanchez; Joao M. F. Rodriguez
  Abbreviated Series Title LNCS
  ISBN 978-3-319-58837-7
  Conference IbPRIA
  Notes DAG; 602.006; 600.097; 600.121 Approved no  
  Call Number Admin @ si @ RFV2017 Serial 2952  
 

 
Author Veronica Romero; Alicia Fornes; Enrique Vidal; Joan Andreu Sanchez
  Title Using the MGGI Methodology for Category-based Language Modeling in Handwritten Marriage Licenses Books Type Conference Article
  Year 2016 Publication 15th International Conference on Frontiers in Handwriting Recognition
  Abstract Handwritten marriage licenses books have been used for centuries by ecclesiastical and secular institutions to register marriages. The information contained in these historical documents is useful for demography studies and genealogical research, among others. Despite the generally simple structure of the text in these documents, automatic transcription and semantic information extraction are difficult due to the distinct and evolutionary vocabulary, which is composed mainly of proper names that change over time. In previous works we studied the use of category-based language models to both improve the automatic transcription accuracy and ease the extraction of semantic information. Here we analyze the main causes of the semantic errors observed in previous results and apply a Grammatical Inference technique known as MGGI to improve the semantic accuracy of the obtained language model. Using this language model, full handwritten text recognition experiments have been carried out, with results supporting the interest of the proposed approach.
  Address Shenzhen; China; October 2016  
  Conference ICFHR
  Notes DAG; 600.097; 602.006 Approved no  
  Call Number Admin @ si @ RFV2016 Serial 2909  
 

 
Author Vassileios Balntas; Edgar Riba; Daniel Ponsa; Krystian Mikolajczyk
  Title Learning local feature descriptors with triplets and shallow convolutional neural networks Type Conference Article
  Year 2016 Publication 27th British Machine Vision Conference
  Abstract It has recently been demonstrated that local feature descriptors based on convolutional neural networks (CNN) can significantly improve matching performance. Previous work on learning such descriptors has focused on exploiting pairs of positive and negative patches to learn discriminative CNN representations. In this work, we propose to utilize triplets of training samples, together with in-triplet mining of hard negatives. We show that our method achieves state-of-the-art results, without the computational overhead typically associated with mining of negatives and with lower complexity of the network architecture. We compare our approach to recently introduced convolutional local feature descriptors, and demonstrate the advantages of the proposed methods in terms of performance and speed. We also examine different loss functions associated with triplets.
  Address York; UK; September 2016  
  Conference BMVC
  Notes ADAS; 600.086 Approved no  
  Call Number Admin @ si @ BRP2016 Serial 2818  
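A minimal PyTorch sketch of the triplet loss with in-triplet mining of hard negatives ("anchor swap") described above; the shallow network's layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ShallowDescriptor(nn.Module):
    """Shallow CNN mapping 32x32 grayscale patches to 128-D descriptors."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 7), nn.Tanh(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 6), nn.Tanh(),
        )
        self.fc = nn.Linear(64 * 8 * 8, 128)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return nn.functional.normalize(self.fc(f), dim=1)

def triplet_loss_with_swap(a, p, n, margin=1.0):
    """Triplet margin loss where the negative distance is the harder of
    d(anchor, negative) and d(positive, negative)."""
    d_ap = (a - p).pow(2).sum(1)
    d_neg = torch.minimum((a - n).pow(2).sum(1), (p - n).pow(2).sum(1))
    return torch.clamp(margin + d_ap - d_neg, min=0).mean()

net = ShallowDescriptor()
patches = torch.randn(3, 16, 1, 32, 32)  # anchor / positive / negative batches
loss = triplet_loss_with_swap(*(net(b) for b in patches))
loss.backward()
```

The swap costs only one extra distance per triplet, which is why it avoids the overhead of explicit negative mining mentioned in the abstract.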
 

 
Author Valeriya Khan; Sebastian Cygert; Bartlomiej Twardowski; Tomasz Trzcinski
  Title Looking Through the Past: Better Knowledge Retention for Generative Replay in Continual Learning Type Conference Article
  Year 2023 Publication Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops
  Pages 3496-3500
  Abstract In this work, we improve generative replay in a continual learning setting. We notice that in VAE-based generative replay, the generated features are quite far from the original ones when mapped to the latent space. Therefore, we propose modifications that allow the model to learn and generate complex data. More specifically, we incorporate distillation in latent space between the current and previous models to reduce feature drift. Additionally, latent matching between the reconstructions and the original data is proposed to improve the alignment of generated features. Further, based on the observation that the reconstructions are better at preserving knowledge, we add cycling of generations through the previously trained model to make them closer to the original data. Our method outperforms other generative replay methods in various scenarios.
  Conference ICCVW
  Notes LAMP Approved no  
  Call Number Admin @ si @ KCT2023 Serial 3942  
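A minimal PyTorch sketch of the latent-space distillation term described above, which penalises drift between the current and previous models on replayed data. The encoder interface (a callable returning a latent code) and the toy usage are assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def latent_distillation_loss(curr_encoder, prev_encoder, replay_batch):
    """MSE between current and frozen previous latent codes on replayed data."""
    with torch.no_grad():
        z_prev = prev_encoder(replay_batch)  # frozen previous-task encoder
    z_curr = curr_encoder(replay_batch)
    return F.mse_loss(z_curr, z_prev)

# Toy usage with linear layers standing in for the VAE encoders.
prev = torch.nn.Linear(784, 32)
curr = torch.nn.Linear(784, 32)
loss = latent_distillation_loss(curr, prev, torch.randn(8, 784))
loss.backward()
```

The paper combines this with latent matching on reconstructions and with cycling generated samples through the previous model; both would add further terms alongside this one.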
 

 
Author Vacit Oguz Yazici; Long Long Yu; Arnau Ramisa; Luis Herranz; Joost Van de Weijer
  Title Main Product Detection with Graph Networks for Fashion Type Journal Article
  Year 2022 Publication Multimedia Tools and Applications Abbreviated Journal MTAP  
  Abstract Computer vision has established a foothold in the online fashion retail industry. Main product detection is a crucial step of vision-based fashion product feed parsing pipelines, focused on identifying the bounding boxes that contain the product being sold in the gallery of images of the product page. The current state-of-the-art approach does not leverage the relations between regions in the image, and treats images of the same product independently, therefore not fully exploiting visual and product contextual information. In this paper, we propose a model incorporating Graph Convolutional Networks (GCN) that jointly represents all detected bounding boxes in the gallery as nodes. We show that the proposed method is better than the state-of-the-art, especially when we consider the scenario where title input is missing at inference time; for cross-dataset evaluation, our method outperforms previous approaches by a large margin.
  Notes LAMP; MACO; 600.147; 600.167; 600.164; 600.161; 600.141; 601.309 Approved no  
  Call Number Admin @ si @ YYR2022 Serial 3748  
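A minimal PyTorch sketch of the core idea above: one graph-convolution step over the detected boxes of a product gallery, treated as nodes of a fully connected graph. Feature dimensions and graph construction are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class BoxGCNLayer(nn.Module):
    """One GCN step: each box embedding is updated from its neighbours."""

    def __init__(self, dim=256):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x, adj):
        # x: (n_boxes, dim) box features; adj: (n_boxes, n_boxes) edge weights.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
        return torch.relu(self.linear((adj @ x) / deg))

boxes = torch.randn(5, 256)  # features of 5 detected boxes in one gallery
adj = torch.ones(5, 5)       # fully connected: every box sees every other
updated = BoxGCNLayer()(boxes, adj)
# A linear head on `updated` would then score each box as the main product.
```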
 

 
Author Vacit Oguz Yazici; Joost Van de Weijer; Longlong Yu
  Title Visual Transformers with Primal Object Queries for Multi-Label Image Classification Type Conference Article
  Year 2022 Publication 26th International Conference on Pattern Recognition
  Abstract Multi-label image classification is about predicting a set of class labels that can be considered as orderless sequential data. Transformers process sequential data as a whole, and are therefore inherently good at set prediction. The first vision-based transformer model, which was proposed for the object detection task, introduced the concept of object queries. Object queries are learnable positional encodings that are used by attention modules in decoder layers to decode the object classes or bounding boxes using the regions of interest in an image. However, inputting the same set of object queries to different decoder layers hinders the training: it results in lower performance and delays convergence. In this paper, we propose the usage of primal object queries that are only provided at the start of the transformer decoder stack. In addition, we improve the mixup technique proposed for multi-label classification. The proposed transformer model with primal object queries improves the state-of-the-art class-wise F1 metric by 2.1% and 1.8%, and speeds up convergence by 79.0% and 38.6% on the MS-COCO and NUS-WIDE datasets respectively.
  Address Montreal; Quebec; Canada; August 2022  
  Conference ICPR
  Notes LAMP; 600.147; 601.309 Approved no  
  Call Number Admin @ si @ YWY2022 Serial 3786  
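A minimal PyTorch sketch of the "primal object queries" idea above: learnable queries are fed only to the first decoder layer, and each subsequent layer consumes the previous layer's output. Sizes and the max-pooling readout are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class PrimalQueryDecoder(nn.Module):
    """Decoder stack where object queries enter once, at the start."""

    def __init__(self, num_queries=20, dim=256, layers=3, num_classes=80):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=layers)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, image_tokens):
        # image_tokens: (batch, n_patches, dim) visual features (memory).
        q = self.queries.unsqueeze(0).expand(image_tokens.size(0), -1, -1)
        decoded = self.decoder(q, image_tokens)  # queries only enter layer 1
        return self.classifier(decoded).amax(dim=1)  # max over queries

model = PrimalQueryDecoder()
logits = model(torch.randn(2, 49, 256))  # multi-label logits, shape (2, 80)
```

Note that nn.TransformerDecoder already threads each layer's output into the next, so simply not re-injecting the queries at every layer realises the primal-query behaviour, in contrast to DETR-style decoders that re-add query embeddings at each layer.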