Author: P. Andreeva; Maya Dimitrova; Petia Radeva
Title: Data Mining Learning Models and Algorithms for Medical Applications
Type: Book Chapter
Year: 2004
Publication: 18th Conference on Systems for Automation of Engineering and Research (SEAR 2004)
Address: Varna (Bulgaria)
Notes: MILAB
Approved: no
Call Number: BCNPCL @ bcnpcl @ ADR2004
Serial: 474
 

 
Author: Maya Dimitrova; I. Terziev; Petia Radeva; Juan J. Villanueva
Title: Java-Servlet Technology for Building New Web Document Classifiers
Type: Miscellaneous
Year: 2004
Address: Varna (Bulgaria)
Notes: MILAB
Approved: no
Call Number: BCNPCL @ bcnpcl @ DTR2004
Serial: 476
 

 
Author: Maya Dimitrova; Petia Radeva; David Rotger; D. Boyadjiev; Juan J. Villanueva
Title: Advanced Cardiological Diagnosis via Intelligent Image Analysis
Type: Miscellaneous
Year: 2004
Address: Varna (Bulgaria)
Notes: MILAB
Approved: no
Call Number: BCNPCL @ bcnpcl @ DRR2004
Serial: 477
 

 
Author: Sergio Escalera; Oriol Pujol; Eric Laciar; Jordi Vitria; Esther Pueyo; Petia Radeva
Title: Coronary Damage Classification of Patients with the Chagas Disease with Error-Correcting Output Codes
Type: Conference Article
Year: 2008
Publication: Intelligent Systems, 4th International IEEE Conference, 6–8 September 2008
Volume: 2
Pages: 12–17
Abstract: Chagas disease is endemic throughout Latin America, affecting millions of people across the continent. In order to diagnose and treat Chagas disease, it is important to detect and measure the coronary damage of the patient. In this paper, we analyze and categorize patients into different groups according to the coronary damage produced by the disease. Based on features of the heart cycle extracted using high-resolution ECG, a multi-class scheme of error-correcting output codes (ECOC) is formulated and successfully applied. The results show that the proposed scheme obtains significant performance improvements compared to previous works and state-of-the-art ECOC designs.
Address: Varna (Bulgaria)
Conference: IS’08
Notes: MILAB; OR; HuPBA; MV
Approved: no
Call Number: BCNPCL @ bcnpcl @ EPL2008
Serial: 1042
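As background for the multi-class scheme above, the sketch below shows ECOC decoding in its simplest form: each class receives a codeword of binary-classifier targets, and a sample is assigned to the class whose codeword is nearest to the classifiers' outputs. The coding matrix, function names, and toy values are illustrative assumptions, not the paper's code.

```python
import numpy as np

# One-vs-one coding matrix for 4 hypothetical damage groups:
# rows = classes, columns = binary problems, entries in {-1, 0, +1}
# (0 means the class is ignored by that binary classifier).
M = np.array([
    [+1, +1, +1,  0,  0,  0],
    [-1,  0,  0, +1, +1,  0],
    [ 0, -1,  0, -1,  0, +1],
    [ 0,  0, -1,  0, -1, -1],
])

def ecoc_decode(predictions, coding_matrix):
    """Return the index of the class whose codeword is closest
    (Hamming distance over non-zero entries) to the predictions."""
    mask = coding_matrix != 0                        # columns each class uses
    dists = (mask & (coding_matrix != predictions)).sum(axis=1)
    return int(np.argmin(dists))

# Outputs of the six binary classifiers for one sample (toy values).
y_bin = np.array([+1, +1, +1, -1, +1, -1])
print(ecoc_decode(y_bin, M))                         # -> 0
```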
 

 
Author: Yael Tudela; Ana Garcia Rodriguez; Gloria Fernandez Esparrach; Jorge Bernal
Title: Towards Fine-Grained Polyp Segmentation and Classification
Type: Conference Article
Year: 2023
Publication: Workshop on Clinical Image-Based Procedures
Volume: 14242
Pages: 32-42
Keywords: Medical image segmentation; Colorectal Cancer; Vision Transformer; Classification
Abstract: Colorectal cancer is one of the main causes of cancer death worldwide. Colonoscopy is the gold-standard screening tool, as it allows lesion detection and removal during the same procedure. Over the last decades, several efforts have been made to develop CAD systems to assist clinicians in lesion detection and classification. Regarding the latter, and in order to be usable in the exploration room as part of resect-and-discard or leave-in-situ strategies, these systems must correctly identify all the different lesion types. This is a challenging task, as the data used to train these systems exhibit high inter-class similarity, high class imbalance, and low representation of clinically relevant histology classes such as sessile serrated adenomas.

In this paper, a new polyp segmentation and classification method, Swin-Expand, is introduced. Based on Swin-Transformer, it uses a simple and lightweight decoder. The performance of this method has been assessed on a novel dataset comprising 1126 high-definition images representing the three main histological classes. Results show a clear improvement in both segmentation and classification performance, also achieving competitive results when tested on public datasets. These results confirm that both the method and the data are important for obtaining more accurate polyp representations.
Address: Vancouver; October 2023
Abbreviated Series Title: LNCS
Conference: MICCAIW
Notes: ISE
Approved: no
Call Number: Admin @ si @ TGF2023
Serial: 3837
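The abstract describes a hierarchical Transformer encoder paired with a simple, lightweight decoder producing both a segmentation mask and a histology class. The PyTorch sketch below illustrates one plausible shape of such a decoder; it is not the authors' Swin-Expand implementation, and the channel widths, head design, and names are assumptions.

```python
import torch
import torch.nn as nn

class ExpandBlock(nn.Module):
    """Upsample by 2, then refine with a 3x3 convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(self.up(x))

class LightDecoder(nn.Module):
    """Decode the deepest encoder feature map (e.g. Swin stage 4 at
    1/32 resolution) into a mask plus a per-image histology class."""
    def __init__(self, in_ch=768, num_classes=3):
        super().__init__()
        self.blocks = nn.Sequential(           # 1/32 -> 1/2 resolution
            ExpandBlock(in_ch, 256),
            ExpandBlock(256, 128),
            ExpandBlock(128, 64),
            ExpandBlock(64, 32),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)    # mask logits
        self.cls_head = nn.Linear(in_ch, num_classes)

    def forward(self, feat):                   # feat: (B, C, H/32, W/32)
        seg = self.seg_head(self.blocks(feat))
        cls = self.cls_head(feat.mean(dim=(2, 3)))  # global avg pool
        return seg, cls

# Toy usage with a stand-in for the encoder output:
seg, cls = LightDecoder()(torch.randn(2, 768, 7, 7))
print(seg.shape, cls.shape)                    # (2,1,112,112), (2,3)
```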
 

 
Author: Hassan Ahmed Sial; S. Sancho; Ramon Baldrich; Robert Benavente; Maria Vanrell
Title: Color-based data augmentation for Reflectance Estimation
Type: Conference Article
Year: 2018
Publication: 26th Color Imaging Conference
Pages: 284-289
Abstract: Deep convolutional architectures have proven successful frameworks for generic computer vision problems. The estimation of intrinsic reflectance from a single image, however, is not yet a solved problem. Encoder-decoder architectures are a natural fit for pixel-wise reflectance estimation, but they usually suffer from the lack of large datasets. Lack of data can be partially addressed with data augmentation; the usual techniques, however, focus on geometric changes, which do not help reflectance estimation. In this paper we propose a color-based data augmentation technique that extends the training data by increasing the variability of chromaticity. Rotations on the red-green/blue-yellow plane of an opponent color space enlarge the training set in a coherent and sound way that improves the network's generalization capability for reflectance estimation. We perform experiments on the Sintel dataset showing that our color-based augmentation increases performance and overcomes one of the state-of-the-art methods.
Address: Vancouver; November 2018
Conference: CIC
Notes: CIC
Approved: no
Call Number: Admin @ si @ SSB2018a
Serial: 3129
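To make the augmentation concrete, here is a minimal numpy sketch assuming one common orthonormal opponent-space definition (the paper's exact transform may differ); the angle range and all names are illustrative.

```python
import numpy as np

# Orthonormal RGB -> opponent matrix: rows are the red-green axis,
# the blue-yellow axis, and luminance.
A = np.array([
    [1/np.sqrt(2), -1/np.sqrt(2),  0.0],
    [1/np.sqrt(6),  1/np.sqrt(6), -2/np.sqrt(6)],
    [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],
])

def rotate_chromaticity(img, theta):
    """img: float array (H, W, 3) in [0, 1]; theta in radians."""
    opp = img @ A.T                        # per-pixel opponent coords
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0., 0., 1.0]])          # rotate the RG-BY plane only;
    opp = opp @ R.T                        # luminance is untouched
    return np.clip(opp @ np.linalg.inv(A).T, 0.0, 1.0)

# Usage: one random chromatic rotation per training sample.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
aug = rotate_chromaticity(img, rng.uniform(-np.pi / 6, np.pi / 6))
```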
 

 
Author: Benjia Zhou; Zhigang Chen; Albert Clapes; Jun Wan; Yanyan Liang; Sergio Escalera; Zhen Lei; Du Zhang
Title: Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining
Type: Conference Article
Year: 2023
Publication: IEEE/CVF International Conference on Computer Vision (ICCV) Workshops
Abstract: Sign Language Translation (SLT) is a challenging task due to its cross-domain nature, involving the translation of visual-gestural language to text. Many previous methods employ an intermediate representation, i.e., gloss sequences, to facilitate SLT, thus transforming it into a two-stage task of sign language recognition (SLR) followed by sign language translation (SLT). However, the scarcity of gloss-annotated sign language data, combined with the information bottleneck in the mid-level gloss representation, has hindered the further development of the SLT task. To address this challenge, we propose a novel Gloss-Free SLT based on Visual-Language Pretraining (GFSLT-VLP), which improves SLT by inheriting language-oriented prior knowledge from pre-trained models, without any gloss annotation assistance. Our approach involves two stages: (i) integrating Contrastive Language-Image Pre-training (CLIP) with masked self-supervised learning to create pre-tasks that bridge the semantic gap between visual and textual representations and restore masked sentences, and (ii) constructing an end-to-end architecture with an encoder-decoder-like structure that inherits the parameters of the pre-trained Visual Encoder and Text Decoder from the first stage. The seamless combination of these novel designs forms a robust sign language representation and significantly improves gloss-free sign language translation. In particular, we have achieved unprecedented improvements in terms of BLEU-4 score on the PHOENIX14T dataset (>+5) and the CSL-Daily dataset (>+3) compared to state-of-the-art gloss-free SLT methods. Furthermore, our approach also achieves competitive results on the PHOENIX14T dataset when compared with most of the gloss-based methods.
Address: Vancouver; Canada; June 2023
Conference: ICCVW
Notes: HUPBA
Approved: no
Call Number: Admin @ si @ ZCC2023
Serial: 3839
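The first pretraining stage builds on CLIP-style contrastive alignment; below is a minimal sketch of the standard symmetric InfoNCE objective such a stage typically optimizes. The temperature and names are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """video_emb, text_emb: (B, D) embeddings of paired sign videos
    and sentences; matched pairs share the same batch index."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature             # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # Symmetric cross-entropy: video->text and text->video directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Usage with random stand-in embeddings:
loss = clip_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```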
 

 
Author: Matthias Eisenmann; Annika Reinke; Vivienn Weru; Minu D. Tizabi; Fabian Isensee; Tim J. Adler; Sharib Ali; Vincent Andrearczyk; Marc Aubreville; Ujjwal Baid; Spyridon Bakas; Niranjan Balu; Sophia Bano; Jorge Bernal; Sebastian Bodenstedt; Alessandro Casella; Veronika Cheplygina; Marie Daum; Marleen de Bruijne
Title: Why Is the Winner the Best?
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages: 19955-19966
Abstract: International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multi-center study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and postprocessing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
Address: Vancouver; Canada; June 2023
Conference: CVPR
Notes: ISE
Approved: no
Call Number: Admin @ si @ ERW2023
Serial: 3842
 

 
Author: JW Xiao; CB Zhang; J. Feng; Xialei Liu; Joost Van de Weijer; MM Cheng
Title: Endpoints Weight Fusion for Class Incremental Semantic Segmentation
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition
Pages: 7204-7213
Abstract: Class incremental semantic segmentation (CISS) focuses on alleviating catastrophic forgetting to improve discrimination. Previous work mainly exploits regularization (e.g., knowledge distillation) to maintain previous knowledge in the current model. However, distillation alone often yields limited gain, since only the representations of the old and new models are constrained to be consistent. In this paper, we propose a simple yet effective method to obtain a model with a strong memory of old knowledge, named Endpoints Weight Fusion (EWF). In our method, the model containing old knowledge is fused with the model retaining new knowledge in a dynamic fusion manner, strengthening the memory of old classes under ever-changing distributions. In addition, we analyze the relation between our fusion strategy and the popular moving-average technique EMA, which reveals why our method is more suitable for class-incremental learning. To facilitate parameter fusion with closer distance in the parameter space, we use distillation to enhance the optimization process. Furthermore, we conduct experiments on two widely used datasets, achieving state-of-the-art performance.
Address: Vancouver; Canada; June 2023
Conference: CVPR
Notes: LAMP
Approved: no
Call Number: Admin @ si @ XZF2023
Serial: 3854
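The fusion step itself is simple; the sketch below shows the parameter-wise combination θ = α·θ_old + (1−α)·θ_new with a fixed α for clarity, whereas the paper chooses the factor dynamically. Names are illustrative. Note the contrast with EMA, which averages weights at every optimization step: here only the two task endpoints are fused.

```python
import torch

@torch.no_grad()
def fuse_endpoints(old_model, new_model, alpha=0.5):
    """Overwrite new_model's parameters with the convex combination
    theta = alpha * theta_old + (1 - alpha) * theta_new."""
    for p_old, p_new in zip(old_model.parameters(),
                            new_model.parameters()):
        p_new.mul_(1.0 - alpha).add_(p_old, alpha=alpha)
    return new_model
```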
 

 
Author: Yawei Li; Yulun Zhang; Radu Timofte; Luc Van Gool; Zhijun Tu; Kunpeng Du; Hailing Wang; Hanting Chen; Wei Li; Xiaofei Wang; Jie Hu; Yunhe Wang; Xiangyu Kong; Jinlong Wu; Dafeng Zhang; Jianxing Zhang; Shuai Liu; Furui Bai; Chaoyu Feng; Hao Wang; Yuqian Zhang; Guangqi Shao; Xiaotao Wang; Lei Lei; Rongjian Xu; Zhilu Zhang; Yunjin Chen; Dongwei Ren; Wangmeng Zuo; Qi Wu; Mingyan Han; Shen Cheng; Haipeng Li; Ting Jiang; Chengzhi Jiang; Xinpeng Li; Jinting Luo; Wenjie Lin; Lei Yu; Haoqiang Fan; Shuaicheng Liu; Aditya Arora; Syed Waqas Zamir; Javier Vazquez; Konstantinos G. Derpanis; Michael S. Brown; Hao Li; Zhihao Zhao; Jinshan Pan; Jiangxin Dong; Jinhui Tang; Bo Yang; Jingxiang Chen; Chenghua Li; Xi Zhang; Zhao Zhang; Jiahuan Ren; Zhicheng Ji; Kang Miao; Suiyi Zhao; Huan Zheng; YanYan Wei; Kangliang Liu; Xiangcheng Du; Sijie Liu; Yingbin Zheng; Xingjiao Wu; Cheng Jin; Rajeev Irny; Sriharsha Koundinya; Vighnesh Kamath; Gaurav Khandelwal; Sunder Ali Khowaja; Jiseok Yoon; Ik Hyun Lee; Shijie Chen; Chengqiang Zhao; Huabin Yang; Zhongjian Zhang; Junjia Huang; Yanru Zhang
Title: NTIRE 2023 challenge on image denoising: Methods and results
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Pages: 1904-1920
Abstract: This paper reviews the NTIRE 2023 challenge on image denoising (σ = 50), with a focus on the proposed solutions and results. The aim is to obtain a network design capable of producing high-quality results with the best performance measured by PSNR for image denoising. Independent additive white Gaussian noise (AWGN) is assumed, and the noise level is 50. The challenge had 225 registered participants, and 16 teams made valid submissions. Together, they gauge the state of the art for image denoising.
Address: Vancouver; Canada; June 2023
Conference: CVPRW
Notes: MACO; CIC
Approved: no
Call Number: Admin @ si @ LZT2023
Serial: 3910
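For reference, the sketch below reproduces the challenge's degradation model and metric as stated in the abstract: AWGN at σ = 50 on the 0-255 scale, evaluated by PSNR. Function names are illustrative.

```python
import numpy as np

def add_awgn(img, sigma=50.0, seed=0):
    """img: float array in [0, 255]; returns the noisy observation."""
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, sigma, img.shape)

def psnr(clean, restored, peak=255.0):
    mse = np.mean((clean - restored) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.full((64, 64), 128.0)                  # toy constant image
noisy = add_awgn(clean)
print(f"noisy input PSNR: {psnr(clean, noisy):.2f} dB")  # ~14 dB
```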
 

 
Author: Spencer Low; Oliver Nina; Angel Sappa; Erik Blasch; Nathan Inkawhich
Title: Multi-Modal Aerial View Image Challenge: Translation From Synthetic Aperture Radar to Electro-Optical Domain Results-PBVS 2023
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Pages: 515-523
Abstract: This paper unveils the discoveries and outcomes of the inaugural iteration of the Multi-modal Aerial View Image Challenge (MAVIC), aimed at image translation. The primary objective of this competition is to stimulate research efforts towards the development of models capable of translating co-aligned images between multiple modalities. To accomplish the task of image translation, the competition utilizes images obtained from both synthetic aperture radar (SAR) and electro-optical (EO) sources. Specifically, the challenge centers on the translation from the SAR modality to the EO modality, an area of research that has garnered attention. The inaugural challenge demonstrates the feasibility of the task. The dataset utilized in this challenge is derived from the UNIfied COincident Optical and Radar for recognitioN (UNICORN) dataset. We introduce a new version of the UNICORN dataset focused on enabling the sensor-translation task. Performance evaluation is conducted using a combination of measures to ensure high-fidelity and high-accuracy translations.
Address: Vancouver; Canada; June 2023
Conference: CVPRW
Notes: MSIAU
Approved: no
Call Number: Admin @ si @ LNS2023a
Serial: 3913
 

 
Author: Rafael E. Rivadeneira; Angel Sappa; Boris X. Vintimilla; Chenyang Wang; Junjun Jiang; Xianming Liu; Zhiwei Zhong; Dai Bin; Li Ruodi; Li Shengye
Title: Thermal Image Super-Resolution Challenge Results-PBVS 2023
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Pages: 470-478
Abstract: This paper presents the results of two tracks of the fourth Thermal Image Super-Resolution (TISR) challenge, held at the Perception Beyond the Visible Spectrum (PBVS) 2023 workshop. Track-1 uses the same thermal image dataset as previous challenges, with 951 training images and 50 validation images at each resolution. In this track, two evaluations were conducted: the first consists of generating an SR image from a noisy HR thermal image downsampled by a factor of four, and the second consists of generating an SR image from a mid-resolution image and comparing it with its semi-registered HR image (acquired with another camera). The results of Track-1 outperformed those of last year's challenge. Track-2, on the other hand, uses a newly acquired dataset consisting of 160 registered visible and thermal images of the same scenario for training and 30 validation images. This year, more than 150 teams participated in the challenge tracks, demonstrating the community's ongoing interest in this topic.
Address: Vancouver; Canada; June 2023
Conference: CVPRW
Notes: MSIAU
Approved: no
Call Number: Admin @ si @ RSV2023
Serial: 3914
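A minimal sketch of the Track-1 input generation as described above: a noisy HR thermal image is downsampled by a factor of four to form the LR input. The noise level and names here are assumptions for illustration, not the challenge's exact pipeline.

```python
import numpy as np
from PIL import Image

def make_lr(hr_path, scale=4, sigma=10.0, seed=0):
    """Load an HR thermal image, add Gaussian noise (assumed level),
    and bicubically downsample by `scale` to form the LR input."""
    hr = np.asarray(Image.open(hr_path).convert("L"), dtype=np.float64)
    rng = np.random.default_rng(seed)
    noisy = np.clip(hr + rng.normal(0.0, sigma, hr.shape), 0, 255)
    h, w = noisy.shape
    return Image.fromarray(noisy.astype(np.uint8)).resize(
        (w // scale, h // scale), Image.BICUBIC)
```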
 

 
Author: Spencer Low; Oliver Nina; Angel Sappa; Erik Blasch; Nathan Inkawhich
Title: Multi-Modal Aerial View Object Classification Challenge Results-PBVS 2023
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Pages: 412-421
Abstract: This paper presents the findings and results of the third edition of the Multi-modal Aerial View Object Classification (MAVOC) challenge in a detailed and comprehensive manner. The challenge consists of two tracks. The primary aim of both tracks is to encourage research into building recognition models that utilize both synthetic aperture radar (SAR) and electro-optical (EO) imagery. Participating teams are encouraged to develop multi-modal approaches that incorporate complementary information from both domains. While the 2021 challenge demonstrated the feasibility of combining both modalities, the 2022 challenge expanded on the capability of multi-modal models. The 2023 challenge introduces a refined version of the UNICORN dataset and demonstrates significant improvements made. The 2023 challenge adopts an updated UNIfied CO-incident Optical and Radar for recognitioN (UNICORN V2) dataset and competition format. Two tasks are featured: SAR classification and SAR + EO classification. In addition to measuring the accuracy of models, we also introduce out-of-distribution measures to encourage model robustness. The majority of this paper is dedicated to discussing the top-performing methods and evaluating their performance on our blind test set. It is worth noting that all of the top ten teams outperformed the ResNet-50 baseline. The top team for SAR classification achieved a 173% performance improvement over the baseline, while the top team for SAR + EO classification achieved a 175% improvement.
Address: Vancouver; Canada; June 2023
Conference: CVPRW
Notes: MSIAU
Approved: no
Call Number: Admin @ si @ LNS2023b
Serial: 3915
 

 
Author: Chenshen Wu; Joost Van de Weijer
Title: Density Map Distillation for Incremental Object Counting
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Pages: 2505-2514
Abstract: We investigate the problem of incremental learning for object counting, where a method must learn to count a variety of object classes from a sequence of datasets. A naïve approach to incremental object counting would suffer from catastrophic forgetting: a dramatic performance drop on previous tasks. In this paper, we propose a new exemplar-free functional regularization method, called Density Map Distillation (DMD). During training, we introduce a new counter head for each task and a distillation loss to prevent forgetting of previous tasks. Additionally, we introduce a cross-task adaptor that projects the features of the current backbone onto the previous backbone. This projector allows the learning of new features while the backbone retains the relevant features for previous tasks. Finally, we set up experiments on incremental learning for counting new objects. Results confirm that our method greatly reduces catastrophic forgetting and outperforms existing methods.
Address: Vancouver; Canada; June 2023
Conference: CVPRW
Notes: MSIAU
Approved: no
Call Number: Admin @ si @ WuW2023
Serial: 3916
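The distillation term can be written compactly: the frozen previous model's density maps supervise the current model's old counter heads while the new head fits the current task. The sketch below is an illustrative formulation under that reading of the abstract, not the authors' code; names and the loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def dmd_loss(new_maps, old_maps, gt_map, task_id, lam=1.0):
    """new_maps: per-head density maps (B,1,H,W) from the current model;
    old_maps: maps from the frozen previous model for heads < task_id;
    gt_map: ground-truth density map for the current task."""
    count_loss = F.mse_loss(new_maps[task_id], gt_map)     # new task
    distill = sum(F.mse_loss(new_maps[k], old_maps[k].detach())
                  for k in range(task_id))                 # old tasks
    return count_loss + lam * distill
```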
 

 
Author: Hao Fang; Ajian Liu; Jun Wan; Sergio Escalera; Hugo Jair Escalante; Zhen Lei
Title: Surveillance Face Presentation Attack Detection Challenge
Type: Conference Article
Year: 2023
Publication: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops
Pages: 6360-6370
Abstract: Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, most studies have not considered long-distance scenarios. Specifically, compared with FAS in traditional settings such as phone unlocking, face payment, and self-service security inspection, FAS at long distances, as in station squares, parks, and self-service supermarkets, is equally important but has not been sufficiently explored yet. In order to fill this gap in the FAS community, we collect a large-scale Surveillance High-Fidelity Mask dataset (SuHiFiMask). SuHiFiMask contains 10,195 videos from 101 subjects of different age groups, collected by 7 mainstream surveillance cameras. Based on this dataset and protocol-3, which evaluates the robustness of algorithms under quality changes, we organized a face presentation attack detection challenge in surveillance scenarios. It attracted 180 teams for the development phase, with a total of 37 teams qualifying for the final round. The organization team re-verified and re-ran the submitted code and used the results as the final ranking. In this paper, we present an overview of the challenge, including an introduction to the dataset used, the definition of the protocol, the evaluation metrics, and the announcement of the competition results. Finally, we present the top-ranked algorithms and the research ideas provided by the competition for attack detection in long-range surveillance scenarios.
Address: Vancouver; Canada; June 2023
Conference: CVPRW
Notes: MSIAU
Approved: no
Call Number: Admin @ si @ FLW2023
Serial: 3917