Records | |||||
---|---|---|---|---|---|
Author | Matej Kristan; Jiri Matas; Martin Danelljan; Michael Felsberg; Hyung Jin Chang; Luka Cehovin Zajc; Alan Lukezic; Ondrej Drbohlav; Zhongqun Zhang; Khanh-Tung Tran; Xuan-Son Vu; Johanna Bjorklund; Christoph Mayer; Yushan Zhang; Lei Ke; Jie Zhao; Gustavo Fernandez; Noor Al-Shakarji; Dong An; Michael Arens; Stefan Becker; Goutam Bhat; Sebastian Bullinger; Antoni B. Chan; Shijie Chang; Hanyuan Chen; Xin Chen; Yan Chen; Zhenyu Chen; Yangming Cheng; Yutao Cui; Chunyuan Deng; Jiahua Dong; Matteo Dunnhofer; Wei Feng; Jianlong Fu; Jie Gao; Ruize Han; Zeqi Hao; Jun-Yan He; Keji He; Zhenyu He; Xiantao Hu; Kaer Huang; Yuqing Huang; Yi Jiang; Ben Kang; Jin-Peng Lan; Hyungjun Lee; Chenyang Li; Jiahao Li; Ning Li; Wangkai Li; Xiaodi Li; Xin Li; Pengyu Liu; Yue Liu; Huchuan Lu; Bin Luo; Ping Luo; Yinchao Ma; Deshui Miao; Christian Micheloni; Kannappan Palaniappan; Hancheol Park; Matthieu Paul; HouWen Peng; Zekun Qian; Gani Rahmon; Norbert Scherer-Negenborn; Pengcheng Shao; Wooksu Shin; Elham Soltani Kazemi; Tianhui Song; Rainer Stiefelhagen; Rui Sun; Chuanming Tang; Zhangyong Tang; Imad Eddine Toubal; Jack Valmadre; Joost van de Weijer; Luc Van Gool; Jash Vira; Stephane Vujasinovic; Cheng Wan; Jia Wan; Dong Wang; Fei Wang; Feifan Wang; He Wang; Limin Wang; Song Wang; Yaowei Wang; Zhepeng Wang; Gangshan Wu; Jiannan Wu; Qiangqiang Wu; Xiaojun Wu; Anqi Xiao; Jinxia Xie; Chenlong Xu; Min Xu; Tianyang Xu; Yuanyou Xu; Bin Yan; Dawei Yang; Ming-Hsuan Yang; Tianyu Yang; Yi Yang; Zongxin Yang; Xuanwu Yin; Fisher Yu; Hongyuan Yu; Qianjin Yu; Weichen Yu; YongSheng Yuan; Zehuan Yuan; Jianlin Zhang; Lu Zhang; Tianzhu Zhang; Guodongfang Zhao; Shaochuan Zhao; Yaozong Zheng; Bineng Zhong; Jiawen Zhu; Xuefeng Zhu; Yueting Zhuang; ChengAo Zong; Kunlong Zuo | ||||
Title | The First Visual Object Tracking Segmentation VOTS2023 Challenge Results | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops | Abbreviated Journal | |
Volume | Issue | Pages | 1796-1818 | ||
Keywords | |||||
Abstract | The Visual Object Tracking Segmentation VOTS2023 challenge is the eleventh annual tracker benchmarking activity of the VOT initiative. This challenge is the first to merge short-term and long-term as well as single-target and multiple-target tracking, with segmentation masks as the only target location specification. A new dataset was created; the ground truth has been withheld to prevent overfitting. New performance measures and evaluation protocols have been created, along with a new toolkit and an evaluation server. Results of the 47 presented trackers indicate that modern tracking frameworks are well-suited to deal with the convergence of short-term and long-term tracking, and that multiple- and single-target tracking can be considered a single problem. A leaderboard with participating trackers' details, the source code, the datasets, and the evaluation kit are publicly available at the challenge website: https://www.votchallenge.net/vots2023/. | ||||
Address | Paris; France; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ KMD2023 | Serial | 3939 | ||
Permanent link to this record | |||||
Author | Albin Soutif; Antonio Carta; Andrea Cossu; Julio Hurtado; Hamed Hemati; Vincenzo Lomonaco; Joost Van de Weijer | ||||
Title | A Comprehensive Empirical Evaluation on Online Continual Learning | Type | Conference Article | ||
Year | 2023 | Publication | Visual Continual Learning (ICCV-W) | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Online continual learning aims to get closer to a live learning experience by learning directly on a stream of data with a temporally shifting distribution while storing a minimum amount of data from that stream. In this empirical evaluation, we evaluate various methods from the literature that tackle online continual learning. More specifically, we focus on the class-incremental setting in the context of image classification, where the learner must learn new classes incrementally from a stream of data. We compare these methods on the Split-CIFAR100 and Split-TinyImagenet benchmarks, and measure their average accuracy, forgetting, stability, and quality of the representations, to evaluate various aspects of each algorithm not only at the end of training but also during the whole training period. We find that most methods suffer from stability and underfitting issues. However, the learned representations are comparable to those of i.i.d. training under the same computational budget. No clear winner emerges from the results, and basic experience replay, when properly tuned and implemented, is a very strong baseline. We release our modular and extensible codebase at this https URL, based on the avalanche framework, to reproduce our results and encourage future research. | ||||
Address | Paris; France; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICCVW | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ SCC2023 | Serial | 3938 | ||
Permanent link to this record | |||||
Author | Zahra Raisi-Estabragh; Carlos Martin-Isla; Louise Nissen; Liliana Szabo; Victor M. Campello; Sergio Escalera; Simon Winther; Morten Bottcher; Karim Lekadir; Steffen E. Petersen | ||||
Title | Radiomics analysis enhances the diagnostic performance of CMR stress perfusion: a proof-of-concept study using the Dan-NICAD dataset | Type | Journal Article | ||
Year | 2023 | Publication | Frontiers in Cardiovascular Medicine | Abbreviated Journal | FCM |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | |||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | |||
Notes | HUPBA | Approved | no | ||
Call Number | Admin @ si @ RMN2023 | Serial | 3937 | ||
Permanent link to this record | |||||
Author | ChuanMing Fang; Kai Wang; Joost Van de Weijer | ||||
Title | IterInv: Iterative Inversion for Pixel-Level T2I Models | Type | Conference Article | ||
Year | 2023 | Publication | 37th Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Large-scale text-to-image diffusion models have been a ground-breaking development in generating convincing images that follow an input text prompt. The goal of image editing research is to give users control over the generated images by modifying the text prompt. Current image editing techniques rely on DDIM inversion as a common practice, based on Latent Diffusion Models (LDM). However, large pretrained T2I models that work in the latent space, such as LDM, suffer from losing details due to the first compression stage with an autoencoder mechanism. Instead, another mainstream T2I pipeline working at the pixel level, such as Imagen and DeepFloyd-IF, avoids this problem. These pipelines are commonly composed of several stages, normally a text-to-image stage followed by several super-resolution stages. In this case, DDIM inversion is unable to find the initial noise that generates the original image, given that the super-resolution diffusion models are not compatible with the DDIM technique. According to our experimental findings, iteratively concatenating the noisy image as the condition is the root of this problem. Based on this observation, we develop an iterative inversion (IterInv) technique for this stream of T2I models and verify IterInv with the open-source DeepFloyd-IF model. By combining our method IterInv with a popular image editing method, we demonstrate the application prospects of IterInv. The code will be released at this https URL. | ||||
Address | New Orleans; USA; December 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NEURIPS | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ FWW2023 | Serial | 3936 | ||
Permanent link to this record | |||||
Author | Kai Wang; Fei Yang; Shiqi Yang; Muhammad Atif Butt; Joost Van de Weijer | ||||
Title | Dynamic Prompt Learning: Addressing Cross-Attention Leakage for Text-Based Image Editing | Type | Conference Article | ||
Year | 2023 | Publication | 37th Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Poster | ||||
Address | New Orleans; USA; December 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NEURIPS | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ WYY2023 | Serial | 3935 | ||
Permanent link to this record | |||||
Author | Dipam Goswami; Yuyang Liu; Bartlomiej Twardowski; Joost Van de Weijer | ||||
Title | FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning | Type | Conference Article | ||
Year | 2023 | Publication | 37th Annual Conference on Neural Information Processing Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | Poster | ||||
Address | New Orleans; USA; December 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | NEURIPS | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ GLT2023 | Serial | 3934 | ||
Permanent link to this record | |||||
Author | Alejandro Ariza-Casabona; Bartlomiej Twardowski; Tri Kurniawan Wijaya | ||||
Title | Exploiting Graph Structured Cross-Domain Representation for Multi-domain Recommendation | Type | Conference Article | ||
Year | 2023 | Publication | European Conference on Information Retrieval – ECIR 2023: Advances in Information Retrieval | Abbreviated Journal | |
Volume | 13980 | Issue | Pages | 49–65 | |
Keywords | |||||
Abstract | Multi-domain recommender systems benefit from cross-domain representation learning and positive knowledge transfer. Both can be achieved by introducing a specific modeling of input data (i.e. disjoint history) or trying dedicated training regimes. At the same time, treating domains as separate input sources becomes a limitation as it does not capture the interplay that naturally exists between domains. In this work, we efficiently learn multi-domain representation of sequential users’ interactions using graph neural networks. We use temporal intra- and inter-domain interactions as contextual information for our method called MAGRec (short for Multi-domAin Graph-based Recommender). To better capture all relations in a multi-domain setting, we learn two graph-based sequential representations simultaneously: domain-guided for recent user interest, and general for long-term interest. This approach helps to mitigate the negative knowledge transfer problem from multiple domains and improve overall representation. We perform experiments on publicly available datasets in different scenarios where MAGRec consistently outperforms state-of-the-art methods. Furthermore, we provide an ablation study and discuss further extensions of our method. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ECIR | ||
Notes | LAMP | Approved | no | ||
Call Number | Admin @ si @ ATK2023 | Serial | 3933 | ||
Permanent link to this record | |||||
Author | Gisel Bastidas-Guacho; Patricio Moreno; Boris X. Vintimilla; Angel Sappa | ||||
Title | Application on the Loop of Multimodal Image Fusion: Trends on Deep-Learning Based Approaches | Type | Conference Article | ||
Year | 2023 | Publication | 13th International Conference on Pattern Recognition Systems | Abbreviated Journal | |
Volume | 14234 | Issue | Pages | 25–36 | |
Keywords | |||||
Abstract | Multimodal image fusion allows the combination of information from different modalities, which is useful for tasks such as object detection, edge detection, and tracking, to name a few. Using the fused representation for applications results in better task performance. There are several image fusion approaches, which have been summarized in surveys. However, the existing surveys focus on image fusion approaches where the application on the loop of multimodal image fusion is not considered. On the contrary, this study summarizes deep learning-based multimodal image fusion for computer vision (e.g., object detection) and image processing applications (e.g., semantic segmentation), that is, approaches where the application module leverages the multimodal fusion process to enhance the final result. Firstly, we introduce image fusion and the existing general frameworks for image fusion tasks such as multifocus, multiexposure and multimodal. Then, we describe the multimodal image fusion approaches. Next, we review the state-of-the-art deep learning multimodal image fusion approaches for vision applications. Finally, we conclude our survey with the trends of task-driven multimodal image fusion. | ||||
Address | Guayaquil; Ecuador; July 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | ICPRS | ||
Notes | MSIAU | Approved | no | ||
Call Number | Admin @ si @ BMV2023 | Serial | 3932 | ||
Permanent link to this record | |||||
Author | Siyang Song; Micol Spitale; Cheng Luo; German Barquero; Cristina Palmero; Sergio Escalera; Michel Valstar; Tobias Baur; Fabien Ringeval; Elisabeth Andre; Hatice Gunes | ||||
Title | REACT2023: The First Multiple Appropriate Facial Reaction Generation Challenge | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 31st ACM International Conference on Multimedia | Abbreviated Journal | |
Volume | Issue | Pages | 9620–9624 | ||
Keywords | |||||
Abstract | The Multiple Appropriate Facial Reaction Generation Challenge (REACT2023) is the first competition event focused on evaluating multimedia processing and machine learning techniques for generating human-appropriate facial reactions in various dyadic interaction scenarios, with all participants competing strictly under the same conditions. The goal of the challenge is to provide the first benchmark test set for multi-modal information processing and to foster collaboration among the audio, visual, and audio-visual behaviour analysis and behaviour generation (a.k.a. generative AI) communities, to compare the relative merits of approaches to automatic appropriate facial reaction generation under different spontaneous dyadic interaction conditions. This paper presents: (i) the novelties, contributions and guidelines of the REACT2023 challenge; (ii) the dataset utilized in the challenge; and (iii) the performance of the baseline systems on the two proposed sub-challenges: Offline Multiple Appropriate Facial Reaction Generation and Online Multiple Appropriate Facial Reaction Generation, respectively. The challenge baseline code is publicly available at https://github.com/reactmultimodalchallenge/baseline_react2023. | ||||
Address | Ottawa; Canada; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MM | ||
Notes | HUPBA | Approved | no | ||
Call Number | Admin @ si @ SSL2023 | Serial | 3931 | ||
Permanent link to this record | |||||
Author | Yi Xiao; Felipe Codevilla; Diego Porres; Antonio Lopez | ||||
Title | Scaling Vision-Based End-to-End Autonomous Driving with Multi-View Attention Learning | Type | Conference Article | ||
Year | 2023 | Publication | International Conference on Intelligent Robots and Systems | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | In end-to-end driving, human driving demonstrations are used to train perception-based driving models by imitation learning. This process is supervised by vehicle signals (e.g., steering angle, acceleration) but does not require extra costly supervision (human labeling of sensor data). As a representative of such vision-based end-to-end driving models, CILRS is commonly used as a baseline against which new driving models are compared. So far, some recent models have achieved better performance than CILRS by using expensive sensor suites and/or large amounts of human-labeled data for training. Given the difference in performance, one may think that it is not worth pursuing vision-based pure end-to-end driving. However, we argue that this approach still has great value and potential considering cost and maintenance. In this paper, we present CIL++, which improves on CILRS by both processing higher-resolution images using a human-inspired HFOV as an inductive bias and incorporating a proper attention mechanism. CIL++ achieves competitive performance compared to models that are more costly to develop. We propose to replace CILRS with CIL++ as a strong vision-based pure end-to-end driving baseline supervised only by vehicle signals and trained by conditional imitation learning. | ||||
Address | Detroit; USA; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IROS | ||
Notes | ADAS | Approved | no | ||
Call Number | Admin @ si @ XCP2023 | Serial | 3930 | ||
Permanent link to this record | |||||
Author | Christian Keilstrup Ingwersen; Artur Xarles; Albert Clapes; Meysam Madadi; Janus Nortoft Jensen; Morten Rieger Hannemose; Anders Bjorholm Dahl; Sergio Escalera | ||||
Title | Video-based Skill Assessment for Golf: Estimating Golf Handicap | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 6th International Workshop on Multimedia Content Analysis in Sports | Abbreviated Journal | |
Volume | Issue | Pages | 31-39 | ||
Keywords | |||||
Abstract | Automated skill assessment in sports using video-based analysis holds great potential for revolutionizing coaching methodologies. This paper focuses on the problem of skill determination in golfers by leveraging deep learning models applied to a large database of video recordings of golf swings. We investigate different regression-, ranking- and classification-based methods and compare them to a simple baseline approach. The performance is evaluated using mean squared error (MSE) as well as by computing the percentage of correctly ranked pairs based on the Kendall correlation. Our results demonstrate an improvement over the baseline, with a 35% lower mean squared error and 68% correctly ranked pairs. However, achieving fine-grained skill assessment remains challenging. This work contributes to the development of AI-driven coaching systems and advances the understanding of video-based skill determination in the context of golf. | ||||
Address | Ottawa; Canada; October 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | MMSports | ||
Notes | HUPBA | Approved | no | ||
Call Number | Admin @ si @ KXC2023 | Serial | 3929 | ||
Permanent link to this record | |||||
Author | David Dueñas; Mostafa Kamal; Petia Radeva | ||||
Title | Efficient Deep Learning Ensemble for Skin Lesion Classification | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 303-314 | ||
Keywords | |||||
Abstract | Vision Transformers (ViTs) are deep learning techniques that have been gaining popularity in recent years. In this work, we study the performance of ViTs and Convolutional Neural Networks (CNNs) on skin lesion classification tasks, specifically melanoma diagnosis. We show that, regardless of the performance of either architecture, an ensemble of them can improve their generalization. We also present an adaptation of the Gram-OOD* method (detecting out-of-distribution (OOD) samples using Gram matrices) for skin lesion images. Moreover, the integration of super-convergence was critical to success in building models under strict computing and training time constraints. We evaluated our ensemble of ViTs and CNNs, demonstrating that generalization is enhanced, by placing first in the 2019 and third in the 2020 ISIC Challenge Live Leaderboards (available at https://challenge.isic-archive.com/leaderboards/live/). | ||||
Address | Lisboa; Portugal; February 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ DKR2023 | Serial | 3928 | ||
Permanent link to this record | |||||
Author | Patricia Suarez; Angel Sappa | ||||
Title | Toward a Thermal Image-Like Representation | Type | Conference Article | ||
Year | 2023 | Publication | Proceedings of the 18th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | Abbreviated Journal | |
Volume | Issue | Pages | 133-140 | ||
Keywords | |||||
Abstract | This paper proposes a novel model to obtain thermal image-like representations to be used as input in any thermal image compressive sensing approach (e.g., thermal image filtering, enhancement, super-resolution). Thermal images offer interesting information about the objects in the scene, in addition to their temperature. Unfortunately, in most cases thermal cameras acquire low resolution/quality images. Hence, in order to improve these images, several state-of-the-art approaches exploit complementary information from a low-cost channel (visible image) to increase the image quality of an expensive channel (infrared image). In these SOTA approaches, visible images are fused at different levels without considering that the images capture information in different bands of the spectrum. In this paper, a novel approach is proposed to generate thermal image-like representations from low-cost visible images by means of a contrastive cycled GAN network. The obtained representations (synthetic thermal images) can later be used to improve the low-quality thermal image of the same scene. Experimental results on different datasets are presented. | ||||
Address | Lisboa; Portugal; February 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | VISIGRAPP | ||
Notes | MSIAU | Approved | no | ||
Call Number | Admin @ si @ SuS2023b | Serial | 3927 | ||
Permanent link to this record | |||||
Author | Guillermo Torres; Jan Rodríguez Dueñas; Sonia Baeza; Antoni Rosell; Carles Sanchez; Debora Gil | ||||
Title | Prediction of Malignancy in Lung Cancer using several strategies for the fusion of Multi-Channel Pyradiomics Images | Type | Conference Article | ||
Year | 2023 | Publication | 7th Workshop on Digital Image Processing for Medical and Automotive Industry in the framework of SYNASC 2023 | Abbreviated Journal | |
Volume | Issue | Pages | |||
Keywords | |||||
Abstract | This study shows the generation process and the subsequent study of the representation space obtained by extracting GLCM texture features from computed tomography (CT) scans of pulmonary nodules (PN). For this, data from 92 patients from the Germans Trias i Pujol University Hospital were used. The workflow focuses on feature extraction using Pyradiomics and the VGG16 Convolutional Neural Network (CNN). The aim of the study is to assess whether the data obtained have a positive impact on the diagnosis of lung cancer (LC). To design a machine learning (ML) model training method that allows generalization, we train SVM and neural network (NN) models, evaluating diagnostic performance using metrics defined at the slice and nodule level. | ||||
Address | |||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | |||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | DIPMAI | ||
Notes | IAM | Approved | no | ||
Call Number | Admin @ si @ TRB2023 | Serial | 3926 | ||
Permanent link to this record | |||||
Author | Albert Tatjer; Bhalaji Nagarajan; Ricardo Marques; Petia Radeva | ||||
Title | CCLM: Class-Conditional Label Noise Modelling | Type | Conference Article | ||
Year | 2023 | Publication | 11th Iberian Conference on Pattern Recognition and Image Analysis | Abbreviated Journal | |
Volume | 14062 | Issue | Pages | 3-14 | |
Keywords | |||||
Abstract | The performance of deep neural networks highly depends on the quality and volume of the training data. However, cost-effective labelling processes such as crowdsourcing and web crawling often lead to data with noisy (i.e., wrong) labels. Making models robust to this label noise is thus of prime importance. A common approach is using loss distributions to model the label noise. However, the robustness of these methods highly depends on the accuracy of the division of the training set into clean and noisy samples. In this work, we dive into this research direction, highlighting the existing problem of treating this distribution globally, and propose a class-conditional approach to split the clean and noisy samples. We apply our approach to the popular DivideMix algorithm and show how the local treatment fares better than the global treatment of the loss distribution. We validate our hypothesis on two popular benchmark datasets and show substantial improvements over the baseline experiments. We further analyze the effectiveness of the proposal using two different metrics – Noise Division Accuracy and Classiness. | ||||
Address | Alicante; Spain; June 2023 | ||||
Corporate Author | Thesis | ||||
Publisher | Place of Publication | Editor | |||
Language | Summary Language | Original Title | |||
Series Editor | Series Title | Abbreviated Series Title | LNCS | ||
Series Volume | Series Issue | Edition | |||
ISSN | ISBN | Medium | |||
Area | Expedition | Conference | IbPRIA | ||
Notes | MILAB | Approved | no | ||
Call Number | Admin @ si @ TNM2023 | Serial | 3925 | ||
Permanent link to this record |