|
Records |
Links |
|
Author |
Maedeh Aghaei; Mariella Dimiccoli; Petia Radeva |
![download PDF file pdf](img/file_PDF.gif)
|
|
Title |
All the people around me: face clustering in egocentric photo streams |
Type |
Conference Article |
|
Year |
2017 |
Publication |
24th International Conference on Image Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
face discovery; face clustering; deepmatching; bag-of-tracklets; egocentric photo-streams |
|
|
Abstract |
arXiv:1703.01790
Given an unconstrained stream of images captured by a wearable photo-camera (2 fpm), we propose an unsupervised bottom-up approach for automatically clustering the appearing faces into the individual identities present in these data. The problem is challenging since images are acquired under real-world conditions; hence, the visible appearance of the people in the images undergoes intensive variations. Our proposed pipeline consists of first arranging the photo-stream into events, then localizing the appearance of multiple people in them, and finally grouping the various appearances of the same person across different events. Experimental results on a dataset acquired by wearing a photo-camera during one month demonstrate the effectiveness of the proposed approach for the considered purpose. |
|
|
Address |
Beijing; China; September 2017 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIP |
|
|
Notes ![sorted by Notes field, ascending order (up)](img/sort_asc.gif) |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ EDR2017 |
Serial |
3025 |
|
Permanent link to this record |
|
|
|
|
Author |
Mireia Forns-Nadal; Federico Sem; Anna Mane; Laura Igual; Dani Guinart; Oscar Vilarroya |
![goto web page url](img/www.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
Increased Nucleus Accumbens Volume in First-Episode Psychosis |
Type |
Journal Article |
|
Year |
2017 |
Publication |
Psychiatry Research-Neuroimaging |
Abbreviated Journal |
PRN |
|
|
Volume |
263 |
Issue |
|
Pages |
57-60 |
|
|
Keywords |
|
|
|
Abstract |
The nucleus accumbens has been reported as a key structure in the neurobiology of schizophrenia. Studies analyzing structural abnormalities have shown conflicting results, possibly related to confounding factors. We investigated nucleus accumbens volume using manual delimitation in first-episode psychosis (FEP), controlling for age, cannabis use, and medication. Thirty-one FEP subjects who were naive or minimally exposed to antipsychotics and a control group were MRI scanned and clinically assessed from baseline to 6 months of follow-up. FEP subjects showed increased relative and total accumbens volumes. Clinical correlations with negative symptoms, duration of untreated psychosis, and cannabis use were not significant. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ FSM2017 |
Serial |
3028 |
|
Permanent link to this record |
|
|
|
|
Author |
Md. Mostafa Kamal Sarker; Mohammed Jabreel; Hatem A. Rashwan; Syeda Furruka Banu; Petia Radeva; Domenec Puig |
![download PDF file pdf](img/file_PDF.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
CuisineNet: Food Attributes Classification using Multi-scale Convolution Network |
Type |
Conference Article |
|
Year |
2018 |
Publication |
21st International Conference of the Catalan Association for Artificial Intelligence |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
365-372 |
|
|
Keywords |
|
|
|
Abstract |
The diversity of food and its attributes represents the culinary habits of people from different countries. Thus, this paper addresses the problem of identifying the food culture of people around the world and its flavor by classifying two main food attributes, cuisine and flavor. A deep learning model based on multi-scale convolutional networks is proposed for extracting more accurate features from input images. The aggregation of multi-scale convolution layers with different kernel sizes is also used for weighting the features resulting from different scales. In addition, a joint loss function based on Negative Log Likelihood (NLL) is used to fit the model probability to multi-labeled classes for the multi-modal classification task. Furthermore, this work provides a new dataset for food attributes, called Yummly48K, extracted from the popular food website Yummly. Our model is assessed on the constructed Yummly48K dataset. The experimental results show that our proposed method yields 65% and 62% average F1 score on the validation and test sets, outperforming the state-of-the-art models. |
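The joint NLL loss mentioned in the abstract above can be sketched in plain Python. This is a hedged illustration of the general idea only (summing per-attribute negative log-likelihood terms), not the paper's exact formulation; the function and argument names are hypothetical.

```python
import math

def joint_nll(cuisine_probs, flavor_probs, cuisine_label, flavor_label):
    """Joint negative log-likelihood over two food-attribute predictions.

    Sums the per-attribute NLL terms so that a single model can be fit to
    both labels at once (a sketch of the general idea, not the paper's
    exact loss).
    """
    return -(math.log(cuisine_probs[cuisine_label])
             + math.log(flavor_probs[flavor_label]))

# Toy example: 3 cuisine classes, 2 flavor classes.
loss = joint_nll([0.7, 0.2, 0.1], [0.4, 0.6], cuisine_label=0, flavor_label=1)
```

A perfectly confident model on both attributes yields a loss of zero; the loss grows as either predicted probability for the true label shrinks.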
|
|
Address |
Roses; Catalonia; October 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
CCIA |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ SJR2018 |
Serial |
3113 |
|
Permanent link to this record |
|
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig |
![download PDF file pdf](img/file_PDF.gif)
|
|
Title |
MACNet: Multi-scale Atrous Convolution Networks for Food Places Classification in Egocentric Photo-streams |
Type |
Conference Article |
|
Year |
2018 |
Publication |
European Conference on Computer Vision workshops |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
423-433 |
|
|
Keywords |
|
|
|
Abstract |
A first-person (wearable) camera continually captures unscripted interactions of the camera user with objects, people, and scenes, reflecting the wearer's personal and relational tendencies. Among these, interactions around food events are of particular interest, since the regulation of food intake and its duration is of great importance for protecting against diseases. Consequently, this work aims to develop a smart model that is able to determine the recurrence of a person at food places during a day. This model is based on a deep end-to-end model for automatic food-place recognition by analyzing egocentric photo-streams. In this paper, we apply multi-scale atrous convolution networks to extract the key features related to food places from the input images. The proposed model is evaluated on an in-house private dataset called “EgoFoodPlaces”. Experimental results show promising performance of food-place classification in egocentric photo-streams. |
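The multi-scale atrous (dilated) convolution idea in the abstract above can be illustrated in one dimension. This is a minimal pure-Python sketch, not the paper's network: with dilation d, kernel taps are spaced d samples apart, enlarging the receptive field without adding parameters, and responses at several dilation rates are collected as a multi-scale feature.

```python
def atrous_conv1d(signal, kernel, dilation):
    """1D atrous (dilated) convolution with 'valid' padding.

    The kernel taps are spaced `dilation` samples apart, so the
    receptive field grows with the dilation rate while the number of
    kernel weights stays fixed.
    """
    span = (len(kernel) - 1) * dilation
    out = []
    for start in range(len(signal) - span):
        out.append(sum(kernel[i] * signal[start + i * dilation]
                       for i in range(len(kernel))))
    return out

def multi_scale_features(signal, kernel, dilations=(1, 2, 4)):
    """Collect responses at several dilation rates (multi-scale)."""
    return [atrous_conv1d(signal, kernel, d) for d in dilations]
```

For example, `atrous_conv1d([1, 2, 3, 4, 5], [1, 1], 2)` pairs each sample with the one two positions ahead, giving `[4, 6, 8]`.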
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ECCVW |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ SRR2018b |
Serial |
3185 |
|
Permanent link to this record |
|
|
|
|
Author |
Giuseppe Pezzano; Oliver Diaz; Vicent Ribas Ripoll; Petia Radeva |
![goto web page url](img/www.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
CoLe-CNN+: Context learning – Convolutional neural network for COVID-19-Ground-Glass-Opacities detection and segmentation |
Type |
Journal Article |
|
Year |
2021 |
Publication |
Computers in Biology and Medicine |
Abbreviated Journal |
CBM |
|
|
Volume |
136 |
Issue |
|
Pages |
104689 |
|
|
Keywords |
|
|
|
Abstract |
The most common tool for population-wide COVID-19 identification is the Reverse Transcription-Polymerase Chain Reaction test, which detects the presence of the virus in throat (or sputum) swab samples. This test has a sensitivity between 59% and 71%. However, it does not provide precise information regarding the extent of the pulmonary infection. Moreover, it has been shown that, through the reading of a computed tomography (CT) scan, a clinician can obtain a more complete perspective of the severity of the disease. Therefore, we propose a comprehensive system for fully automated COVID-19 detection and lesion segmentation from CT scans, powered by deep learning strategies, to support the decision-making process for the diagnosis of COVID-19. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ PDR2021 |
Serial |
3635 |
|
Permanent link to this record |
|
|
|
|
Author |
Simone Balocco; Mauricio Gonzalez; Ricardo Ñancule; Petia Radeva; Gabriel Thomas |
![goto web page url](img/www.gif)
|
|
Title |
Calcified Plaque Detection in IVUS Sequences: Preliminary Results Using Convolutional Nets |
Type |
Conference Article |
|
Year |
2018 |
Publication |
International Workshop on Artificial Intelligence and Pattern Recognition |
Abbreviated Journal |
|
|
|
Volume |
11047 |
Issue |
|
Pages |
34-42 |
|
|
Keywords |
Intravascular ultrasound images; Convolutional nets; Deep learning; Medical image analysis |
|
|
Abstract |
The manual inspection of intravascular ultrasound (IVUS) images to detect clinically relevant patterns is a difficult and laborious task performed routinely by physicians. In this paper, we present a framework based on convolutional nets for the quick selection of IVUS frames containing arterial calcification, a pattern whose detection plays a vital role in the diagnosis of atherosclerosis. Preliminary experiments on a dataset acquired from eighty patients show that convolutional architectures improve the detections of a shallow classifier in terms of F1-measure, precision and recall. |
|
|
Address |
Cuba; September 2018 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IWAIPR |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ BGÑ2018 |
Serial |
3237 |
|
Permanent link to this record |
|
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Farhan Akram; Estefania Talavera; Syeda Furruka Banu; Petia Radeva; Domenec Puig |
![goto web page url](img/www.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
Recognizing Food Places in Egocentric Photo-Streams Using Multi-Scale Atrous Convolutional Networks and Self-Attention Mechanism |
Type |
Journal Article |
|
Year |
2019 |
Publication |
IEEE Access |
Abbreviated Journal |
ACCESS |
|
|
Volume |
7 |
Issue |
|
Pages |
39069-39082 |
|
|
Keywords |
|
|
|
Abstract |
Wearable sensors (e.g., lifelogging cameras) represent very useful tools to monitor people's daily habits and lifestyle. Wearable cameras are able to continuously capture different moments of the day of their wearers, their environment, and interactions with objects, people, and places reflecting their personal lifestyle. The food places where people eat, drink, and buy food, such as restaurants, bars, and supermarkets, can directly affect their daily dietary intake and behavior. Consequently, developing an automated monitoring system based on analyzing a person's food habits from daily recorded egocentric photo-streams of the food places can provide valuable means for people to improve their eating habits. This can be done by generating a detailed report of the time spent in specific food places by classifying the captured food place images into different groups. In this paper, we propose a self-attention mechanism with multi-scale atrous convolutional networks to generate discriminative features from image streams to recognize a predetermined set of food place categories. We apply our model on an egocentric food place dataset called “EgoFoodPlaces” that comprises 43,392 images captured by 16 individuals using a lifelogging camera. The proposed model achieved an overall classification accuracy of 80% on the “EgoFoodPlaces” dataset, outperforming baseline methods such as VGG16, ResNet50, and InceptionV3. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ SRA2019 |
Serial |
3296 |
|
Permanent link to this record |
|
|
|
|
Author |
Emanuel Sanchez Aimar; Petia Radeva; Mariella Dimiccoli |
![download PDF file pdf](img/file_PDF.gif)
![goto web page (via DOI) doi](img/doi.gif)
|
|
Title |
Social Relation Recognition in Egocentric Photostreams |
Type |
Conference Article |
|
Year |
2019 |
Publication |
26th International Conference on Image Processing |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
3227-3231 |
|
|
Keywords |
|
|
|
Abstract |
This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo-camera (2 fpm), by relying solely on what the camera is seeing. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories. Our method is a new deep learning architecture that exploits the hierarchical structure of the label space and relies on a set of social attributes estimated at frame level to provide a semantic representation of social interactions. Experimental results on the new EgoSocialRelation dataset demonstrate the effectiveness of our proposal. |
|
|
Address |
Taipei; Taiwan; September 2019 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
ICIP |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ SRD2019 |
Serial |
3370 |
|
Permanent link to this record |
|
|
|
|
Author |
Estefania Talavera; Nicolai Petkov; Petia Radeva |
![goto web page url](img/www.gif)
|
|
Title |
Towards Unsupervised Familiar Scene Recognition in Egocentric Videos |
Type |
Miscellaneous |
|
Year |
2019 |
Publication |
Arxiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
CoRR abs/1905.04093
Nowadays, there is an upsurge of interest in using lifelogging devices. Such devices generate huge amounts of image data; consequently, the need for automatic methods for analyzing and summarizing these data is drastically increasing. We present a new method for familiar scene recognition in egocentric videos, based on background pattern detection through automatically configurable COSFIRE filters. We present some experiments over egocentric data acquired with the Narrative Clip. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ TPR2019b |
Serial |
3379 |
|
Permanent link to this record |
|
|
|
|
Author |
Alejandro Cartas; Jordi Luque; Petia Radeva; Carlos Segura; Mariella Dimiccoli |
![goto web page url](img/www.gif)
|
|
Title |
How Much Does Audio Matter to Recognize Egocentric Object Interactions? |
Type |
Miscellaneous |
|
Year |
2019 |
Publication |
Arxiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
CoRR abs/1906.00634
Sounds are an important source of information about our daily interactions with objects. For instance, a significant number of people can discern the temperature of water that is being poured just by using the sense of hearing. However, only a few works have explored the use of audio for the classification of object interactions, either in conjunction with vision or as a single modality. In this preliminary work, we propose an audio model for egocentric action recognition and explore its usefulness on the parts of the problem (noun, verb, and action classification). Our model achieves a competitive result in terms of verb classification (34.26% accuracy) on a standard benchmark with respect to vision-based state-of-the-art systems, using a comparatively lighter architecture. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ CLR2019 |
Serial |
3383 |
|
Permanent link to this record |
|
|
|
|
Author |
Md. Mostafa Kamal Sarker; Hatem A. Rashwan; Mohamed Abdel-Nasser; Vivek Kumar Singh; Syeda Furruka Banu; Farhan Akram; Forhad U. H. Chowdhury; Kabir Ahmed Choudhury; Sylvie Chambon; Petia Radeva; Domenec Puig |
![goto web page url](img/www.gif)
|
|
Title |
MobileGAN: Skin Lesion Segmentation Using a Lightweight Generative Adversarial Network |
Type |
Miscellaneous |
|
Year |
2019 |
Publication |
Arxiv |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
CoRR abs/1907.00856
Skin lesion segmentation in dermoscopic images is a challenge due to their blurry and irregular boundaries. Most of the segmentation approaches based on deep learning are time- and memory-consuming due to their hundreds of millions of parameters. Consequently, it is difficult to apply them to real dermatoscope devices with limited GPU and memory resources. In this paper, we propose a lightweight and efficient Generative Adversarial Network (GAN) model, called MobileGAN, for skin lesion segmentation. More precisely, MobileGAN combines 1D non-bottleneck factorization networks with position and channel attention modules in a GAN model. The proposed model is evaluated on the test dataset of the ISBI 2017 challenge and the validation dataset of the ISIC 2018 challenge. Although the proposed network has only 2.35 million parameters, it is still comparable with the state of the art. The experimental results show that our MobileGAN obtains comparable performance with an accuracy of 97.61%. |
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ MRA2019 |
Serial |
3384 |
|
Permanent link to this record |
|
|
|
|
Author |
Manisha Das; Deep Gupta; Petia Radeva; Ashwini M. Bakde |
![goto web page url](img/www.gif)
|
|
Title |
Multi-scale decomposition-based CT-MR neurological image fusion using optimized bio-inspired spiking neural model with meta-heuristic optimization |
Type |
Journal Article |
|
Year |
2021 |
Publication |
International Journal of Imaging Systems and Technology |
Abbreviated Journal |
IMA |
|
|
Volume |
31 |
Issue |
4 |
Pages |
2170-2188 |
|
|
Keywords |
|
|
|
Abstract |
Multi-modal medical image fusion plays an important role in clinical diagnosis and works as an assistance model for clinicians. In this paper, a computed tomography-magnetic resonance (CT-MR) image fusion model is proposed using an optimized bio-inspired spiking feedforward neural network in different decomposition domains. First, the source images are decomposed into base (low-frequency) and detail (high-frequency) layer components. The low-frequency subbands are fused using texture energy measures to capture the local energy, contrast, and small edges in the fused image. The high-frequency coefficients are fused using firing maps obtained by a pixel-activated neural model whose parameters are optimized with three different techniques, namely differential evolution, cuckoo search, and gray wolf optimization, individually. In the optimization model, a fitness function is computed based on the edge index of the resulting fused images, which helps to extract and preserve the sharp edges present in the source CT and MR images. To validate the fusion performance, a detailed comparative analysis is presented between the proposed and state-of-the-art methods in terms of quantitative and qualitative measures along with computational complexity. Experimental results show that the proposed method produces significantly better visual quality of fused images while outperforming the existing methods. |
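The base/detail decomposition-and-fusion pipeline sketched in the abstract above can be illustrated on a 1D signal. This is a deliberately minimal stand-in: a moving average plays the role of the low-frequency base layer, the residual is the detail layer, and the fusion rules (average for base, maximum absolute value for detail) are generic placeholders, not the paper's texture-energy measures or spiking-neural firing maps.

```python
def smooth(row, radius=1):
    """Moving-average 'base' (low-frequency) layer of a 1D signal."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def fuse(ct_row, mr_row):
    """Fuse two aligned rows via a base/detail decomposition.

    Base layers are averaged; detail layers are fused by picking the
    coefficient with the larger magnitude (the stronger edge).
    """
    base_ct, base_mr = smooth(ct_row), smooth(mr_row)
    detail_ct = [a - b for a, b in zip(ct_row, base_ct)]
    detail_mr = [a - b for a, b in zip(mr_row, base_mr)]
    fused_base = [(a + b) / 2 for a, b in zip(base_ct, base_mr)]
    fused_detail = [a if abs(a) >= abs(b) else b
                    for a, b in zip(detail_ct, detail_mr)]
    return [b + d for b, d in zip(fused_base, fused_detail)]
```

A sanity check on any such scheme: fusing a signal with itself should reconstruct it exactly, since base and detail layers coincide.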
|
|
Address |
|
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
|
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ DGR2021a |
Serial |
3630 |
|
Permanent link to this record |
|
|
|
|
Author |
Nil Ballus; Bhalaji Nagarajan; Petia Radeva |
![goto web page url](img/www.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
Opt-SSL: An Enhanced Self-Supervised Framework for Food Recognition |
Type |
Conference Article |
|
Year |
2022 |
Publication |
10th Iberian Conference on Pattern Recognition and Image Analysis |
Abbreviated Journal |
|
|
|
Volume |
13256 |
Issue |
|
Pages |
|
|
|
Keywords |
Self-supervised; Contrastive learning; Food recognition |
|
|
Abstract |
Self-supervised learning has been showing upbeat performance in several computer vision tasks. The popular contrastive methods make use of a Siamese architecture with different loss functions. In this work, we go deeper into two very recent state-of-the-art frameworks, namely SimSiam and Barlow Twins. Inspired by them, we propose a new self-supervised learning method, which we call Opt-SSL, that combines both image and feature contrasting. We validate the proposed method on the food recognition task, showing that our proposed framework enables the self-learning networks to learn better visual representations. |
|
|
Address |
Aveiro; Portugal; May 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
LNCS |
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
IbPRIA |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ BNR2022 |
Serial |
3782 |
|
Permanent link to this record |
|
|
|
|
Author |
Vishwesh Pillai; Pranav Mehar; Manisha Das; Deep Gupta; Petia Radeva |
![goto web page url](img/www.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
Integrated Hierarchical and Flat Classifiers for Food Image Classification using Epistemic Uncertainty |
Type |
Conference Article |
|
Year |
2022 |
Publication |
IEEE International Conference on Signal Processing and Communications |
Abbreviated Journal |
|
|
|
Volume |
|
Issue |
|
Pages |
|
|
|
Keywords |
|
|
|
Abstract |
The problem of food image recognition is an essential one in today's context because health conditions such as diabetes, obesity, and heart disease require constant monitoring of a person's diet. To automate this process, several models are available for recognizing food images. Due to the considerable number of unique food dishes and various cuisines, a traditional flat classifier ceases to perform well. To address this issue, prediction schemes consisting of both flat and hierarchical classifiers are used, with an analysis of epistemic uncertainty deciding when to switch between the classifiers. However, the accuracy of the predictions made using epistemic uncertainty data remains considerably low. Therefore, this paper presents a prediction scheme using three different threshold criteria that help to increase the accuracy of epistemic uncertainty predictions. The performance of the proposed method is demonstrated in several experiments performed on the MAFood-121 dataset. The experimental results validate the performance of the proposal and show that the proposed threshold criteria help to increase the overall accuracy of the predictions by correctly classifying the uncertainty distribution of the samples. |
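The uncertainty-based switching described in the abstract above can be sketched as follows. This is an illustrative simplification under stated assumptions: predictive entropy over stochastic forward passes (e.g. MC dropout) stands in as the epistemic-uncertainty estimate, and a single cut-off replaces the paper's three threshold criteria; all names are hypothetical.

```python
import math

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution over several
    stochastic forward passes -- a common proxy for epistemic
    uncertainty (low entropy = confident model)."""
    n = len(prob_samples)
    n_classes = len(prob_samples[0])
    mean = [sum(s[c] for s in prob_samples) / n for c in range(n_classes)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def select_prediction(flat_pred, hier_pred, uncertainty, threshold):
    """Trust the flat classifier when uncertainty is below the
    threshold; otherwise fall back to the hierarchical classifier."""
    return flat_pred if uncertainty < threshold else hier_pred
```

Confident, consistent samples yield zero entropy and keep the flat prediction; disagreeing samples push the entropy up and trigger the hierarchical fallback.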
|
|
Address |
Bangalore; India; July 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
SPCOM |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ PMD2022 |
Serial |
3796 |
|
Permanent link to this record |
|
|
|
|
Author |
Bhalaji Nagarajan; Ricardo Marques; Marcos Mejia; Petia Radeva |
![goto web page url](img/www.gif)
![find record details (via OpenURL) openurl](img/xref.gif)
|
|
Title |
Class-conditional Importance Weighting for Deep Learning with Noisy Labels |
Type |
Conference Article |
|
Year |
2022 |
Publication |
17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications |
Abbreviated Journal |
|
|
|
Volume |
5 |
Issue |
|
Pages |
679-686 |
|
|
Keywords |
Noisy Labeling; Loss Correction; Class-conditional Importance Weighting; Learning with Noisy Labels |
|
|
Abstract |
Large-scale accurate labels are very important for training Deep Neural Networks and assuring their high performance. However, it is very expensive to create a clean dataset, since this usually relies on human interaction. For this reason, the labelling process is made cheap, with the trade-off of having noisy labels. Learning with Noisy Labels is an active and, at the same time, very challenging area of research. The recent advances in self-supervised learning and robust loss functions have helped to advance noisy-label research. In this paper, we propose a loss correction method that relies on dynamic weights computed based on the model training. We extend the existing Contrast to Divide algorithm coupled with DivideMix using a new class-conditional weighting scheme. We validate the method using the standard noise experiments and achieve encouraging results. |
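The class-conditional weighting idea in the abstract above can be sketched in plain Python. This is a hedged illustration of the general principle (down-weight samples whose loss is high relative to other samples of the same class, since those are more likely mislabeled), not the paper's exact C2D/DivideMix formulation; function and variable names are hypothetical.

```python
def class_conditional_weights(losses, labels, num_classes):
    """Per-sample weights from per-class loss statistics.

    Samples with loss at or below their class mean get full weight 1.0
    (likely clean); samples with a loss above their class mean are
    down-weighted proportionally (likely noisy).
    """
    # Mean loss per class.
    sums = [0.0] * num_classes
    counts = [0] * num_classes
    for loss, y in zip(losses, labels):
        sums[y] += loss
        counts[y] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    # Weight = min(1, class_mean / sample_loss).
    return [min(1.0, means[y] / loss) if loss > 0 else 1.0
            for loss, y in zip(losses, labels)]
```

Normalizing by the *class* mean rather than a global mean avoids penalizing intrinsically hard classes, whose losses are uniformly higher.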
|
|
Address |
Virtual; February 2022 |
|
|
Corporate Author |
|
Thesis |
|
|
|
Publisher |
|
Place of Publication |
|
Editor |
|
|
|
Language |
|
Summary Language |
|
Original Title |
|
|
|
Series Editor |
|
Series Title |
|
Abbreviated Series Title |
|
|
|
Series Volume |
|
Series Issue |
|
Edition |
|
|
|
ISSN |
|
ISBN |
|
Medium |
|
|
|
Area |
|
Expedition |
|
Conference |
VISAPP |
|
|
Notes |
MILAB; not mentioned |
Approved |
no |
|
|
Call Number |
Admin @ si @ NMM2022 |
Serial |
3798 |
|
Permanent link to this record |