Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Farhan Akram, Vivek Kumar Singh, Syeda Furruka Banu, Forhad U H Chowdhury, et al. (2021). SLSNet: Skin lesion segmentation using a lightweight generative adversarial network. ESWA - Expert Systems With Applications, 183, 115433.
Abstract: The determination of precise skin lesion boundaries in dermoscopic images using automated methods faces many challenges, most importantly the presence of hair, inconspicuous lesion edges, low contrast, and variability in the color, texture, and shape of skin lesions. Existing deep learning-based skin lesion segmentation algorithms are expensive in terms of computational time and memory, so running them requires a powerful GPU and high-bandwidth memory, which are not available in dermoscopy devices. This article therefore proposes a lightweight, efficient generative adversarial network (GAN) model, called SLSNet, that achieves precise skin lesion segmentation with minimal resources by combining 1-D kernel factorized networks, position and channel attention, and multiscale aggregation mechanisms with a GAN model. The 1-D kernel factorized network reduces the computational cost of 2-D filtering. The position and channel attention modules enhance the discriminative ability between lesion and non-lesion feature representations in the spatial and channel dimensions, respectively. A multiscale block aggregates the coarse-to-fine features of input skin images and reduces the effect of artifacts. SLSNet is evaluated on two publicly available datasets: ISBI 2017 and ISIC 2018. Although SLSNet has only 2.35 million parameters, the experimental results demonstrate that it achieves segmentation results on a par with state-of-the-art skin lesion segmentation methods, with an accuracy of 97.61% and Dice and Jaccard similarity coefficients of 90.63% and 81.98%, respectively. SLSNet runs at more than 110 frames per second (FPS) on a single GTX 1080 Ti GPU, faster than well-known deep learning-based image segmentation models such as FCN. Therefore, SLSNet can be used in practical dermoscopic applications.
|
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Farhan Akram, Syeda Furruka Banu, Adel Saleh, Vivek Kumar Singh, et al. (2018). SLSDeep: Skin Lesion Segmentation Based on Dilated Residual and Pyramid Pooling Networks. In 21st International Conference on Medical Image Computing & Computer Assisted Intervention (Vol. 2, pp. 21–29).
Abstract: Skin lesion segmentation (SLS) in dermoscopic images is a crucial task for automated diagnosis of melanoma. In this paper, we present a robust deep learning SLS model, called SLSDeep, represented as an encoder-decoder network. The encoder is constructed from dilated residual layers, while the decoder consists of a pyramid pooling network followed by three convolution layers. Unlike traditional methods employing a cross-entropy loss, we investigated a loss function combining Negative Log Likelihood (NLL) and End Point Error (EPE) to accurately segment melanoma regions with sharp boundaries. The robustness of the proposed model was evaluated on two public databases from the ISBI 2016 and 2017 skin lesion analysis towards melanoma detection challenges. The proposed model outperforms the state-of-the-art methods in terms of segmentation accuracy. Moreover, it is capable of segmenting more than 100 images of size 384×384 per second on a recent GPU.
|
Md. Mostafa Kamal Sarker, Hatem A. Rashwan, Farhan Akram, Estefania Talavera, Syeda Furruka Banu, Petia Radeva, et al. (2019). Recognizing Food Places in Egocentric Photo-Streams Using Multi-Scale Atrous Convolutional Networks and Self-Attention Mechanism. ACCESS - IEEE Access, 7, 39069–39082.
Abstract: Wearable sensors (e.g., lifelogging cameras) are very useful tools for monitoring people's daily habits and lifestyle. Wearable cameras can continuously capture different moments of their wearers' day, their environment, and their interactions with objects, people, and places, reflecting their personal lifestyle. The food places where people eat, drink, and buy food, such as restaurants, bars, and supermarkets, can directly affect their daily dietary intake and behavior. Consequently, an automated monitoring system that analyzes a person's food habits from daily recorded egocentric photo-streams of food places can provide valuable means for people to improve their eating habits. This can be done by generating a detailed report of the time spent in specific food places, obtained by classifying the captured food place images into different groups. In this paper, we propose a self-attention mechanism with multi-scale atrous convolutional networks to generate discriminative features from image streams in order to recognize a predetermined set of food place categories. We apply our model to an egocentric food place dataset called “EgoFoodPlaces” that comprises 43,392 images captured by 16 individuals using a lifelogging camera. The proposed model achieved an overall classification accuracy of 80% on the “EgoFoodPlaces” dataset, outperforming baseline methods such as VGG16, ResNet50, and InceptionV3.
|
Maya Dimitrova, Petia Radeva, David Rotger, D. Boyadjiev, & Juan J. Villanueva. (2004). Advanced Cardiological Diagnosis via Intelligent Image Analysis.
|
Maya Dimitrova, N. Kushmerick, Petia Radeva, & Juan J. Villanueva. (2003). User Assessment of a Visual Genre Classifier.
|
Maya Dimitrova, I. Terziev, Petia Radeva, & Juan J. Villanueva. (2004). Java-Servlet Technology for Building New Web Document Classifiers.
|
Maya Dimitrova, Ch. Roumenin, Siya Lozanova, David Rotger, & Petia Radeva. (2007). An Interface System Based on Multimodal Principle for Cardiological Diagnosis Assistance. In International Conference On Computer Systems And Technologies (Vol. IIIB.4, 1–6).
|
Maya Dimitrova, Ch. Roumenin, Petia Radeva, David Rotger, & Juan J. Villanueva. (2003). Multimodal Intelligent System for Cardiovascular Diagnosis.
|
Maurizio Mencuccini, Jordi Martinez-Vilalta, Josep Piñol, Lasse Loepfe, Mireia Burnat, Xavier Alvarez, et al. (2010). A quantitative and statistically robust method for the determination of xylem conduit spatial distribution. AJB - American Journal of Botany, 97(8), 1247–1259.
Abstract: Premise of the study: Because of their limited length, xylem conduits need to connect to each other to maintain water transport from roots to leaves. Conduit spatial distribution in a cross section plays an important role in aiding this connectivity. While indices of conduit spatial distribution already exist, they are not well defined statistically. Methods: We used point pattern analysis to derive new spatial indices. One hundred and five cross-sectional images from different species were transformed into binary images. The resulting point patterns, based on the locations of the conduit centers-of-area, were analyzed to determine whether they departed from randomness. Conduit distribution was then modeled using a spatially explicit stochastic model. Key results: The presence of conduit randomness, uniformity, or aggregation depended on the spatial scale of the analysis. The large majority of the images showed patterns significantly different from randomness at least at one spatial scale. A strong phylogenetic signal was detected in the spatial variables. Conclusions: Conduit spatial arrangement has been largely conserved during evolution, especially at small spatial scales. Species in which conduits were aggregated in clusters had a lower conduit density compared to those with uniform distribution. Statistically sound spatial indices must be employed as an aid in the characterization of distributional patterns across species and in models of xylem water transport. Point pattern analysis is a very useful tool in identifying spatial patterns.
Keywords: Geyer; hydraulic conductivity; point pattern analysis; Ripley; Spatstat; vessel clusters; xylem anatomy; xylem network
|
Matthias S. Keil, & Jordi Vitria. (2005). Does the brain generate representations of smooth brightness gradients? A novel account for Mach bands, Chevreul’s illusion, and a variant of the Ehrenstein disk. Perception, 34(Suppl. S), 209–210 (IF: 1.391).
|
Matthias S. Keil, & Jordi Vitria. (2007). Pushing it to the Limit: Adaptation with Dynamically Switching Gain Control. EURASIP Journal on Advances in Signal Processing, 2007, Article ID 51684, 10 pages. doi:10.1155/2007/51684.
|
Matthias S. Keil, Gabriel Cristobal, Thorsten Hansen, & Heiko Neumann. (2005). Recovering real-world images from single-scale boundaries with a novel filling-in architecture. Neural Networks, 18(10), 1319–1331 (IF: 1.665).
|
Matthias S. Keil, Gabriel Cristobal, & Heiko Neumann. (2006). Gradient representation and perception in the early visual system – A novel account of Mach band formation. VR - Vision Research, 46(17), 2659–2674.
|
Matthias S. Keil, & Gabriel Cristobal. (2000). Separating the chaff from the wheat: possible origins of the oblique effect. Journal of the Optical Society of America A – Optics, Image Science, and Vision, 17(4), 697–710 (IF: 1.481).
|