Applied Sciences, Vol. 14, No. 6 (2024)
ARTICLE
TITLE

Adversarial Attacks on Medical Segmentation Model via Transformation of Feature Statistics

Woonghee Lee    
Mingeon Ju    
Yura Sim    
Young Kul Jung    
Tae Hyung Kim and Younghoon Kim    

Abstract

Deep learning-based segmentation models have made a profound impact on medical procedures, with U-Net-based computed tomography (CT) segmentation models exhibiting remarkable performance. Even with these advances, however, such models remain vulnerable to adversarial attacks, a problem that equally affects automatic CT segmentation models. Conventional adversarial attacks typically rely on adding noise or perturbations, leading to a trade-off between the attack's success rate and its perceptibility. In this study, we challenge this paradigm and introduce a novel generation of adversarial attacks aimed at deceiving both the target segmentation model and medical practitioners. Our approach deceives a target model by altering the texture statistics of an organ while retaining its shape. We employ a real-time style transfer method, known as the texture reformer, which uses adaptive instance normalization (AdaIN) to change the statistics of an image's features. To induce the transformation, we modify AdaIN, which typically aligns the source and target image statistics. Through rigorous experiments, we demonstrate the effectiveness of our approach: our adversarial samples pass as realistic in blind tests conducted with physicians, surpassing the effectiveness of contemporary techniques. This methodology not only offers a robust tool for benchmarking and validating automated CT segmentation systems but also serves as a potent mechanism for data augmentation, thereby enhancing model generalization. This dual capability significantly bolsters advancements in the field of deep learning-based medical and healthcare segmentation models.
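For readers unfamiliar with AdaIN, it re-normalizes each channel of a content feature map to the per-channel mean and standard deviation of a style feature map, changing texture statistics while leaving spatial structure, and hence organ shape, intact. The PyTorch sketch below illustrates standard AdaIN plus one hypothetical way the target statistics could be perturbed instead of copied; the function names, the alpha parameter, and the blending rule are illustrative assumptions, not the paper's actual texture-reformer formulation.

import torch

def channel_stats(feat: torch.Tensor, eps: float = 1e-5):
    # Per-channel mean and std of a feature map shaped (N, C, H, W).
    n, c = feat.shape[:2]
    flat = feat.reshape(n, c, -1)
    mean = flat.mean(dim=2).view(n, c, 1, 1)
    std = (flat.var(dim=2) + eps).sqrt().view(n, c, 1, 1)
    return mean, std

def adain(content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    # Standard AdaIN: re-normalize content features to the style statistics.
    c_mean, c_std = channel_stats(content)
    s_mean, s_std = channel_stats(style)
    return (content - c_mean) / c_std * s_std + s_mean

def shifted_adain(content: torch.Tensor, style: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    # Hypothetical variant: blend the content's own statistics with the
    # style statistics, so texture drifts toward the style while the
    # spatial layout (the organ's shape) is untouched. The blend rule
    # and alpha are assumptions for illustration only.
    c_mean, c_std = channel_stats(content)
    s_mean, s_std = channel_stats(style)
    t_mean = (1 - alpha) * c_mean + alpha * s_mean
    t_std = (1 - alpha) * c_std + alpha * s_std
    return (content - c_mean) / c_std * t_std + t_mean

# Example with random stand-ins for encoder features, e.g. from a
# VGG-like backbone at (1, 512, 32, 32):
content = torch.randn(1, 512, 32, 32)
style = torch.randn(1, 512, 32, 32)
out = shifted_adain(content, style, alpha=0.7)

Because both functions operate only on per-channel statistics and broadcast over the spatial dimensions, the output keeps the content's spatial arrangement, which is consistent with the abstract's claim that the organ shape is retained while its texture statistics change.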

Similar Articles

Saqib Ali, Sana Ashraf, Muhammad Sohaib Yousaf, Shazia Riaz and Guojun Wang    
The successful outcomes of deep learning (DL) algorithms in diverse fields have prompted researchers to consider backdoor attacks on DL models to defend them in practical applications. Adversarial examples could deceive a safety-critical system, which co...
Journal: Applied Sciences

 
Raluca Chitic, Ali Osman Topal and Franck Leprévost    
Recently, convolutional neural networks (CNNs) have become the main drivers in many image recognition applications. However, they are vulnerable to adversarial attacks, which can lead to disastrous consequences. This paper introduces ShuffleDetect as a n...
Journal: Applied Sciences

 
Minxiao Wang, Ning Yang, Dulaj H. Gunasinghe and Ning Weng    
Utilizing machine learning (ML)-based approaches for network intrusion detection systems (NIDSs) raises valid concerns due to the inherent susceptibility of current ML models to various threats. Of particular concern are two significant threats associate...
Journal: Computers

 
Yuwen Fu, E. Xia, Duan Huang and Yumei Jing    
Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the use of machine learning algorithms for detecting these attacks has uncovered a vulner...
Journal: Applied Sciences

 
Albatul Albattah and Murad A. Rassam    
Deep learning (DL) models are frequently employed to extract valuable features from heterogeneous and high-dimensional healthcare data, which are used to keep track of patient well-being via healthcare monitoring systems. Essentially, the training and te...
Journal: Applied Sciences