41 Articles

 
Online
Meng Bi, Xianyun Yu, Zhida Jin and Jian Xu
In this paper, we propose an Iterative Greedy-Universal Adversarial Perturbations (IG-UAP) approach based on an iterative greedy algorithm to create universal adversarial perturbations for acoustic prints. A thorough, objective account of the IG-UAP metho...
Journal: Applied Sciences    Format: Electronic

 
Online
Woonghee Lee, Mingeon Ju, Yura Sim, Young Kul Jung, Tae Hyung Kim and Younghoon Kim
Deep learning-based segmentation models have made a profound impact on medical procedures, with U-Net-based computed tomography (CT) segmentation models exhibiting remarkable performance. Yet, even with these advances, these models are found to be vulner...
Journal: Applied Sciences    Format: Electronic

 
Online
Yuwen Fu, E. Xia, Duan Huang and Yumei Jing
Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the use of machine learning algorithms for detecting these attacks has uncovered a vulner...
Journal: Applied Sciences    Format: Electronic

 
Online
Suliman A. Alsuhibany
The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) technique has been a topic of interest for several years. The ability of computers to recognize CAPTCHAs has significantly increased due to the development of deep le...
Journal: Applied Sciences    Format: Electronic

 
Online
Mehdi Sadi, Bashir Mohammad Sabquat Bahar Talukder, Kaniz Mishty and Md Tauhidur Rahman
Universal adversarial perturbations are image-agnostic and model-independent noise that, when added to any image, can mislead trained deep convolutional neural networks into the wrong prediction. Since these universal adversarial perturbations can se...
Journal: Information    Format: Electronic
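The entry above describes the defining property of a universal adversarial perturbation: one fixed, image-agnostic noise pattern is reused across all inputs. A minimal NumPy sketch of the application step only (the `delta` here is a hypothetical random array, not a perturbation computed by the authors' method):

```python
import numpy as np

def apply_universal_perturbation(image, delta, eps=8 / 255):
    """Add one fixed, image-agnostic perturbation to any input image.

    The same `delta` is reused for every image; it is first projected
    onto an L-infinity ball of radius `eps`, then the perturbed image
    is clipped back to the valid pixel range [0, 1].
    """
    delta = np.clip(delta, -eps, eps)        # enforce the perturbation budget
    return np.clip(image + delta, 0.0, 1.0)  # keep pixel values valid

# The same perturbation is applied to two unrelated images.
rng = np.random.default_rng(0)
delta = rng.uniform(-1, 1, size=(32, 32, 3)).astype(np.float32)
img_a = rng.uniform(0, 1, size=(32, 32, 3)).astype(np.float32)
img_b = rng.uniform(0, 1, size=(32, 32, 3)).astype(np.float32)

adv_a = apply_universal_perturbation(img_a, delta)
adv_b = apply_universal_perturbation(img_b, delta)
```

The point of the sketch is the reuse: unlike per-image attacks, nothing in `apply_universal_perturbation` depends on the particular image, which is what makes such perturbations a practical, precomputable threat.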

 
Online
Afnan Alotaibi and Murad A. Rassam
Concerns about cybersecurity and attack methods have risen in the information age. Many techniques are used to detect or deter attacks, such as intrusion detection systems (IDSs), that help achieve security goals, such as detecting malicious attacks befo...
Journal: Future Internet    Format: Electronic

 
Online
Dejian Guan, Wentao Zhao and Xiao Liu
Recent studies show that deep neural network (DNN)-based object recognition algorithms rely overly on object textures rather than global object shapes, and that DNNs are also vulnerable to adversarial perturbations imperceptible to humans. Based on these two...
Journal: Applied Sciences    Format: Electronic

 
Online
Peng Wang, Jingju Liu, Dongdong Hou and Shicheng Zhou
The application of cybersecurity knowledge graphs is attracting increasing attention. However, many cybersecurity knowledge graphs are incomplete due to the sparsity of cybersecurity knowledge. Existing knowledge graph completion methods do not perform w...
Journal: Applied Sciences    Format: Electronic

 
Online
Weizhen Xu, Chenyi Zhang, Fangzhen Zhao and Liangda Fang
Adversarial attacks hamper the functionality and accuracy of deep neural networks (DNNs) by introducing subtle perturbations into their inputs. In this work, we propose a new mask-based adversarial defense scheme (MAD) for DNNs to mitigate the negative e...
Journal: Algorithms    Format: Electronic
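The entry above concerns attacks that flip a network's prediction with subtle input perturbations. As an illustration of that attack class (the classic fast gradient sign method, not the MAD defense this paper proposes), a toy NumPy sketch on a hypothetical logistic model:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """Fast-gradient-sign perturbation of a single input x.

    For a logistic model p = sigmoid(w.x + b) with cross-entropy loss,
    the gradient of the loss w.r.t. x is (p - y) * w; FGSM steps eps
    in the sign of that gradient to maximally increase the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)        # one signed step of size eps

# Toy model that classifies x by the sign of w.x + b.
w = np.ones(16)
b = 0.0
x = np.full(16, 0.05)  # weakly positive example, true label y = 1
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)

logit_clean = x @ w + b      # 0.8  -> predicted class 1
logit_adv = x_adv @ w + b    # -0.8 -> prediction flipped to class 0
```

Each input coordinate moves by at most `eps`, yet the prediction flips; defenses like the mask-based scheme described in the abstract aim to blunt exactly this sensitivity.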

 
Online
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may introduce more subtle threats to face recognition systems. In this paper,...
Journal: Algorithms    Format: Electronic

« Previous     Page 1 of 3     Next »