Sharoug Alzaidy and Hamad Binsalleeh
Deep learning has been used extensively in the field of behavioral detection; for example, deep learning models have been applied to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs...
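The abstract is cut off before describing how such crafted inputs are built. As a rough illustration only (not the authors' method), the sketch below shows a greedy feature-addition evasion attack on a hypothetical binary-feature malware classifier in PyTorch; the model, the `benign_idx` label index, and the flip budget are all assumptions.

```python
import torch
import torch.nn.functional as F

def evade_malware_classifier(model, features, benign_idx=0, max_flips=10):
    """Greedy feature-addition evasion: only flip features 0 -> 1,
    a common functionality-preserving constraint in malware evasion.
    `benign_idx` is the assumed index of the 'benign' class."""
    x = features.clone().float()          # 1-D binary feature vector
    target = torch.tensor([benign_idx])
    for _ in range(max_flips):
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x.unsqueeze(0)), target)
        grad = torch.autograd.grad(loss, x)[0]
        # Among features still 0, pick the one whose addition most
        # decreases the loss toward 'benign' (most negative gradient).
        masked = torch.where(x.detach() == 0, grad, torch.zeros_like(grad))
        candidate = masked.argmin()
        if masked[candidate] >= 0:
            break                         # no remaining flip helps
        x = x.detach()
        x[candidate] = 1.0
    return x.detach()
```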
Raluca Chitic, Ali Osman Topal and Franck Leprévost
Through the addition of humanly imperceptible noise to an image classified as belonging to a category c_a, targeted adversarial attacks can lead convolutional neural networks (CNNs) to classify a modified image as belonging to any predefined target...
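For readers unfamiliar with targeted attacks, the following PyTorch sketch shows the one-step targeted FGSM idea: nudging an image of category c_a toward a chosen target class with an imperceptibly small perturbation. This is a generic illustration, not the attack studied in the paper; the model, epsilon, and target index are placeholders.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, image, target_class, epsilon=0.01):
    """One-step targeted attack on an image of shape (1, C, H, W) in [0, 1]:
    nudge it so the CNN assigns it to `target_class`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([target_class]))
    loss.backward()
    # Step *against* the gradient: descend toward the target class.
    adv = image - epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()
```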
Dominik Wunderlich, Daniel Bernau, Francesco Aldà, Javier Parra-Arnau and Thorsten Strufe
Hierarchical text classification consists of classifying text documents into a hierarchy of classes and sub-classes. Although Artificial Neural Networks have proven useful for this task, they can unfortunately leak training data information to ad...
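One standard mitigation for this kind of leakage (not necessarily the defense the authors evaluate) is differentially private SGD: clip each example's gradient, then add calibrated Gaussian noise to the average. A minimal sketch for a single parameter, with the clipping norm, noise multiplier, and learning rate as placeholder values:

```python
import torch

def dp_sgd_step(param, per_example_grads, clip=1.0, noise_mult=1.0, lr=0.1):
    """One differentially private SGD step for a single parameter:
    clip each per-example gradient, average, add Gaussian noise."""
    clipped = []
    for g in per_example_grads:                 # one grad tensor per example
        factor = min(1.0, clip / (g.norm().item() + 1e-12))
        clipped.append(g * factor)
    avg = torch.stack(clipped).mean(dim=0)
    noise = torch.randn_like(avg) * (noise_mult * clip / len(per_example_grads))
    param.data.add_(avg + noise, alpha=-lr)     # descend on the noisy mean
```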
Kazuki Koga and Kazuhiro Takemoto
Universal adversarial attacks, which hinder most deep neural network (DNN) tasks using only a single perturbation called a universal adversarial perturbation (UAP), are a realistic security threat to the practical application of DNNs for medical imaging...
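A UAP is a single input-agnostic perturbation optimized over many images. The sketch below illustrates the idea with a simple gradient-based accumulation loop in PyTorch; the input shape, budget epsilon, and optimizer settings are assumptions, not the procedure from the paper.

```python
import torch
import torch.nn.functional as F

def universal_perturbation(model, loader, epsilon=0.04, epochs=5, lr=0.005):
    """Accumulate a single perturbation `delta` that degrades the
    model's predictions on most inputs (untargeted UAP sketch)."""
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.SGD([delta], lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = -F.cross_entropy(model(x + delta), y)  # maximize error
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the universal perturbation imperceptibly small.
            with torch.no_grad():
                delta.clamp_(-epsilon, epsilon)
    return delta.detach()
```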
Joseph Pedersen, Rafael Muñoz-Gómez, Jiangnan Huang, Haozhe Sun, Wei-Wei Tu and Isabelle Guyon
We address the problem of defending predictive models, such as machine learning classifiers (Defender models), against membership inference attacks, in both the black-box and white-box settings, when the trainer and the trained model are publicly released...
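The simplest membership inference baseline such Defender models must resist is a loss threshold: training examples tend to have lower loss than unseen ones. A hedged sketch (the threshold is a placeholder and would be calibrated in practice):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, x, y, threshold=0.5):
    """Guess membership: flag low-loss examples as 'members',
    since training points tend to have lower loss than unseen ones."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < threshold  # True -> predicted training member
```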
Zhirui Luo, Qingqing Li and Jun Zheng
Transfer learning using pre-trained deep neural networks (DNNs) has recently been widely used for plant disease identification. However, pre-trained DNNs are susceptible to adversarial attacks, which generate adversarial samples that cause DNN models to make...
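As background, transfer learning of this kind typically freezes a pre-trained backbone and retrains only a new classification head. A minimal torchvision sketch, with the backbone choice and class count as placeholders:

```python
import torch.nn as nn
from torchvision import models

num_classes = 10  # placeholder: number of plant disease classes

# Load an ImageNet-pre-trained backbone and freeze its features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head; only this layer is trained.
model.fc = nn.Linear(model.fc.in_features, num_classes)
```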
Xianfeng Gao, Yu-an Tan, Hongwei Jiang, Quanxin Zhang and Xiaohui Kuang
In recent years, Deep Neural Networks (DNNs) have shown unprecedented performance in many areas. However, some recent studies have revealed their vulnerability to small perturbations added to source inputs. The methods used to generate these perturba...
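A common way to generate such perturbations (one of many, and not necessarily among the attacks studied here) is projected gradient descent (PGD): repeated gradient ascent on the loss, projected back into a small L-infinity ball. A hedged PyTorch sketch with placeholder step sizes:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    """Iterative untargeted attack: repeatedly ascend the loss
    gradient, projecting back into an L-inf ball of radius epsilon."""
    x = x.detach()
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back so the perturbation stays within epsilon.
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```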
Yong Fang, Cheng Huang, Yijia Xu and Yang Li
With the development of artificial intelligence, machine learning and deep learning algorithms are widely applied in attack detection models. Adversarial attacks against artificial intelligence models have become an inevitable problem when there is a...