Sharoug Alzaidy and Hamad Binsalleeh
Deep learning has been used extensively in the field of behavioral detection; for example, deep learning models have been used to detect and classify malware. Deep learning, however, has vulnerabilities that can be exploited with crafted inputs,...
Zhe Yang, Yi Huang, Yaqin Chen, Xiaoting Wu, Junlan Feng and Chao Deng
Controllable Text Generation (CTG) aims to modify the output of a Language Model (LM) to meet specific constraints. For example, in a customer service conversation, responses from the agent should ideally be soothing and address the user's dissatisfactio...
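As a rough illustration of the kind of intervention CTG performs (a sketch only, not the method proposed in this paper), the snippet below biases a toy next-token distribution toward a small set of preferred tokens before sampling; the vocabulary, logits, and bias value are all invented for the example.

import numpy as np

def sample_with_bias(logits, vocab, preferred, bias=4.0, seed=0):
    # Toy CTG-style control: raise the logits of tokens we want the
    # "model" to favor, then sample from the renormalized distribution.
    logits = np.asarray(logits, dtype=float).copy()
    for i, token in enumerate(vocab):
        if token in preferred:
            logits[i] += bias
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.default_rng(seed).choice(vocab, p=probs)

vocab = ["sorry", "unfortunately", "great", "no"]
print(sample_with_bias([0.1, 0.2, 0.3, 0.4], vocab, preferred={"sorry"}))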
Mingyong Yin, Yixiao Xu, Teng Hu and Xiaolei Liu
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems. Video adversarial attacks add subtle noise to the original example, resulti...
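The mechanism described above, a small crafted perturbation added to the input, is commonly illustrated with the fast gradient sign method (FGSM). The sketch below is that generic FGSM step for any differentiable classifier, not the specific video attack studied in this paper; the model, loss function, and epsilon are placeholders.

import torch

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.03):
    # Generic FGSM sketch: step the input in the direction that increases
    # the classification loss, then clip back to the valid pixel range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()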
Yuting Guan, Junjiang He, Tao Li, Hui Zhao and Baoqiang Ma
SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL injec...
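For readers unfamiliar with the attack itself, the following toy example (not taken from the paper) shows why string-built queries are injectable and how parameterized queries bind user input as data; the in-memory sqlite3 database and the users table are invented for the illustration.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"

# Vulnerable: attacker-controlled text is spliced into the SQL string,
# so the injected OR clause makes the WHERE condition always true.
vulnerable = f"SELECT secret FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # leaks every row

# Safer: the driver binds the value as data, not as SQL syntax.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # no match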
Sapdo Utomo, Adarsh Rouniyar, Hsiu-Chun Hsu and Pao-Ann Hsiung
Smart city applications that request sensitive user information necessitate a comprehensive data privacy solution. Federated learning (FL), also known as privacy by design, is a new paradigm in machine learning (ML). However, FL models are susceptible to...
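As background on why FL is described as privacy by design, the sketch below shows the generic federated-averaging (FedAvg) aggregation step, in which clients share only model weights and the server combines them weighted by local data size; this is not the robust aggregation method proposed in the paper, and the layer shapes and client sizes are made up.

import numpy as np

def federated_average(client_weights, client_sizes):
    # FedAvg-style aggregation: combine per-client weights, weighted by
    # how much local data each client trained on; raw data stays on-device.
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hypothetical clients, each holding weights for one small layer.
w_a = [np.ones((2, 2)), np.zeros(2)]
w_b = [np.zeros((2, 2)), np.ones(2)]
print(federated_average([w_a, w_b], client_sizes=[100, 300]))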
Songshen Han, Kaiyong Xu, Songhui Guo, Miao Yu and Bo Yang
Automatic Speech Recognition (ASR) provides a new mode of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio. Thorough studies on the universal feat...
Dapeng Lang, Deyun Chen, Sizhao Li and Yongjun He
Deep models are widely used and have been shown to carry hidden security risks. An adversarial attack can bypass traditional means of defense: by modifying the input data, an attack on the deep model is realized that remains imperceptible ...
Dapeng Lang, Deyun Chen, Jinjie Huang and Sizhao Li
Small perturbations can make deep models fail. Since deep models are widely used in face recognition systems (FRS) such as surveillance and access control, adversarial examples may pose subtle threats to these systems. In this paper,...
Dmitry Namiot and Eugene Ilyushin
pp. 101–118
This article, written for the Robust Machine Learning Curriculum, discusses the so-called Generative Models in Machine Learning. Generative models learn the distribution of data from some sample data set and then can generate (create) new data instances....
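To make the "learn the distribution, then generate new instances" idea concrete, here is a deliberately tiny sketch that fits a one-dimensional Gaussian to a sample and draws fresh points from it; real generative models learn far richer distributions, and every number below is invented for the example.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1000)   # stand-in "training set"

# "Learning" the distribution: for a Gaussian, fit the mean and std.
mu, sigma = data.mean(), data.std()

# Generating: sample new instances from the fitted distribution.
new_samples = rng.normal(loc=mu, scale=sigma, size=5)
print(mu, sigma, new_samples)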
Vasily Kostyumov
pp. 11–20
Deep learning has received a lot of attention from the scientific community in recent years due to excellent results across a wide range of tasks, including computer vision. For example, in the problem of image classification, some authors even announced th...