Document Type
Conference Proceeding
Publication Title
IEEE International Conference on Communications
Abstract
Neural networks have demonstrated remarkable success in learning and solving complex tasks in a variety of fields, including cognitive cities. Nevertheless, the rise of these networks in modern computing has been accompanied by concerns regarding their vulnerability to adversarial attacks. In this work, we propose a novel gradient-free, gray-box, incremental attack that targets the training process of neural networks. The proposed attack, which implicitly poisons the intermediate data structures that retain the training instances between training epochs, acquires its high-risk property from attacking data structures that are typically unobserved by practitioners. Hence, the attack goes unnoticed despite the damage it can cause. Moreover, the attack can be executed without the attacker's knowledge of the neural network structure or training data, making it even more dangerous. The proposed attack was tested under a sensitive application of secure cognitive cities, namely, biometric authentication. The conducted experiments showed that the proposed attack is both effective and stealthy. Its effectiveness is evidenced by its ability to flip the sign of the loss gradient to positive in the conducted experiments, which indicates noisy and unstable training. Moreover, the attack decreased the inference probability in the poisoned networks, compared to their unpoisoned counterparts, by 15.37%, 14.68%, and 24.88% for Densenet, VGG, and Xception, respectively. Finally, the attack retained its stealthiness despite its high effectiveness: it did not cause a notable increase in the training time, and the F-score values dropped by an average of only 1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG, and Xception, respectively.
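Illustrative Sketch
The following minimal Python sketch (not taken from the paper; all names, the perturbation rule, and the training-loop shape are assumptions) illustrates the general idea described in the abstract: a gradient-free, model-agnostic step that tampers with an in-memory structure holding the training instances between epochs.

    # Illustrative only: poison a small fraction of cached (features, label) pairs
    # in place between epochs, without accessing gradients or the model itself.
    import random

    def poison_between_epochs(train_buffer, fraction=0.05, noise=0.1):
        """Perturb a small fraction of cached (features, label) pairs in place."""
        n_poison = max(1, int(len(train_buffer) * fraction))
        for idx in random.sample(range(len(train_buffer)), n_poison):
            features, label = train_buffer[idx]
            # Incremental, model-agnostic corruption: add small noise to the features.
            poisoned = [x + random.uniform(-noise, noise) for x in features]
            train_buffer[idx] = (poisoned, label)

    # Hypothetical usage inside a standard training loop:
    # for epoch in range(num_epochs):
    #     poison_between_epochs(cached_dataset)   # attack step on the cached data
    #     train_one_epoch(model, cached_dataset)  # otherwise unchanged training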
First Page
45
Last Page
50
DOI
10.1109/ICC45041.2023.10278837
Publication Date
10-23-2023
Keywords
Knowledge engineering, Toxicology, Urban areas, Training data, Data structures, Adversarial Attacks, Data Poisoning, Iris Recognition, Neural Networks
Recommended Citation
R. Al-qudah, M. Aloqaily, B. Ouni, M. Guizani and T. Lestable, "An Incremental Gray-Box Physical Adversarial Attack on Neural Network Training," ICC 2023 - IEEE International Conference on Communications, Rome, Italy, 2023, pp. 45-50, doi: 10.1109/ICC45041.2023.10278837.
Comments
Open Access version from arXiv
CC BY
Uploaded on May 31, 2024