
Original scientific paper

https://doi.org/10.17559/TV-20230607000701

Defending Against Local Adversarial Attacks through Empirical Gradient Optimization

Boyang Sun ; School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
Xiaoxuan Ma ; School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
Hengyou Wang ; School of Science, Beijing University of Civil Engineering and Architecture, Beijing 100044, China


Full text: English pdf 4.337 Kb

pp. 1888-1898




Abstract

Deep neural networks (DNNs) are susceptible to adversarial attacks, including the recently introduced locally visible adversarial patch attack, which achieves a success rate exceeding 96%. These attacks pose significant challenges to DNN security. Various defense methods, such as adversarial training, robust attention modules, watermarking, and gradient smoothing, have been proposed to improve empirical robustness against patch attacks. However, these methods often have limitations concerning patch-location requirements, randomness, and their impact on recognition accuracy for clean images. To address these challenges, we propose a novel defense algorithm called Local Adversarial Attack Empirical Defense using Gradient Optimization (LAAGO). The algorithm applies a low-pass filter before noise suppression to mitigate the interference of high-frequency noise with the classifier while preserving the low-frequency regions of the image, and it emphasizes the original target features by enhancing the image gradients. Extensive experimental results demonstrate that the proposed method improves defense performance by 3.69% for 80 × 80 noise patches (covering approximately 4% of the image), while incurring only a negligible 0.3% accuracy drop on clean images. LAAGO provides a robust defense mechanism against local adversarial attacks, overcoming the limitations of previous methods: it combines gradient optimization, noise suppression, and feature enhancement to achieve significant gains in defense performance while maintaining high accuracy on clean images. This work contributes to the advancement of defense strategies against emerging adversarial attacks, thereby enhancing the security and reliability of deep neural networks.
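The two preprocessing ideas named in the abstract — low-pass filtering to suppress high-frequency patch noise, followed by gradient enhancement to re-emphasize the original target features — can be illustrated with a minimal NumPy sketch. This is not the authors' LAAGO implementation; the kernel size, smoothing strength, the `alpha` weighting, and all function names below are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Normalized 2-D Gaussian kernel used as a low-pass filter.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def low_pass(img, kernel):
    # Naive "same"-size 2-D convolution with edge padding:
    # attenuates high-frequency content, preserves low-frequency regions.
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    kh, kw = kernel.shape
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def enhance_gradients(img, alpha=0.5):
    # Emphasize target features by adding back scaled gradient magnitude.
    gy, gx = np.gradient(img)
    return img + alpha * np.sqrt(gx ** 2 + gy ** 2)

# Toy 16 x 16 grayscale "image" with a high-frequency noise patch in one corner,
# standing in for a local adversarial patch.
rng = np.random.default_rng(0)
img = np.full((16, 16), 0.5)
img[:4, :4] += rng.uniform(-0.5, 0.5, (4, 4))

smoothed = low_pass(img, gaussian_kernel())   # suppress high-frequency noise
defended = enhance_gradients(smoothed)        # restore feature contrast
```

In this toy setting the variance inside the patch region drops after smoothing, which is the intended effect: the high-frequency adversarial signal is attenuated before the gradient-enhancement step restores edge contrast for the classifier.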

Keywords

adversarial attack; adversarial patch; deep learning; local gradient smoothing

Hrčak ID:

309238

URI

https://hrcak.srce.hr/309238

Publication date:

25.10.2023.
