Paper reading notes: Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Key points
- universal adversarial perturbation [1]
- 3D physical-world adversarial examples
Adversarial example generation
- Box-constrained L-BFGS
- Fast Gradient Sign Method (FGSM)
  - one-shot generation, so the perturbation step cannot be too small
- Basic Iterative Method (BIM)
- Iterative Least-likely Class Method (ILCM)
  - take the class with the lowest predicted probability as the target class, then run BIM-style targeted adversarial example generation
- Jacobian-based Saliency Map Attack (JSMA)
- One Pixel Attack
- Carlini and Wagner Attacks (C&W)
- DeepFool
- Universal Adversarial Perturbations
  - a single, input-agnostic perturbation that is effective on almost all inputs
- UPSET (Universal Perturbations for Steering to Exact Targets)
- ANGRI (Antagonistic Network for Generating Rogue Images)
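The FGSM and BIM entries above can be sketched in a few lines of NumPy. This is a minimal illustration, not the survey's code: a toy linear model (whose loss gradient is a fixed weight vector `w`) stands in for a real network, and `eps`/`alpha` are illustrative values.

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-shot FGSM: move every pixel by eps in the sign of the
    loss gradient, then clip back to a valid image range."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def bim(x, grad_fn, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps of size alpha,
    re-projected into the eps-ball around the original input each step.
    (ILCM is the same loop with the gradient taken toward the
    least-likely class instead of away from the true class.)"""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps of x
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

# Toy stand-in for a network: loss(x) = w @ x, so the input gradient is w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.8, 0.5])
x_fgsm = fgsm(x, w, eps=0.05)
x_bim = bim(x, lambda xa: w, eps=0.1, alpha=0.04, steps=5)
```

This makes the FGSM note above concrete: because the perturbation is taken in one shot, `eps` alone controls how far the input moves, whereas BIM reaches the same `eps`-ball boundary through several small `alpha` steps.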
Adversarial training
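The standard adversarial-training recipe can be sketched as follows. This is an illustrative NumPy example, not the survey's implementation: a tiny logistic-regression model stands in for a deep network, and all data and hyperparameters (`eps`, `lr`, `epochs`) are made up for the demo. Each epoch, FGSM examples are crafted against the current parameters and the model is updated on clean and adversarial inputs together.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_x(w, b, x, y):
    """Gradient of the logistic loss w.r.t. the inputs x (used by FGSM)."""
    p = sigmoid(x @ w + b)
    return (p - y)[:, None] * w[None, :]

def grad_params(w, b, x, y):
    """Gradient of the mean logistic loss w.r.t. the parameters w, b."""
    p = sigmoid(x @ w + b)
    return x.T @ (p - y) / len(y), float(np.mean(p - y))

def adversarial_train(x, y, eps=0.1, lr=0.5, epochs=200):
    """Each epoch: craft FGSM examples against the *current* model,
    then take a gradient step on clean + adversarial data together."""
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(epochs):
        x_adv = np.clip(x + eps * np.sign(grad_x(w, b, x, y)), 0.0, 1.0)
        xb, yb = np.vstack([x, x_adv]), np.concatenate([y, y])
        gw, gb = grad_params(w, b, xb, yb)
        w, b = w - lr * gw, b - lr * gb
    return w, b

# Illustrative data: label is 1 when the first feature exceeds 0.5.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(60, 2))
y = (x[:, 0] > 0.5).astype(float)
w, b = adversarial_train(x, y)
acc = np.mean((sigmoid(x @ w + b) > 0.5) == (y > 0.5))
```

The key design point is that the adversarial examples are regenerated against the current model every epoch; training once on a fixed set of adversarial examples would let the model simply memorize them.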
References
[1]
Source: CSDN
Author: qq_36356761
Link: https://blog.csdn.net/qq_36356761/article/details/79546618