Paper Reading Notes: Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

Submitted by 时间秒杀一切 on 2019-12-08 05:44:00


Key Points

  1. Universal adversarial perturbations [1]
  2. 3D physical-world adversarial examples

Adversarial Example Generation

  1. Box-constrained L-BFGS (a runnable sketch follows this list)
    min_ρ c·|ρ| + J(θ, I_c + ρ, l)   s.t.   I_c + ρ ∈ [0, 1]^m

  2. Fast Gradient Sign Method (FGSM)
    ρ = ϵ·sign(∇_{I_c} J(θ, I_c, l))
    The one-shot generation requires that ϵ not be too small: the single step must be large enough to push the input across the decision boundary. See the FGSM sketch after this list.

  3. Basic Iterative Method (BIM) (sketch after this list)
    I_{k+1} = Clip_ϵ{ I_k + α·sign(∇_{I_k} J(θ, I_k, l)) }

  4. Iterative Least-likely Class Method (ILCM)
    Pick the class with the lowest predicted probability as the target class and run BIM in targeted mode to generate the adversarial example (covered in the sketch after this list).

  5. Jacobian-based Saliency Map Attack (JSMA)
  6. One Pixel Attack
  7. Carlini and Wagner Attacks (C&W)
  8. DeepFool
  9. Universal Adversarial Perturbations
    An image-agnostic perturbation that is effective on almost all inputs.
  10. UPSET(Universal Perturbations for Steering to Exact Targets)
  11. ANGRI(Antagonistic Network for Generating Rogue Images)
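
Below are hedged PyTorch sketches for items 1-4. First, the box-constrained L-BFGS attack (item 1) in its penalty form, min_ρ c·|ρ| + J(θ, I_c + ρ, l). PyTorch's `torch.optim.LBFGS` is unconstrained, so the box constraint I_c + ρ ∈ [0, 1]^m is only approximated here by clamping; `model`, `loss_fn`, and the constant `c` are illustrative placeholders, not values from the paper.

```python
import torch

def lbfgs_attack(model, loss_fn, image, target, c=0.1, steps=20):
    """Penalty-form sketch of the box-constrained L-BFGS attack."""
    rho = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.LBFGS([rho], max_iter=steps)

    def closure():
        opt.zero_grad()
        adv = (image + rho).clamp(0.0, 1.0)  # approximate the box constraint
        # c*|rho| + J(theta, I_c + rho, target)
        loss = c * rho.abs().sum() + loss_fn(model(adv), target)
        loss.backward()
        return loss

    opt.step(closure)
    return (image + rho).clamp(0.0, 1.0).detach()
```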
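
FGSM (item 2) is a single gradient-sign step. The final `clamp` keeps the perturbed tensor a valid image, which the formula leaves implicit; `model` and `loss_fn` are again assumed placeholders (e.g. a classifier and cross-entropy).

```python
import torch

def fgsm(model, loss_fn, image, label, eps):
    """One-shot FGSM: rho = eps * sign(grad_I J(theta, I_c, l))."""
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)  # J(theta, I_c, l)
    loss.backward()
    rho = eps * image.grad.sign()        # the FGSM perturbation
    return (image + rho).clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]
```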
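
BIM (item 3) iterates the same step with a small α and projects the result back into an ϵ-ball around the clean image (the Clip_ϵ operator); flipping the step's sign and aiming at the least-likely class turns it into ILCM (item 4). A sketch under the same assumptions:

```python
import torch

def bim(model, loss_fn, image, label, eps, alpha, steps, targeted=False):
    """Iterative FGSM with projection (Clip_eps) after every step."""
    orig = image.clone().detach()
    adv = orig.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = loss_fn(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        # ascend the loss (untargeted); descend toward `label` for ILCM
        adv = adv.detach() + (-alpha if targeted else alpha) * grad.sign()
        # Clip_eps: stay within the eps-ball of the clean image, and in [0, 1]
        adv = torch.min(torch.max(adv, orig - eps), orig + eps).clamp(0.0, 1.0)
    return adv
```

For ILCM, choose `label = model(image).argmin(dim=1)` (the least-likely class) and pass `targeted=True`.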

Adversarial Training
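
The survey's main defence here is adversarial training: adversarial examples are generated on the fly and mixed into each training batch, so the model learns to classify perturbed inputs correctly. A minimal sketch reusing the `fgsm` helper above; the 50/50 clean/adversarial mix and ϵ = 0.03 are illustrative assumptions, not the paper's prescription.

```python
def adversarial_training_step(model, loss_fn, optimizer, images, labels, eps=0.03):
    """One training step on a half-clean, half-adversarial batch."""
    adv_images = fgsm(model, loss_fn, images, labels, eps)  # reuses the sketch above
    optimizer.zero_grad()  # clear gradients accumulated while crafting adv_images
    loss = 0.5 * loss_fn(model(images), labels) \
         + 0.5 * loss_fn(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```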

References

[1] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal Adversarial Perturbations," in Proc. CVPR, 2017.
