Adversarial Attacks

=WHITE BOX ATTACKS=


==Untargeted Adversarial Attacks==

Adversarial attacks that merely aim to confuse the model into predicting some wrong class are called Untargeted Adversarial Attacks.

* not targeted (any misclassification will do)

===Fast Gradient Sign Method (FGSM)===

FGSM is a single-step attack, i.e. the perturbation is added in one step instead of being applied over many iterations (an iterative attack).
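
A minimal sketch of one FGSM step, here in PyTorch (the framework and the names <code>model</code>, <code>x</code>, <code>y</code>, <code>eps</code> are assumptions for illustration, not from the original page):

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One untargeted FGSM step: perturb x by eps in the direction of the
    sign of the input gradient of the loss, which increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid [0, 1] range
</syntaxhighlight>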

===Basic Iterative Method===

Applies the perturbation in several small steps rather than in a single step.
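
A hedged sketch of the iterative variant: many small FGSM-style steps, with the accumulated perturbation clipped back into an <code>eps</code>-ball around the original input (again PyTorch, all names assumed):

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def basic_iterative_method(model, x, y, eps=0.03, alpha=0.005, steps=10):
    """Repeated small FGSM-style steps, keeping the total perturbation
    inside an eps-ball around the original input x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()   # one small step
            x_adv = x + (x_adv - x).clamp(-eps, eps)    # project into eps-ball
            x_adv = x_adv.clamp(0, 1)                   # keep pixels valid
    return x_adv.detach()
</syntaxhighlight>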

===Iterative Least-Likely Class Method===

Creates an image that is pushed towards the class with the lowest predicted score (the least-likely class).
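
A hedged sketch; compared with the untargeted loop above, only the label (the least-likely class) and the sign of the step change:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def iterative_least_likely(model, x, eps=0.03, alpha=0.005, steps=10):
    """Iteratively *descend* the loss towards the class the model
    currently scores lowest for x."""
    with torch.no_grad():
        y_ll = model(x).argmin(dim=1)   # least-likely class under the current prediction
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_ll)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()   # minus: move towards y_ll
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
</syntaxhighlight>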


==Targeted Adversarial Attacks==

Attacks which compel the model to predict a specific (wrong) desired output are called Targeted Adversarial Attacks.

* targeted (towards a chosen class; see the sketch below)
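
A hedged sketch of a targeted single-step variant of FGSM: the minus sign moves the input towards an attacker-chosen class <code>y_target</code> (an assumed name) instead of away from the true label:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, y_target, eps=0.03):
    """Like FGSM, but with a minus sign: decrease the loss with respect
    to an attacker-chosen target class instead of increasing it."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y_target)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv - eps * x_adv.grad.sign()   # step *towards* y_target
    return x_adv.clamp(0, 1).detach()
</syntaxhighlight>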

==(Un-)Targeted Adversarial Attacks==

can do both...

===Projected Gradient Descent (PGD)===

Find a perturbation that maximizes the loss of the model on a given input:
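
The update is commonly written as follows (a sketch following the formulation of Madry et al.; <math>\alpha</math> is the step size and <math>\Pi_{x+\mathcal{S}}</math> projects back onto the set of allowed perturbations <math>\mathcal{S}</math>):

<math>x^{t+1} = \Pi_{x+\mathcal{S}}\left(x^{t} + \alpha \operatorname{sign}\left(\nabla_{x} L(\theta, x^{t}, y)\right)\right)</math>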



=WHITE/BLACK BOX ATTACKS=

==on voice (ASR)==

===Psychoacoustic Hiding (Attacking Speech Recognition)===

* https://adversarial-attacks.net/
** Code: https://github.com/rub-ksv/adversarialattacks
** Paper: https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_08-2_Schonherr_paper.pdf
** Slides: https://www.ndss-symposium.org/wp-content/uploads/ndss2019_08-2_Schonherr_slides.pdf

==on written text (NLP)==

===paraphrasing attacks===


=BLACK BOX ATTACKS=


==on computer vision==

===zeroth order optimization (ZOO)===

ZOO-based attacks directly estimate the gradients of the targeted model from its outputs alone, so no white-box access is needed.
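
A minimal sketch of the zeroth-order idea: estimate per-coordinate gradients by symmetric finite differences over black-box queries; <code>loss_fn</code> is a hypothetical wrapper that queries the victim and returns a scalar loss:

<syntaxhighlight lang="python">
import numpy as np

def zoo_gradient_estimate(loss_fn, x, i, h=1e-4):
    """Symmetric finite-difference estimate of d loss / d x[i].
    loss_fn only needs black-box access to the victim model."""
    e = np.zeros_like(x)
    e.flat[i] = h
    return (loss_fn(x + e) - loss_fn(x - e)) / (2.0 * h)
</syntaxhighlight>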

===Black-Box Attacks using Adversarial Samples===

* a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack (see the sketch below)
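
A hedged sketch of the oracle-labeling step; <code>victim_predict</code> and <code>synthetic_inputs</code> are hypothetical stand-ins for the black-box victim and the attacker's synthetic data:

<syntaxhighlight lang="python">
import numpy as np

def label_with_oracle(victim_predict, synthetic_inputs):
    """Use the black-box victim as a labeling oracle: the attacker keeps
    only the predicted labels and trains a local substitute model on
    (synthetic_inputs, labels)."""
    return np.array([victim_predict(x) for x in synthetic_inputs])
</syntaxhighlight>

A substitute classifier trained on these labels can then be attacked with white-box methods (e.g. FGSM above), and the resulting adversarial examples often transfer back to the victim.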

===new Tesla Hack===


==on voice (ASR)==

===hidden voice commands===

* https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds



=Anti Surveillance=

* http://dismagazine.com/dystopia/evolved-lifestyles/8115/anti-surveillance-how-to-hide-from-machines/


=libraries=