C.heck

 
=Links=
 
simple perceptron (nice sketches): https://github.com/nature-of-code/NOC-S17-2-Intelligence-Learning/blob/master/week4-neural-networks/perceptron.pdf
 
=adversarial attacks=
ANNs (artificial neural networks) are extremely vulnerable to adversarial attacks.
=White Box Attacks=
* https://cv-tricks.com/how-to/breaking-deep-learning-with-adversarial-examples-using-tensorflow/
** Paper »ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD«: https://arxiv.org/pdf/1607.02533.pdf
 
==Untargeted Adversarial Attacks==
Adversarial attacks that just want '''your model to be confused and predict a wrong class''' are called Untargeted Adversarial Attacks.
* not targeted at any particular class (a minimal success check is sketched below)
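
A minimal sketch of what "untargeted success" means in code, assuming a trained Keras classifier <code>model</code>, an adversarial batch <code>x_adv</code> produced by any of the methods below, and the true integer labels <code>y</code> (all names are placeholder assumptions, not taken from the linked tutorials):

<syntaxhighlight lang="python">
import numpy as np

# An untargeted attack succeeds as soon as the predicted class differs
# from the true class; it does not matter which wrong class is predicted.
def untargeted_success_rate(model, x_adv, y):
    preds = np.argmax(model.predict(x_adv), axis=-1)
    return float(np.mean(preds != np.asarray(y)))
</syntaxhighlight>
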
===Fast Gradient Sign Method (FGSM)===
FGSM is a single-step attack, i.e. the perturbation is added in a single step instead of being accumulated over a loop (iterative attack).
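
A minimal FGSM sketch in TensorFlow 2, assuming a trained Keras classifier <code>model</code> that outputs class probabilities, an input batch <code>x</code> scaled to [0, 1], integer labels <code>y</code> and a step size <code>epsilon</code> (all placeholder assumptions; this is not the code from the links above):

<syntaxhighlight lang="python">
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, x, y, epsilon=0.01):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x))
    grad = tape.gradient(loss, x)
    # single step: move every pixel by +/- epsilon in the direction
    # that increases the loss for the true label
    x_adv = x + epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)
</syntaxhighlight>
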
===Basic Iterative Method===
Apply the perturbation in several small step sizes instead of one single step.
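
A sketch of the Basic Iterative Method under the same assumptions as the FGSM snippet above (<code>model</code>, <code>x</code> in [0, 1], labels <code>y</code>); <code>alpha</code> is the per-step size and <code>epsilon</code> bounds the total perturbation:

<syntaxhighlight lang="python">
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def basic_iterative_method(model, x, y, epsilon=0.03, alpha=0.005, steps=10):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    x_adv = x
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(y, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        # small FGSM-like step ...
        x_adv = x_adv + alpha * tf.sign(grad)
        # ... then keep the total perturbation inside the epsilon ball
        # and the valid pixel range
        x_adv = tf.clip_by_value(x_adv, x - epsilon, x + epsilon)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
</syntaxhighlight>
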
===Iterative Least-Likely Class Method===
Create an image that is driven towards the class with the lowest score in the prediction (the least-likely class).
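
A sketch of the Iterative Least-Likely Class method under the same assumptions: the target is the class the model rates lowest for the clean input, and the attack minimizes the loss towards that target (the gradient sign is subtracted):

<syntaxhighlight lang="python">
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def iterative_least_likely(model, x, epsilon=0.03, alpha=0.005, steps=10):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    # target = class with the lowest predicted score for the clean input
    y_target = tf.argmin(model(x), axis=-1)
    x_adv = x
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(y_target, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        # descend: push the prediction towards the least-likely class
        x_adv = x_adv - alpha * tf.sign(grad)
        x_adv = tf.clip_by_value(x_adv, x - epsilon, x + epsilon)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
</syntaxhighlight>
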
 
* https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8
** Jupyter Notebook: https://github.com/dangeng/Simple_Adversarial_Examples
 
==Targeted Adversarial Attacks==
Attacks which compel the model to predict a '''(wrong) desired output''' are called Targeted Adversarial Attacks (see the sketch after the bullet below).
* targeted
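
A minimal targeted variant, again under the assumptions of the FGSM sketch above: instead of increasing the loss for the true label, the loss for a chosen target class <code>y_target</code> is decreased:

<syntaxhighlight lang="python">
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def targeted_fgsm(model, x, y_target, epsilon=0.01):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        # loss towards the *desired* (wrong) output class
        loss = loss_fn(y_target, model(x))
    grad = tape.gradient(loss, x)
    # subtract the gradient sign, i.e. make the target class more likely
    x_adv = x - epsilon * tf.sign(grad)
    return tf.clip_by_value(x_adv, 0.0, 1.0)
</syntaxhighlight>
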
 
==(Un-)Targeted Adversarial Attacks==
can be used both ways (targeted or untargeted)...
===Projected Gradient Descent (PGD)===
Find a perturbation that maximizes a model's loss for a given input (a sketch follows the links below):
* MNIST example: https://towardsdatascience.com/know-your-enemy-7f7c5038bdf3
** Jupyter Notebook: https://github.com/oscarknagg/adversarial/blob/master/notebooks/Creating_And_Defending_From_Adversarial_Examples.ipynb
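
A PGD sketch under the same assumptions (this is not the code from the linked notebook): like the Basic Iterative Method, but with a random start inside the epsilon ball and a projection back into that ball after every step:

<syntaxhighlight lang="python">
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def pgd(model, x, y, epsilon=0.03, alpha=0.007, steps=20):
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    # random start inside the epsilon ball around the clean input
    x_adv = x + tf.random.uniform(tf.shape(x), -epsilon, epsilon)
    x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = loss_fn(y, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        # gradient-sign step that increases the loss ...
        x_adv = x_adv + alpha * tf.sign(grad)
        # ... projected back onto the epsilon ball and the valid pixel range
        x_adv = tf.clip_by_value(x_adv, x - epsilon, x + epsilon)
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)
    return x_adv
</syntaxhighlight>
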
 
=Black Box Attacks=
 
[[KNN-Hacks]]
 
* Practical examples: https://boingboing.net/tag/adversarial-examples
 
 
* https://en.wikipedia.org/wiki/Deep_learning#Cyberthreat
 
==on voice (ASR)==
* https://www.the-ambient.com/features/weird-ways-echo-can-be-hacked-how-to-stop-it-231
 
===Psychoacoustic Hiding (Attacking Speech Recognition)===
 
* https://adversarial-attacks.net/
 
===hidden voice commands===
 
* https://www.theregister.co.uk/2016/07/11/siri_hacking_phones/
 
* https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds
==on written text (NLP)==
 
===paraphrasing attacks===
 
* https://venturebeat.com/2019/04/01/text-based-ai-models-are-vulnerable-to-paraphrasing-attacks-researchers-find/
 
* https://motherboard.vice.com/en_us/article/9axx5e/ai-can-be-fooled-with-one-misspelled-word
 
==on computer vision==
 
===Tesla===
 
* https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane
 
* https://boingboing.net/2019/03/31/mote-in-cars-eye.html
 
** Paper from the research team: https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf
 
 
===libraries===
 


XAI

XAI/NLG

ethics

esoteric neural net

AI-generated language

NLP / NLG / NLU / NLI

NLP:

NLU:

NLG:

https://github.com/dangeng/Simple_Adversarial_Examples

Speech recognition

https://de.wikipedia.org/wiki/Spracherkennung

databases

German:

English:

E2E NLG Challenge:

chatbots

Toolkits/Libraries

tryouts:

(AI-generated) crypto

Reproductive AI

https://www.sir-apfelot.de/kuenstliche-intelligenz-erschafft-neue-ki-systeme-10436/

last semester

Datei:Neuronales-netz am eigenen-bild.ipynb