C.heck: Difference between versions

From exmediawiki
=Links=
 
 
simple perceptron (nice sketches): https://github.com/nature-of-code/NOC-S17-2-Intelligence-Learning/blob/master/week4-neural-networks/perceptron.pdf
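Boiled down to code, the perceptron from the linked sketches is only a few lines. The data and hyperparameters below are illustrative (it learns logical AND), not taken from the PDF:

```python
# Minimal perceptron with the classic perceptron learning rule.
# Illustrative task: learn logical AND on binary inputs.

def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s > 0 else 0

def train(data, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            err = y - predict(w, b, x)      # 0 if correct, +/-1 if wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop finds a separating line.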
 
=adversarial attacks=

KNNs (artificial neural networks) are extremely vulnerable to...

* practical examples: https://boingboing.net/tag/adversarial-examples
* https://bdtechtalks.com/2018/12/27/deep-learning-adversarial-attacks-ai-malware/
* https://www.dailydot.com/debug/ai-malware/
* https://singularityhub.com/2017/10/10/ai-is-easy-to-fool-why-that-needs-to-change
* https://en.wikipedia.org/wiki/Deep_learning#Cyberthreat

=White Box Attacks=

* https://cv-tricks.com/how-to/breaking-deep-learning-with-adversarial-examples-using-tensorflow/

==Untargeted Adversarial Attacks==

Adversarial attacks that just want your model to be confused and predict a wrong class are called Untargeted Adversarial Attacks.

* not targeted

===Fast Gradient Sign Method (FGSM)===

FGSM is a single-step attack, i.e. the perturbation is added in a single step instead of being added over a loop (iterative attack).

===Basic Iterative Method===

Apply the perturbation in several small step sizes instead of in a single step.

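A minimal sketch of the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method, using a logistic-regression toy in place of a real network — the attack only needs the gradient of the loss with respect to the input, and the weights and inputs below are made up:

```python
import math

# Stand-in "model": logistic regression p(y=1|x) = sigmoid(w.x + b).
# Adversarial attacks need the gradient of the loss w.r.t. the INPUT x.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def input_gradient(w, b, x, y):
    # d(cross-entropy)/dx for logistic regression: (p - y) * w
    return [(predict_proba(w, b, x) - y) * wi for wi in w]

def fgsm(w, b, x, y, eps):
    # single step of size eps in the direction of the gradient's sign
    g = input_gradient(w, b, x, y)
    return [xi + eps * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]

def basic_iterative(w, b, x, y, eps, steps):
    # same idea, applied in several small steps instead of one big one
    for _ in range(steps):
        x = fgsm(w, b, x, y, eps / steps)
    return x

w, b = [2.0, -3.0], 0.5
x, y = [1.0, -1.0], 1                     # confidently classified as class 1
x_adv = fgsm(w, b, x, y, eps=0.5)
print(predict_proba(w, b, x), predict_proba(w, b, x_adv))  # confidence drops
```

On a real network the gradient would come from autodiff (e.g. TensorFlow or PyTorch) instead of the closed-form expression used here.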
===Iterative Least-Likely Class Method===

Create an image that carries the lowest score in the model's prediction.
 
* https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8
 
** Jupyter Notebook: https://github.com/dangeng/Simple_Adversarial_Examples
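A sketch of the idea, assuming a hypothetical 3-class linear softmax classifier instead of a real CNN; the perturbation budget is deliberately oversized so the toy example actually flips to the least-likely class:

```python
import math

# Hypothetical 3-class linear softmax "classifier" standing in for a CNN.
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]   # one weight row per class

def logits(x):
    return [sum(wk * xi for wk, xi in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def least_likely_attack(x, eps, steps):
    # target = the class with the LOWEST predicted score
    target = min(range(len(W)), key=lambda k: softmax(logits(x))[k])
    for _ in range(steps):
        p = softmax(logits(x))
        # gradient of cross-entropy toward `target` w.r.t. the input:
        # sum_k (p_k - 1[k == target]) * W[k]
        g = [sum((p[k] - (1.0 if k == target else 0.0)) * W[k][i]
                 for k in range(len(W)))
             for i in range(len(x))]
        # DESCEND the loss toward the target class (sign step)
        x = [xi - (eps / steps) * (1.0 if gi > 0 else -1.0)
             for xi, gi in zip(x, g)]
    return x, target

x = [2.0, 0.5]                                # starts out as class 0
x_adv, target = least_likely_attack(x, eps=6.0, steps=10)
print(target, max(range(3), key=lambda k: softmax(logits(x_adv))[k]))  # -> 2 2
```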
 
 
 
==Targeted Adversarial Attacks==

Attacks which compel the model to predict a '''(wrong) desired output''' are called Targeted Adversarial Attacks.

* targeted
 
 
==(Un-)Targeted Adversarial Attacks==

can do both...

===Projected Gradient Descent (PGD)===

Find a perturbation that maximizes the model's loss on a given input:

** Jupyter Notebook: https://github.com/oscarknagg/adversarial/blob/master/notebooks/Creating_And_Defending_From_Adversarial_Examples.ipynb

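A minimal Projected Gradient Descent sketch under the same toy assumptions (logistic-regression model, made-up weights): iterated sign steps that maximize the loss, each projected back into an L-infinity ball of radius eps around the original input:

```python
import math

# Toy logistic-regression model; PGD = iterative FGSM-style steps,
# each followed by projection back into the eps-ball around x0.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def input_gradient(w, b, x, y):
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return [(p - y) * wi for wi in w]        # d(cross-entropy)/dx

def pgd(w, b, x0, y, eps=0.5, alpha=0.2, steps=20):
    x = list(x0)
    for _ in range(steps):
        g = input_gradient(w, b, x, y)
        # ascend the loss with a step of size alpha
        x = [xi + alpha * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]
        # projection: clip every coordinate back into [x0 - eps, x0 + eps]
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

w, b = [2.0, -3.0], 0.5
x0, y = [1.0, -1.0], 1
x_adv = pgd(w, b, x0, y)
print(x_adv)  # never strays further than eps from x0 in any coordinate
```

The projection step is what distinguishes PGD from the plain Basic Iterative Method: the perturbation can never leave the eps-ball, no matter how many steps run.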
=BLACK BOX ATTACKS=

==(Un-)Targeted Adversarial Attacks==

can do both...

* https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8
** Jupyter Notebook: https://github.com/dangeng/Simple_Adversarial_Examples

[[KNN-Hacks]]
 
==on voice (ASR)==

* https://www.the-ambient.com/features/weird-ways-echo-can-be-hacked-how-to-stop-it-231

===hidden voice commands===

* https://www.theregister.co.uk/2016/07/11/siri_hacking_phones/
* https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds
 
===Psychoacoustic Hiding (Attacking Speech Recognition)===

* https://adversarial-attacks.net/
 
** presentation slides: https://www.ndss-symposium.org/wp-content/uploads/ndss2019_08-2_Schonherr_slides.pdf
 
==on written text (NLP)==

===paraphrasing attacks===
==on computer vision==

===Tesla===
 
** paper by the research team: https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf
  
==Anti Surveillance==

http://dismagazine.com/dystopia/evolved-lifestyles/8115/anti-surveillance-how-to-hide-from-machines/

==libraries==
 
* https://github.com/bethgelab
* https://github.com/tensorflow/cleverhans

Revision as of 16 April 2019, 13:26






