C.heck
From exmediawiki
Revision as of 16 April 2019, 08:54
Links
simple perceptron (nice sketches): https://github.com/nature-of-code/NOC-S17-2-Intelligence-Learning/blob/master/week4-neural-networks/perceptron.pdf
adversarial attacks
Artificial neural networks (ANNs) are extremely vulnerable to adversarial attacks.
White Box Attacks
- https://cv-tricks.com/how-to/breaking-deep-learning-with-adversarial-examples-using-tensorflow/
- Paper »ADVERSARIAL EXAMPLES IN THE PHYSICAL WORLD«: https://arxiv.org/pdf/1607.02533.pdf
Untargeted Adversarial Attacks
Adversarial attacks that merely aim to confuse the model into predicting any wrong class are called untargeted adversarial attacks.
- not aimed at a specific output class
Fast Gradient Sign Method (FGSM)
FGSM is a single-step attack, i.e. the perturbation is added in one step instead of being accumulated over a loop (an iterative attack).
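A minimal sketch of the FGSM step, assuming a toy logistic-regression "model" in NumPy (the weights, input, and eps here are made up for illustration; the papers above attack real deep nets):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step untargeted FGSM: x_adv = x + eps * sign(grad_x loss)."""
    p = sigmoid(w @ x + b)        # model's probability for class 1
    grad_x = (p - y) * w          # closed-form gradient of cross-entropy wrt x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)            # toy "trained" weights
b = 0.1
x = rng.normal(size=8)            # the input to perturb
y = 1.0                           # its true label
x_adv = fgsm(x, y, w, b, eps=0.25)

# A single signed-gradient step lowers the model's confidence in the true class:
print(sigmoid(w @ x + b), "->", sigmoid(w @ x_adv + b))
```

The whole attack is the one `np.sign` line; eps trades off how visible the perturbation is against how strongly the loss increases.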
Basic Iterative Method
apply the perturbation in several small steps instead of a single large one
Iterative Least-Likely Class Method
craft an image that the model is driven toward the class with the lowest predicted score
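A sketch of the iterative least-likely class method under toy assumptions (a linear softmax "model" in NumPy; `W`, eps, alpha, and the step count are illustrative): pick the class the model currently rates lowest, then repeatedly step the input toward it, clipping so the result stays within an eps-ball of the original.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def least_likely_attack(x, W, eps, alpha, steps):
    """Iteratively drive x toward the model's least-likely class."""
    x0 = x.copy()
    y_ll = int(np.argmin(softmax(W @ x)))     # class with the lowest score
    target = np.eye(W.shape[0])[y_ll]         # one-hot vector for that class
    for _ in range(steps):
        p = softmax(W @ x)
        grad_x = W.T @ (p - target)           # gradient of CE(x, y_ll) wrt x
        x = x - alpha * np.sign(grad_x)       # descend: make y_ll more likely
        x = np.clip(x, x0 - eps, x0 + eps)    # stay close to the original
    return x, y_ll

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 8))                   # toy softmax weights, 3 classes
x = rng.normal(size=8)
x_adv, y_ll = least_likely_attack(x, W, eps=0.5, alpha=0.05, steps=40)
```

This differs from FGSM and the basic iterative method only in the sign of the step: it descends the loss toward a chosen class rather than ascending it away from the true one.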
- https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8
- Jupyter Notebook: https://github.com/dangeng/Simple_Adversarial_Examples
Targeted Adversarial Attacks
Attacks that force the model to predict a specific (wrong) desired output are called targeted adversarial attacks.
- aimed at a chosen output class
(Un-)Targeted Adversarial Attacks
can work both ways, untargeted or targeted...
Projected Gradient Descent (PGD)
find a perturbation that maximizes the model's loss for a given input:
- MNIST example: https://towardsdatascience.com/know-your-enemy-7f7c5038bdf3
- Jupyter Notebook: https://github.com/oscarknagg/adversarial/blob/master/notebooks/Creating_And_Defending_From_Adversarial_Examples.ipynb
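As a sketch of PGD under toy assumptions (a logistic model in NumPy; eps, alpha, and the step count are made up for illustration): start at a random point inside the eps-ball, repeatedly take signed-gradient ascent steps on the loss, and project back into the ball after each step.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_linf(x, y, w, b, eps, alpha, steps, rng):
    """L_inf-bounded PGD against a toy logistic model."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)   # random start
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        grad_x = (p - y) * w                      # gradient of cross-entropy wrt input
        x_adv = x_adv + alpha * np.sign(grad_x)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

rng = np.random.default_rng(2)
w = rng.normal(size=8)
b = -0.2
x = rng.normal(size=8)
y = 1.0
x_adv = pgd_linf(x, y, w, b, eps=0.3, alpha=0.05, steps=20, rng=rng)
print(sigmoid(w @ x + b), "->", sigmoid(w @ x_adv + b))
```

The random start and the per-step projection are what distinguish PGD from the basic iterative method above.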
Black Box Attacks
[[KNN-Hacks]]
- practical examples: https://boingboing.net/tag/adversarial-examples
- https://bdtechtalks.com/2018/12/27/deep-learning-adversarial-attacks-ai-malware/
- https://www.dailydot.com/debug/ai-malware/
- https://en.wikipedia.org/wiki/Deep_learning#Cyberthreat
on voice (ASR)
- https://www.the-ambient.com/features/weird-ways-echo-can-be-hacked-how-to-stop-it-231
Psychoacoustic Hiding (Attacking Speech Recognition)
- https://adversarial-attacks.net/
hidden voice commands
- https://www.theregister.co.uk/2016/07/11/siri_hacking_phones/
- https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds
on written text (NLP)
paraphrasing attacks
- https://venturebeat.com/2019/04/01/text-based-ai-models-are-vulnerable-to-paraphrasing-attacks-researchers-find/
- https://bdtechtalks.com/2019/04/02/ai-nlp-paraphrasing-adversarial-attacks/
on computer vision
Tesla
- https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane
- https://boingboing.net/2019/03/31/mote-in-cars-eye.html
- paper from the research team: https://keenlab.tencent.com/en/whitepapers/Experimental_Security_Research_of_Tesla_Autopilot.pdf
libraries
XAI
- https://de.m.wikipedia.org/wiki/Explainable_Artificial_Intelligence
- https://netzpolitik.org/2018/enquete-kommission-kuenstliche-intelligenz-sachverstaendige-und-abgeordnete-klaeren-grundbegriffe/
- https://www.ayasdi.com/blog/artificial-intelligence/trust-challenge-explainable-ai-not-enough/
- https://www.bons.ai/blog/explainable-artificial-intelligence-using-model-induction
- https://en.m.wikipedia.org/wiki/Right_to_explanation
- https://bdtechtalks.com/2018/09/25/explainable-interpretable-ai/
- RISE: https://bdtechtalks.com/2018/10/15/kate-saenko-explainable-ai-deep-learning-rise/
- DARPA: https://www.darpa.mil/program/explainable-artificial-intelligence
ethics
- https://www.economist.com/science-and-technology/2018/02/15/computer-programs-recognise-white-men-better-than-black-women
- https://books.google.de/books?id=rLsyDwAAQBAJ&pg=PA95&redir_esc=y#v=onepage&q&f=false
- https://books.google.de/books?id=_H1K3vojDFQC&pg=PA762&redir_esc=y#v=onepage&q&f=false
- https://neil.fraser.name/writing/tank/
- https://www.wired.com/story/why-ai-is-still-waiting-for-its-ethics-transplant/
- AI Now Report: https://medium.com/@AINowInstitute/the-10-top-recommendations-for-the-ai-field-in-2017-b3253624a7
- https://bdtechtalks.com/2018/03/26/racist-sexist-ai-deep-learning-algorithms/
esoteric neural nets
- researchers seek a programming language of their own: https://t3n.de/news/machine-learning-facebooks-ki-chef-sucht-sprache-1144900/
- esoteric programming languages: http://kryptografie.de/kryptografie/chiffre/index-sprachen.htm
AI-generated language
- Google: https://motherboard.vice.com/de/article/mg7md8/eine-kuenstliche-intelligenz-von-google-hat-gerade-seine-eigene-sprache-erfunden
- published paper: https://arxiv.org/pdf/1611.04558v1.pdf
- FB Bots: https://code.fb.com/ml-applications/deal-or-no-deal-training-ai-bots-to-negotiate/
NLP / NLG / NLU / NLI
NLP:
NLU:
- https://www.informatik-aktuell.de/betrieb/kuenstliche-intelligenz/natural-language-understanding-nlu.html
- https://en.wikipedia.org/wiki/Natural-language_understanding
NLG:
- https://de.wikipedia.org/wiki/Textgenerierung
- http://www.thealit.de/lab/serialitaet/teil/nieberle/nieberle.html
Speech recognition
https://de.wikipedia.org/wiki/Spracherkennung
databases
German:
English:
E2E NLG Challenge:
chatbots
- https://bdtechtalks.com/2017/08/21/rob-high-ibm-watson-cto-artificial-intelligence-chatbots/
- https://chatbotsmagazine.com/contextual-chat-bots-with-tensorflow-4391749d0077
- Facebook-Messenger-Bot: https://dzone.com/articles/how-i-used-deep-learning-to-train-a-chatbot-to-tal
- https://tutorials.botsfloor.com/how-to-build-your-first-chatbot-c84495d4622d
- Jupyter Notebooks: https://github.com/suriyadeepan/practical_seq2seq
Toolkits/Libraries
- Natural Language Toolkit: http://www.nltk.org/
- Poetry Generator: https://github.com/schollz/poetry-generator
tryouts:
- https://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/
- https://remicnrd.github.io/Natural-language-generation/
- https://github.com/shashank-bhatt-07/Natural-Language-Generation-using-LSTM-Keras
(AI-generated) crypto
- https://motherboard.vice.com/de/article/8q8wkv/google-ki-entwickelt-verschluesselung-die-selbst-google-nicht-versteht
- http://kryptografie.de/kryptografie/index.htm
Reproductive AI
https://www.sir-apfelot.de/kuenstliche-intelligenz-erschafft-neue-ki-systeme-10436/