C.heck
=Keras Examples=

https://github.com/keras-team/keras/tree/master/examples

----
 
simple perceptron (nice sketches): https://github.com/nature-of-code/NOC-S17-2-Intelligence-Learning/blob/master/week4-neural-networks/perceptron.pdf
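The perceptron rule from the linked sketches can be tried in a few lines of NumPy (a minimal sketch; the OR toy data is an illustrative assumption, not from the linked PDF):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron learning rule: predict with a thresholded
    dot product and nudge the weights by the error on each sample."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)
    return w, b

# toy OR function: linearly separable, so the perceptron converges
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])
w, b = train_perceptron(X, y)
```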
 
=ADVERSARIAL ATTACKS=

neural networks are extremely vulnerable to...

==WHITE BOX ATTACKS==

==Untargeted Adversarial Attacks==
 
Adversarial attacks that just want '''your model to be confused and predict a wrong class''' are called Untargeted Adversarial Attacks.

* untargeted
 
===Fast Gradient Sign Method (FGSM)===
 
FGSM is a single-step attack, i.e. the perturbation is added in one step instead of being accumulated over a loop (iterative attack).
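The single FGSM step can be sketched on a toy logistic model (the model and numbers are illustrative assumptions, not from any linked example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, grad, eps=0.1):
    """FGSM: add eps times the *sign* of the loss gradient w.r.t.
    the input, in one single step."""
    return x + eps * np.sign(grad)

# toy logistic "model" p = sigmoid(w . x); for binary cross-entropy
# the gradient of the loss w.r.t. x is (p - y) * w
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0
p = sigmoid(np.dot(w, x))
x_adv = fgsm_perturb(x, (p - y) * w, eps=0.1)
```

After the step, the model's confidence in the true class y drops, i.e. the loss has increased.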
 
===Basic Iterative Method===
 
 
apply the perturbation in several small steps instead of in a single large one
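The same idea in small steps can be sketched on a toy logistic model (an illustrative assumption); the accumulated perturbation is clipped back into an eps-ball after every step:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bim_attack(x, w, y, eps=0.1, alpha=0.02, steps=10):
    """Basic Iterative Method: repeat small FGSM steps and clip the
    total perturbation into an eps-ball around the original input.
    Toy model p = sigmoid(w . x); BCE gradient w.r.t. x is (p - y)*w."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv))
        grad = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the eps-ball
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = bim_attack(x, w, 1.0)
```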
 
===Iterative Least-Likely Class Method===
 
 
craft an image that is driven toward the class with the lowest prediction score
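A sketch of the least-likely-class variant on a toy linear softmax model (model and numbers are illustrative assumptions): pick the class with the lowest current score and *descend* the loss toward it:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def least_likely_attack(x, W, eps=0.1, alpha=0.02, steps=10):
    """Iterative least-likely class method on p = softmax(W @ x):
    take the class the model currently rates lowest as the target,
    then do signed gradient *descent* on its cross-entropy so the
    model is pushed to predict it."""
    target = int(np.argmin(softmax(W @ x)))  # least-likely class
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        err = p.copy()
        err[target] -= 1.0          # dCE/dlogits = p - onehot(target)
        grad = W.T @ err            # chain rule back to the input
        x_adv = np.clip(x_adv - alpha * np.sign(grad), x - eps, x + eps)
    return x_adv, target

W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
x = np.array([1.0, 0.5])
x_adv, target = least_likely_attack(x, W)
```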
 
==Targeted Adversarial Attacks==
 
 
Attacks which compel the model to predict a '''(wrong) desired output''' are called Targeted Adversarial Attacks.

* targeted
 
==(Un-)Targeted Adversarial Attacks==
 
 
can do both...
 
===Projected Gradient Descent (PGD)===
 
 
find a perturbation that maximizes a model's loss on a given input:
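PGD can be sketched as iterated signed gradient ascent with a random start and a projection back into the eps-ball (toy logistic model assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, w, y, eps=0.1, alpha=0.02, steps=20):
    """Projected Gradient Descent on a toy model p = sigmoid(w . x):
    start from a random point in the eps-ball, take signed gradient
    *ascent* steps on the loss, and project back after every step."""
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        p = sigmoid(np.dot(w, x_adv))
        grad = (p - y) * w                        # d(BCE)/dx
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = pgd_attack(x, w, 1.0)
```

The random restart is the main practical difference from the Basic Iterative Method above.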
==BLACK BOX ATTACKS==
  
 
==on computer vision==
 
 
===zeroth order optimization (ZOO)===
 
* attacks that directly estimate the gradients of the targeted DNN
** https://arxiv.org/abs/1708.03999
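The core zeroth-order trick can be sketched as a finite-difference gradient estimate that only queries the model's outputs (a simplified illustration; the real ZOO attack adds coordinate sampling and other refinements):

```python
import numpy as np

def zoo_gradient_estimate(f, x, h=1e-4):
    """Zeroth-order (black-box) gradient estimate via symmetric
    finite differences: query f only through its scalar outputs."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        grad[i] = (f(x + e) - f(x - e)) / (2 * h)
    return grad

def f(v):
    """Stand-in for a black-box model score: f(x) = ||x||^2."""
    return float(np.sum(v ** 2))

g = zoo_gradient_estimate(f, np.array([1.0, 2.0]))
```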
 
===Black-Box Attacks using Adversarial Samples===
 
 
* a technique that uses the victim model as an oracle to label a synthetic training set for a substitute model, so the attacker need not even collect a training set to mount the attack
** https://arxiv.org/abs/1605.07277
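The substitute-model idea can be sketched like this: label synthetic inputs with the victim's (black-box) predictions and fit a local substitute on them. The oracle and the logistic substitute here are toy assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def oracle(X):
    """Black-box "victim": the attacker only sees its labels."""
    return (X[:, 0] + 2 * X[:, 1] > 0).astype(float)

def train_substitute(n_queries=200, lr=0.5, epochs=200):
    """Label a synthetic dataset with the victim's predictions, then
    fit a local substitute (logistic regression, gradient descent)."""
    X = rng.normal(size=(n_queries, 2))  # synthetic query inputs
    y = oracle(X)                        # oracle-provided labels
    w = np.zeros(2)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)  # BCE gradient step
    return w

w = train_substitute()
```

White-box attacks crafted against the substitute often transfer to the victim.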
 
===new Tesla Hack===
 
 
* https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane
 
 
==on voice (ASR)==
 
 
* https://www.the-ambient.com/features/weird-ways-echo-can-be-hacked-how-to-stop-it-231
 
 
===hidden voice commands===
 
 
* https://www.theregister.co.uk/2016/07/11/siri_hacking_phones/
* https://www.fastcompany.com/90240975/alexa-can-be-hacked-by-chirping-birds
 
===Psychoacoustic Hiding (Attacking Speech Recognition)===
 
 
* https://adversarial-attacks.net/
 
  
 
==on written text (NLP)==
 
 
===paraphrasing attacks===
 
 
* https://venturebeat.com/2019/04/01/text-based-ai-models-are-vulnerable-to-paraphrasing-attacks-researchers-find/
 
  
 
==NLG==
 
https://byteacademy.co/blog/overview-NLG

'''XAI through language rationalization'''

* Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
** https://arxiv.org/abs/1702.07826
 
* https://de.wikipedia.org/wiki/Textgenerierung
* http://www.thealit.de/lab/serialitaet/teil/nieberle/nieberle.html
 
** https://www.fastcompany.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it
 
  
===techniques===

===(un-)supervised===

====LSTM====

http://colah.github.io/posts/2015-08-Understanding-LSTMs/
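The gating equations from the linked post can be sketched as a single LSTM cell step (random weights, illustration only, no trained model):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM cell step: forget gate f, input gate i, candidate g,
    output gate o; then c = f*c_prev + i*g and h = o*tanh(c)."""
    Wf, Wi, Wg, Wo = params          # each maps [h_prev; x] -> hidden
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z)
    i = sigmoid(Wi @ z)
    g = np.tanh(Wg @ z)
    o = sigmoid(Wo @ z)
    c = f * c_prev + i * g           # cell state carries long-term info
    h = o * np.tanh(c)               # hidden state is the output
    return h, c

# tiny example: input size 3, hidden size 2
rng = np.random.default_rng(0)
params = [rng.normal(size=(2, 5)) for _ in range(4)]
h, c = lstm_step(rng.normal(size=3), np.zeros(2), np.zeros(2), params)
```

Biases are omitted here for brevity; real implementations (e.g. the Keras LSTM layer) include them.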
 
====LSTM+RNN====
 
* »on the road« by AI: https://medium.com/artists-and-machine-intelligence/ai-poetry-hits-the-road-eb685dfc1544
 
====Autoencoder====
 
https://www.wired.co.uk/article/google-artificial-intelligence-poetry
 
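A minimal linear autoencoder sketch (an illustration only, unrelated to the linked article's model): compress 2-D points that lie on a line down to one code dimension and reconstruct them:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_linear_autoencoder(X, k=1, lr=0.01, epochs=1000):
    """Linear autoencoder: encode to k dims with We, decode with Wd,
    trained by gradient descent on the reconstruction MSE."""
    n, d = X.shape
    We = rng.normal(scale=0.1, size=(k, d))
    Wd = rng.normal(scale=0.1, size=(d, k))
    for _ in range(epochs):
        Z = X @ We.T                 # encode
        R = Z @ Wd.T                 # decode
        E = R - X                    # reconstruction error
        gWd = E.T @ Z / n            # dMSE/dWd
        gWe = (E @ Wd).T @ X / n     # dMSE/dWe
        Wd -= lr * gWd
        We -= lr * gWe
    return We, Wd

# data on a 1-D line in 2-D: a single code unit can capture it
X = rng.normal(size=(200, 1)) @ np.array([[1.0, 2.0]])
We, Wd = train_linear_autoencoder(X, k=1)
```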
====LSTM+Autoencoder====
* https://github.com/keras-team/keras/issues/1401
* https://www.dlology.com/blog/how-to-do-unsupervised-clustering-with-keras/
 
====GAN====
 
https://arxiv.org/abs/1705.10929
====transformer-based language model====
OpenAI's GPT-2:

* https://openai.com/blog/better-language-models/
** https://github.com/openai/gpt-2
Discussion:
* https://www.skynettoday.com/briefs/gpt2
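Decoding from such a model is typically done with temperature and top-k sampling; a sketch over stand-in logits (the real GPT-2 vocabulary and outputs are not modeled here):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_token(logits, temperature=0.8, top_k=3):
    """One decoding step as commonly used with models like GPT-2:
    scale logits by a temperature, keep only the top-k candidates,
    renormalize with a softmax, and sample an index."""
    logits = np.asarray(logits, dtype=float) / temperature
    top = np.argsort(logits)[-top_k:]              # k best token ids
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))
```

Lower temperatures and smaller k make the output more deterministic; with `top_k=1` this reduces to greedy decoding.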
  
 
===Poetry===
 
 
====examples...====
 
 
* https://hackernoon.com/i-tried-my-hand-at-deep-learning-and-made-some-poetry-along-the-way-2e350c33376f
 

Revision as of 16 April 2019, 14:07






==Anti Surveillance==

* http://dismagazine.com/dystopia/evolved-lifestyles/8115/anti-surveillance-how-to-hide-from-machines/

==libraries==

=ETHICS=

=XAI=

==XAI/NLG==

=LANGUAGE=

==esoteric neural net (programming language)==

==NLU / NLI==

==NLP==

===Speech recognition===

* https://de.wikipedia.org/wiki/Spracherkennung


====databases====

German:

English:

E2E NLG Challenge:

==chatbots==

==Toolkits/Libraries==

tryouts:

=(AI-GENERATED) CRYPTO=

=REPRODUCTIVE AI=

* https://www.sir-apfelot.de/kuenstliche-intelligenz-erschafft-neue-ki-systeme-10436/