C.heck: Difference between revisions


* FB Bots: https://code.fb.com/ml-applications/deal-or-no-deal-training-ai-bots-to-negotiate/
** https://www.fastcompany.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

===techniques===

====LSTM+RNN====
* »on the road« by AI: https://medium.com/artists-and-machine-intelligence/ai-poetry-hits-the-road-eb685dfc1544

====Autoencoders====
https://www.wired.co.uk/article/google-artificial-intelligence-poetry

====GAN====

====Transformations====

===Poetry===

Revision as of 13:29, 16 April 2019

simple perceptron (nice sketches): https://github.com/nature-of-code/NOC-S17-2-Intelligence-Learning/blob/master/week4-neural-networks/perceptron.pdf
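
A minimal sketch of such a perceptron in Python/NumPy; the OR-gate training data, learning rate and epoch count are illustrative assumptions, not taken from the linked PDF:

<syntaxhighlight lang="python">
import numpy as np

# Toy training data: logical OR; the last column is a constant bias input.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]])
y = np.array([0, 1, 1, 1])

w = np.zeros(3)   # weights, including the bias weight
lr = 0.1          # learning rate

# Classic perceptron learning rule: adjust the weights only when the prediction is wrong.
for epoch in range(20):
    for xi, target in zip(X, y):
        pred = 1 if w @ xi > 0 else 0     # step activation
        w += lr * (target - pred) * xi    # update rule

print(w)                                        # learned weights
print([(1 if w @ xi > 0 else 0) for xi in X])   # should reproduce y
</syntaxhighlight>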

ADVERSARIAL ATTACKS

Artificial neural networks (KNNs) are extremely vulnerable to...

WHITE BOX ATTACKS

Untargeted Adversarial Attacks

Adversarial attacks that only aim to confuse the model into predicting any wrong class are called Untargeted Adversarial Attacks.

  • untargeted

Fast Gradient Sign Method(FGSM)

FGSM is a single-step attack, i.e. the perturbation is added in a single step instead of being added over a loop (iterative attack).
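
A minimal FGSM sketch in PyTorch; model, image, label and the epsilon value are placeholders, not something defined in these notes:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def fgsm(model, image, label, epsilon=0.03):
    """One-step FGSM: add epsilon * sign(gradient of the loss w.r.t. the input).

    image: tensor of shape (1, C, H, W); label: tensor of shape (1,).
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Single step in the direction of the gradient sign
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixel values in a valid range
</syntaxhighlight>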

Basic Iterative Method

apply the perturbation in several small steps instead of in a single step
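
The same idea applied iteratively might look like this; step size, step count and the clipping to an epsilon neighbourhood are illustrative assumptions:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def basic_iterative_method(model, image, label, epsilon=0.03, alpha=0.005, steps=10):
    """BIM: many small FGSM steps; after each step the total perturbation is
    clipped back into the epsilon neighbourhood of the original image."""
    original = image.clone().detach()
    adversarial = original.clone()
    for _ in range(steps):
        adversarial = adversarial.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adversarial), label)
        loss.backward()
        adversarial = adversarial + alpha * adversarial.grad.sign()
        # stay within the epsilon neighbourhood of the original image
        adversarial = original + (adversarial - original).clamp(-epsilon, epsilon)
        adversarial = adversarial.clamp(0, 1)
    return adversarial.detach()
</syntaxhighlight>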

Iterative Least-Likely Class Method

create an image that is pushed towards the class with the lowest prediction score (the least-likely class)
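
A possible sketch, reusing the iterative scheme above but descending towards the least-likely class; all parameter values are again illustrative:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def least_likely_class_attack(model, image, epsilon=0.03, alpha=0.005, steps=10):
    """Iteratively push the image towards the class the model currently rates lowest."""
    original = image.clone().detach()
    with torch.no_grad():
        target = model(original).argmin(dim=1)   # least-likely class becomes the target
    adversarial = original.clone()
    for _ in range(steps):
        adversarial = adversarial.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adversarial), target)
        loss.backward()
        # minus sign: descend the loss so the least-likely class becomes more probable
        adversarial = adversarial - alpha * adversarial.grad.sign()
        adversarial = original + (adversarial - original).clamp(-epsilon, epsilon)
        adversarial = adversarial.clamp(0, 1)
    return adversarial.detach()
</syntaxhighlight>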

Targeted Adversarial Attacks

Attacks which compel the model to predict a specific (wrong) desired output are called Targeted Adversarial Attacks.

  • targeted

(Un-)Targeted Adversarial Attacks

can do both...

Projected Gradient Descent (PGD)

Find a perturbation that maximizes a model's loss for a given input:
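
One way to sketch this in PyTorch, with a random start and a projection back onto the epsilon ball after every step; epsilon, alpha and the step count are illustrative:

<syntaxhighlight lang="python">
import torch
import torch.nn.functional as F

def pgd(model, image, label, epsilon=0.03, alpha=0.007, steps=40):
    """PGD: random start inside the epsilon ball, then repeated gradient-sign steps,
    each followed by a projection back onto the epsilon ball around the input."""
    original = image.clone().detach()
    # random start inside the allowed perturbation region
    adversarial = (original + torch.empty_like(original).uniform_(-epsilon, epsilon)).clamp(0, 1)
    for _ in range(steps):
        adversarial = adversarial.detach().requires_grad_(True)
        loss = F.cross_entropy(model(adversarial), label)   # the loss we try to maximize
        loss.backward()
        adversarial = adversarial + alpha * adversarial.grad.sign()
        # projection step: keep the total perturbation within the epsilon ball
        adversarial = original + (adversarial - original).clamp(-epsilon, epsilon)
        adversarial = adversarial.clamp(0, 1)
    return adversarial.detach()
</syntaxhighlight>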

BLACK BOX ATTACKS

on computer vision

Zeroth Order Optimization (ZOO): estimate the gradients needed for the attack purely from the model's output scores, without any access to its weights
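
A rough sketch of the core idea behind such zeroth-order attacks: estimate the gradient coordinate-wise from output scores alone via symmetric finite differences. score_fn, h and num_coords are placeholders, not part of the ZOO paper's code:

<syntaxhighlight lang="python">
import torch

def estimate_gradient(score_fn, x, h=1e-4, num_coords=128):
    """Zeroth-order gradient estimate: query the black-box score function with small
    +/- h changes of randomly chosen coordinates and take symmetric finite differences."""
    grad = torch.zeros_like(x)
    flat_x = x.reshape(-1)
    flat_grad = grad.view(-1)
    coords = torch.randperm(flat_x.numel())[:num_coords]   # random coordinate subset
    for i in coords:
        e = torch.zeros_like(flat_x)
        e[i] = h
        plus = score_fn((flat_x + e).reshape(x.shape))      # only forward queries,
        minus = score_fn((flat_x - e).reshape(x.shape))     # no gradient access needed
        flat_grad[i] = (plus - minus) / (2 * h)
    return grad
</syntaxhighlight>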

Black-Box Attacks using Adversarial Samples

  • a technique that uses the victim model as an oracle to label a synthetic training set for the substitute, so the attacker need not even collect a training set to mount the attack
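
A rough sketch of that oracle step, assuming victim_predict is the only access the attacker has to the victim model; all names here are placeholders:

<syntaxhighlight lang="python">
import torch

def label_with_oracle(victim_predict, synthetic_inputs):
    """Use the black-box victim model as an oracle: its predicted classes become the
    labels of the synthetic training set for the local substitute model."""
    with torch.no_grad():
        labels = victim_predict(synthetic_inputs).argmax(dim=1)
    return synthetic_inputs, labels

# A substitute model is then trained on (synthetic_inputs, labels); white-box attacks
# such as FGSM are crafted against the substitute and transferred to the victim model.
</syntaxhighlight>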

new Tesla Hack

on voice (ASR)

hidden voice commands

Psychoacoustic Hiding (Attacking Speech Recognition)

on written text (NLP)

paraphrasing attacks

Anti Surveillance

http://dismagazine.com/dystopia/evolved-lifestyles/8115/anti-surveillance-how-to-hide-from-machines/

libraries

XAI

XAI/NLG

ETHICS

LANGUAGE

esoteric neural net (programming language)

NLU / NLI

NLP

Speech recognition

https://de.wikipedia.org/wiki/Spracherkennung

NLG

techniques

LSTM+RNN

Autoencoders

https://www.wired.co.uk/article/google-artificial-intelligence-poetry

GAN

Transformations

Poetry

examples...

databases

German:

English:

E2E NLG Challenge:

chatbots

Toolkits/Libraries

tryouts:

AI-GENERATED CRYPTO

REPRODUCTIVE AI

https://www.sir-apfelot.de/kuenstliche-intelligenz-erschafft-neue-ki-systeme-10436/