=Links=
==adversarial attacks==
* https://bdtechtalks.com/2019/04/02/ai-nlp-paraphrasing-adversarial-attacks/
* https://boingboing.net/2019/03/31/mote-in-cars-eye.html
* https://venturebeat.com/2019/04/01/text-based-ai-models-are-vulnerable-to-paraphrasing-attacks-researchers-find/
* https://spectrum.ieee.org/cars-that-think/transportation/self-driving/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane
===HowTos===
* https://cv-tricks.com/how-to/breaking-deep-learning-with-adversarial-examples-using-tensorflow/
* https://medium.com/@ml.at.berkeley/tricking-neural-networks-create-your-own-adversarial-examples-a61eb7620fd8
===libraries===
* https://github.com/bethgelab
* https://github.com/tensorflow/cleverhans
* https://towardsdatascience.com/know-your-enemy-7f7c5038bdf3
* https://medium.com/@AINowInstitute/the-10-top-recommendations-for-the-ai-field-in-2017-b3253624a7
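
The links above collect background reading, tutorials, and libraries (e.g. cleverhans) for crafting adversarial examples. As a rough illustration of the core technique the tutorials walk through, here is a minimal Fast Gradient Sign Method (FGSM) sketch in plain TensorFlow/Keras; it is not taken from any of the linked pages, and the pretrained MobileNetV2 model, the example image handling, and the epsilon value are illustrative assumptions.

<syntaxhighlight lang="python">
# Minimal FGSM (Fast Gradient Sign Method) sketch with TensorFlow/Keras.
# Assumptions: a pretrained MobileNetV2 ImageNet classifier and inputs
# preprocessed into [-1, 1]; epsilon controls the perturbation size.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm_attack(x, y_true, epsilon=0.05):
    """Return an adversarially perturbed copy of the image batch x."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y_true, model(x))
    # The sign of the loss gradient w.r.t. the input gives the attack direction.
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + epsilon * tf.sign(grad), -1.0, 1.0)

# Usage sketch ("cat.jpg" is a placeholder):
# img = tf.keras.preprocessing.image.load_img("cat.jpg", target_size=(224, 224))
# x = tf.keras.applications.mobilenet_v2.preprocess_input(
#     tf.keras.preprocessing.image.img_to_array(img)[tf.newaxis, ...])
# y = tf.one_hot([int(tf.argmax(model(x)[0]))], depth=1000)  # attack current prediction
# x_adv = fgsm_attack(x, y)
# print(tf.keras.applications.mobilenet_v2.decode_predictions(model(x_adv).numpy()))
</syntaxhighlight>

FGSM moves each pixel one small step in the direction that increases the classifier's loss, which is often enough to flip the prediction while the change stays barely visible.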
==XAI==
* https://de.m.wikipedia.org/wiki/Explainable_Artificial_Intelligence
* https://netzpolitik.org/2018/enquete-kommission-kuenstliche-intelligenz-sachverstaendige-und-abgeordnete-klaeren-grundbegriffe/
* https://www.ayasdi.com/blog/artificial-intelligence/trust-challenge-explainable-ai-not-enough/
* https://www.bons.ai/blog/explainable-artificial-intelligence-using-model-induction
* https://en.m.wikipedia.org/wiki/Right_to_explanation
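
The XAI links above are mostly conceptual and policy oriented. Purely as a hedged illustration of what a concrete explainability technique can look like in code (gradient-based saliency maps, which none of the linked pages implement), here is a minimal sketch assuming the same kind of Keras image classifier as in the FGSM example.

<syntaxhighlight lang="python">
# Gradient-based saliency sketch: which input pixels most influence the
# classifier's top class? Assumes a Keras image classifier `model` and a
# preprocessed image batch `x`, e.g. the MobileNetV2 setup used above.
import tensorflow as tf

def saliency_map(model, x):
    """Return |d(top-class score)/d(pixel)|, reduced over colour channels."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        top_score = tf.reduce_max(model(x), axis=-1)   # score of the winning class
    grads = tape.gradient(top_score, x)                # shape (batch, H, W, C)
    return tf.reduce_max(tf.abs(grads), axis=-1)       # shape (batch, H, W)
</syntaxhighlight>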
==ethics==
* https://www.economist.com/science-and-technology/2018/02/15/computer-programs-recognise-white-men-better-than-black-women
* https://books.google.de/books?id=rLsyDwAAQBAJ&pg=PA95&redir_esc=y#v=onepage&q&f=false
* https://books.google.de/books?id=_H1K3vojDFQC&pg=PA762&redir_esc=y#v=onepage&q&f=false
* https://neil.fraser.name/writing/tank/
* https://www.wired.com/story/why-ai-is-still-waiting-for-its-ethics-transplant/
=last semester=
 
[[Datei:Neuronales-netz_am_eigenen-bild.ipynb]]
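
Assuming, from the filename alone, that the notebook runs a neural network on one's own image, a minimal sketch of that workflow could look like the following; the MobileNetV2 model, the 224x224 input size, and the file name my_image.jpg are placeholders, not details taken from the notebook.

<syntaxhighlight lang="python">
# Sketch: classify your own image with a pretrained Keras network.
# "my_image.jpg", MobileNetV2 and the 224x224 input size are assumptions;
# the notebook linked above may use a different (e.g. self-trained) model.
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.preprocessing.image.load_img("my_image.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.preprocessing.image.img_to_array(img)[tf.newaxis, ...])

for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(
        model.predict(x), top=3)[0]:
    print(f"{label}: {score:.3f}")
</syntaxhighlight>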
 