==Preparations==

==Project documentation==

==== [[PHYTOTRON|PHYTOTRON – Workspace for Biological Media]] ====

==== [[DIY_Computer|DIY Computer]] ====

==== [[Bewaesserungssystem|Irrigation system for the KHM garden]] ====

==== [[Workshop_Luftdaten|Workshop: Luftdaten]] ====

==== [[Rechnen durch Handeln]] ====

==== [[Mechatronics_101|Mechatronics 101]] ====

== Archive ==

=== Workshop: [[DNA_phenotyping|Forensic DNA phenotyping]] ===

=== Seminar [[Open Lab]] (Friedrich, Heck, Hen) ===

=== Seminar [[Whole_Earth_Reading_Group|WHOLE EARTH Reading Group]] ===

=== Exhibition project [[Multispecies_@_Weltkunstzimmer|"GOOD BYE CRUEL WORLD, IT'S OVER"]] ===

=== Exhibition [http://www.temporarygallery.org/?p=8780 Praktiken der Annäherung @ Temporary Gallery] ===
<!-- [[Practices_of_Approximation|Praktiken der Annäherung @ Temporary Gallery]] -->

=== Project [[KHM-Garten]] ===

=== Seminar [[Blockchain Reading Group]] ===

=== Workshop [[KünstlerInnenhonorare]] ===

=== Seminar [[Re-Cycle?]] ===

<!--
==Linux==

===Create a bootable USB stick (from macOS)===

The operating system Ubuntu Desktop 18.04.3 LTS (last stable version) is to be installed.

Guide: [https://tutorials.ubuntu.com/tutorial/tutorial-create-a-usb-stick-on-macos?_ga=2.75139794.1749488733.1571761439-720078860.1571761439#0 How to create a bootable USB stick on macOS]

2. Requirements
** A 2 GB or larger USB stick/flash drive
** An Apple computer or laptop running macOS
** An Ubuntu ISO file. See [https://ubuntu.com/download/desktop Get Ubuntu] for download links

3. Prepare/erase the USB stick with Apple's "Disk Utility"

4. Install and run [https://www.balena.io/ Etcher]

5. Flash the USB drive


===Dual boot on a Windows system===

https://wiki.ubuntuusers.de/Dualboot/

<!--
==AI SEMINAR==

===Project documentation===

====Short description (EN)====
 
<br>
<br>
'''THE OFFICE (working title), 2019'''

...

neural networks
  
====Background/Research====

=====Projects: plants & AI=====

* [https://www.aestheticsofexclusion.com/projects/botanica-variegata Botanica Variegata (2019?) by Sjoerd Ter Borg]

=====Projects: environment & AI=====

* [http://harrischris.com/article/biophillic-vision-experiment-1 Biophillic Vision - Experiment 1: experimenting with machine learning to remove cars from video footage, by Chris Harris]
* https://rybakov.com/blog/ai_deleting_bodies/
  
====Technical implementation / approach====

=====First steps=====
  
 
* Convolutional Neural Networks for object detection (in videos)
* Heartbeat Tutorial Part 1: [https://heartbeat.fritz.ai/detecting-objects-in-videos-and-camera-feeds-using-keras-opencv-and-imageai-c869fe1ebcdb Detecting objects in videos and camera feeds using Keras, OpenCV, and ImageAI]
** uses a Python library called ImageAI
** uses a pretrained YOLOv3 computer vision model that can recognize 80 different object classes, including "potted plant"
** TO DO: install a number of Python libraries and ImageAI
* Heartbeat Tutorial Part 2: [https://heartbeat.fritz.ai/analyze-and-visualize-detected-video-objects-using-keras-and-imageai-d84c99b0ae8e Analyze and Visualize Detected Video Objects Using Keras and ImageAI]
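A minimal sketch of the detection step from Part 1 (not the original code), assuming ImageAI 2.x with a downloaded YOLOv3 model file; the file paths and the per-frame callback are placeholders:

<pre>
# Detect only the COCO class "potted plant" in a video and record, per frame,
# whether anything was found (ImageAI 2.x API).
from imageai.Detection import VideoObjectDetection

detected_frames = []  # one 0/1 entry per analyzed frame

def per_frame(frame_number, output_array, output_count):
    # output_array holds one dict per object detected in this frame
    detected_frames.append(1 if output_array else 0)

detector = VideoObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("yolo.h5")            # pretrained YOLOv3 weights (placeholder path)
detector.loadModel()

custom = detector.CustomObjects(potted_plant=True)

detector.detectCustomObjectsFromVideo(
    custom_objects=custom,
    input_file_path="TO_S2_15fps.mp4",      # placeholder input file
    output_file_path="TO_S2_15fps_detected",
    frames_per_second=15,
    per_frame_function=per_frame,
    minimum_percentage_probability=30,
)
</pre>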
  
=====Current status=====

Working with the pretrained network:
* analyze the video frame by frame (works, for example, with .mp4 and .m4v)
* detect custom objects, in this case the category "potted plant"
* output a list called "detected_frames" that contains, for each frame in the video, either a 0 or a 1 to indicate whether a plant has been detected in that frame. For example (for a video with 24 frames):
<tt>
[0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
</tt>
* transform this list into a NumPy array
* transform this array into another array called "changeArray" that only marks the positions where the values change (jump from 0 to 1 or from 1 to 0), in other words the beginning and end of each plant sequence
* reshape this into an array called "changeArrayReshaped" with 2 columns, each row containing the start and stop frame of one plant sequence:
<tt>
[[ 724  736]
[1716 1717]
[1734 1739]
[1742 1807]
[1809 1812]
[2073 2075]
[2077 2102]
[3260 3309]
[3344 3376]
[3416 3424]
[3497 3526]]
</tt>
* write this data into a CSV file (a sketch of these steps follows below)
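A minimal sketch of these steps (not the original code; the file name is illustrative):

<pre>
# Turn the per-frame 0/1 detections into start/stop frame pairs and save them as CSV.
import csv
import numpy as np

detected_frames = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]

arr = np.array(detected_frames)

# indices where the value changes (0 -> 1 or 1 -> 0); padding with zeros
# also catches plant sequences at the very start or end of the video
changeArray = np.where(np.diff(np.concatenate(([0], arr, [0]))) != 0)[0]

# two columns per row: start frame and the frame right after the sequence ends
changeArrayReshaped = changeArray.reshape(-1, 2)

with open("detected_plant_sequences.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerows(changeArrayReshaped.tolist())
</pre>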

=====Preparation of training, test and validation data=====

* take the last DVD from each season (the selected TV series comprises 9 seasons in total) and reserve it for generating the training, test and validation data (this material will not be used for the final analysis)
* format of the original material: .mp4; H.264; 720x406; 44100 Hz; 25 fps; ...
* convert all .mp4 files from the last DVDs to 15 fps to reduce file size for the analysis (we probably do not need that many frames per second, as consecutive images will be very similar)
* analyze the video files (see Heartbeat tutorial) and output the frames with detected custom objects ('potted plant') as a CSV file
* use the detected-frames data in the CSV file to cut the .mp4 files into multiple shorter files (sequences containing plants); a sketch of this step follows after the folder overview below
* hand-sort all video sequences (.mp4 files) into subfolders, e.g.:

 [ all_HANDSORTED_original ]
     [ plant_center ]
       [ TO_S2_15fps.mp4_sub_5.mp4 ]
       [ TO_S2_15fps.mp4_sub_6.mp4 ]
       [ TO_S2_15fps.mp4_sub_20.mp4 ]
       ...
     [ plant_chefofficefront ]
     [ plant_receptiontop ]
     ...
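A minimal sketch of the cutting step (not the original code), reading one start/stop row per plant sequence from the CSV written above and writing one sub-clip per row with OpenCV; file names and the output naming scheme are assumptions:

<pre>
# Cut a video into sub-clips according to (start_frame, stop_frame) rows in a CSV file.
import csv
import cv2

def cut_sequences(video_path, csv_path, out_prefix):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    size = (int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")

    with open(csv_path, newline="") as f:
        rows = [(int(start), int(stop)) for start, stop in csv.reader(f)]

    for i, (start, stop) in enumerate(rows, start=1):
        cap.set(cv2.CAP_PROP_POS_FRAMES, start)          # jump to the sequence start
        out = cv2.VideoWriter(f"{out_prefix}_sub_{i}.mp4", fourcc, fps, size)
        for _ in range(start, stop):
            ok, frame = cap.read()
            if not ok:
                break
            out.write(frame)
        out.release()
    cap.release()

cut_sequences("TO_S2_15fps.mp4", "detected_plant_sequences.csv", "TO_S2_15fps.mp4")
</pre>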
* rename the video files in the subfolders with consecutive file names for batch processing:

 [ renamed_plant_center ]
     [ plant_center_1.mp4 ]
     [ plant_center_2.mp4 ]
     [ plant_center_3.mp4 ]
     ...
     [ plant_center_268.mp4 ]
 [ renamed_plant_chefofficefront ]
 [ renamed_plant_receptiontop ]
 ...

* batch processing: convert all video files in the plant-specific subfolders to images (.jpg)
* rename the images again with consecutive numbers (a sketch of these two steps follows below):

 [ plant_center_images_toCrop_renamed ]
     [ plant_center_1.jpg ]
     [ plant_center_2.jpg ]
     [ plant_center_3.jpg ]
     ...
     [ plant_center_2186.jpg ]
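A minimal sketch of the conversion and renaming steps (not the original code), walking over one plant-specific subfolder and writing consecutively numbered .jpg frames; folder and prefix names follow the examples above:

<pre>
# Extract every frame of every .mp4 in a folder as consecutively numbered .jpg images.
import glob
import os

import cv2

def videos_to_numbered_jpgs(video_dir, out_dir, prefix):
    os.makedirs(out_dir, exist_ok=True)
    counter = 1
    for video_path in sorted(glob.glob(os.path.join(video_dir, "*.mp4"))):
        cap = cv2.VideoCapture(video_path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imwrite(os.path.join(out_dir, f"{prefix}_{counter}.jpg"), frame)
            counter += 1
        cap.release()

videos_to_numbered_jpgs("renamed_plant_center", "plant_center_images_toCrop_renamed", "plant_center")
</pre>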

* analyze the single images (.jpg) with code based on the (adapted) ImageAI tutorial notebook "Object Detection with 10 lines of code.ipynb"
* per batch-processed image: output the box-points info of the detected plant and, within each loop iteration, crop the image according to the box points and save it in a newly created folder (see the sketch below)
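A minimal sketch of this cropping step (not the original notebook code), assuming ImageAI 2.x; the input folder name follows the examples above, while the model path, probability threshold and output naming are assumptions:

<pre>
# Detect the potted plant in each .jpg and save a crop of its bounding box.
import glob
import os

from PIL import Image
from imageai.Detection import ObjectDetection

detector = ObjectDetection()
detector.setModelTypeAsYOLOv3()
detector.setModelPath("yolo.h5")
detector.loadModel()
custom = detector.CustomObjects(potted_plant=True)

out_dir = "plant_center_cropped"
os.makedirs(out_dir, exist_ok=True)

for path in sorted(glob.glob("plant_center_images_toCrop_renamed/*.jpg")):
    detections = detector.detectCustomObjectsFromImage(
        custom_objects=custom,
        input_image=path,
        output_image_path=os.path.join(out_dir, "_annotated.jpg"),  # required by the API
        minimum_percentage_probability=30,
    )
    name = os.path.splitext(os.path.basename(path))[0]
    for i, det in enumerate(detections, start=1):
        x1, y1, x2, y2 = det["box_points"]
        crop = Image.open(path).crop((x1, y1, x2, y2))
        crop.save(os.path.join(out_dir, f"{name}_crop_{i}.jpg"))
</pre>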

Cropped images, final output:
* plant_center (ca. 2200 images)
* plant_chefofficefront (ca. 1300 images)
* plant_conferencefront (ca. 600 images)
* plant_conferenceinside (ca. 300 images)
* plant_receptiontop (ca. 738 images)
  
=====Next steps=====

* train a custom model based on the collected images (one possible starting point is sketched below)
* collect and process larger quantities of suitable video material
* further automate the process of preselecting plant scenes, splitting the detected plants into different classes, and making a new cut based on these classes in order to generate visually coherent video material
* final selection of interesting output movies for presentation purposes
* video post-production (if necessary)
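One possible starting point for training a custom model (not from the original notes): a small Keras classifier trained on the hand-sorted, cropped image folders. The folder layout, image size and network architecture are assumptions:

<pre>
# Train a simple CNN classifier on the cropped plant images, one class per subfolder,
# e.g. data/plant_center, data/plant_receptiontop, ... (folder layout is an assumption).
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1.0 / 255, validation_split=0.2)
train = datagen.flow_from_directory("data", target_size=(128, 128), batch_size=32,
                                    class_mode="categorical", subset="training")
val = datagen.flow_from_directory("data", target_size=(128, 128), batch_size=32,
                                  class_mode="categorical", subset="validation")

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(train.num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train, validation_data=val, epochs=10)
</pre>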

====Code====

===Links on the topic of AI===
  
 
* [https://www.nature.com/articles/d41586-019-01413-1?fbclid=IwAR3DD0D66JN5qjxGuH8YsvNPxKLJs_NXuNzrLaopJ1jtFgGk3TNa_rtcSKs Don’t let industry write the rules for AI]
* [https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/?utm_brand=wired&utm_source=facebook&utm_social-type=owned&utm_medium=social&mbid=social_fb&utm_campaign=wired&fbclid=IwAR0cVyOvfT4uDVeO66zjlPHOXbuS8aGlDChgKSPOd1jT_16YJ492BRxeOS4 The AI Text Generator That's Too Dangerous to Make Public]
  
====Evolutionary Algorithms====

* [https://www.technologyreview.com/s/611568/evolutionary-algorithm-outperforms-deep-learning-machines-at-video-games/?utm_medium=social&utm_source=facebook.com&utm_campaign=owned_social&fbclid=IwAR01pdD_uNn4tobm2tpdDRibCaigDfiJgmUG9DW6GR5u_ltmdvfdAol4esQ Evolutionary algorithm outperforms deep-learning machines at video games], MIT Technology Review, 06/2018
  
===Code===


<!-- invisible comment

==Other Research==

===Teaching & Non-Teaching===
* [https://medium.com/pi-top/meet-the-school-with-no-classes-no-classrooms-and-no-curriculum-7cc7be517cef?fbclid=IwAR34KmSoA7ER59npOAu2PyE6DAQDZ5IthTloXfqmOD3rtMdOmaPm_LEpOek Meet the school with no classes, no classrooms and no curriculum], Medium, 05/2019
  
  
=BLOCKCHAIN READING GROUP notes=

> seminar page: [[Blockchain Reading Group]]

...

** https://en.wikipedia.org/wiki/Smart_contract
* [https://hackernoon.com/6-interesting-blockchain-projects-8c315364ff7f 6 Interesting Blockchain Projects]
-->
-->

-->
=KünstlerInnenhonorare=
 
 
 
* Link collection on the topic of [[KünstlerInnenhonorare]]
 
 
 
=Other Research=
 
 
 
==Teaching & Non-Teaching==
 
* [https://medium.com/pi-top/meet-the-school-with-no-classes-no-classrooms-and-no-curriculum-7cc7be517cef?fbclid=IwAR34KmSoA7ER59npOAu2PyE6DAQDZ5IthTloXfqmOD3rtMdOmaPm_LEpOek Meet the school with no classes, no classrooms and no curriculum], Medium, 05/2019
 