
...

AI SEMINAR

Project Documentation

Short Description (EN)


THE OFFICE (working title), 2019
Humans and plants live on different timescales. This is certainly one of the reasons why, in everyday life, plants often seem static and object-like to us. »The Office« (2019) makes use of existing video footage that covers large time spans, in this case popular long-running TV shows. These productions showcase office and apartment interiors over extended periods of time, often including houseplants that serve mainly decorative purposes. Some of these series run for years or even decades and thus comprise many hours of footage. Convolutional neural networks are used as a tool to detect scenes involving houseplants while processing large quantities of the given video material. Selected scenes are compiled into a time-lapse movie that documents plant growth over a long period of time. While the lively movement and growth of the normally passive-seeming plants becomes visible, human activities blur and fade into the background.
Keywords: plants, plant-human-relationships, time, timescales, time-lapse, video, neural networks

Background/Research

Projects: Plants & AI

Projects: Environment & AI

Technical Implementation/Approach

First Steps

Current Status

Working with the pretrained network

  • analyze the video frame by frame (works, for example, with .mp4 and .m4v files)
  • detect objects of a chosen category, in this case "potted plant"
  • output a list called "detected frames" that contains, for each frame of the video, either a 0 or a 1 to indicate whether or not a plant has been detected in that frame. For example, for a video with 24 frames (a sketch of this step follows the example):

[0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
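
The page does not include the detection code itself. A minimal sketch of this step, assuming OpenCV for frame reading and a hypothetical detect_labels() wrapper around a COCO-pretrained object detector (the actual project may use a different detection library):

import cv2

def detect_labels(frame):
    # Hypothetical wrapper around a COCO-pretrained object detector;
    # expected to return the list of class labels found in the frame.
    raise NotImplementedError("plug in your detector here")

# open the video file (works e.g. with .mp4 and .m4v)
cap = cv2.VideoCapture("episode.m4v")

detected_frames = []  # one 0/1 entry per frame
while True:
    ok, frame = cap.read()
    if not ok:  # end of video reached
        break
    labels = detect_labels(frame)
    detected_frames.append(1 if "potted plant" in labels else 0)
cap.release()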

  • transform this list into a numpy array
  • transform this array into another array called "changeArray" that only contains the positions where the values change (jumps from 0 to 1 or from 1 to 0), in other words the beginnings and ends of the plant sequences
  • reshape this array into an array called "changeArrayReshaped" with 2 columns, where each row contains the start and stop frame of one plant sequence:

[[ 724  736]
 [1716 1717]
 [1734 1739]
 [1742 1807]
 [1809 1812]
 [2073 2075]
 [2077 2102]
 [3260 3309]
 [3344 3376]
 [3416 3424]
 [3497 3526]]

  • write this data to a CSV file (a sketch of these steps follows below)
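
A minimal numpy sketch of the change detection and the CSV export. The page only shows the resulting arrays, not the code, so the exact start/stop convention and the file name are assumptions:

import csv
import numpy as np

# example input from above: one 0/1 entry per frame
detected_frames = [0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0,
                   0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]

arr = np.array(detected_frames)

# positions where the value jumps from 0 to 1 or from 1 to 0
changeArray = np.where(np.diff(arr) != 0)[0] + 1

# if the video starts or ends inside a plant sequence, pad so the
# number of change positions is even and the reshape below works
if arr[0] == 1:
    changeArray = np.r_[0, changeArray]
if arr[-1] == 1:
    changeArray = np.r_[changeArray, len(arr)]

# 2 columns: start and stop frame of each plant sequence
changeArrayReshaped = changeArray.reshape(-1, 2)
print(changeArrayReshaped)
# [[ 5  9]
#  [18 20]]

# write the start/stop pairs to a CSV file
with open("plant_sequences.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["start_frame", "stop_frame"])
    writer.writerows(changeArrayReshaped.tolist())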

Next Steps

  • generate training material: extract still images from the individual extracted sequences (one plant each); see the sketch after this list
  • train a custom model based on the collected images
  • collect and process larger quantities of suitable video material
  • further automate the process of preselecting plant scenes, splitting the detected plants into different classes, and making a new cut based on these classes in order to generate visually coherent video material

  • final selection of interesting output movies for presentation purposes
  • video post-production (if necessary)
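
One way the first item above (generating training stills from the extracted sequences) could look, assuming the CSV file from the previous step and OpenCV; the sampling step and all file names are assumptions:

import csv
import os
import cv2

# read the start/stop frame pairs produced in the previous step
with open("plant_sequences.csv") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    sequences = [(int(start), int(stop)) for start, stop in reader]

os.makedirs("training", exist_ok=True)
cap = cv2.VideoCapture("episode.m4v")
step = 12  # keep every 12th frame (assumed sampling rate)

for start, stop in sequences:
    for idx in range(start, stop, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # jump to frame idx
        ok, frame = cap.read()
        if ok:
            cv2.imwrite("training/plant_%06d.jpg" % idx, frame)
cap.release()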

Code

Links on AI

  • Don't let industry write the rules for AI (Nature): https://www.nature.com/articles/d41586-019-01413-1?fbclid=IwAR3DD0D66JN5qjxGuH8YsvNPxKLJs_NXuNzrLaopJ1jtFgGk3TNa_rtcSKs

Evolutionary Algorithms

Code

BLOCKCHAIN READING GROUP notes

> seminar page: Blockchain Reading Group

Bitcoin & Blockchain Basics

Ethereum


Artists' Fees

Other Research

Teaching & Non-Teaching