I was initially thinking of working with live data and played a bit with the NASA image API and my KOI-Sketch.
I then mapped out different possible (and impossible) connections between APIs.
While making coffee in the morning I had the idea to use an image of the coffee grounds to predict a horoscope. In the Corpora repo on GitHub I found a tarot JSON file that seemed perfect for the task: each card carries a rank as an integer value, which I could map to the overall brightness of the image to (not very seriously) "predict" a horoscope. I had a few issues with the JSON data, but finally managed to map all values to their corresponding cards. The fortune-telling sentence then gets displayed on the screen.
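To make the mapping concrete, here is a minimal p5.js sketch of the idea. It assumes the structure of Corpora's tarot_interpretations.json (a top-level tarot_interpretations array of cards, each with a rank, a name and a fortune_telling list) and hypothetical file paths; it illustrates the approach rather than reproducing the project's actual code.

```javascript
let coffee, tarot;

function preload() {
  coffee = loadImage('coffee.jpg');                // hypothetical image path
  tarot = loadJSON('tarot_interpretations.json');  // local copy of the Corpora file
}

function setup() {
  createCanvas(600, 600);
  image(coffee, 0, 0, width, height);

  // average brightness of the coffee-ground image (0..255)
  coffee.loadPixels();
  let sum = 0;
  for (let i = 0; i < coffee.pixels.length; i += 4) {
    sum += (coffee.pixels[i] + coffee.pixels[i + 1] + coffee.pixels[i + 2]) / 3;
  }
  const avg = sum / (coffee.pixels.length / 4);

  // keep only cards whose rank is a number (the file mixes types),
  // then map the brightness onto that list of cards
  const cards = tarot.tarot_interpretations.filter((c) => typeof c.rank === 'number');
  const card = cards[floor(map(avg, 0, 255, 0, cards.length - 1))];

  // display one of the card's fortune-telling sentences
  fill(240);
  textAlign(CENTER);
  text(card.name + ': ' + random(card.fortune_telling), width / 2, height - 30);
}
```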
After finishing the code I worked on the graphics. I kept them minimal and dark to preserve the "coffee feeling".
I didn't manage to automate getting images into the browser directly, either from a file upload or via getUserMedia. For now the images have to be placed in a folder on the server; something to work on this week (a possible approach is sketched after the code).
Here's the code:
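For the missing upload step, p5.js already wraps both routes: createFileInput hands an uploaded file over as a data URL, and createCapture uses getUserMedia under the hood. A rough, untested sketch of how either could replace the server folder (all names here are placeholders, not the project's code):

```javascript
let cam;

function setup() {
  createCanvas(600, 600);

  // route 1: file upload; the callback receives a p5.File
  createFileInput((file) => {
    if (file.type === 'image') {
      loadImage(file.data, (img) => readGrounds(img)); // file.data is a data URL
    }
  });

  // route 2: live camera via getUserMedia (wrapped by createCapture)
  cam = createCapture(VIDEO);
  cam.size(320, 240);
  cam.hide();
}

function mousePressed() {
  // grab the current camera frame as the coffee-ground image
  readGrounds(cam.get());
}

function readGrounds(picture) {
  image(picture, 0, 0, width, height);
  // ...brightness analysis and card lookup as above...
}
```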
For the winter show I would like to keep working on the KOI. I have two things in mind:
1. Users load the KOI sketch on their phones, so the little fish swims on each screen. They then place their phones on small floats in a pond filled with water: the KOIs will all be "swimming" in their screens on the surface of the pond. A swarm of floating phone-KOIs.
2. The KOIs can cross between screens: the more users align their phone screens on a table, the bigger the KOI's virtual pond gets, and a fish can swim from one aligned phone to the next (a rough sketch of how this could work follows below).
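For the second idea, one conceivable architecture is a shared "pond" coordinate system with a server relaying the fish position to every phone; each phone knows its own offset inside the pond and draws the fish only while it passes through its slice. A hand-wavy sketch, assuming socket.io and entirely hypothetical names:

```javascript
const socket = io('https://koi-pond.example'); // hypothetical relay server

// this phone's slice of the shared pond, e.g. phone #2 in a row
const myOffset = { x: 2 * 375, y: 0 }; // assumes a 375 px wide screen
let fish = null;

socket.on('fish', (pos) => {
  // pos arrives in shared pond coordinates; convert to this screen
  fish = { x: pos.x - myOffset.x, y: pos.y - myOffset.y };
});

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function draw() {
  background(10, 30, 60);
  // draw the KOI only while it is inside this phone's slice
  if (fish && fish.x > -50 && fish.x < width + 50) {
    ellipse(fish.x, fish.y, 40, 20); // placeholder for the actual KOI drawing
  }
}
```

Keeping all movement in pond coordinates would mean the phones only need to agree on their order on the table; everything else stays a plain coordinate shift.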
So much for the initial rough ideas - let's see how these develop over the next two months.