invite users to contemplate their online selves and identities by creating a semi-realtime chat with pre-recorded videos and actors - and the possibility for users to participate in this fake setup. 


  • how do we establish trust online with strangers?
  • how do we perceive ourselves in a group environment online?
  • what are the rules of communication in a video chat with strangers?
  • how is time related to communication?

inspiration / early sketches

We started with early sketches that played with video feeds in VR. Initially we wanted to give users the possibility to exchange their personalities with other users: We had the idea of a group chat setup where you could exchange "body parts" (video cutouts) with different chat group participants. This should be a playful and explorative experience for participants. How does it feel to be in parts of another identity? How do we rely on our own perception of our body to establish our identities? How do we feel about digitally augmented and changed bodies? How does that relate to our real body perception? Does it change our feeling for our own bodies and therefore our identities? How close are we to our own bodies? Do we believe in body and mind as separate entities? How are body and mind separated in a virtual and bodyless VR experience? 



After looking at our timeframe and technical skills we decided to postpone the VR element of our concept and focus on the core ideas in a 3D setup: body perception, trust and identity in virtual interactions. We chose three.js as our main framework as it provided us with a lightweight environment in the browser that could possibly be deployed in an app or in a local environment. As we later decided on a local setup for our group online environment, this proved to come with a few tradeoffs regarding access to files on the local machines from javascript. We used python as a backend tool to compensate for the security restrictions in javascript. 


conceptual setup

Close to our initial sketches we constructed an online environment that should fulfill the following requirements:

  • group experience with person-to-person interaction
  • video feeds like in a webcam-chat 
  • insecurity, vagueness and unpredictability as guiding feelings for the interaction with strangers
  • fake elements and identities to explore the role of trust in communication
  • a self-sustainable environment that has the potential to grow or feed itself

To achieve that we built a welcome page that simulates a video-chat network. The aesthetics were kept partially retro and cheap. The website should not look completely trustworthy and should already invoke a feeling of insecurity - but still stimulate interest in the unknown. 

The main page features 3 screens with the user's webcam feed in the center - right between two users who seem to be establishing a communication. The user should feel in-between those two other users and feel the pressure to join in this conversation. The users on the left and right are both actors; the video feeds are pre-recorded and not live. The user in the middle does not know this - they should look like realtime chat videos. 

While the user tries to establish a conversation with the fake video-chat partners, their webcam feed gets recorded via WebRTC. After 40s in this environment, the recording of this feed suddenly pops up and replaces the video feed of the actor on the left side. The user should realise now that reality is not what it seems in this video chat. It is fake; even time seems to be unpredictable. After 5s of looking at their own recorded feed, a popup on top of the screen asks the user if they want to feed this recording of themselves into a next round to confuse other people. The question here is why a user would do that. In the user testing most users wanted to participate in the setup in the next round. As the users dynamically replace the videos on the left and right, this could be a self-feeding chat that is never real time - you are always talking to strangers from another time. But for a few seconds they exist for you in real-time with you in this world - until you realize that this was not true. At least according to our concept of time and being. 

As mentioned before, we used three.js as the main framework. On top of that we used WebRTC extensively, especially the recording and download functions for webcam feeds. On the backend, python helped us to move the recorded and downloaded files dynamically from the download location on the local machine to a JS-accessible folder. Python also helped us to keep track of the position of the videos (left or right) when the browser window gets reloaded between different users. This was a hack - a node server would probably have been better for this task - but python was simply quicker. 
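A minimal sketch of that backend step (folder names and the JSON layout here are placeholder assumptions, not our actual files): python moves the newest recording out of the browser's download folder into a folder the three.js sketch can read, and alternates the assigned left/right screen between rounds so reloads stay consistent.

```python
# Hypothetical sketch of the backend hack: move the freshest .webm recording
# into a JS-accessible folder and remember which side it should play on.
import json
import shutil
from pathlib import Path

def move_recording(download_dir, assets_dir, positions_file):
    """Move the newest .webm out of the download folder and alternate
    its assigned screen position (left/right) between runs."""
    download_dir, assets_dir = Path(download_dir), Path(assets_dir)
    recordings = sorted(download_dir.glob("*.webm"),
                        key=lambda p: p.stat().st_mtime)
    if not recordings:
        return None
    newest = recordings[-1]

    # load the position history, defaulting to an empty list
    positions_path = Path(positions_file)
    history = json.loads(positions_path.read_text()) if positions_path.exists() else []

    # alternate left/right so the chat keeps feeding itself evenly
    side = "left" if len(history) % 2 == 0 else "right"
    target = assets_dir / f"{side}.webm"
    shutil.move(str(newest), str(target))

    history.append({"file": target.name, "side": side})
    positions_path.write_text(json.dumps(history))
    return side
```

In the installation this would run in a loop next to the browser; here it is reduced to a single function for clarity.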

We did not use the app in a live setup as we felt we needed to refine the code and also experiment further with porting it to VR. 

So far it was very rewarding for me as I could explore a lot of three.js while working with Chian on the project. WebRTC proved again to be a great way to set up a live video chat - with a little help from python on the backend it worked as a prototype. The VR version will probably have to run exclusively in Unity, which mainly means C# - a new adventure for the next semesters! 


Here is a video walkthrough:


On the code side we used a GitHub repo to back up our files. There you can find all files, including a short code documentation as a readme. 





Users turn a giant knob on the 4th floor of ITP and control the brightness of a virtual LED in AR positioned on a physical column in the Tisch Lounge. 


Inside the knob will be a rotary encoder to track the exact position of the element. These values are read and continuously sent to a remote AWS server from a nodeMCU 8266 wifi module inside the knob via serial/python/websockets. On the server they are stored in a txt-file and read into a publicly accessible html-file serving the AR-LED in an AR version of three.js. We will have to test how much of a delay we will have using this communication. A public server seems to be the only way for the audience to access the AR site from outside the local NYU network. The delay might still be acceptable as knob and AR-column are situated on different floors. With iOS 11, AR on mobile is possible now on all platforms using getUserMedia/camera access. 
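A rough sketch of the value pipeline described above. The encoder resolution (40 counts per revolution) and the file name are assumptions, not final values: the knob position gets normalized to a 0-255 LED brightness, and the server simply stores the latest value in a txt-file for the AR page to poll.

```python
# Hypothetical sketch of the knob -> server -> AR value pipeline.
from pathlib import Path

COUNTS_PER_REV = 40  # assumed rotary encoder resolution

def counts_to_brightness(counts):
    """Map a raw encoder count onto a 0-255 AR-LED brightness."""
    position = counts % COUNTS_PER_REV  # wrap full revolutions
    return round(position / (COUNTS_PER_REV - 1) * 255)

def store_brightness(value, path):
    """Server side: persist the latest knob value as plain text."""
    Path(path).write_text(str(value))

def read_brightness(path):
    """AR side: read back the last stored brightness."""
    return int(Path(path).read_text())
```

The txt-file round trip is crude but keeps the AR page a plain static file; the polling interval would then define most of the delay we need to test.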

Here is a quick test with the github example library, a target displayed on a screen and an iPhone running iOS 11:




We found a 3ft long rotating steel arm with bearings and attachment ring in the shop junk-shelf. We will use it as the core rotating element inside the knob. 


This rotating element will be mounted on a combination of wooden (plywood) bases that will stabilize the knob when it is moved. The weight of the knob will rest on wheels running in a rail/channel in the wide base, which features a wave structure inside so that the 'click' of the knob feels more accurate. Inside the knob we will build a structure with 2 solid wooden rings that match the diameter of the knob and are attached to the rotating element. On the outside we will cover the knob with lightweight wooden planks or cardboard tubes. 



We worked on the wooden base for the knob/metal pole using multi-layered plywood to keep the rotary encoder within the same wooden element as the pole - this prevents damage to the electronics/mechanics if the knob is pushed or tilted sideways.



In collaboration with Brandon Newberg & Anthony Bui.






  • Tue. 21 Nov. 2017: tech communication finished, fabrication parts ordered

work on automation + fabrication (knob), work on AR script

  • Tue. 28 Nov. 2017: all communications work, knob basic fabrication finished

work on fabrication finish, column, code cleaning, documentation

  • Tue. 5 Dec. 2017: final presentation



As in PCOMP I switched from flying solo on my project to a collaboration with Chian Huang: she showed me her project and was open to the idea of working together. Her concept was based on a web-based VR/3D experience with camera inputs including sound. I quickly agreed to the collaboration and after a brainstorming session we altered the initial idea and came up with a rough outline for our project: "Watch yourself watching others, watch others watching you - maybe."


social networks, surveillance, peer pressure


The system should create a VR network structure where multiple agents can interact with each other - all in Google Cardboard. As the latter technology obscures the eyes, the participants cannot be sure whether they are being watched by the other participants. Audio will be disabled and body parts exchanged between the video feeds. The physical interchangeability of the participants highlights a distinct feature of new social networks: The user is just a datacloud, which can be sold off in parts to the highest bidders. Our obsession with watching ourselves and others is the fundamental interaction of virtual social networks. But we never really know what other network members see. 

Chian's initial sketch is based on a p5 example and augmented with music. 

We based our concept on those video streams and used WebRTC to have multiple participants in a virtual space. All of the sketches are running on a remote server in the cloud. 


After experimenting with getting an external WebRTC stream as a texture into p5/WebGL, we decided to switch to three.js - it seems to offer better rendering and more flexibility with WebRTC. 

Here are a few three.js example sketches that we might use as starting points for our project:


We still have not finally decided on the aesthetics of this "surveillance" experience. We would like to keep it open until we both feel more comfortable with three.js - which seems to be considerably more challenging than p5, but also a lot more powerful when it comes to 3D rendering. 



laser cut tablet of neural network generated cuneiform



glossolalia / speaking in tongues


Are neural networks our new gods?


My installation piece aims to create a techno-spiritual experience with a neural network trained on ancient cuneiform writing for an exhibition audience in an art context. 


The title of the piece, 'glossolalia / speaking in tongues', refers to a phenomenon where people seem to speak in a language unknown to them - mostly in a spiritual context. In the art piece both "speaker" (machine) and "recipient" are unaware of the language of their communication: The unconscious machine is constantly dreaming up new pieces of cuneiform tablets that the audience cannot translate. Two things to mention: First, after 3000 years, one of the oldest forms of human writing (that is, an encoding of thoughts) becomes "alive" again with the help of a neural network. Second, it is difficult to evaluate how accurate the new cuneiform is - only a few scholars can fully decode and translate cuneiform today. 

original cuneiform tablet (wikimedia)

part of original Sumerian cuneiform tablet (paper impression)

cuneiform generated by neural network

Observing the machine creating these formerly human artifacts in its "deep dream" is in itself a spiritual experience: It is, in a way, a time machine. By picking up on the thoughts in the existing cuneiform writing corpus, the machine breathes new life into the culture of Sumerian times. The moment the machine finished training on about 20 000 tablets (paper impressions) and dreamed up its first new tablet, the 2000 - 3000 year hiatus became irrelevant - for the neural network, old Babylon is the only world that exists. 

In the installation piece, the audience gets the opportunity to observe the neural network the moment they kneel down on a small bench and look at the top of an acrylic pyramid. This activates the transmission of generated images from the network in the cloud to the audience, hovering as an extruded hologram over the pyramid.

side, above: digital abstractions of generated cuneiform

The audience can pick up headphones with ambient sounds that intensify the experience (optional).

It is important to mention that the network is not activated by the audience: The audience gets the opportunity to observe its constant and ongoing dream. The network is not a slave of the audience; it is regarded as a new form of entity in itself. When an audience member gets up from the bench, the transmission stops - the spiritual moment is over.
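A minimal sketch of that interaction logic (my shorthand, not the final code): the network keeps dreaming whether or not anyone watches - kneeling on the bench only opens a window onto the ongoing stream.

```python
# Sketch: the dream advances continuously; the bench switch only gates visibility.
import itertools

class DreamTransmission:
    """Generated tablets keep coming; the audience only sees them while kneeling."""

    def __init__(self):
        self.frames = itertools.count()  # stand-in for the generated tablets

    def update(self, bench_pressed):
        frame = next(self.frames)  # the dream advances regardless of the audience
        return frame if bench_pressed else None
```

The point of the sketch: frames keep counting up while nobody kneels, so the audience always joins a dream already in progress - it never starts for them.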



-- "H" for "have", "G" for "get" --

PCOMP parts:

  • raspberry pi or nvidia jetson tx2 -- G
  • if raspberry pi: phat DAC for audio out -- G
  • 8266 wifi module -- G
  • 2 x buttons / switches -- H
  • headphones -- G
  • audio cables -- G
  • screen -- G
  • hdmi cable -- H
  • local wifi-network (phone) -- H

ICM parts: 

  • GPU cloud server trained on 20 000 cuneiform tablets (scientific drawings, monochrome) -- H
  • processing sketch -- H

Fabrication parts:

  • wood for kneeling bench -- G
  • wood for column -- G
  • acrylic for pyramid -- H
  • wooden stick for headphone holder -- G

system diagram




  • Tue. 7 Nov. 2017: playtesting with paper prototype / user feedback

work on server-local comm, processing sketch, order/buy fab parts

  • Tue. 14 Nov. 2017: server-local comm works, all fabrication parts received

work on input-local comm, processing sketch, fabrication (Pepper's ghost)

  • Tue. 21 Nov. 2017: processing sketch finished, input-local comm works

work on automation, fabrication (bench, column)

  • Tue. 28 Nov. 2017: all comms work, basic fabrication finished

work on fabrication finish, code cleaning, documentation

  • Tue. 5 Dec. 2017: final presentation






This week I tried using a javascript emotion-tracking library - generally with mixed results (not accurate and slow; maybe a processor issue, or the camera resolution is too low) - in combination with a bonsai growing sketch (open in Safari, Chrome does not render this sketch properly) that I worked on for the past couple of weeks. The idea was to trigger the growth of the digital bonsai not only with the amount of mouse-clicked water but also with smiles. 

As the results of the facial-expression tracking were mixed, I decided to update my coffee-grounds tarot interface with the possibility for the user to take a snapshot from the webcam stream and use the average brightness of this snapshot for "predicting" the horoscope via the API.
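The mapping itself is simple; here it is sketched in python (the 0-25 rank scale and the court-card order follow the comments in my JS code - the function name is just for illustration):

```python
def brightness_to_rank(avg_brightness):
    """Map an average snapshot brightness (0-255) onto a tarot rank (0-25).
    Ranks above 21 are the court cards: page, knight, queen, king."""
    rank = round(avg_brightness / 255 * 25)
    court = {22: "page", 23: "knight", 24: "queen", 25: "king"}
    return court.get(rank, rank)
```

A darker (stronger) coffee snapshot therefore lands on a lower-ranked card, the brightest snapshots on the court cards.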

I redesigned the arrangement of the elements to make it more accessible from multiple devices but struggled to get it running on iOS 11 Safari - it asked for access to the camera but then only showed a black frame. I would like to fix this, as direct camera access only makes sense from a mobile device - no user will use a laptop camera for taking images of coffee grounds; the mobile camera makes much more sense. 



I have a few things in PCOMP and ICM that I am playing with at the moment and would like to take further:

- KI: virtual bonsai (open in safari, chrome wrecks the script ...) living on the screen, needs to be watered with real water to grow



- KOI: a virtual Koi living on the screen (somehow a theme here), its pond can be made bigger with more screens put together (using openCV for screen location detection?) - it will cross the screens. OR the audience can open the sketch on their phones, put their phones on special floats and let their virtual Kois swim in a real pond (koi-phone-swarm). For the latter version I would need to control a gentle current in the water to keep the phone-Kois floating. 

- ELEVATOR PITCH: 2 X 8266 modules with gyro-sensors located in the 2 ITP elevators create music together (maybe seeding that into tensorflow magenta for more variation), pitch is matched to the floor number.


I am experimenting with autonomous cuneiform generation with a DCGAN as well, I would need some PCOMP for automation and physical representation of the cuneiform (ideally automatic laser engraving into acrylic). 


One of my fields of interest at the moment is the blockchain protocol. I am looking for physical implementations regarding hashing/mining/proof of work - maybe this could be an angle for a PCOMP project. 


I was initially thinking of working with live data and played a bit with the NASA image API and my KOI sketch.


I then mapped out different possible or impossible connections of APIs.


While making coffee in the morning I had the idea to use an image of the coffee grounds to predict a horoscope. In the corpora Git repo I found a tarot JSON file that seemed perfect for that task: It had different ranks for each card with an integer value that I could map to the overall image brightness - and (not very seriously) "predict" a horoscope. I had a few issues with the JSON data, but finally managed to map all values to corresponding cards. The fortune-telling sentence then gets displayed on the screen.


After finishing the code I worked on the graphics - I kept them minimal and dark to keep the "coffee feeling".

I didn't manage to automate the upload from file or getUserMedia into the browser directly. So far the images have to be placed in a folder on the server - something to work on this week.  


Here is the code:

//not so serious coffee ground tarot engine to give you early morning joys :) 
// using these tarot explanations https://github.com/dariusk/corpora/blob/master/data/divination/tarot_interpretations.json

let myImage;
let title;
let data;
let pix;
let rank; // king: rank 25, queen: rank 24, knight: rank 23, page: rank 22
let brightness;
let fortune_array = [];

function preload() {
  myImage = loadImage("pics/coffee.jpg");
  title = loadImage("etch.png");
  data = loadJSON("https://raw.githubusercontent.com/dariusk/corpora/master/data/divination/tarot_interpretations.json");
}

function setup() {
  createCanvas(windowWidth, windowHeight);
  push();
  translate(windowWidth / 2, windowHeight / 2);
  image(title, 0, 0);
  pop();
  textFont("Cutive Mono");
  text("NY COFFEE GROUNDS TAROT", windowWidth / 4, windowHeight / 2 - myImage.height * 0.1 / 2 - 45);
  push();
  translate(windowWidth / 2, windowHeight / 2);
  image(myImage, 0, 0);
  pop();

  // get average brightness of the image and match it to a card rank in the tarot set
  getImageLightness("pics/coffee.jpg", function (imageBrightness, imageRank) {
    rank = imageRank;
    console.log("rank: " + rank); //can somehow not access "rank" as a global variable ...???
    // ranks above 21 are the court cards: page, knight, queen, king
    if (rank > 21) {
      let extra_ranks = ['page', 'knight', 'queen', 'king'];
      find_ranks(extra_ranks[rank - 22]); // court ranks are strings - find_ranks picks a matching array index
    }
    let fortunes = data.tarot_interpretations[rank].fortune_telling[round(random(data.tarot_interpretations[rank].fortune_telling.length - 1), 0)];
    textFont("Cutive Mono");
    text(fortunes + ".", windowWidth / 2 + 500, windowHeight / 2 + myImage.height * 0.1 / 2 + 50);
  });
}

// function taken from https://stackoverflow.com/questions/13762864/image-dark-light-detection-client-sided-script
// converts each color to gray scale and returns average of all pixels
// brightness: 0 (darkest) and 255 (brightest)
function getImageLightness(imageSrc, callback) {
  let img = document.createElement("img");
  img.src = imageSrc;
  img.style.display = "none";

  let colorSum = 0;

  img.onload = function () {
    // create canvas and draw the image onto it
    let canvas = document.createElement("canvas");
    canvas.width = this.width;
    canvas.height = this.height;

    let ctx = canvas.getContext("2d");
    ctx.drawImage(this, 0, 0);

    let imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
    let data = imageData.data;
    let r, g, b, avg;

    for (let x = 0, len = data.length; x < len; x += 4) { // noprotect
      r = data[x];
      g = data[x + 1];
      b = data[x + 2];

      avg = Math.floor((r + g + b) / 3);
      colorSum += avg;
    }

    brightness = Math.floor(colorSum / (this.width * this.height));
    // map & round brightness to the 0 - 25 rank values of the tarot cards
    brightness = round(brightness.map(0, 255, 0, 25), 0);
    rank = brightness;
    callback(brightness, rank);
  };
}

// map 0 - 255 average brightness values to 0 - 25 tarot card ranks
// (taken from https://stackoverflow.com/questions/10756313/javascript-jquery-map-a-range-of-numbers-to-another-range-of-numbers)
Number.prototype.map = function (in_min, in_max, out_min, out_max) {
  return (this - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
};

// round values
// (taken from http://www.jacklmoore.com/notes/rounding-in-javascript/)
function round(value, decimals) {
  return Number(Math.round(value + 'e' + decimals) + 'e-' + decimals);
}

// append all entries into array for ranks
// (not taken from anywhere ;)
function find_ranks(key) {
  for (i = 0; i < data.tarot_interpretations.length; i++) {
    if (data.tarot_interpretations[i].rank == key) {
      console.log('found matching rank in array ' + i);
      fortune_array.push(i);
    }
  }
  console.log('found matching ranks in arrays ' + fortune_array);
  rank = fortune_array[round((random(fortune_array.length - 1)), 0)];
  console.log('selected rank in array ' + rank);
}

// go fullscreen and resize if necessary
function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

For the winter show I would like to keep working on the KOI. I have two things in my mind: 

1. Users load the KOI sketch on their phones, the little fish is swimming on their phone. Then they put their phone on little floats in a little pond filled with water - the KOIs will all be "swimming" in their screens on the surface of the pond. A swarm of floating phone-KOIs. 

2. The KOIs can cross different screens, the more users align their phone screens on a table, the bigger the virtual pond of the KOI gets. It can cross between the aligned phones. 


So much for the initial rough ideas - let's see how these develop over the next two months.


continued from first blogpost

The plan this week was to ensure everything works in sequence and to stitch all the parts of the project together. For this we worked on the following:

-> Serially communicate spinning wheel result to p5


-> Test solenoid valve using Arduino.

The basic circuit:


-> Ensure water drops only after the artist is selected on the wheel.


-> Run animation and sound on p5 after water drops on the screen.

After the first round of testing with fellow students Isa, Simon and Terrick, we realised that users got lost after spinning the wheel, but understood the concept once we told them.


We then decided to narrate the story before the user starts interacting with the piece. We had a voice-over that would start once the headphones were lifted off the stand. We first used a proximity sensor to detect the headphones, then switched to a light sensor because the results were more accurate. This was the end of constructing the whole piece - and time to get feedback from users. We got students Hadar, Amythab, Daniel and Akmyrat to test this second prototype. Inputs from user testing:

  1. Users did not comprehend why there was a water drop. This means that they don’t pay much attention to the narration.

  2. They love playing with the spinning wheel and would keep spinning it.

  3. A few of them would actually put up their hand for the drop and some wouldn't. This again meant that the instructions weren't clear.

  4. Users DO NOT like to be instructed.

  5. Lighting affected their mood and hence the message that was intended.

  6. Users reported that they enjoyed mixing the music with the wheel, the "haunted" versions we produced especially for our installation piece made some of them feel uneasy and even scared. Others felt positively influenced by the music.

  7. They liked watching the water drop and even playing with it on the glass, but some did not fully understand the connection to the rest of the installation - although the voice narration explained it to them. 

  8. Some were overwhelmed by the sensory inputs and stimulations - they wished less was happening and they could focus more on music, playing with the wheel and watching the water as they felt strongest for these parts.

Keeping all of this in mind, we realized that displaying the title of the piece with a little description of the story of the drop and a hint at its spiritual roots in India can have a larger impact than the narration just at the beginning. Users do not have to hold the drop in their hand. We also realized that the drop on the screen actually works out because it looks like the drop is carrying the artist from heaven (spinning wheel) to earth (screen). 

We then modified our final midterm piece based on these inputs and thoughts. 


  • wheel first touch -> water drop, plays one track, displays visuals
  • wheel further touches -> layering more tracks and water drops on screen
  • visual communication: p5 sketch
  • audio: p5 sketch

The user is acting and reacting to audiovisual stimuli and can create their own listening and viewing experience; the water drop serves as a unique, non-interactive physical stimulation.


Our Arduino code:

/*  Arduino Rotary Encoder Tutorial
 *  by Dejan Nedelkovski, www.HowToMechatronics.com
 */

#define outputA 2
#define outputB 3
#define reset_button 5

int counterLast = 1;
int wait_loops = 0;
int counter = 0;
int star = 0;
int aState;
int aLastState;
int ValveOpen = 700;   // ms the valve stays open for one drop
int ValvePause = 1000;
int milk = 0;
int lifted = 0;
int audio_start = 0;
const int SolenoidPin = 4;  // valve pin, connects to a transistor and a 9V battery

int sensorPin = A5;  // light sensor underneath the headphone stand
int sensorValue = 0;

void setup() {
  pinMode(outputA, INPUT);
  pinMode(outputB, INPUT);
  pinMode(reset_button, INPUT);
  pinMode(SolenoidPin, OUTPUT);

  Serial.begin(115200);
  // Read the initial state of outputA
  aLastState = digitalRead(outputA);
}

void loop() {
  sensorValue = analogRead(sensorPin);
  if (sensorValue > 800) {  // headphones are back on the stand
    lifted = 0;
    audio_start = 0;
    delay(100);
  }
  if (sensorValue < 800) {  // headphones lifted off the stand
    lifted = 1;
  }
  if (lifted == 1) {
    if (audio_start == 0) {
      star = 6;             // 6 tells the p5 sketch to start the voice-over
      Serial.write(star);
      audio_start = 1;
      delay(100);
    }
  }

  aState = digitalRead(outputA); // Read the "current" state of outputA
  // If the previous and the current state of outputA are different, a pulse has occured
  if (aState != aLastState) {
    wait_loops = 0;
    // If the outputB state differs from the outputA state, the encoder is rotating clockwise
    if (digitalRead(outputB) != aState) {
      counter++;
    } else {
      counter--;
    }
    if (counter == 40 || counter == -40) {  // one full revolution
      counter = 0;
    }
    // map the encoder position onto the four artists on the wheel
    if (counter > -6 && counter <= 4) {
      star = 1;
    }
    if (counter > 4 && counter <= 14) {
      star = 4;
    }
    if (counter > 14 && counter <= 24) {
      star = 3;
    }
    if (counter > 24 && counter <= 34) {
      star = 2;
    }
    if (counter < -6 && counter >= -16) {
      star = 2;
    }
    if (counter < -16 && counter >= -26) {
      star = 3;
    }
    if (counter < -26 && counter >= -36) {
      star = 4;
    }
    if (counter > 34 && counter < 40) {
      star = 1;
    }
    if (counter < -36 && counter > -40) {
      star = 1;
    }
    //Serial.print("Position: ");
    Serial.write(star);  // send the selected star to the p5 sketch
    milk = 1;            // arm the valve for the next drop
  }
  aLastState = aState; // Update the previous state of outputA with the current state

  //   if (digitalRead(reset_button) == HIGH){
  //   }
  //   else {
  //   }

  wait_loops = wait_loops + 1;
  if (milk == 1) {
    if (wait_loops > 100000) {          // wait until the wheel has settled
      digitalWrite(SolenoidPin, HIGH);  // open the valve
      delay(ValveOpen);                 // keep it open for one drop
      digitalWrite(SolenoidPin, LOW);   // close it again
      wait_loops = 0;
      milk = milk + 1;                  // only one drop per spin
    }
  }
}

Our p5 code:

var portName = '/dev/cu.usbmodem1411';  // fill in your serial port name here
var options = { baudrate: 115200}; // change the data rate to whatever you wish

var serial;          // variable to hold an instance of the serialport library
var inData;                             // for incoming serial data

var star = 8;

let star_select;

var sample;

var inData_last;

var wait;

var frame_count = 0;

var plays = 0;

let john;
let amy;
let david;
let minnie;

function preload(){
  soundFormats('mp3', 'ogg');
  sample0 = loadSound('barack_vocal.mp3');
  sample1 = loadSound('sounds/minnie_vocal.mp3');
  sample2 = loadSound('sounds/john_vocal.mp3');
  sample3 = loadSound('sounds/amy_vocal.mp3');
  sample4 = loadSound('sounds/bowie_vocal.mp3');
  sample5 = loadSound('barack_vocal2.mp3');
}

function setup() {
  john = loadImage('pics/johncoltrane.png');
  amy = loadImage('pics/amyw.png');
  david = loadImage('pics/davidbowie.png');
  minnie = loadImage('pics/minnier.png');
  title = loadImage('pics/title.png');
  createCanvas(windowWidth, windowHeight);
  serial = new p5.SerialPort();       // make a new instance of the serialport library
  serial.on('list', print);  // set a callback function for the serialport list event
  serial.on('connected', serverConnected); // callback for connecting to the server
  serial.on('open', portOpen);        // callback for the port opening
  serial.on('data', serialEvent);     // callback for when new data arrives
  serial.on('error', serialError);    // callback for errors
  serial.on('close', portClose);      // callback for the port closing
  serial.list();                      // list the serial ports
  serial.open(portName, options);
}

function serverConnected() {
  print('connected to server.');
}

function portOpen() {
  print('the serial port opened.');
}

function serialEvent() {
  inData = Number(serial.read());
}

function serialError(err) {
  print('Something went wrong with the serial port. ' + err);
}

function portClose() {
  print('The serial port closed.');
}

function draw() {
  frame_count++;

  if (star == 8) { // idle state: show the title screen
    push();
    translate(windowWidth / 2, windowHeight / 2);
    image(title, 0, 0);
    pop();
  }
  //text("sensor value: " + inData, 30, 30);

  if (inData == 1) {
    //text("Star: Minnie", 30, 100);
    star = 1;
    star_select = "minnie";
    frame_count = 0;
  }
  if (inData == 2) {
    //text("Star: John", 30, 100);
    star = 2;
    star_select = "john";
    frame_count = 0;
  }
  if (inData == 3) {
    //text("Star: Amy", 30, 100);
    star = 3;
    star_select = "amy";
    frame_count = 0;
  }
  if (inData == 4) {
    //text("Star: David", 30, 100);
    star = 4;
    star_select = "david";
    frame_count = 0;
  }
  if (inData == 5) {
    //text("drop", 30, 100);
    //frame_count = 0;
  }
  if (inData == 6) { // headphones lifted: voice-over state
    star = inData;
    //frame_count = 0;
    inData = "none";
    if (star == 6) {
      star = "none";
      plays = 1;
    }
  }

  if (inData_last == inData) {  // selection has been stable
    if (frame_count > 20) {
      if (star == 1) {
        sample1.play();
        //sample5.play(sample1.duration() + 2);
        star = "none";
      }
      if (star == 2) {
        sample2.play();
        //sample5.play(sample2.duration() + 2);
        star = "none";
      }
      if (star == 3) {
        sample3.play();
        //sample5.play(sample3.duration() + 2);
        star = "none";
      }
      if (star == 4) {
        sample4.play();
        //sample5.play(sample4.duration() + 2);
        star = "none";
      }
    }
  }
  inData_last = inData;

  // show the image of the star whose track is playing
  if (star_select == "minnie" && sample1.isPlaying()) {
    push();
    translate(windowWidth / 2, windowHeight / 2);
    image(minnie, 0, 0);
    pop();
  }
  if (star_select == "john" && sample2.isPlaying()) {
    push();
    translate(windowWidth / 2, windowHeight / 2);
    image(john, 0, 0);
    pop();
  }
  if (star_select == "amy" && sample3.isPlaying()) {
    push();
    translate(windowWidth / 2, windowHeight / 2);
    image(amy, 0, 0);
    pop();
  }
  if (star_select == "david" && sample4.isPlaying()) {
    push();
    translate(windowWidth / 2, windowHeight / 2);
    image(david, 0, 0);
    pop();
  }
  // fall back to the title screen when nothing is playing
  if (sample1.isPlaying() != true && sample2.isPlaying() != true) {
    if (sample3.isPlaying() != true && sample4.isPlaying() != true) {
      push();
      translate(windowWidth / 2, windowHeight / 2);
      image(title, 0, 0);
      pop();
    }
  }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}
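The frame_count checks in draw() amount to a hold-to-confirm debounce: a sensor value only triggers a change once it has stayed stable for a number of frames. A minimal sketch of that logic in plain JavaScript (class and variable names are mine, not from the sketch):

```javascript
// Hold-to-confirm debounce: a reading is only accepted once it has
// stayed unchanged for `holdFrames` consecutive frames.
class Debouncer {
  constructor(holdFrames) {
    this.holdFrames = holdFrames;
    this.last = null;   // last raw reading seen
    this.count = 0;     // frames the reading has stayed stable
  }
  // Feed one reading per frame; returns the value once confirmed, else null.
  update(reading) {
    if (reading === this.last) {
      this.count++;
    } else {
      this.last = reading;
      this.count = 0;
    }
    return this.count >= this.holdFrames ? this.last : null;
  }
}

const d = new Debouncer(20);
let confirmed = null;
for (let frame = 0; frame < 25; frame++) {
  confirmed = d.update(3); // the sensor keeps sending "3"
}
console.log(confirmed); // 3 — accepted only after 20 stable frames
```

This keeps a single noisy frame from the serial port from flipping the selected star back and forth.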




Basic HTML and CSS websites have a very rustic, brutalist charm. For my simple sketch, which includes various elements of HTML, CSS, and JS, I was inspired by the Dada and Fluxus movements. 


I like the raw aesthetics of those art movements, their playfulness and performative quality. For my p5-sketch, I tried to incorporate certain nonsensical elements into the structure of the webpage. 

ICM 5: Classes, Arrays & Koi Ponds

I felt very inspired after the last session on classes and object-oriented programming - now I am able to create my own worlds and animate them with natural behavior. So powerful! 

I had the idea to create a real pond where digital Kois float on the surface, "trapped" in their digital devices. Each Koi will "live" in a sketch on the screen and push itself (and the iPhone) forward once it bounces against its digital wall (I am planning to put motors underneath the floating phones). The audience can add their own digital Koi to the swarm on the pond by loading the p5 sketch on their phones and letting them float on the water - the new iPhones are waterproof, after all. The full swarm will be visualized in a separate sketch on my website, and users can drop fractions of Bitcoin into the virtual pond - like coins into a fountain. 

So lots to do in the next few weeks - but I feel so inspired. I want to build this installation.


So I started writing down the different elements of my class "fish". How does it feel to be a Koi in a pond? Which elements should move? How can I create a natural, floating movement?


I used the same construction mechanism as in the video. 

My beginnings were pretty simple - just a few lines that were moving. I created a skeleton for the fish and animated the fins and the body segments to move naturally.


After a lot of iterations and tweaking of the random movements and the appearance of the body, I had a working Koi object. I still have to work on multiple objects - at the moment they all show pretty much the same movements in sync, and I want them to swim more independently of each other. Another issue is the rotation: they all gravitate to the left, as I cannot yet change the axis in a random manner. But so far I am happy with my Koi - it swims smoothly and looks somehow natural after adding random noise to its movement.
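The trailing body motion of a fish like this usually comes down to a follow-the-leader rule: each body segment eases a fraction of the way toward the segment in front of it every frame. A stripped-down sketch of that idea, with drawing left out (this is my own illustration, not the actual class from the sketch):

```javascript
// Follow-the-leader chain: segment i eases toward segment i-1 each frame,
// which produces the smooth, trailing motion of a fish body.
function makeFish(numSegments) {
  return Array.from({ length: numSegments }, () => ({ x: 0, y: 0 }));
}

// Move the head to (hx, hy); each following segment closes a fraction
// (`ease`) of the gap to its predecessor.
function updateFish(segments, hx, hy, ease = 0.3) {
  segments[0].x = hx;
  segments[0].y = hy;
  for (let i = 1; i < segments.length; i++) {
    segments[i].x += (segments[i - 1].x - segments[i].x) * ease;
    segments[i].y += (segments[i - 1].y - segments[i].y) * ease;
  }
}

const fish = makeFish(4);
for (let t = 0; t < 50; t++) updateFish(fish, 100, 40);
// After enough frames, even the tail segment has converged toward the head.
console.log(Math.round(fish[3].x)); // 100
```

Giving each Koi its own noise offset for the head position is one way to break the in-sync movement mentioned above.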

I also used the sketch for my fabrication project - a hacked book on data visualization. 





Fabricate something using primarily two different materials.  Let’s say the project is roughly 40% one material, 40% the other.  The materials cannot be acrylic or plywood (unless you are gluing up your own plywood).  The work should be held together using fasteners of your choosing.

I always wanted to work with metal, as I had never used it before - especially aluminum. I love the buttery softness of this material. It looks industrial yet has something authentic to it. So I bought some aluminum L-elements and decided to work with them as my first material. The second material was a bit of a journey: after first going for basswood - it's as soft as the aluminum and has a nice, clean look - I finally decided on a bit more edge and used paper: a used Chinese book on data visualization I found in the trash. Somehow the shop's trash shelf keeps inspiring me again and again. Why combine a book on data visualization with these metal construction elements? Well, it somehow felt right in the end. But let's start from the beginning.


Because one theme of the assignment was "screws, not glues", I went screw and nut shopping first. I loved the industrial look of the hex-head and thumb-head machine screws. They were a perfect fit for the aluminum elements - industrial to industrial, minimal to minimal. A brutalist look. My idea was to build a heavy-duty screen for my Raspberry Pi - something you can't mess around with. The basswood would cover the open parts between the aluminum elements. 

The first challenge was cutting the metal: I wanted accurate cuts, and the bandsaw seemed too rough for that. So I went for a metal handsaw with an angled sawing guide.


And yes, I used it in a better position than in the picture above. It was pretty easy to use and the cuts were very precise. After that I smoothed the cut edges of the metal with a metal file. 

I did the same with the basswood, but used a wood handsaw. 


As with the metal, I sanded it after each cut. That was the easy part. Then I drilled the holes and had to pay attention to get them exactly right - metal has no flex, and every hole needs to sit in precisely the right place; otherwise it becomes very difficult to put all the screws in later - something I had to learn the hard way.


I combined multiple L-elements and used the basswood for the back of the screen. When I finished, I was not happy with the basswood. It was not strong enough, too soft - and I realized that wood might not be the best material for the back of a screen with regard to heat. As I wanted to use screws only, I tried to fix the position of the wood with a sandwich concept between the two aluminum elements. 


I was not very successful - both basswood parts were still loose. I wanted to avoid putting extra screws into the aluminum from the outside, as I already had quite a few visible screws in the top parts. On top of this engineering issue, I didn't feel too excited about the combination of basswood and aluminum anymore. It looked like an unequal match for the boldness of the metal.

So I looked for a more exciting combination of materials and searched the trash shelf for inspiration. After thinking about using glass (too tricky to cut and drill), I saw this book on data visualization in the trash, and it felt right to use it with the screen - old and new media, recombined. Paper and aluminum, a wood product and a metal, a screen and a book. I had to use it. So I decided to mount the metal frame of the screen on the book. I drilled holes into the paper and used screws and a metal rail to attach the screen. 

I also tried adding an aluminum back to the book to reinforce the industrial aesthetics, but took it off later - it was a bit too much, too far on the brutalist side.


Unfortunately, while attaching this last piece, one of the aluminum standoff screws got loose and wore out from repeated unscrewing. I am not sure, but it seems the softness of the material makes it not ideal for frequent assembly and disassembly. 

I stayed within a ratio of roughly 40% paper, 40% aluminum, and 20% other (screen components, screws). 

A very rough look. But it resonated with me. I put a p5 sketch I was working on in ICM on the screen: a koi fish slowly cruising through its virtual pond. It somehow felt strange but right to re-imagine this technical book with an industrial construction and a poetic animation. 



After a year of messy coding in Python, avoiding pretty much any modular approach, I was eager to practice a more "functional" approach with p5. I started with a very basic shape.


I moved on to integrating it as a function into a for-loop.


Then I tried to apply the modularity principles to my last sketch - and could not see a starting point.

let x;
let y;
let z = 1;
let t = 1;
let counter = 0;

function setup() { 
  createCanvas(400, 400);
}

function draw() {
  counter++;
  if (counter > random(100)) {
    background(80, 130, 230, random(1));
  }
  fill(255, 50);
  rect(width / 2 - 7.5, 50, 15, 300);
  fill(mouseY / 0.7, 25);
  rect(width / 2 - 7.5, 50, 15, 300);
  let y1 = map(mouseY, 0, height, 45, height - 100);
  if (x == width / 4 || y == height / 4) {
    z = random(10);
    t = random(1);
  }
  if (x <= width && y > height / 2) {
    stroke(120, 180, 230, 30);
    fill(255, 255 / x + 30 / 2);
    fill(255, 220 / x * y, 255 / x * y, 6);
    triangle(x + 20, y * z, y, x * z, x / 2, x);
    triangle(width - x * t, y / z, height, x * t, x / 2 * t, x / t);
    triangle(width - x, height - y * z, height - y, width - x * z, width - x / 2, width - x);
    triangle(0 + x * t, height - y / z, 0, width - x * t, width - x / 2 * t, width - x / t);
    x += 1 * y1 / 100;
  } else if (x <= width && y <= height) {
    x += x * t;
    y += 100;
  } else {
    background(255, 10);
    x = 10;
    y = 0;
  }
}

Which looks in the original like this:

I could technically split the draw function into "objects appearing" and "move", but in the sketch above they are all mixed together. Modularity doesn't make a lot of sense here - at least I do not see it at this moment. So I decided to play a bit with lines and perspective. Unfortunately I couldn't finish the sketch.
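For the record, such a split usually means giving each visual element its own object with separate update() ("move") and display() ("appear") steps, and letting draw() just iterate over them. A generic sketch of the pattern in plain JS, with the p5 drawing calls stubbed out (the names here are mine):

```javascript
// One object per visual element; draw() only calls update() then display().
class Element {
  constructor(x, y, vx) {
    this.x = x; this.y = y; this.vx = vx;
  }
  update() {          // the "move" step
    this.x += this.vx;
  }
  display(out) {      // the "appear" step — stub instead of p5 drawing calls
    out.push(`element at (${this.x}, ${this.y})`);
  }
}

const elements = [new Element(0, 10, 2), new Element(5, 20, 1)];

function draw(out) {
  for (const e of elements) {
    e.update();
    e.display(out);
  }
}

const frameLog = [];
draw(frameLog);
console.log(frameLog[0]); // "element at (2, 10)"
```

The payoff is that adding a new kind of element never touches draw() itself.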



I had the plan to create a sound / visual controller for Arduino tone and P5 that uses fruits as an input element. The controller would sit on a steel basket that houses the fruits, it would feature several knobs and faders and would be connected to P5 via the serial port. 


To accompany the cleanliness of the steel I wanted to combine plywood and acrylic for the top in two layers: plywood cutouts of hands sitting on top of white translucent acrylic. The goal was to avoid any glue. 

After going through the Arduino tone lab I started writing the code and ended up using a different sound library that gave me additional volume and wave-type control in combination with the tone library. I also tested the p5 serial library with a potentiometer after following the steps in the serial lab and running a node server on my local machine. 
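Both sides of that pipeline lean on the same linear scaling: Arduino's map() (used below for the volume) and p5's map() both remap a value from one range to another. The underlying formula, pulled out as a plain function (my own name for it):

```javascript
// Linear remap, the formula behind Arduino's and p5's map():
// scale `value` from [inMin, inMax] into [outMin, outMax].
function remap(value, inMin, inMax, outMin, outMax) {
  return (value - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// A 10-bit analog reading (0–1023) mapped to an 8-bit volume (0–255):
console.log(Math.round(remap(512, 0, 1023, 0, 255))); // 128
```

One caveat: Arduino's map() works in integer math and truncates, while p5's returns floats - worth remembering when values don't quite match across the serial link.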

#include "volume2.h"
Volume vol;

const int numReadings = 10;

int readings[numReadings];      // the readings from the analog input
int readIndex = 0;              // the index of the current reading
int total = 0;                  // the running total
int average = 0;                // the average

int inputPin = A0;
int inputPin1 = A4;

int analogPin = 3;
int raw = 0;
int Vin = 5;
float Vout = 0;
float R1 = 10000;
float R2 = 0;
float buffer = 0;

//int val1;
int encoder0PinA = 2;
int encoder0PinB = 4;
int encoder0Pos = 0;
int encoder0PinALast = LOW;
int n = LOW;

void setup() {
  Serial.begin(9600);
  pinMode(encoder0PinA, INPUT);
  pinMode(encoder0PinB, INPUT);
  for (int thisReading = 0; thisReading < numReadings; thisReading++) {
    readings[thisReading] = 0;
  }
}

void loop() {
  // read the rotary encoder
  int n = digitalRead(encoder0PinA);
  if ((encoder0PinALast == LOW) && (n == HIGH)) {
    if (digitalRead(encoder0PinB) == LOW) {
      encoder0Pos--;
    } else {
      encoder0Pos++;
    }
    //Serial.println (encoder0Pos);
    //Serial.print ("/");
  }
  encoder0PinALast = n;

  // voltage divider: derive the unknown resistance R2
  raw = analogRead(analogPin);
  buffer = raw * Vin;
  Vout = buffer / 1024.0;
  buffer = (Vin / Vout) - 1;
  R2 = R1 * buffer;
  Serial.print("Vout: ");
  Serial.print(Vout);
  Serial.print(" R2: ");
  Serial.println(R2);
  //int Sound = analogRead(analogPot);
  //float frequency = map(Sound, 0, 40000, 100, 1000);
  //tone(8, frequency, 10);

  // smoothing: subtract the last reading:
  total = total - readings[readIndex];
  // read from the sensor:
  readings[readIndex] = analogRead(inputPin);
  // add the reading to the total:
  total = total + readings[readIndex];
  // advance to the next position in the array:
  readIndex = readIndex + 1;

  // if we're at the end of the array...
  if (readIndex >= numReadings) {
    // ...wrap around to the beginning:
    readIndex = 0;
  }

  // calculate the average:
  average = total / numReadings;
  // send it to the computer as ASCII digits
  Serial.println(average);
  delay(1);        // delay in between reads for stability

  int freq = analogRead(A0);
  int volume = map(analogRead(A1), 0, 1023, 0, 255);

  // pitch from the encoder and the pot, volume from the second pot
  vol.tone(encoder0Pos * 10, SQUARE, volume);
  vol.tone(freq, SQUARE, volume);
  vol.tone(freq, SQUARE, volume);
  vol.tone(freq, SQUARE, volume);
  vol.tone(freq, SQUARE, volume);
}
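The resistance-measuring block in loop() is just a voltage divider solved for the unknown leg: Vout = raw · Vin / 1024, then R2 = R1 · (Vin / Vout − 1). Pulled out into a standalone function for sanity-checking (written in JS here, but it mirrors the Arduino arithmetic exactly):

```javascript
// Mirrors the divider math in loop(): Vout = raw * Vin / 1024,
// then R2 = R1 * (Vin / Vout - 1).
function unknownResistance(raw, Vin = 5, R1 = 10000) {
  const Vout = (raw * Vin) / 1024.0;
  return R1 * (Vin / Vout - 1);
}

// A mid-scale reading (raw = 512) gives Vout = 2.5 V,
// so the unknown leg equals R1:
console.log(unknownResistance(512)); // 10000
```

A quick check like this makes it easy to verify the wiring assumptions before trusting the serial printout.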

Having sorted out the basic functionality of the code, thought about the parts I would need for the controller, and sourced them all, I constructed the building files for the laser cutter in Illustrator. This took a fair amount of time, as I had to measure all parts for a perfect fit. I used an analog caliper for most of the measurements. 


Some parts, like the fader, needed extra engineering and construction before laser-cutting.


I used the image-tracing technique to get the positions of the controller elements ergonomically right, and the sandwich technique of etching and cutting in combination with the two different top layers to get a seamless fit. I traced my hands for the shapes around the controller elements. 


The etching layer would create a rim around the cutouts for washers and speakers to fit them underneath the top-plate.


After cutting both layers, the etching took too long - the laser was not strong enough to etch the required amount of material (around 3 mm) out of the acrylic. With the help of one of my fellow students, Itay, I managed to use a drill and a Dremel to deepen the rims around both speakers and each of the potentiometer cutouts. 


I used knobs from the junk shelf, as they fit the white and light-brown colors of the wood and acrylic. 

After that I started screwing in all electronic parts - thanks to the time I spent measuring the controller elements they fit perfectly. I was relieved! 


I still had to drill the screw holes for the standoffs for Arduino and fader. 


The last part of the hardware build was the wiring. I used a perf-board to organize the multiple wires of all the elements and soldered each of them. It took me a full afternoon, but thanks to a soldering workshop earlier in my PComp class I could avoid the biggest mistakes. Still, there is plenty of room for me to improve this particular technique.

I ended up not attaching wires to the fruits in the basket as I wanted to keep the controller as open as possible - it should be usable for sound creation and control of visuals alike.


I wanted to use magnets to keep the top secure on the steel basket, but didn't attach them to the acrylic yet, as I first wanted to test whether I could screw them into the acrylic as well - something to be done later this week. The same goes for the top plate, which consists of two layers: they are secured and stick together because of the knobs, but I still need to figure out how to attach them safely without drilling or glue. So far all elements are functional and can be assigned different functions. I have used it mainly for sound production.

So far I have not tested the sound libraries a lot, so most of the sounds are very experimental - but fun! 

I have tested the P5 serial communication with a potentiometer - so far it is a very satisfying feeling to have a physical control element instead of a trackpad.

I would also like to put lights underneath each hand / controller element to communicate with plants: the plant listens to the music, a sensor measures its surface conductivity, the conductivity is mapped to the controller elements, which light up accordingly, and I can react to this interaction with the plant. Sounds a bit lofty - but worth exploring!

So far there are no fruits in the basket below, but plenty of ideas on how to continue with my first controller.


 Parts used:

  • 3mm plywood
  • 6mm white translucent acrylic
  • steel basket
  • magnets
  • Arduino Uno
  • crossfader / slider
  • 2 x potentiometer
  • rotary encoder
  • 2 x 8 Ohm speaker
  • USB cable

Tools used:

  • lasercutter
  • drill-press with hole-saw drillbit
  • Dremel
  • soldering iron

ICM 2: Movement


Create a sketch that includes (all of these):

  • One element controlled by the mouse.
  • One element that changes over time, independently of the mouse.
  • One element that is different every time you run the sketch.

This one was so much fun - I went from the idea of raking a Japanese garden to spaced out "Holy Mountain" landscapes that get hit by an abstract sun. 

I struggled with getting a consistent line around curves, which is necessary for fine raking lines.


While playing with geometric patterns, I got into perspective and space again: depth has a certain quality for the viewer. So I decided to play with repetition and shades and tied all generated lines to the mouse button. 

After that I modified the Brownian motion example and got some nice colors out of it - visible in the background.
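Brownian motion in a sketch boils down to a position accumulating a small random step each frame; drawing without clearing the background is what leaves the colored trail. The core of it, stripped of any drawing (my own function names, not the example's):

```javascript
// Brownian (random-walk) motion: each frame the point takes a small
// random step in x and y. The `rand` source is injectable for testing.
function step(point, rand = Math.random, stepSize = 4) {
  point.x += (rand() - 0.5) * stepSize;
  point.y += (rand() - 0.5) * stepSize;
  return point;
}

const p = { x: 200, y: 200 };
for (let i = 0; i < 100; i++) step(p);
// p has now wandered a short, random distance from (200, 200).
```

Mapping the accumulated position (or step count) to a color is one simple way to get the shifting hues into the background.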


I used mostly basic functions and nested loops to let the machine do the hard-coding. 


I had one issue with a "let" declaration. 




Write a blog post about how computation applies to your interests. This could be a subject you've studied, a job you've worked, a personal hobby, or a cause you care about. What projects do you imagine making this term? What projects do you love? 

I like making. Machines, systems, experiences. It took me a while to realize that ideas are great, but it’s so much more rewarding to actually make them real. And once I have finished one thing, I want to do the next one. 

After working in radio, DJing, teaching, and educational technology, I slipped into programming last year while attending a machine-learning-for-artists boot camp at School of MA in Berlin. We had to build one installation piece using neural networks in one month. Our teacher Gene Kogan told us: “Make the terminal your best friend, because you’ll spend a lot of time with it”. So I ventured into my first explorations with Python, and after a few days of total confusion I was able to appreciate its simplicity. I finished my piece within the month of the boot camp and was hooked - all of a sudden I had the tools in my hands to automate and facilitate pretty much any interaction between a computer and a human being, on a physical and a digital level. Theoretically, at least. 

Since then I have been building a bunch of different experiences and installations. Some involve machine learning, some are performance pieces, some interactive sculptures. It’s been a great year of learning and playing since discovering the strength of Python - pretty much all of the work I did in the past 12 months is facilitated by it.

There is only one limitation I have discovered with my favorite programming language: it does not seem to be particularly good for visual work. So over the last few months I discovered frontend programming and tried my way around HTML, CSS, JS, p5.js, and PHP. 

This year at ITP Camp I got more into live web interaction (webcam, microphone) - p5 makes things so much easier. I hope to build more connections between physical computing (ESP modules are my favorite) and the browser. I love the idea that with p5 I can now build digital and physical installations that can be accessed by everybody, in every corner of the world with an internet connection. Earlier this year I built a simple tool that enabled users to send someone "real", physical hugs via Facebook Messenger - or at least a (pretty limited) physical representation of them. It was a very scrappy-looking prototype involving straps around the chest attached to a servo ... but it worked. I am so curious about these digital/physical interactions: you can move real things from anywhere, you can be, to a certain extent, physically present in two places at once. I love those global-digital-physical interactions and experiences, and I want to build more of them in the next two years. 


I also like conceptual pieces situated more on a meta level (is that the right word for it?) of media art. So much so that my friend Jake Elwes and I built an homage to Nam June Paik’s “TV Buddha”: a neural network hallucinates images of Buddha on a screen while a real sculpture of Buddha watches the process. The network is not at all accurate at this task and goes totally wild with its predictions, because we couldn’t find enough images of Buddha to train it properly. The results look a bit like Rothko paintings. I like these “imperfections” much more than total accuracy.



So hopefully I can also continue with neural networks a bit here at ITP - so much is already happening with networks running right in the browser. That was unthinkable a year ago, and I am very excited about the future of browser-based machine learning. 

Assignment: Create Your Own Screen Drawing (Use Basic Shapes, All In Setup)

But now back to basics and our assignment: while working on it, I sometimes wished I had been allowed to use for-loops to make drawing to the screen easier and less repetitive. But doing the exercise, I realized how bad my orientation on the canvas really is - and how much better it gets when you have to draw a few shapes repeatedly, close to each other, without any loop constructs. The limitation was a great learning experience!

For the main sketch I initially wanted to create some sort of tessellation, as I am kind of re-discovering M.C. Escher via trying to understand multidimensionality in datasets. That said, I scrapped the tessellation idea pretty quickly after realizing that drawing without loops on a canvas would be really, really repetitive. So I wanted to see how I could play with the limits of the exercise, namely that everything had to be put into the setup function. I asked myself how I could create some sort of motion and play with dimensionality and perspective. So I had the idea of creating some still-motion (what an inaccurate description, please forgive me for that ...) inspired by building manuals and instructions and M.C. Escher's impossible physics.

(via wikimedia)

For the aesthetics I looked at the minimalistic design of technical drawings.

(via NASA)

The "motion" should be achieved through scrolling down the web-page. Like in a vertical comic but you scroll down to see the next scene. I used an extended canvas for this.

function setup() {
  createCanvas(1400, 20200);
  background(255, 170, 170);
  // draw a lot of things
}
I came up with the idea of a surreal and utopian manual for a water-releasing object that morphs into different structures while the water is pouring down.


It essentially pours water on itself, on its different forms - a nod to M.C. Escher’s impossible perspectives, albeit a pretty basic one, and not that obvious when looking at it. After drawing it, I realized that it probably also took a few hints from the iOS game Monument Valley. But I focused on a more minimalistic visual language that should reflect the futuristic, machine-based vision of the drawing (maybe more a mix of the Akira and Ghost in the Shell animes). The annotations, which should hint at real manuals and technical drawings, are inspired by Braille letters. 

(via wikimedia)

I tried to explore non-letter-encoded descriptions that look somehow alien yet understandable (with a simple dot at the last description marking the end of the story / explanation).


In my process I used mainly the triangle, rect, and quad functions. To create a see-through effect on the waterfall, I applied different opacities via the last argument of the fill function:

//draw waterfall (part 3)
fill(191, 255, 249, 98);
quad(580, 10048, 590, 10048, 575, 10085, 560, 10081);
fill(191, 255, 249, 98);
triangle(520, 10079, 520, 10070, 550, 10079);
rect(520, 10079, 10, 9700);
rect(530, 10079, 10, 300);
rect(540, 10079, 10, 200);

It was a pretty smooth process and I had fun creating all the elements. Then I ran into a big difficulty: I wanted to round the edges of the annotation lines and used the curve function for it (a mistake, as I later found out by checking Chelsea's code - arc would have been so much cleverer). It took me a while to somewhat understand the principle of the hidden start and end points of this curve. 

//draw illustration line 2 with end circles
line(140, 10000, 540, 10000);
ellipse(545, 10000, 5, 5);
curve(140, 10160, 120, 10020, 140, 10000, 140, 10000);
line(120, 10020, 120, 10160);
curve(120, 10200, 120, 10160, 140, 10180, 280, 10150);
line(140, 10180, 430, 10180);
ellipse(435, 10180, 5, 5);

I ended up trying to guesstimate the control points - tedious, tricky, and with a very low success rate.
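For reference, those "hidden" points are how p5's curve() works: it draws a Catmull-Rom spline, where the first and last coordinate pairs are invisible control points that only shape the tangents, and the visible curve runs from the second point to the third. Evaluating the standard Catmull-Rom polynomial (in one dimension) shows this directly:

```javascript
// Catmull-Rom spline in one dimension: p1 and p2 are the visible
// endpoints; p0 and p3 are the hidden control points curve() expects.
function catmullRom(p0, p1, p2, p3, t) {
  return 0.5 * (
    2 * p1 +
    (-p0 + p2) * t +
    (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t +
    (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t
  );
}

// The curve starts exactly at p1 (t = 0) and ends exactly at p2 (t = 1);
// p0 and p3 only bend it in between.
console.log(catmullRom(120, 140, 430, 435, 0)); // 140
console.log(catmullRom(120, 140, 430, 435, 1)); // 430
```

So instead of guesstimating: pick the two points the curve should actually pass through, then place the outer control points to pull the tangents where you want them.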


Apart from my difficulties with getting the curves right I really liked the simplicity of the p5 library: The shapes are easy to understand and to combine with each other. 

Regarding the storyline there is still lots of room for improvement in my sketch: the waterfall is too long and it is hard to understand what is going on while scrolling down the page. A few more narrative parts are missing here. I need to add hints for this downward-movement and maybe as well come up with something more interesting at the bottom of the page - lots to work on in the next few days. 



I mainly used the classic offline editor for my work, as I couldn’t figure out how to preview my sketch fullscreen in the online version without going via the sharing option. Maybe I should just always keep one tab open with the shared sketch and use the editor in the other tab - then it should work nicely. 

So far I have really liked the exercise: finally I am learning things in a structured way and gaining a solid understanding of programming without having to consult Stack Overflow all the time.