IoT, Blockchain and Solar - Midterm

After countless days and hours of debugging, two bricked and one broken SD card, and a final kernel panic while running a simple Python sketch, it is time to take a little break and recap. So let's look at where I was a week ago with my project: at that time everything was running fine, the Pi was accessing the blockchain and the servo was reacting to changes in my account. All of that streamed in real time, at that point with two laptops:

So far so good: listening for changes on the chain and the servo reaction ran battery-powered on the Pi Zero W, the streaming was done with webcams and laptops, and the server part plus frontend ran on my Amazon instance.

My next step was to try to run the Pi part on solar power. It worked as expected with a 5V booster and a LiPo between Pi and solar panel. Then I tried to run the main local part (camera stream and querying the blockchain) on battery (a pre-step before using solar exclusively) - after a few days of debugging and hacking WebRTC for streaming live from the Pi via my server, this worked as well (with significant lag due to the slow network). Great!

Motivated by this success I tried it with solar power on a sunny day - from an energy perspective a success, not so much from a computational one. My streaming code and/or Pi image started to behave strangely - it had a bug. And here I had probably made a few overly optimistic assumptions about the energy provided by the small solar panels: running the Pi at full CPU load draws more current than expected - and more than the panels can provide. It would take me a few days to fully realize this - and (spoiler alert!) it would ultimately delay the final project considerably.

But at that time I was optimistic that I could solve this issue within days (I assumed it was a code issue, not an energy one) and started working on the enclosure for my project.


After running the separate scripts on the Pi I started to see a mysterious "Bus error" - which seemed to indicate a corrupt SD card, a common issue on the Pi. I re-imaged different cards a few times - only to run into the same bus error again. I was still optimistic about solving this - I knew the code had worked a week before on the Pi (at that time with a steady battery supply), so why should it not work anymore? I hadn't changed a single line, as far as I could remember.

I spent 3 days in a row trying 2 new SD cards (which are completely unusable now), broke another SD card and tried numerous debugging strategies - only to run into a kernel panic after a fresh install of Raspbian during a very late-night session.


That was the point where I had to laugh out loud - too many things had gone wrong before and this just seemed absurd. I realized that I could not solve this quickly - but also that I had done as much as I could to solve it. It was indeed disappointing to work this hard on a project only to see that nothing worked anymore at the end of the week. I basically started at the top and ended up at the bottom. After another day of debugging and presenting my progress in class, more and more clues popped up that this might not be a software/code issue but one related to the power supply - in my case the LiPo/solar setup. I finally stumbled upon this forum post:

[screenshot of the forum post]

I realized that I probably need more current to power my application - meaning a more powerful LiPo. The solar panel should be fine if exposed to direct sunlight as much as possible. In a real-life installation outdoors the conditions (temperature) would affect the battery life as well; for this project I assumed indoor use.

So far in this project I learned a lot: solar works, computation powered by it on a Pi works as well, the external sleep/wake-up circuit is simply great - but better stack a few more amps into the circuit when running computationally intense operations on a Pi.




Energy: IoT, Blockchain and Solar

idea & prototype

After experimenting with blockchain implementations for IoT devices I decided to use a Raspberry Pi Zero W for a first iteration on my solar project for energy: a physical pay-for-a-smile interface for the blockchain.

basic circuit setup

Pi Zero W + camera, micro servo, 5V booster, 3.7 V 1200 mAh LiPo, charging/load circuit, solar panel

[breadboard diagram of the solar setup]


power requirements

I re-measured the power consumption for the project and averaged the values over time and several runs. The Pi Zero drew significantly less current (in mA) than the Pi 3.

I am still contemplating how to create a deep-sleep mode that fits the concept of the project. At the moment the Pi is constantly listening for changes in the blockchain and reacts by turning the servo when a user sends money to the account. Conceptually the project would be online forever (so that users can continuously trigger the smiley face), and the camera would need to stream constantly as well (something still unfinished in the Pi sketch; at the moment I am using my laptop camera). This is not possible with the current circuit, as it will be down after 4.5 hours when running exclusively on battery.
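A minimal sketch of that listen-and-react loop (a reconstruction, not my exact sketch: node URL, account address, servo pin and polling interval are assumptions, using web3.py and gpiozero):

```python
# Poll the account balance and turn the servo when new Ether arrives.
import time
from gpiozero import Servo
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))  # local geth node
ACCOUNT = '0x0000000000000000000000000000000000000000'  # watched account (dummy)
servo = Servo(17)  # servo signal on GPIO 17

last_balance = w3.eth.get_balance(ACCOUNT)
while True:
    balance = w3.eth.get_balance(ACCOUNT)
    if balance > last_balance:  # somebody sent money: smile!
        servo.max()
        time.sleep(2)
        servo.min()
    last_balance = balance
    time.sleep(5)  # poll interval
```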

LiPo specifics: 3.7 V, 1200 mAh

LiPo + booster: 5 V, 890 mAh (rounded)

runtime for continuous sketch (ignoring the servo peaks): 4.5 hours (rounded)
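A quick cross-check of these numbers (my rounding, ignoring booster losses): 3.7 V × 1200 mAh ≈ 4.4 Wh, which at 5 V is about 890 mAh; 890 mAh over 4.5 hours implies an average draw of roughly 200 mA.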

This means it would inevitably stop at night, when there is no daylight to charge the LiPo from the solar panel. Assuming the physical interface were mounted outside, it would make sense to power it off at night anyway - the camera would not be able to film the interaction in the dark. This would also emphasize the connection of the piece to a real environment - something a physical web interface tries to evoke anyway.

Nevertheless, an external deep-sleep circuit is necessary. I am currently waiting for an external HAT for the Pi that will manage the power with an automatic shutdown/wake-up schedule.


Hopefully the sun will come out in the next few days so that I can run the sketch for at least one or two days, test the performance of the circuit over time and alter it if necessary.

planned iterations 

In the next few days I want to experiment with writing data to the blockchain directly from the Pi Zero W, permanently storing environmental data from a temperature sensor in the network, powered by solar.
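As a first sketch of what that could look like with web3.py (assumptions: a local geth node with an unlocked account, dummy addresses, and the reading simply placed in the transaction's data field):

```python
# Store a temperature reading permanently in the data field of a transaction.
from web3 import Web3, HTTPProvider

w3 = Web3(HTTPProvider('http://localhost:8545'))  # local geth node
SENDER = '0x0000000000000000000000000000000000000001'  # dummy addresses
SINK = '0x0000000000000000000000000000000000000002'

def store_reading(temperature_c):
    payload = 'temp:{:.1f}C'.format(temperature_c).encode()
    return w3.eth.send_transaction({
        'from': SENDER,
        'to': SINK,
        'value': 0,
        'data': payload,  # the reading, permanently on-chain
    })

store_reading(21.4)
```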


Project Random: Infinite Random Devotion

As an iteration on last week's ideas I am thinking of not using the quote at all and letting users listen to a rhythmic pattern (generated by random numbers measured with a Geiger counter; a granite stone is the source of radiation and random radioactive decay) and repeat it with a hammer on the same source of radiation - as if they were trying to influence the infinite randomness of the universe. Maybe humans can? Probably not. But we can still try, like Camus' Sisyphus:

"The struggle itself [...] is enough to fill a man's heart. One must imagine Sisyphus happy" .

The rhythmic signal in the headphones gets recorded and played back automatically from speakers in the background of the installation. 

This setup would not need the prediction of the quote by Emile Borel; instead the quote could serve as the title of the piece:

"Whatever the progress of human knowledge, there will always be room for ignorance, hence for chance and probability."

The process of generating the rhythmic pattern would start the moment the users gaze at the granite (a hint at the role of the observer in quantum processes). The duration of the rhythmic pattern would be determined by the first random number generated by the stone.

Here an overview of the setup:


Technical setup:

  • MightyOhm Geiger counter
  • 2 x Raspberry Pi Zero W with pHAT DAC (for audio out to speakers and headphones)
  • speakers and headphones
  • NVIDIA Jetson TX2 plus external camera for gaze detection

Project Random: Granite, Devotion, Marbles and Gazes

After last week's feedback session I sat down with my classmate Azelia, who has decided to join the project, and we discussed different aspects of the piece:

marbles or candles

During our conversation we talked about the role of marbles in visualizing the random process as a hint at infinity and spirituality. Azelia suggested going even further and using candles for the visualization - I would love to build such a candle-lighting machine, although I am aware of the fire-safety issues and the possible misunderstanding when a candle is not lit (the audience might think the machine is not working correctly). So for now we will stick to the marbles and work on the mechanics of the machine to transport the marbles back after usage.

clean/minimalist looks vs unsettling traditional references to religion

We also discussed the possibility of using traditional materials and looks like those in churches to create an unsettling atmosphere - as in installation pieces by Beuys or Schlingensief. We later decided on a more minimalist look with cast concrete and a granite stone in the middle (which will be the natural source of radiation for producing the random numbers).



flow of user/audience interaction 

So far we are not sure what the best setup of the installation would be regarding user interaction: there is still a disconnect between the random guessing process, which happens in some sort of "black box" (the machine), and the ritual of placing the marbles by hand on the concrete ring. Azelia had the idea of activating each process via gaze tracking/user observation (which hints nicely at quantum mechanics, where particles fall into a definite state once they are observed - the basis of the random guessing of the piece) - I think that is a great idea. It also raises the question of whether the user should be part of the ritual afterwards or leave after this "gaze activation".

quantum computer or decay of radioactive particles

After some further research and experiments on the IBM quantum computer (which is very slow at the moment) I found another way to create real randomness with quantum processes locally: a Geiger counter connected to a Raspberry Pi, measuring the decay of radioactive particles (e.g. in a granite stone) - which is based on quantum mechanics, as it is impossible to predict when specifically the particles will decay. This setup would be much more tangible for the audience: the granite stone, which is slightly radioactive, could serve as the centerpiece of the installation. I am not sure if the quantum computer can achieve the same presence.

Project Random: Marble Detection

Over the past week I worked on the marble detection (distinguishing black and white marbles) with a Raspberry Pi 3 and a camera:

So far OpenCV worked great for the colors red and green but not very well for white. After a few attempts at getting greater accuracy I abandoned OpenCV on the Pi and went for a much simpler solution: comparing brightness values. Now the camera takes a picture of the marble in the slot and a Python script using the skimage library analyses the brightness of the overall image. As the background of the marble slot is always black, the overall brightness changes significantly when a white marble is in the slot (compared to a black marble).

Here the simple code:

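A minimal sketch of it (the capture path and the threshold are illustrative; the threshold has to be tuned to the lighting of the slot):

```python
# Capture the marble slot and classify the marble by mean image brightness.
from picamera import PiCamera
from skimage import color, io

camera = PiCamera()
camera.capture('marble.jpg')

gray = color.rgb2gray(io.imread('marble.jpg'))  # grayscale values in [0, 1]
brightness = gray.mean()

# Against the black background a white marble raises the mean noticeably.
if brightness > 0.2:  # empirical threshold
    print('white marble')
else:
    print('black marble')
```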



This means I can now distinguish black and white marbles using the PiCamera and send the right marble color to the audience, who will put them on the installation piece.

Early prototyping-setup for testing with Pi and PiCamera:


Energy: Solar Powered IoT-Art for the Blockchain

My focus this semester in many courses is the blockchain: I believe that a server-less, decentralized and consensus-based network is the future of the web. I want to explore different use cases for DApps (decentralized apps) and DAOs (decentralized autonomous organizations) in an art and design context. 

In my "Energy"-class we were assigned with the task to power computation (in any form) with solar power. After investigating the energy consumption linked to the proof-of-work algorithm in blockchain applications like bitcoin or Ethereum I decided to explore how far this could be powered by solar energy. After some research I realized that the proof of work is theoretically a great idea - but not sustainable from an environmental and economic perspective in the longer run. Ethereum is planning to replace the proof-of-work with a proof-of-stake algorithm which relies on a stake-based verification/mining system rather than the cryptographic-puzzle that needs to be solved for proof-of-work. This will require less computational power (and real energy) as a hardware arms-race (miners competing for mining a block first) is avoided by design. 

Based on the research I presented in class I decided to look at use cases for microcontrollers and blockchain. These should be low-energy, as they are powered by solar only, and should store environmental data gathered by sensors in the blockchain - creating a 'memory' of environmental data. So far, open-source IoT devices capable of running the necessary systems (node & geth) are rare: I found one GitHub project about measuring temperature with a NodeMCU 8266 board. This will be my starting point for further exploration. I plan to base my further work on this hack and measure the necessary power consumption of the board, then explore an optimized use of solar energy (without relying too much on battery storage). I will investigate whether it is possible or necessary to run a node directly on the device (highly independent but probably more power-hungry) or to use the device as an ultra-low-energy transmission tool that talks to a central device running a node via WiFi/Bluetooth/LoRa, as sketched below.
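To illustrate the second option, a sketch of the ultra-low-energy transmitter idea (assuming MicroPython on the ESP8266 and a central node that exposes a plain HTTP endpoint and writes to the chain itself; endpoint, pin and interval are made up):

```python
# MicroPython on the ESP8266: read a sensor, hand the value to the central
# node, then deep-sleep to save solar energy. All addresses are placeholders.
import machine
import urequests

adc = machine.ADC(0)  # analog temperature sensor on A0 (illustrative)
reading = adc.read()
urequests.post('http://192.168.1.50:8080/reading', json={'temp_raw': reading})

# On the ESP8266 the RTC alarm has to be armed before entering deep sleep.
rtc = machine.RTC()
rtc.irq(trigger=rtc.ALARM0, wake=machine.DEEPSLEEP)
rtc.alarm(rtc.ALARM0, 60 * 1000)  # wake again after 60 s
machine.deepsleep()
```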

I plan to start with measuring environmental data, but the use cases could be quite varied: in my Project Development Studio class I am working on an installation piece that possibly runs for an infinite time, trying to randomly guess a certain quote on probability with quantum processes - it would be conceptually interesting to store the failed attempts of this process "forever" in the "world memory" of the blockchain. Solar power would make sure that this guessing runs for an infinite time (excluding material failure).

To familiarize myself with the more abstract concepts in Solidity (the language for Ethereum) I started with CryptoZombies, an interactive, game-based learning platform that teaches programming for the blockchain - by building a zombie game on it.

Directly related to our topic of energy:

In Solidity, your users have to pay every time they execute a function on your DApp using a currency called gas. Users buy gas with Ether (the currency on Ethereum), so your users have to spend ETH in order to execute functions on your DApp.

How much gas is required to execute a function depends on how complex that function's logic is. Each individual operation has a gas cost based roughly on how much computing resources will be required to perform that operation (e.g. writing to storage is much more expensive than adding two integers). The total gas cost of your function is the sum of the gas costs of all its individual operations.

Because running functions costs real money for your users, code optimization is much more important in Ethereum than in other programming languages. If your code is sloppy, your users are going to have to pay a premium to execute your functions — and this could add up to millions of dollars in unnecessary fees across thousands of users.

from: cryptozombies-tutorial

This means I should look into the most efficient ways to write contracts and store data in the blockchain - probably based on trusted OpenZeppelin contract templates and decentralized storage with IPFS.

Here the PComp side of the setup: ESP8266 WiFi module, LiPo, solar panel + Adafruit MCP73871 solar charger


Lots of iterations and a few tutorials on deploying a DApp later, I got something working: a physical interface for the blockchain ... that smiles for Ether :)

Project Random: Live Experiments on an IBM Quantum Computer

Today I ran my first live experiment on the public IBM Quantum Computer that will be the backend of my installation piece - I am so excited! 


I tried an entanglement setup:

"Two or more quantum objects are entangled when, despite being too far apart to influence one another, they behave in ways that are 1) individually random, but also 2) too strongly correlated to be explained by supposing that each object is independent from the other." (ibm-guide)

[screenshot: the circuit in the IBM Q composer]

Above: the user interface with the gates applied to create a superposition (H gate, in blue), entangle the qubits (the + sign, a CNOT) and measure the states of the particles (q0, q1). And here the results:

[screenshot: the measurement results]
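For reference, the same circuit can be written in a few lines of code. A sketch using the Qiskit SDK and its local simulator (not what I used here; the experiment above ran in the web composer on the real device):

```python
# Bell-pair circuit: superposition on q0, entangle with q1, measure both.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(2, 2)
qc.h(0)      # H gate: put q0 into a superposition
qc.cx(0, 1)  # CNOT (the "+" sign in the composer): entangle q0 and q1
qc.measure([0, 1], [0, 1])

sim = AerSimulator()
counts = sim.run(transpile(qc, sim)).result().get_counts()
print(counts)  # ideally only '00' and '11', each around 50%
```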

Recurring Concepts in Art Installation Proposal: "m e e t _"


Create or re-create an interactive piece that is not using technology as a primary means of expression. 


We propose the creation of a physical installation piece that provokes the crossing of two gazes of strangers in mid-air in an immersive and at the same time infinite space. 

Working Title

"m e e t  _" 


I am collaborating with my classmate Nicolás Escarpentier on this piece. We found common ground in combining two different themes that are very present in our work so far: Nico uses VR to engage people with uncomfortable topics, and I explore the notion of futility with machine learning technologies.

As we took away the technological means of these concepts and tried to translate them into non-technological art forms we came up with the following keywords:

Physical space, human-human interaction, focus, introspection and infinity.

Based on these keywords we started our exploration with the corner as a defining element of a physical space. It can limit a move or gaze in a direction (90 degrees) or open it to infinity (270 degrees). In both cases it can be a meeting point if two humans walk in that direction. But what happens if the physical encounter of two bodies is replaced by a virtual encounter (or better, "crossing") in mid-air between two gazes, based on the geometry of a corner? What if you don't see somebody's eyes, but you know you are looking at the same point in the distance? Does the awareness of this crossing of two gazes somewhere in the distance change our perception of infinite space?

We want to play with the notion of non-physical, invisible and in the end "virtual" encounters in a real space. Ideally the mere thought of two strangers' gazes crossing inspires the imagination of the audience; the invisible meeting point becomes "real" and helps them define and feel infinity as a communal experience that ultimately leads to introspection. We would like to ask the audience an open question: is alone together better than alone?

We identified the following crucial aspects for this experience:

  • The audience is aware of each other's presence in a subtle way
  • Two gazes have to cross each other at a certain point in distance
  • The gazes should continue into infinity afterwards (the eye of the viewer should not be able to focus on a specific point in distance)
  • The audience should feel immersed

For the setup of our piece we were inspired by Markus Schinwald’s Austrian Pavilion at the 2011 Biennale di Venezia and John Dewey's concept of "art as an experience".

By using floating wall panels, we are able to communicate to the audience the presence of someone else. The jagged path of the hallway on both sides of the room gets narrower at the end and blocks physical access to the corner of the room - which is not a corner anymore but a rounded miniature arena. This rounded element is accessible from both sides via gazes only; the audience can see inside and cross the gazes of other people while looking at the rounded wall of the element into infinity. Both viewers can feel the presence of each other and talk to each other - but they never meet the eyes or see the face of the other person. The only meeting space is the arena, where the gazes cross at a certain point and "meet" virtually in their staring at infinity.






KINETIC CHALLENGE: The Energy - Hammock

We finished our prototype for the kinetic challenge assignment on Saturday - and it worked! (apart from some minor engineering challenges that are beyond the scope of the task).

A lot of fabrication was needed, as we built the gears and belts ourselves:

The sheep ... meaning our foundational structure to hold gears, belts and the stepper motor.

We used nails (in our first iteration) instead of cutting the gears with the CNC router.

Nails to keep the belt drive in-line.

Measuring by eye the accuracy of the wheel drive - a tiny bit off but tolerable for a prototype.

Basic setup, rear view with circuit and motor attached.

Our circuit (without capacitors) with full bridge rectifier built of 4 diodes.

The stepper motor with the second wheel attached to it.

The self-made belt drive translating the power of the big wheel (that is connected with a rod to the swinging hammock).


And here the final prototype in action on the ITP floor. It provides up to 9V at 150-200 mA.

Project Random: Quantum Devotional Performance

After outlining the basic idea of my project - a quantum spiritual experience - to my classmates, I immersed myself in devotional objects, Catholic symbolism, the performance art of the Fluxus movement and still lifes.


Above are some of my ideas for visualizing the random guessing of the letters of a quote by Emile Borel on chance and probability ("Whatever the progress of human knowledge, there will always be room for ignorance, hence for chance and probability.") using quantum processes. The difficulty for me is to keep the experience sublime and "understandable" at the same time. I am asking myself how much of the whole process should be explicitly visible to the audience and what should remain murky to create a spiritual, meditative and church-like experience.

Now what is the essential core question of my project?

What are the consequences of absolute chance?

I am thinking about true randomness that is generated by quantum processes as a reasonable doubt of the reason of reason: randomness as a principle cannot be generated with the means of logic/mathematics - but it can be proved with those same logics/mathematics. It is in its very nature deconstructable with language, but can be described by explaining what it is not. This gives it a spiritual dimension: every task that is based on random action is devoid of pre-determined choice or plan - and therefore free. The infinite repetition of a purposeless action defies any given logic and therefore constitutes its own logic. This logic can be felt in any ritual, especially in the repetition of prayers in religions. There is safety in structuring the unstructured, in addressing the void again and again. One article addresses this in its review of the Venice Biennale 2011 "Illuminations", referring to Schlingensief's "Church of Fear" at the German Pavilion: "(...) his aim was 'to open mere reason up to the limitlessness that constitutes its truth'." And this limitlessness of the truth, the unreasonableness of reason, should offer one thing: hope.

In this light I embarked on a little journey into devotional objects I am familiar with from growing up in a village with a Catholic community: rosary beads to keep track of prayers (and to have a sensory sensation of rhythm), candles and monstrances as bearers of holy light and spirit, the architecture of churches, medieval still lifes and iconography. I limited my research to these objects as they are taken from my cultural upbringing; I am familiar with their place in society and their function as ritualistic metaphors of faith.

I played with contrasting the richness of a still life with the cold terminal output of my guessing code to see how this tension between faith, fear and reason unfolds in two dimensions.

[screenshot: still life beside the terminal output of the guessing code]

I observed a feeling of security from leaving the code running on my screen while researching obscure devotional-object sites; the infinite nature of the repetition of the guessing process offered me a form of digital steadiness/prayer with ritualistic qualities.

I also played with adding sounds to this process and generated sounds based on pitched-down recordings from the ITP floor in combination with a simple harmonic synth layer.

I would like to explore this further in the form of a three-dimensional installation piece that offers a meditative space for the observer.


Project Development Studio: Project Idea

I would like to focus this semester on randomness and its connection to quantum physics.

(source: wikimedia)



In ICM last semester we talked about computer-generated random numbers and their "pseudo-random" nature. Dan Shiffman referred us to random.org to read up on this topic. The site offers an overview of the difficulties of producing random numbers with an algorithm, and a service for random numbers generated from atmospheric noise. In contrast to the algorithmic "pseudo-random number generators" or PRNGs, TRNGs or "true random number generators" harvest random numbers by observing physical phenomena - sometimes even on a subatomic level by observing quantum processes. On the page the authors mention that a comparison between PRNGs and TRNGs can be extended to a discussion on whether the universe is itself deterministic or not. This philosophical question got me interested: randomness as a statement on determinism.

Researching more about determinism, chance and probability, I discovered the infinite monkey theorem: if a monkey hits a typewriter randomly for an infinite time, it will eventually produce all the works of Shakespeare (there are numerous versions of this metaphor, but this is the basic one).

Emile Borel, a French mathematician, illustrated his research in statistics with this theorem and is widely regarded as its originator. Reading more about him and the theorem, I found a quote from one of his books that resonated with my passion for technology:

Quels que soient les progrès des connaissances humaines, il y aura toujours place pour l'ignorance et par suite pour le hasard et la probabilité. (Le hasard, Emile Borel, éd. Librairie Félix Alcan, 1914, p. 12-13)

(Whatever the progress of human knowledge, there will always be room for ignorance, hence for chance and probability.)

This quote fit perfectly into my research on quantum mechanics, a topic I have been interested in since my teens - and since Nov 2017 there has been public access to one of three quantum computers worldwide, the IBM Q, via an API.


I want to "guess" Emile Borel's quote about chance and probability using chance and probability - with quantum processes on the IBM Q, by observing superpositions to be precise. This process will be visualized with a physical installation or sonification. 


Guessing this quote will take a very long time and it will very likely never be fully "guessed": as the French original consists of 144 characters (including two special characters), the probability of getting it completely right on the first try by random guessing is 1:18870668547844457769972080826950345531368943638112857227264.

I wrote a quick script for this guessing in Python using quantumrandom, a true random number generator from the Australian National University which measures the quantum fluctuations of the vacuum. This is a first step; the IBM quantum computer will be a more direct way to observe quantum processes, especially superpositions (a quantum particle will fall from a superposition, so multiple states at once, into one random state when it is observed/measured).
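The core of the script is a simple guessing loop. A minimal sketch (a reconstruction: the alphabet, the dropped accents and the use of get_data are my simplifications; the small modulo bias is ignored):

```python
# Guess the quote character by character from true quantum random numbers.
import quantumrandom

QUOTE = ("quels que soient les progres des connaissances humaines, il y aura "
         "toujours place pour l'ignorance et par suite pour le hasard et la "
         "probabilite.")  # accents dropped for a simpler alphabet
ALPHABET = "abcdefghijklmnopqrstuvwxyz ',."

attempts = 0
while True:
    attempts += 1
    # one uint16 per character, mapped onto the alphabet
    numbers = quantumrandom.get_data(data_type='uint16', array_length=len(QUOTE))
    guess = ''.join(ALPHABET[n % len(ALPHABET)] for n in numbers)
    if guess == QUOTE:
        print('guessed after', attempts, 'attempts')
        break
```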




Assignment 1: Turn human motion into light.

After reading the Paradiso/Starner paper, "Human Generated Power for Mobile Electronics", we quickly realized that a lot of our initial ideas had already been done a few times. Particularly interesting was the energy chart highlighting the amount of power that can be generated by different parts of the human body - with the legs as the most powerful.

We decided to use a combination of human power and gravity for our project: The Energy Hammock.

Iteration 2

A seesaw-like setup for two people on sitting swings that encourages a playful interaction.

Iteration 1

One person; the swinging movement of the hammock to the sides generates energy at gear-head motors mounted at the ends of the hammock.


The idea is to harvest energy in a playful way using the weight of the human body as the main means of production. 




Create a simple experimental scene in Unity and place your created character in the scene. Create a surreal, experimental or musical sequence either puppeteered live or rendered out as an application.


I created an experimental game in Unity called "sisyphus". The player has to roll a huge stone downhill, uphill, through rocky valleys and deserts, and over bizarre concrete structures.


I created the world structures, elements and terrains by hand; the player character is based on a Mixamo shape with custom textures from different assets.

I used concrete textures on the main elements of the world, including the sphere that has to be rolled. This should add visual weight and stand in stark, brutalism-inspired contrast to the fleshy and vulnerable-looking muscular appearance of the main character. The terrain features sand textures and varies between bright brown and warm red in the final rendering. In combination with the bright skylight this should invoke a desert feel - the player/audience should feel the heat while rolling the rock around.

The formal language of structures and elements is inspired by the metaphysical/surrealist art movement around Giorgio de Chirico from the early 20th century:

(image source: wikimedia)

I tried many times to finish the game (the path goes back up to where it started, staying with the main theme of purposeless hard labour), but still have not managed it.

Here a few video-playthrough attempts to show the basic looks and functionality:



The controls feature walking in all directions and a push mode that can be activated with the space bar. Some of the colliders are a bit off at various angles - nevertheless, in most situations the player can control the rock relatively easily.

The camera is rigged to the head of the main figure for a more realistic view of the action - this adds a small bit of camera shake to the game, which achieves these looks.

The game was a lot of fun to build - I would love to continue with it at a certain point in time. I should try to play it through to the very end at first though.  


After a few weeks of fabrication, code and iteration our team (Anthony, Brandon and me) showed our KNOB project to the public at the ITP Winter Show 2017. Here a little update on the latest developments and iterations since our submission for the PComp finals.

code / pcomp

The code/PComp setup stayed more or less the same for this last iteration: we used an ESP8266 NodeMCU WiFi module in combination with a Raspberry Pi inside the knob that picked up the data from the rotary encoder at the bottom of the metal pole, counting rotations of the knob. The rotations were sent to a remote server (in our case a Heroku server) via a Node script (which proved easier than Python for this use). On the server side we mapped the rotations so that 1000 rotations correspond to full brightness of the LED. The PWM data was sent to another Pi inside a white pedestal with the LED on top of it; this Pi controlled the brightness directly. For all remote connections we used itp-sandbox as the WiFi network.
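The brightness mapping itself is simple. A minimal sketch of the receiving Pi's side, using gpiozero (the pin number and the way the rotation count arrives are assumptions, not our exact code):

```python
# Receiving Pi: map the rotation count coming from the server to LED brightness.
from gpiozero import PWMLED

led = PWMLED(18)  # LED on GPIO 18 (illustrative pin)

def update_brightness(rotations):
    # 1000 rotations correspond to full brightness; clamp at 1.0
    led.value = min(rotations / 1000.0, 1.0)

update_brightness(250)  # a quarter of the way in -> 25% duty cycle
```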


For the fabrication part of the project we iterated mostly on the outer materials of the knob and the resistance of the rotational movement of the bearings. After priming it with white primer we initially wanted to paint it in rose gold - as a nod to Jeff Koons and Apple product fetishism. Then we realized that this might be too distracting and cynical a take on a generally playful concept and decided to use felt as the main outer material of the knob. The audience should want to touch it and immediately feel comfortable with the soft material. It proved to be a good choice - the feedback from the audience was great, everybody liked the softness and feel of the felt. We chose black as it seemed to show wear from many hands less quickly than grey or a brighter color. It also accented the general black-and-white color scheme of the arrangement.


For the LED we built a wooden pedestal to house the raspberry pi and battery pack and painted it white.



To add more resistance to the movement we added 4 casters to the bottom of the rotating part of the knob and padded the wooden rail in the base with thick foam. The casters roll on the foam, and the compression of the foam produces a slight resistance per caster. Multiplied by four, the resistance was big enough to keep the rotating part from spinning freely when a lot of force was applied. We were initially worried about the creaking noise of one caster, but during the show this was irrelevant as the general noise of the audience covered it.


concept & show feedback

We changed our concept fundamentally: on Sunday, the first day of the show, we turned the LED up to full brightness with 1000 rotations clockwise on the knob. On Monday, the second show day, we reversed this tedious process and slowly turned it off with 1000 rotations counterclockwise.

On a separate iPad screen running a webpage the audience could keep track of the number of rotations. Why a thousand? We just felt it was the right number. The direction of rotation for the day was printed on a simple black foamboard suspended from the ceiling - it should look as simple, intuitive and familiar as a daily menu in a restaurant.


We felt that this scaling of the interaction itself was a natural fit to the scaling of the knob: not only the physical scale changed but the procedural one as well. This focused the perception of the audience more strongly on the core of the concept: to enjoy an interaction for the sake of the interaction itself - to invoke a meditative and communal state of action, as the knob is usually turned by a group of people.

In the show this iteration was well received - not only because of its conceptually balanced approach to the timing of the reward of the interaction; mostly the audience described a feeling of comfort in talking to strangers while performing a repetitive manual task together. One group compared the experience to a fidget spinner for groups that could be used for brainstorming activities in a boardroom. Another participant recounted childhood memories of sorting peas together.

While a few participants, mostly children, tried to raise the number of rotations - and therefore also watched the iPad showing the current count - the LED as the main output was generally received as a minor part of the process of creating a communal experience.

Our installation definitely hit an important aspect of technology: an interaction can create a meaningful and satisfying experience in itself when it helps create a sense of belonging and community - even without instant gratification or a short-term purpose. [link to video]

We decided not to use AR as an output, as the user would still need a device. This shifts the focus of the audience to the output and distracts from the physicality of the object and the interaction with it - something we wanted to avoid. AR is still conceptually stronger as an output, as it is in itself non-existent and weightless. It was a difficult decision, but in the end a simple output like an LED and the exaggerated scale of the interaction over 1000 rotations proved stronger in the context of the winter show and its abundance of interactive pieces in a small space.

We felt very honored to be part of the show and the ITP community. Thanks to the whole team behind it, especially the curators Gabe and Mimi. And big thanks to our PComp and IFab professors Jeff Feddersen and Ben Light for the great guidance and feedback during the whole process.

Here a few impressions from the show: 



Invite users to contemplate their online selves and identities through a semi-realtime chat with pre-recorded videos and actors - and the possibility for users to participate in this fake setup.


  • how do we establish trust online with strangers?
  • how do we perceive ourselves in a group environment online?
  • what are the rules of communication in a video chat with strangers?
  • how is time related to communication?

inspiration / early sketches

We started with early sketches that played with video feeds in VR. Initially we wanted to give users the possibility to exchange their personalities with other users: we had the idea of a group-chat setup where you could exchange "body parts" (video cutouts) with different chat participants. This should be a playful and explorative experience for participants. How does it feel to be in parts of another identity? How do we rely on our own perception of our body to establish our identities? How do we feel about digitally augmented and changed bodies? How does that relate to our real body perception? Does it change our feeling for our own bodies and therefore our identities? How close are we to our own bodies? Do we believe in body and mind as separate entities? How are body and mind separated in a virtual and bodyless VR experience?



After looking at our timeframe and technical skills we decided to postpone the VR element of our concept and focus on the core ideas in a 3D setup: body perception, trust and identity in virtual interactions. We chose three.js as our main framework, as it provided us with a lightweight environment in the browser that could potentially be deployed in an app or in a local environment. As we later decided on a local setup for our group online environment, this proved to come with a few tradeoffs regarding access to files on the local machine from JavaScript. We used Python as a backend tool to compensate for the security restrictions in JavaScript.


conceptual setup

Close to our initial sketches we constructed an online environment that should fulfill the following requirements:

  • group experience with person-to person interaction
  • video feeds like in a webcam-chat 
  • insecurity, vagueness and unpredictability as guiding feelings for the interaction with strangers
  • fake elements and identities to explore the role of trust in communication
  • a self-sustainable environment that has the potential to grow or feed itself

To achieve that we built a welcome page that simulates a video-chat network. The aesthetics were kept partially retro and cheap: the website should not look completely trustworthy and should already invoke a feeling of insecurity - but still stimulate interest in the unknown.

The main page features 3 screens with the user's webcam feed in the center - right between two users who seem to be establishing a communication. The user should feel in-between those two other users and feel the pressure to join in on this conversation. Both users on the left and right are actors; the video feeds are pre-recorded and not live. The user in the middle does not know this - they should look like realtime chat videos.

While the user is trying to establish a conversation with the fake video-chat partners, their webcam feed gets recorded via WebRTC. After 40s in this environment the recording of this feed suddenly pops up and replaces the video feed of the actor on the left side. The user should realize now that reality is not what it seems in this video chat: it is fake, even time seems to be unpredictable. After 5s of looking at their own recorded feed, a popup on top of the screen asks the user if she wants to feed this recording of herself into the next round to confuse other people. The question here is why a user would do that. In user testing most users wanted to participate in the setup in the next round. As the users dynamically replace the videos on the left and right, this could be a self-feeding chat that is never real time - you are always talking to strangers from another time. But for a few seconds they exist for you in real time with you in this world - until you realize that this was not true. At least according to our concept of time and being.

As mentioned before we used three.js as the main framework. On top of that we used WebRTC extensively, especially the recording and download functions for webcam feeds. On the backend, Python helped us move the recorded and downloaded files dynamically from the download location on the local machine to a JS-accessible folder. Python also helped us keep track of the position of the videos (left or right) when the browser window gets reloaded between users. This was a hack - a Node server would probably have been better for this task - but Python was simply quicker.
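A sketch of that file-moving hack (paths and the .webm naming convention are assumptions):

```python
# Move the newest WebRTC recording from the browser's download folder
# into a folder the JS frontend is allowed to read.
import glob
import os
import shutil

DOWNLOADS = os.path.expanduser('~/Downloads')
SERVED = './public/videos'

def move_latest_recording():
    recordings = glob.glob(os.path.join(DOWNLOADS, '*.webm'))
    if not recordings:
        return None
    latest = max(recordings, key=os.path.getmtime)  # newest download
    target = os.path.join(SERVED, os.path.basename(latest))
    shutil.move(latest, target)
    return target
```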

We did not use the app in a live setup, as we felt we needed to refine the code and also experiment further with porting it to VR.

So far this was very rewarding for me, as I could explore a lot of three.js while working with Chian on the project. WebRTC proved again to be a great way to set up a live video chat - with a little help from Python on the backend it worked as a prototype. The VR version will probably have to run exclusively in Unity. That means mainly C# - a new adventure for the next semesters!


Here a video walkthrough:


On the code side we used a GitHub repo to back up our files. Here you can find all files, including a short code documentation as a readme.






A giant physical knob controls a tiny virtual light sculpture in AR. The two are separated from each other; the audience has to make an intellectual connection between them.


weight, presence, physicality, virtuality, space, augmentation, contrast, minimalism, reduction, haptic interaction, visual perception

technical setup

[schematic: technical setup]


We used a GitHub repo to share our code - I had never used it for sharing before, only as a backup for my own code. It worked pretty well for collaborative coding. The code was written in C++ (Arduino) & JavaScript (NodeJS, three.js, AR.js).

user feedback

The term "user" might be misleading, we think more of an audience rather than users for our project. Key takeaways from two play-tests:

  • audience likes to turn the piece
  • sheer physicality/scale of the input object is regarded as a plus
  • piece is turning very fast - this can dilute the original concept of an oversized knob
  • audience does not make an immediate connection between AR-object and physical object
  • AR object is very generic, no clear formal connection to the physical object
  • separation of the two objects into different spaces is necessary to avoid overshadowing one piece with the other
  • if audience is confused initially and understands the connection between the two objects later the whole piece is stronger as the impact of a delayed gratification is more powerful
  • the idea of a physical output is generally welcomed as well, though it is seen more as an ironic statement than a concept - AR as output fits better conceptually but is formally more difficult to execute and understand (AR is itself so new that its formal language is not set yet)
  • audience expects initially a greater variety in the output and understands the contrast between the two objects only after explanation of the concept


key takeaways

  1. Wood is an organic material, it does not compute.
  2. A change in scale affects pre- and post-production of a piece. They mutually influence each other.
  3. Stick to the original concept, keep iterations in mind for future projects.
  4. Collaboration works best when everybody is on the same page and knows what to do.
  5. Browser based AR is still in its early stages but already very promising.

Our project was at a bigger scale than we were used to and involved a lot of fabrication. We learned a lot in terms of project management, sourcing materials and fabrication (e.g. using the CNC router). The tiny and fragile PComp parts had to be integrated into the large-scale mechanics of the knob and read its movements correctly. Server-side programming and the JS frontend took a lot less time than fabrication.

Our goal was to create an interactive installation piece for a gallery environment at a larger scale that raises questions about physicality, technology and our joy of play. 

The collaboration proved to be the best way to create such a piece in a limited timeframe. We plan to exhibit the piece at the ITP Winter Show 2017 - there are still a few improvements to make for this event: redo the CNC cuts in a hexagon shape, re-engineer the outer layer of the knob in wood, create a backup mechanism for the rotary encoder in case of mechanical failure, slow down the movement of the knob and improve the AR model.






Users turn a giant knob on the 4th floor of ITP and control the brightness of a virtual LED in AR positioned on a physical column in the Tisch Lounge. 


Inside the knob will be a rotary encoder to track the exact position of the element. These values are read and continuously sent to a remote AWS server from a NodeMCU 8266 WiFi module inside the knob via serial/Python/websockets. On the server they are stored in a txt file and read into a publicly accessible HTML file serving the AR LED in an AR version of three.js. We will have to test how much of a delay this communication introduces. A public server seems to be the only way for the audience to access the AR site from outside the local NYU network. The delay might still be acceptable, as knob and AR column are situated on different floors. With iOS 11, AR on mobile is now possible on all platforms using getUserMedia/camera access.
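For the serial/Python/websockets leg, a minimal sketch of the bridge we have in mind (port name, server URL and message format are assumptions, using pyserial and websocket-client):

```python
# Bridge: read rotation counts from the serial port and forward them
# to the remote server over a websocket. Port and URL are placeholders.
import serial                             # pyserial
from websocket import create_connection  # websocket-client

ser = serial.Serial('/dev/ttyUSB0', 115200)
ws = create_connection('ws://example-aws-host:8080')

while True:
    line = ser.readline().strip()  # one rotation count per line
    if line:
        ws.send(line.decode())     # forward the count to the server
```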

Here a quick test with the github-example library, a target displayed on a screen and an iPhone running iOS 11:




We found a 3ft long rotating steel arm with bearings and attachment ring in the shop junk-shelf. We will use it as the core rotating element inside the knob. 


This rotating element will be mounted on a combination of wooden (plywood) bases that will stabilize the knob when it is moved. The weight of the knob will rest on wheels running in a rail/channel in the wide base, which features a wave structure inside so that the 'click' of the knob feels more accurate. Inside the knob we will build a structure with 2 solid wooden rings that match the diameter of the knob and are attached to the rotating element. On the outside we will cover the knob with lightweight wooden planks or cardboard tubes.



We worked on the wooden base for the knob/metal pole using multi-layered plywood to keep the rotary encoder within the same wooden element as the pole - this prevents damage to the electronics/mechanics when the knob is pushed or tilted to the sides.



In collaboration with Brandon Newberg & Anthony Bui.






  • Tue. 21 Nov. 2017: tech communication finished, fabrication parts ordered

work on automation + fabrication (knob), work on AR script

  • Tue. 28 Nov. 2017: all communications work, knob basic fabrication finished

work on fabrication finish, column, code cleaning, documentation

  • Tue. 5 Dec. 2017: final presentation



(second version)


(first version)


Create a short animation in After Effects that has a primary character and tells a quick story in 1-3 minutes.


We wanted to animate album covers, starting from Nirvana's Nevermind through to David Bowie's Blackstar. The baby from the Nirvana album should travel through music albums from around that time and interact with characters/scenes in each cover: Storyboard

We started with a few rough sketches:

After aiming initially at 24 animated album covers, we quickly realized that this would take far longer than expected. We had already finished the music mix, which tried to stay roughly within the 3-minute time limit of the assignment and was precisely timed to the first album animations. Due to the meltdown of my aged but trusted MacBook Air three days before the deadline, we had to cut the animation down to 1 minute and fewer albums than planned.

For the second version we were challenged by the tight mixing of the music - slowing down the narrative was not an option, so we added an intro instead that tells the story of a possible breakup: a voice speaks to his ex-partner on the phone and negotiates part of the record collection. Music is always about memories; that was the inspiration for this intro sequence. The main After Effects animation is still very fast; the albums and the music move quickly from one to another. We added a few frames at the end to extend the animation to a few more albums - still far from our goal of animating 24 albums, one for each year from the release of Nevermind until now.

For a proper version of this animation we would probably start from scratch and extend the movie to about 6-8 minutes. That was not possible for this assignment, given the maximum length of 3 minutes.