Live Web / Machine Learning for the Web Finals: Peer to Peer Meditations

For this final I combined the techniques I learned in “Live Web” with the ones from “Machine Learning for the Web” and built a peer-based meditation piece:

A user can experience the base64 code of a remote webcam stream flowing through their own body.

All of this runs on webRTC, tensorflow.js and node.js as the main building blocks: peer.js handles the webRTC peer connection with a remote peer server, node.js organizes the automatic connection to a random user in the peer network, tensorflow.js runs person-segmentation (now renamed to “body-pix”) on the local user’s webcam, and all the canvas handling and pixel manipulation is done in plain JavaScript.
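To give a sense of the plumbing, here is a minimal sketch of the peer.js side; the peer-server host and the /random-peer endpoint on the node.js backend are placeholders of my own, not the actual deployment:

```javascript
// Minimal peer.js sketch: pair with a random remote user and
// exchange webcam streams (host and endpoint names are placeholders).
const peer = new Peer({ host: 'peer.example.com', port: 9000 });

function showRemote(call) {
  call.on('stream', (remoteStream) => {
    // The remote video lands in a hidden <video> element and is later
    // read out frame by frame for the base64 conversion.
    document.querySelector('#remoteVideo').srcObject = remoteStream;
  });
}

peer.on('open', async (myId) => {
  // Ask the node.js backend for a random peer to pair with.
  const { remoteId } = await (await fetch('/random-peer?me=' + myId)).json();
  const webcam = await navigator.mediaDevices.getUserMedia({ video: true });
  showRemote(peer.call(remoteId, webcam)); // call out with our own stream
});

peer.on('call', async (call) => {
  // Answer incoming calls with our webcam, so pairing works both ways.
  const webcam = await navigator.mediaDevices.getUserMedia({ video: true });
  call.answer(webcam);
  showRemote(call);
});
```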

The result is a meditation experience reminiscent of the calming effect of watching static noise on TV screens. As it is peer-based, the user’s body becomes a vessel for the blueprint and essential idea of something or somebody else. This hopefully creates a sense of connectedness and community while meditating.

Staring at a screen might not be everyone’s preferred way of meditating, but it is worth exploring from a futuristic perspective: we will probably be able to stream information directly onto our retinas one day, so an overlaid meditation experience might be worth considering.

Here is a short video walkthrough streaming video frames from my mobile phone camera as base64 code through my own silhouette (the latter is picked up by the camera on my laptop and run through a body-detection algorithm). All web-based, with machine learning in the browser powered by tensorflow.js and webRTC peer-to-peer streaming (code):
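For context, the “base64 code” is nothing exotic: each incoming video frame is drawn to a small canvas and read back out as a data URL. A minimal sketch of that conversion step (the canvas size is an arbitrary choice of mine):

```javascript
// Turn the current frame of a <video> element into a base64 string;
// the data-URL prefix is stripped so only the raw code remains.
const grabCanvas = document.createElement('canvas');
grabCanvas.width = 64;  // small frames keep the string manageable
grabCanvas.height = 48;
const grabCtx = grabCanvas.getContext('2d');

function frameToBase64(videoEl) {
  grabCtx.drawImage(videoEl, 0, 0, grabCanvas.width, grabCanvas.height);
  return grabCanvas.toDataURL('image/jpeg', 0.5).split(',')[1];
}
```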

Machine Learning for the Web Class 1: Playing with tensorflow.js

For our first assignment I played with the examples provided in the tensorflowjs-models repository. I used the posenet example to create a mini skateboarding game for the browser:

The user has to jump to avoid the rocks that come along the way. And that’s it! I simply changed the colors, strokes and dots of the skeleton and attached a few shapes between the two ankle keypoints (plus some minor canvas & font additions).
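For anyone curious how the board attaches to the feet: posenet returns named keypoints, so it boils down to drawing between the two ankle positions. A rough sketch (the score threshold is a guess):

```javascript
// Draw a "board" between the two ankle keypoints of a posenet pose.
function drawBoard(ctx, pose, minScore = 0.5) {
  const left = pose.keypoints.find((k) => k.part === 'leftAnkle');
  const right = pose.keypoints.find((k) => k.part === 'rightAnkle');
  if (left.score < minScore || right.score < minScore) return;

  ctx.strokeStyle = '#ffcc00';
  ctx.lineWidth = 8;
  ctx.beginPath();
  ctx.moveTo(left.position.x, left.position.y);
  ctx.lineTo(right.position.x, right.position.y);
  ctx.stroke();
}
```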

This construct does work somehow, as long as the light in the room is good. Still, a lot of it is pretty rough: the collision detection (board hits the rock) is not very accurate and needs audio feedback, the rocks appear on only one level of the canvas, and there is no score counter. But so far so good for a first shot at machine learning in the browser.
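One way to tighten the hit detection would be to interpolate the board’s height at each rock’s position instead of comparing rough bounding areas. A sketch of that idea, where the rock shape { x, y, r } is my own assumption:

```javascript
// Rough hit test: the board is the segment between the ankles,
// a rock is { x, y, r }. Check the board's y at the rock's x
// against the rock's radius.
function boardHitsRock(left, right, rock) {
  const minX = Math.min(left.x, right.x);
  const maxX = Math.max(left.x, right.x);
  if (rock.x < minX || rock.x > maxX) return false;

  const t = (rock.x - left.x) / (right.x - left.x || 1); // board y at rock x
  const boardY = left.y + t * (right.y - left.y);
  return Math.abs(boardY - rock.y) < rock.r;
}
```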

I tried to deploy it on my server but got lost in ES6-vs-Node module errors. So for now it is just a video, run locally.
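For the record, those errors come down to the module system: the posenet example code uses ES6 import/export, which neither Node nor a plain script tag resolves without a bundler. One possible workaround is to skip the bundler and load the prebuilt browser bundles from a CDN, which expose globals instead:

```javascript
// The examples use ES6 module syntax, e.g.:
//   import * as posenet from '@tensorflow-models/posenet';
// Loading the prebuilt bundles avoids the bundler step:
//   <script src="https://unpkg.com/@tensorflow/tfjs"></script>
//   <script src="https://unpkg.com/@tensorflow-models/posenet"></script>
// After that, posenet is available as a global in a plain script:
posenet.load().then((net) => {
  // ready for net.estimateSinglePose(video) as in the example
});
```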

For those wondering about the title: It is taken directly from an old NES game from 1988.

I had a lot of fun jumping around to test the game; I guess my neighbors downstairs were not really that amused …


Live Web / Machine Learning for the Web - Final Proposal

concept

For my final I want to start building an interactive, decentralized and web-based devotional space for human-machine hybrids.

background

The heart of the project is the talking stone, a piece I already worked on over the last semester and showed at the ITP Spring Show 2018. Now I would like to iterate on it in a virtual space, with a built-in decentralized structure as a form of “community” backbone via the blockchain, and with an interface geared towards future human entities.

system parts

The connection to universal randomness: a granite rock.


Rough draft of user interface (website):


And here is the full system as an overview (also a rough draft):


iteration with tensorflow.js

I built a web interface as an iteration on the idea: it is fed by the camera input and displays it as Base64 code within the silhouette of the user’s body:
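A minimal sketch of how that masking can work, assuming the body-pix model (method names per its README), a canvas that matches the video dimensions, and the frameToBase64 helper sketched earlier:

```javascript
// Fill the canvas with base64 text, then knock out every pixel
// that body-pix does not classify as part of a person.
async function start(videoEl, ctx, w, h) {
  const net = await bodyPix.load();

  async function render() {
    const code = frameToBase64(videoEl);
    ctx.clearRect(0, 0, w, h);
    ctx.fillStyle = '#00ff00';
    ctx.font = '10px monospace';
    const perLine = Math.floor(w / 6); // ~6px per monospace character
    for (let i = 0, y = 10; i < code.length && y < h; i += perLine, y += 10) {
      ctx.fillText(code.slice(i, i + perLine), 0, y);
    }

    const seg = await net.segmentPerson(videoEl);
    const frame = ctx.getImageData(0, 0, w, h);
    for (let i = 0; i < seg.data.length; i++) {
      if (seg.data[i] === 0) frame.data[i * 4 + 3] = 0; // alpha to zero
    }
    ctx.putImageData(frame, 0, 0);
    requestAnimationFrame(render);
  }
  render();
}
```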

project outline

Here is the basic setup: the rock is displayed on the website as a Base64 stream via sockets, for logged-in users. Users have to buy “manna” tokens with Ethereum (in our case on the Rinkeby testnet blockchain); they then get an audio-visual stream of decaying particles of the rock to start their meditation/devotion. According to quantum theory, the rhythm of the particle decay is truly random, so the core of this devotional practice of the future is listening to an inherent truth. The duration of each “truth” session is 10 minutes.

I will have to see how “meditative” the random decay actually is. I remember it as pretty soothing to just listen to the particle decay being hammered into the rock with a solenoid, but I will have to find a more digital audio coloring for each decaying particle. That said, this piece is pure speculative design. I am interested in the future of devotional practice, especially for people who consider themselves non-religious, so trial and error is probably the way to go with this project as well.
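For that audio coloring, a first sketch could map every incoming decay event to a short synthesized ping via the Web Audio API; the 'decay' event name and the socket setup are assumptions on my part:

```javascript
// Turn incoming decay events into short digital "pings".
const socket = io(); // socket.io client, connected to the node.js server
const audioCtx = new AudioContext(); // browsers require a user gesture first

socket.on('decay', () => {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = 880; // a bright, bell-like ping
  gain.gain.setValueAtTime(0.3, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + 0.4);
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.4);
});
```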

If the user moves significantly out of the prescribed meditation pose (Easy Pose, Sukhasana) during those 10 minutes, tokens are automatically consumed. The same happens if the user does not open a new “truth” session within 24 hours of finishing the last one.
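The deviation check itself could compare the live posenet keypoints against a reference pose captured at the start of the session; a sketch, with the threshold and the consumeToken() helper being hypothetical:

```javascript
// Average pixel distance between a reference pose and the current pose.
function poseDeviation(referencePose, currentPose, minScore = 0.5) {
  let sum = 0, count = 0;
  referencePose.keypoints.forEach((ref, i) => {
    const cur = currentPose.keypoints[i];
    if (ref.score > minScore && cur.score > minScore) {
      sum += Math.hypot(ref.position.x - cur.position.x,
                        ref.position.y - cur.position.y);
      count++;
    }
  });
  return count ? sum / count : 0;
}

// In the session loop, e.g.:
// if (poseDeviation(referencePose, pose) > 40) consumeToken();
```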

On the screen, little dots display the other users who are currently in a session as well. The size of each dot changes according to how much time that user has left in their daily meditation practice.
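Drawing those dots is straightforward canvas work; a sketch, with the user object shape and the 10-minute scaling being assumptions:

```javascript
// Draw each active user as a dot that shrinks as their session time runs out.
function drawUsers(ctx, users) {
  for (const u of users) {
    const radius = 2 + 10 * (u.secondsLeft / 600); // 600 s = one session
    ctx.beginPath();
    ctx.arc(u.x, u.y, radius, 0, Math.PI * 2);
    ctx.fillStyle = 'rgba(0, 255, 0, 0.6)';
    ctx.fill();
  }
}
```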

The goal of this experimental devotional space is to give users an audio stimulus that is scientifically true and therefore easier to identify with: in a sense, a random rhythm of the universe to meditate with. Because users buy into the practice with tokens, only dedicated users occupy the space, and their motivation to actually perform the meditation is higher because they paid for it. They only have to pay again if they do not keep up a regular practice or interrupt a session.

The visuals will be dominated by code, as this is a devotional meeting place for future human-machine hybrids that find peace and solitude in true randomness (the opposite of their non-random existence).

tech specs

  • granite rock with a Geiger counter attached to it

  • Raspberry Pi with camera, running sockets to display the rock and send the random decay patterns

  • remote server running node.js and sockets

  • tensorflow.js running posenet

  • Ethereum contract (Solidity-based) running on the Rinkeby testnet (dev blockchain)

  • MetaMask Chrome extension (to access the blockchain from the browser with a secure wallet; see the purchase sketch below)
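To make the blockchain part a bit more concrete, here is a sketch of the browser side of a token purchase through MetaMask with web3.js; the contract address, ABI and the buyManna() method are all hypothetical, since the Solidity contract is still to be written:

```javascript
// Buy "manna" tokens via the provider injected by MetaMask.
const web3 = new Web3(window.web3.currentProvider);
const manna = new web3.eth.Contract(MANNA_ABI, MANNA_ADDRESS);

async function buyManna(amountInEth) {
  const [account] = await web3.eth.getAccounts();
  return manna.methods.buyManna().send({
    from: account,
    value: web3.utils.toWei(amountInEth, 'ether'), // pass a string, e.g. '0.1'
  });
}
```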

challenges

That sounds like a lot to do in six weeks, but I want to give it a try. I already experimented with blockchain and true randomness in two different projects last semester, and the current Live Web / Machine Learning for the Web classes seem like a great framework to give this a virtual and AI-guided setting. I am still a bit uncertain about the blockchain backbone, as this is the part where I feel the least at home at the moment: I only remember fragments of Solidity, web3.js and MetaMask, connecting all the layers together was tricky, and the documentation was sparse. Well, we’ll see!