Live Web / Machine Learning for the Web Finals: Peer to Peer Meditations

For this final I combined the techniques I learned in “Live Web” with those from “Machine Learning for the Web” and built a peer-based meditation piece:

Users experience the base64 code of a remote webcam stream flowing through the silhouette of their own body.
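
To make the core idea concrete, here is a minimal sketch, not the project's actual code: the current remote frame is serialized to a base64 string via canvas.toDataURL(), drawn as rows of monospace text, and then masked by the person-segmentation ("body-pix") model described below. Element ids, canvas sizes, and the body-pix 1.x call estimatePersonSegmentation are assumptions.

```javascript
// Sketch (illustrative names/sizes): remote frame -> base64 text,
// masked by the local user's silhouette. Assumes both video elements
// and the output canvas are 640x480.
const W = 640, H = 480;

// Offscreen canvas used only to serialize the remote frame to base64.
const frameCanvas = Object.assign(document.createElement('canvas'), { width: W, height: H });
const frameCtx = frameCanvas.getContext('2d');

// Visible output canvas: <canvas id="output" width="640" height="480">
const outCtx = document.getElementById('output').getContext('2d');

async function run(remoteVideo, localVideo) {
  const net = await bodyPix.load(); // tensorflow.js body-pix model

  async function draw() {
    // 1. Serialize the current remote frame to a base64 data URL.
    frameCtx.drawImage(remoteVideo, 0, 0, W, H);
    const base64 = frameCanvas.toDataURL('image/jpeg', 0.5);

    // 2. Draw the base64 string as rows of monospace text.
    outCtx.fillStyle = '#000';
    outCtx.fillRect(0, 0, W, H);
    outCtx.fillStyle = '#fff';
    outCtx.font = '10px monospace';
    for (let i = 0, y = 10; i < base64.length && y < H; i += 100, y += 10) {
      outCtx.fillText(base64.slice(i, i + 100), 0, y);
    }

    // 3. Keep text only where body-pix sees a person: the mask holds
    //    one 0/1 value per pixel, so make non-person pixels transparent.
    const seg = await net.estimatePersonSegmentation(localVideo);
    const img = outCtx.getImageData(0, 0, W, H);
    for (let p = 0; p < seg.data.length; p++) {
      if (seg.data[p] === 0) img.data[p * 4 + 3] = 0;
    }
    outCtx.putImageData(img, 0, 0);
    requestAnimationFrame(draw);
  }
  draw();
}
```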

Try it with two browser windows or two separate computers/phones: Peer to Peer Meditations

All of this runs on WebRTC, tensorflow.js, and node.js: peer.js handles the WebRTC peer connection via a remote PeerServer, a node.js backend automatically pairs each visitor with a random user in the peer network, and tensorflow.js runs person-segmentation (since renamed to “body-pix”) in the browser on the local user's webcam. All the canvas handling and pixel manipulation is done in plain JavaScript, as sketched above.
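
The pairing logic could look roughly like the sketch below. The PeerServer host and the /random-peer matchmaking route are assumptions standing in for whatever the node.js backend exposes to hand out the id of another connected user; everything else uses the standard peer.js calls.

```javascript
// Sketch of the peer.js side of the automatic pairing. Host names and
// the /random-peer route are hypothetical, not the project's endpoints.
async function connect() {
  const localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: false });
  const peer = new Peer({ host: 'peerserver.example.com', port: 9000, path: '/' });

  peer.on('open', async (id) => {
    // Ask the node.js backend for the id of a random connected user
    // (hypothetical matchmaking route).
    const res = await fetch(`/random-peer?me=${id}`);
    const { partnerId } = await res.json();
    if (partnerId) wireCall(peer.call(partnerId, localStream)); // we initiate
  });

  // The other side: answer an incoming call with our own webcam stream.
  peer.on('call', (call) => {
    call.answer(localStream);
    wireCall(call);
  });

  function wireCall(call) {
    call.on('stream', (remoteStream) => {
      // Feed the remote stream into the <video> element that the
      // base64/segmentation loop above reads from.
      document.getElementById('remote').srcObject = remoteStream;
    });
  }
}
```

Because the backend hands out a random partner rather than a chosen one, every session pairs two anonymous strangers, which is what gives the piece its sense of meditating with an unknown other.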

The result is a meditation experience reminiscent of the calming effect of watching static noise on a TV screen. Because it is peer-based, the user's body becomes a vessel for the blueprint and essential idea of something or somebody else, which hopefully creates a sense of connectedness and community while meditating.

While staring at a screen might not be everyone's preferred way of meditating, the idea is worth exploring from a futuristic perspective: once we can stream information directly onto our retinas, an overlaid, peer-based meditation experience might be worth considering.

Here is a short video walkthrough streaming video frames from my mobile phone camera as base64 code through my own silhouette (the latter is picked up by my laptop's camera and run through a body-segmentation algorithm). All web-based, with machine learning in the browser powered by tensorflow.js and WebRTC peer-to-peer streaming (code):