We are code, at least for a great part of our lives. How can we relate to this reality/non-reality with our bodies in the browser?
I experimented with TensorFlow.js and its person-segmentation model, running the smaller MobileNet architecture. I used multiple canvases to display the camera stream's base64 code in real time behind the silhouette of the person detected from the webcam. Even though I do pixel manipulation on two different canvases while running a TensorFlow.js model at the same time, it still runs relatively fast in the browser, although the frame rate is visibly lower than a regular stream with just the model.
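The core of the effect is the per-pixel compositing step. A minimal sketch of how it could work, assuming the segmentation model returns a binary mask with one 0/1 entry per pixel (as models like BodyPix do): person pixels keep the video frame's color, background pixels become transparent so the canvas behind (showing the base64 text) stays visible. The function name `maskFrame` and its exact signature are hypothetical, not from the project's source.

```javascript
// Sketch of the compositing step behind the silhouette effect.
// `frame` is RGBA pixel data (e.g. from ctx.getImageData(...).data),
// `mask` is a Uint8Array with one 0/1 entry per pixel, as returned by
// person-segmentation models such as BodyPix.
function maskFrame(frame, mask, width, height) {
  // Uint8ClampedArray is zero-initialized, so untouched pixels
  // default to fully transparent black.
  const out = new Uint8ClampedArray(frame.length);
  for (let i = 0; i < width * height; i++) {
    if (mask[i] === 1) {
      // Person pixel: copy RGBA from the video frame.
      out[i * 4] = frame[i * 4];
      out[i * 4 + 1] = frame[i * 4 + 1];
      out[i * 4 + 2] = frame[i * 4 + 2];
      out[i * 4 + 3] = 255;
    }
    // Background pixel: alpha stays 0, so the base64 text
    // canvas behind shows through.
  }
  return out;
}
```

The result can be written back with `ctx.putImageData(new ImageData(out, width, height), 0, 0)` on a canvas layered above the one rendering the base64 text.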
A brief screen recording:
Another version with a bigger screen: