LiveWeb / Machine Learning for the Web - Final Proposal

concept

For my final I want to start building an interactive, decentralized and web-based devotional space for human-machine hybrids.

background

The heart of the project is the talking stone, a piece I have already been working on over the last semester and showed at the ITP Spring Show 2018. Now I would like to iterate on it in a virtual space, with a built-in decentralized structure (a "community" backbone via the blockchain) and an interface geared towards future human entities.

system parts

The connection to universal randomness: a granite rock.

IMG_2739.JPG

Rough draft of user interface (website):

IMG_2936.jpg

And here is the full system as an overview (also a rough draft):

decentr_stone_schematics.png

project outline

Here is the basic setup: the rock is displayed on the website via sockets as a Base64 stream for logged-in users. Users have to buy “manna” tokens with Ethereum (in our case on the Rinkeby testnet blockchain); they then get an audio-visual stream of the rock's decaying particles to start their meditation/devotion. According to quantum theory, the rhythm of the decaying particles is truly random, so the core of this devotional practice of the future is listening to an inherent truth. Each “truth” session lasts 10 minutes.

I will have to see how “meditative” the random decay actually is. I remember it as pretty soothing to just listen to the particle decay being hammered into the rock with a solenoid, but I will have to find a more digital audio coloring for each decaying particle. That said, this piece is pure speculative design: I am interested in the future of devotional practice, especially for people who consider themselves non-religious. So trial and error is probably the way to go with this project as well.
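To make the data flow concrete, here is a minimal sketch of the relay I have in mind, assuming socket.io on the node.js server; the event names ('decay', 'frame') and the port are placeholders for illustration, not the final implementation:

```javascript
// server.js – sketch: relay decay ticks and camera frames from the rock to all clients
const http = require('http').createServer();
const io = require('socket.io')(http);

io.on('connection', (socket) => {
  // the Raspberry Pi at the rock emits these two events
  socket.on('decay', (timestamp) => {
    io.emit('decay', timestamp);          // one truly random Geiger tick for everybody
  });
  socket.on('frame', (base64Jpeg) => {
    io.emit('frame', base64Jpeg);         // the rock's camera image as a raw Base64 string
  });
});

http.listen(8080, () => console.log('truth relay listening on :8080'));
```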

If the user moves significantly away from the prescribed meditation pose (Easy Pose, Sukhasana) during those 10 minutes, tokens get automatically consumed. The same happens if the user does not open a new “truth” session within 24 hours of finishing the last one.
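A rough sketch of how that pose check could work with PoseNet in the browser; comparing against a reference snapshot and the pixel threshold are my own assumptions, not a finished design:

```javascript
// pose-drift.js – sketch: measure how far the user has moved from their initial Easy Pose
import * as posenet from '@tensorflow-models/posenet';

let referencePose = null;       // captured once the user has settled into the pose
const DRIFT_THRESHOLD = 60;     // average keypoint movement in pixels (assumed value)

async function watchPose(video, onDrift) {
  const net = await posenet.load();
  setInterval(async () => {
    const pose = await net.estimateSinglePose(video);
    if (!referencePose) { referencePose = pose; return; }
    // average distance between matching keypoints of the current and the reference pose
    const drift = pose.keypoints.reduce((sum, kp, i) => {
      const ref = referencePose.keypoints[i].position;
      return sum + Math.hypot(kp.position.x - ref.x, kp.position.y - ref.y);
    }, 0) / pose.keypoints.length;
    if (drift > DRIFT_THRESHOLD) onDrift(drift);   // e.g. trigger the token consumption
  }, 1000);
}
```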

On the screen, little dots show the other users who are currently in a session. The size of each dot changes according to how much time that user has left in their daily meditation practice.
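A small sketch of that display, assuming the server broadcasts an 'activeUsers' list with the minutes each user has left (both the event name and the payload shape are placeholders):

```javascript
// dots.js – sketch: one dot per active user, sized by remaining session time
const socket = io();
const canvas = document.querySelector('#users');
const ctx = canvas.getContext('2d');

socket.on('activeUsers', (users) => {            // e.g. [{ id: 'abc', minutesLeft: 7 }, ...]
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  users.forEach((user, i) => {
    const x = ((i + 1) * canvas.width) / (users.length + 1);
    const radius = 2 + user.minutesLeft;         // more time left = bigger dot
    ctx.beginPath();
    ctx.arc(x, canvas.height / 2, radius, 0, Math.PI * 2);
    ctx.fillStyle = '#fff';
    ctx.fill();
  });
});
```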

The goal of this experimental devotional space is to give users an audio stimulation that is scientifically true and therefore easier to identify with: in a sense, a random rhythm of the universe to meditate with. Because users buy into the practice with tokens, only dedicated users occupy the space, and their motivation to actually perform the meditation is higher since they have paid for it. They only have to pay again if they do not keep up the practice regularly or interrupt a session.
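Buying “manna” from the browser would probably go through MetaMask's injected provider, roughly like this; the contract address, ABI and the buyManna() function are hypothetical placeholders, not an actual contract:

```javascript
// manna.js – sketch: buy tokens on the Rinkeby testnet through MetaMask (web3.js 1.x)
const Web3 = require('web3');
const web3 = new Web3(window.ethereum);          // MetaMask's injected provider

// MANNA_ABI and MANNA_ADDRESS are placeholders for the deployed contract
const manna = new web3.eth.Contract(MANNA_ABI, MANNA_ADDRESS);

async function buyManna(tokens) {
  await window.ethereum.enable();                // ask MetaMask for account access
  const [account] = await web3.eth.getAccounts();
  // hypothetical payable function, priced here at 0.001 ETH per token
  return manna.methods.buyManna(tokens).send({
    from: account,
    value: web3.utils.toWei(String(tokens * 0.001), 'ether'),
  });
}
```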

The visuals will be dominated by code, as this is a devotional meeting place for future human-machine hybrids that find peace and solitude in true randomness (the opposite of their non-random existence).

tech specs

  • granite rock with a Geiger counter attached to it

  • raspberry pi with camera running sockets to display the rock and send random decay patterns (see the sketch after this list)

  • remote server running node.js and sockets

  • tensorflow.js running posenet

  • ethereum contract (solidity based) running on Rinkeby-Testnet (dev blockchain)

  • MetaMask chrome extension (to access blockchain from browser with a secure wallet)
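For the Raspberry Pi bullet above, reading the Geiger counter and forwarding each tick to the remote server could look roughly like this, assuming the counter's pulse output is wired to a GPIO pin (pin 17 is arbitrary) and the onoff and socket.io-client packages; the server URL is a placeholder:

```javascript
// pi-client.js – sketch: one GPIO interrupt per decay, forwarded to the remote server
const { Gpio } = require('onoff');
const io = require('socket.io-client');

const socket = io('https://truth-server.example.com');    // placeholder server URL
const geiger = new Gpio(17, 'in', 'rising');              // pulse pin from the Geiger board

geiger.watch((err) => {
  if (err) return console.error(err);
  socket.emit('decay', Date.now());                       // same 'decay' event as in the server sketch
});

process.on('SIGINT', () => { geiger.unexport(); process.exit(); });
```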

challenges

That sounds like a lot to do in six weeks, but I want to give it a try. I already experimented with blockchain and true randomness in two different projects last semester, and the current LiveWeb / Machine Learning for the Web classes seem like a great framework to give this a virtual and AI-guided setting. I am still a bit uncertain about the blockchain backbone, as this is the part where I feel the least at home at the moment. I only remember fragments of Solidity, web3.js and MetaMask; connecting all the layers together was tricky and the documentation sparse. Well, we’ll see!

LiveWeb: Midterm Project Idea

The rapid prototyping process we celebrate here at ITP (which can be really difficult to get used to, as I very much like to spend more time with a project before moving on to the next one) often has a great side effect: you go through a lot of different ideas, topics and digital narratives that could be told. Sometimes that means you end up finding a hidden gem that really sticks with you for a while.

Last semester it was the rock project and its devotional aspects that kept me occupied (and still do).

This semester I am fascinated by creating apps for human-machine hybrids of a probably not-so-distant future.

For the last Live-Web class I developed an image-chat app that shows bare Base64 chat images instead of decoded (human-readable) images. This means only a human-machine hybrid can fully enjoy the chat: a machine by itself is limited to the non-conscious decoding of data and can’t enjoy the visual image, while a human cannot enjoy it either, as the other participants are hidden behind a wall of code (Base64). Only a human with machine-like capabilities could fully participate and see who else is in the chat.

So far so good.

But the prototype was very rushed and lacks a few key features that are conceptually important:

  • real continuous live stream instead of image stream

  • hashing/range of number participants/pairing

  • audio

  • interface

  • further: reason to actually be in such a chat/topic/moderation

I would love to try to address these with the following questions for my midterm (and possibly for the final as well, as this seems to be a huge task):

  • can this be live-streamed with WebRTC (as code) instead of sending images every 3 seconds?

  • how, and by whom, can the stream be encrypted so that it is only readable by a select circle that possesses a key? Is the rock coming back as a possible (truly random) moderator?

  • what would the audio side sound like? Or is there something in the future that is kind of like audio, just a pure data stream that opens up new sounds? And what does data sound like?

  • how to compose the arrangement of interfaces for the screen?

  • which design aesthetics can be applied to pure code?

  • a bit further out there: how would a chat look/feel/sound in the future with human-machine hybrids? What lies beyond VR/AR/screen/mobile as interfaces?

Let’s get started!

Another iteration on this for Digital Fabrication:

IMG_2860.JPG


LiveWeb: MachineFaceTime

I created an image chat application that can only be fully used or seen by machines; alternatively, you need to be fluent in Base64 to see who you are connected to. Every three seconds an image from each chat participant is sent around via sockets and displayed. It is kept in Base64, the original data format for image encoding on the web, and shown in a very small font. The eye’s focus shifts to the code as an entity, which creates rhythmically restructuring patterns. The user ID is displayed on top of each received image.

To make it work I had to hide various elements on the page that still transmit data via the sockets. It also works on mobile, just in case you are in need of some calming Base64 visuals on the go.
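Stripped of the page layout, the mechanics are roughly this sketch (the event name, frame size and font size are re-created from memory rather than copied from the original code):

```javascript
// machinefacetime.js – sketch: send a Base64 snapshot every 3 seconds, show it as raw text
const socket = io();
const chat = document.querySelector('#chat');
const video = document.querySelector('#me');              // hidden <video> fed by getUserMedia
const canvas = document.createElement('canvas');
canvas.width = 120;
canvas.height = 90;
const ctx = canvas.getContext('2d');

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    socket.emit('image', { id: socket.id, data: canvas.toDataURL('image/jpeg', 0.5) });
  }, 3000);                                               // one frame every three seconds
});

socket.on('image', ({ id, data }) => {
  const block = document.createElement('pre');
  block.style.fontSize = '3px';                           // the wall of code, never decoded
  block.textContent = id + '\n' + data;                   // user id on top, raw Base64 below
  chat.prepend(block);
});
```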

(code)



ezgif.com-video-to-gif.gif


Live Web: Collective Drawing in Red and Black

I modified our example from class a little bit … and added a button to change the collective drawing color: it is either red or black, and everybody is forced to adapt to the new color but can choose the canvas they want to work in. As the canvas is split between a red and a black background, the “pen” can be used either as an obscurer (black on black / red on red) or as a highlighter (red on black / black on red). The results look strangely beautiful, as the dots seem to have depth or resemble a point cloud:

Screen Shot 2018-10-02 at 4.57.17 AM.png

… and here is some code. And more random drawings:

Screen Shot 2018-10-02 at 11.26.58 AM.png
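For reference, the shared color state behind the button is essentially this little bit of socket logic (the event names here are from memory, not the exact class example):

```javascript
// drawing.js – sketch: one shared pen color, forced on everyone, drawn as tiny dots
const socket = io();
const canvas = document.querySelector('#board');   // left half red, right half black background
const ctx = canvas.getContext('2d');
let penColor = 'red';                              // the current collective color

document.querySelector('#toggle').addEventListener('click', () => {
  penColor = penColor === 'red' ? 'black' : 'red';
  socket.emit('color', penColor);                  // everybody has to adapt
});
socket.on('color', (color) => { penColor = color; });

canvas.addEventListener('mousemove', (e) => {
  if (e.buttons !== 1) return;                     // only draw while the mouse is pressed
  socket.emit('draw', { x: e.offsetX, y: e.offsetY, color: penColor });
});
socket.on('draw', ({ x, y, color }) => {
  ctx.fillStyle = color;
  ctx.fillRect(x, y, 3, 3);                        // dots that end up reading like a point cloud
});
```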

Live_Web: Censor_Chat

I collaborated with Aazalea Vaseghi on this chat project: a self-correcting/censoring chat application that only allows positive sentiments. It runs sentiment analysis on each user input and matches the darkness of the message background accordingly. When a message is too negative, it gets censored with an entirely black background so that the negative message disappears. Code is on GitHub.
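The censoring rule itself is small; here is a minimal sketch, assuming the sentiment npm package and an assumed cutoff of -3 (the original project used its own analysis and values):

```javascript
// censor.js – sketch: darker background for darker messages, full blackout below a cutoff
const Sentiment = require('sentiment');
const sentiment = new Sentiment();

function styleMessage(text) {
  const { score } = sentiment.analyze(text);       // negative score = negative message
  if (score <= -3) {
    return { text, background: '#000', color: '#000' };  // censored: the message disappears
  }
  // map the score to a grey level: the more negative, the darker the background
  const grey = Math.max(0, Math.min(255, 200 + score * 20));
  return { text, background: `rgb(${grey},${grey},${grey})`, color: '#000' };
}
```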

 


Live_Web: Self Portrait Exercise

I used the HTML video playback-speed functionality to make the most out of a very short moment of my life: waking up in the morning a few days ago. The user can wake me up again and again; the moment is played back at random speeds, either extended or shortened, and finally looped.
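The whole interaction fits into a few lines of the HTML5 video API; the element IDs and the speed range below are a guess at what I ended up using:

```javascript
// wakeup.js – sketch: replay the same waking-up clip at a random speed on every click
const clip = document.querySelector('#wakeup');            // <video src="wakeup.mp4" loop>

document.querySelector('#wake-button').addEventListener('click', () => {
  clip.currentTime = 0;
  clip.playbackRate = 0.25 + Math.random() * 3.75;         // somewhere between 0.25x and 4x
  clip.play();                                             // the moment stretched, compressed, looped
});
```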

 
 

Here is the code (including the code sources that were used):

As part of our assignment we were also asked to look for interactive websites that offer live interactions to their users. I chose NTS.live, an online radio station with two live shows and the possibility to listen back to shows in an archive. The interesting interaction here is the live radio show broadcast over the internet: users can interact with the hosts via all the social networks, and these interactions are then retold/narrated/moderated by the show hosts. This is a highly filtered interaction, as not every communication between a user and a show host will be rebroadcast or commented on, but it is a very charming mix of the old-fashioned live radio show and social media.