LiveWeb: Midterm Project Idea

Quite often the rapid prototyping process that we celebrate here at ITP (and that can be really difficult to get used to, as I very much like to spend more time with a project before moving on to the next one …) has the great side effect that you go through a lot of different ideas, topics and digital narratives that can be told. Sometimes this means you end up finding a hidden gem that really sticks with you for a while.

Last semester it was the rock project and its devotional aspects that kept me occupied (and still does).

This semester I am fascinated by creating apps for human-machine hybrids of a probably not-so-distant future.

For the last Live-Web class I developed an image-chat app that shows bare Base64 chat images instead of decoded (human-readable) images. This means only a human-machine hybrid can fully enjoy the chat: a machine by itself is limited to the non-conscious decoding of data and can’t enjoy the visual image, and a human cannot enjoy it either, as the other participants are hidden behind a wall of code (Base64). Only a human with machine-like capabilities could fully participate and see who else is in the chat.

So far so good.

But the prototype was very rushed and still lacks a few key features that are conceptually important:

  • real continuous live stream instead of image stream

  • hashing / range of the number of participants / pairing

  • audio

  • interface

  • further: reason to actually be in such a chat/topic/moderation

I would love to try to address these with the following questions for my midterm (and possibly the final as well, as this seems to be a huge task):

  • can this be live-streamed with WebRTC (as code) instead of an image every 3 seconds? (see the sketch below this list)

  • how and by whom can the stream be encrypted so that it is only readable by a select circle that possesses a key? Is the rock coming back as a possible (true random) moderator?

  • what would the audio side sound like? Or is there something in the future that is kind of like audio, just a pure data stream that opens up new sounds? And what does data sound like?

  • how to compose the arrangement of interfaces for the screen?

  • which design aesthetics can be applied to pure code?

  • a bit further out there: how would a chat look/feel/sound in the future with human-machine hybrids? What lies beyond VR/AR/screen/mobile as interfaces?
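A speculative sketch of the first question: one possible direction would be an RTCDataChannel that carries raw Base64 frames continuously instead of a socket image every 3 seconds. Signaling (the offer/answer exchange) is assumed to happen elsewhere, e.g. over the existing socket connection, and all names below are illustrative, not a finished implementation:

    // speculative sketch: stream raw Base64 frames over a WebRTC data channel
    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel('base64-frames');

    const video = document.querySelector('video');    // hidden local camera feed
    const canvas = document.createElement('canvas');

    function sendFrame() {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      if (channel.bufferedAmount < 1000000) {          // crude back-pressure check
        channel.send(canvas.toDataURL('image/jpeg', 0.3));
      }
      requestAnimationFrame(sendFrame);                // continuous instead of every 3 s
    }

    channel.onopen = sendFrame;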

Let’s get started!

Another iteration on this for Digital Fabrication:

IMG_2860.JPG


LiveWeb: MachineFaceTime

I created an image chat application that can be fully used or seen only by machines - or you need to be fluent in Base64 to see who you are connected to. Every three seconds an image from each chat participant is captured and sent around via sockets; it is kept in the web’s original data format for image encoding, Base64, and shown in a very small font. The focus of the eye shifts to the code as an entity and creates rhythmically restructuring patterns. The user ID is displayed at the top along with each received image.

To make it work I had to hide various elements on the page that still transmit data via the sockets. It works on mobile as well, just in case you are in need of some calming Base64 visuals on the go.

(code)
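A minimal sketch of how this could look, assuming a socket.io server that simply re-broadcasts an 'image' event; the element ids and event names here are illustrative, not the actual project code:

    const socket = io();
    const video = document.getElementById('camera');   // hidden <video> element
    const canvas = document.createElement('canvas');
    const chat = document.getElementById('chat');       // tiny-font container

    navigator.mediaDevices.getUserMedia({ video: true })
      .then(stream => { video.srcObject = stream; video.play(); });

    // every three seconds: draw the current frame, keep it as Base64, send it out
    setInterval(() => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      socket.emit('image', { id: socket.id, data: canvas.toDataURL('image/jpeg', 0.5) });
    }, 3000);

    // show the raw Base64 string instead of decoding it back into an image
    socket.on('image', msg => {
      chat.textContent = msg.id + '\n' + msg.data;
    });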



ezgif.com-video-to-gif.gif


Live Web: Collective Drawing in Red and Black

I modified our example from class a little bit … and added a button to change the collective drawing color: it is either red or black, everybody is forced to adapt to the new color but can choose the canvas they want to work in. As the canvas is split between a red and a black background, the “pen” can be used either as an obscurer (black on black / red on red) or as a highlighter (red on black / black on red). The results look strangely beautiful, as the dots seem to have depth or resemble a point cloud:

Screen Shot 2018-10-02 at 4.57.17 AM.png

… and here is some code. And more random drawings:

Screen Shot 2018-10-02 at 11.26.58 AM.png
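A minimal sketch of the color-toggle idea, assuming a socket.io server that re-broadcasts 'draw' and 'color' events to everyone; the element ids and event names are illustrative:

    const socket = io();
    const canvas = document.getElementById('drawing');
    const ctx = canvas.getContext('2d');
    let color = 'red';                                   // shared pen color

    // the button flips the collective color for everybody
    document.getElementById('toggle').addEventListener('click', () => {
      socket.emit('color', color === 'red' ? 'black' : 'red');
    });
    socket.on('color', c => { color = c; });

    // every mouse move sends a dot; everyone draws with the same current color
    canvas.addEventListener('mousemove', e => {
      socket.emit('draw', { x: e.offsetX, y: e.offsetY });
    });
    socket.on('draw', p => {
      ctx.fillStyle = color;
      ctx.fillRect(p.x, p.y, 2, 2);
    });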

Live_Web: Censor_Chat

I collaborated with Aazalea Vaseghi on this chat project: a self-correcting/censoring chat application that only allows positive sentiments. It runs a sentiment analysis on each user input and matches the darkness of the message background accordingly. When a message is too negative, it gets censored with an entirely black background so that the negative message disappears. Code is on GitHub.
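A minimal sketch of the darkness mapping, assuming the AFINN-based 'sentiment' npm package (the project itself may use a different analyzer) and dark message text that vanishes once the background turns fully black:

    const Sentiment = require('sentiment');
    const sentiment = new Sentiment();

    // map a message's sentiment to a grey background: positive stays light,
    // negative gets darker, very negative turns fully black and disappears
    function backgroundFor(message) {
      const { comparative } = sentiment.analyze(message);        // roughly -5 … +5 per token
      if (comparative <= -1) return 'rgb(0, 0, 0)';               // censored
      const darkness = Math.min(Math.max(-comparative, 0), 1);
      const value = Math.round(255 * (1 - darkness));
      return 'rgb(' + value + ', ' + value + ', ' + value + ')';
    }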

 


Live_Web: Self Portrait Exercise

I used the video playback-speed functionality in HTML to make the most out of a very short moment in time of my life - waking up in the morning a few days ago. The user can wake me up again and again; the moment is played back at random speeds, either extended or shortened, and finally looped.
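A minimal sketch of the playback trick, assuming a <video> element and a button with the illustrative ids 'portrait' and 'wake' (the actual code and its sources are referenced below):

    const video = document.getElementById('portrait');
    const wakeUp = document.getElementById('wake');

    // every click replays the waking-up moment at a new random speed
    wakeUp.addEventListener('click', () => {
      video.currentTime = 0;
      video.playbackRate = 0.25 + Math.random() * 2;   // stretch or compress the moment
      video.loop = true;
      video.play();
    });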

 
 

Here is the code (including the code sources that were used):

As part of our assignment we were also asked to look for interactive websites that offer live interactions to their users. I chose NTS.live, an online radio station with two live shows and the possibility to listen back to shows in an archive. The interesting interaction here is the live radio show that is broadcast over the internet: users can interact with the hosts via all the social networks, and these interactions are then re-told/narrated/moderated by the show hosts. This is a highly filtered interaction, as not every communication between a user and a show host will be re-broadcast or commented on - but it is a very charming mix of the old-fashioned live radio show and social media.