LiveWeb: Midterm Project Idea

Quite often, the rapid prototyping process we celebrate here at ITP (which can be really difficult to get used to, as I very much like to spend more time with a project before moving on to the next one …) has the great side effect of taking you through a lot of different ideas, topics, and digital narratives that could be told. This sometimes means you end up finding a hidden gem that really sticks with you for a while.

Last semester it was the rock project and its devotional aspects that kept me occupied (and still do).

This semester I am fascinated by creating apps for the human-machine hybrids of a probably not-so-distant future.

For the last Live-Web class I developed an image-chat app that shows bare Base64 chat images instead of decoded (human-readable) images. This means only a human-machine hybrid can fully enjoy the chat: a machine itself is limited to the non-conscious decoding of data and can’t enjoy the visual image; a human cannot enjoy it either, as the other participants are hidden behind a wall of code (Base64). Only a human with machine-like capabilities could fully participate and see who else is in the chat.
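
To make that concrete, the core of the prototype could be sketched roughly like this (a minimal, hypothetical sketch rather than the actual code: it assumes a socket.io connection, a webcam <video> element, and a <pre> element for output, and the "frame" event name is made up):

    import { io } from "socket.io-client";

    const socket = io();                              // assumed chat server
    const video = document.querySelector("video")!;   // local webcam preview
    const output = document.querySelector("pre")!;    // where the raw Base64 is shown
    const canvas = document.createElement("canvas");

    // every 3 seconds: grab a webcam frame, encode it as Base64, broadcast it
    setInterval(() => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext("2d")!.drawImage(video, 0, 0);
      socket.emit("frame", canvas.toDataURL("image/jpeg", 0.5));
    }, 3000);

    // on receipt, show the raw Base64 string itself instead of decoding it
    // back into an <img>, so the other participants stay hidden behind the code
    socket.on("frame", (base64Frame: string) => {
      output.textContent = base64Frame;
    });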

So far so good.

But the prototype was very rushed and lacks a few key features that are conceptually important:

  • a real, continuous live stream instead of an image stream

  • hashing / a range for the number of participants / pairing

  • audio

  • interface

  • further: a reason to actually be in such a chat / a topic / moderation

I would love to try to address these with the following questions for my midterm (and possibly the final as well, as this seems to be a huge task):

  • can this be live-streamed with WebRTC (as code) instead of sending images every 3 seconds?

  • how, and by whom, can the stream be encrypted so that it is only readable by a select circle that possesses a key (a rough sketch of one possible approach follows after this list)? is the rock coming back as a possible (true random) moderator?

  • how would the audio side sound? or is there something in the future that is kind of like audio, just a pure data stream that opens up new sounds? and what does data sound like?

  • how should I compose the arrangement of interfaces on the screen?

  • which design aesthetics can be applied to pure code?

  • a bit further out there: how would a chat look/feel/sound in the future with human-machine hybrids? what lies beyond VR/AR/screen/mobile as interfaces?
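
For the key question above, one possible direction (purely a sketch, not a decision) would be to encrypt every Base64 frame with a shared AES-GCM key through the browser's Web Crypto API before sending it, so only the circle that holds the key can ever turn the stream back into readable data. Assuming the raw key is handed out out of band (maybe by the rock as moderator), the helper functions below are all hypothetical:

    const encoder = new TextEncoder();
    const decoder = new TextDecoder();

    // import a raw key shared out of band with the select circle
    async function importSharedKey(rawKey: ArrayBuffer): Promise<CryptoKey> {
      return crypto.subtle.importKey("raw", rawKey, "AES-GCM", false, ["encrypt", "decrypt"]);
    }

    // encrypt one Base64 frame; a fresh IV per frame is sent along (the IV is not secret)
    async function encryptFrame(key: CryptoKey, base64Frame: string) {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv }, key, encoder.encode(base64Frame));
      return { iv, ciphertext };
    }

    // only holders of the shared key can recover the Base64 string
    async function decryptFrame(key: CryptoKey, iv: Uint8Array, ciphertext: ArrayBuffer) {
      const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
      return decoder.decode(plaintext);
    }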

Let’s get started!

Another iteration on this for Digital Fabrication:

[image: IMG_2860.JPG]