After play-testing the original concept, my focus for my PComp final has changed a lot - the project now looks entirely different.
Two things came to mind after the feedback session:
- Is this a project that engages the audience in a direct interaction?
- Is this a project that engages me in direct interaction with somebody else while building it?
The answer in both cases tends towards a "no": I work on this pretty much solo, and there is no direct interaction with the audience. While this is still a PComp project, I thought about switching the topic completely: I would like it to be interactive and to build it in collaboration with somebody else. I think I can learn more from that experience for my studies than from flying solo with a non-interactive piece - no matter how great it is.
A Finished Project
In my cuneiform project I realized that my original plan - laser-cutting the neural-network-generated tablets - was conceptually the strongest: the journey that began with human-made cuneiform on clay tablets would be completed by a contemporary physical object that is also an artifact of the digital, machine-generated cuneiform. Both versions of the cuneiform are now physical manifestations of dreams - one human, the other artificial. Cuneiform was never more alive than now.
Finishing the project with this physical representation of the neural network's output made much more sense than trying to squeeze in some human interaction or intervention.
Here are a few results:
I spoke to my classmates Antony and Brandon about my latest insights and offered to collaborate on a project. We came up with the idea of an oversized (bigger-than-human) knob situated on the 4th floor that adjusts the brightness of a tiny LED installed in the Tisch window downstairs. The audience member has to use her full body to turn the knob. While this interaction requires a lot of effort, the result is minuscule: the interaction itself becomes the main focus. It's a classic PComp interaction in an entirely different setting. We would use serial communication over the local WiFi network for this. Since Brandon is already involved in another project, he would help with the fabrication part; the core team would be Antony and me.
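As a rough illustration of the knob-to-LED link, here is a minimal Python sketch of the mapping we have in mind: a 10-bit knob reading scaled down to an 8-bit PWM brightness and packed into a one-byte payload for the WiFi link. All names and ranges here are assumptions for illustration, not the actual implementation:

```python
KNOB_MAX = 1023  # assumed 10-bit reading from the giant knob's sensor
PWM_MAX = 255    # 8-bit PWM range driving the tiny LED

def knob_to_brightness(reading: int) -> int:
    """Map a raw knob reading (0-1023) to an LED PWM value (0-255)."""
    reading = max(0, min(KNOB_MAX, reading))  # clamp out-of-range / noisy readings
    return reading * PWM_MAX // KNOB_MAX

def encode_packet(reading: int) -> bytes:
    """Pack the brightness into a single byte, ready to send over the WiFi link."""
    return bytes([knob_to_brightness(reading)])
```

On the receiving side, a microcontroller would simply unpack that byte and write it straight to the LED's PWM pin.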
While thinking about this more over the past few days, I tweaked the idea a bit and came up with a possible modification / iteration:
What if the big knob were a piece of furniture for a few people to sit on? What if the output were not an LED but a live audio broadcast of the conversation in the room - the knob would regulate how "public" it is by mixing in more or less audio noise (text-to-speech renderings of the latest tweets)? The audience's focus would balance between the object itself (a giant knob to sit or lounge on) and the question of how public we want to make our conversations. The Twitter "noise" should highlight the fragmentation of conversations online - if we make something public, how public is it when everybody is public? By focusing on audio, the audience would not be distracted by playing with a video feed; the visual stimulus comes from the physical object of the giant furniture knob instead. We would use WebRTC to stream the audio to a webpage.
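The knob-controlled mix could be as simple as a linear crossfade between the two audio streams. Here is a hypothetical Python sketch, assuming both streams arrive as normalized float samples and the knob position is already mapped to a value between 0 and 1 (all function and parameter names are illustrative):

```python
def mix_streams(conversation, noise, knob):
    """Crossfade between the live conversation and the Twitter TTS noise.

    knob: 0.0 = fully public (clear conversation audio),
          1.0 = fully masked by the tweet noise.
    Samples are assumed to be normalized floats in [-1.0, 1.0].
    """
    knob = max(0.0, min(1.0, knob))  # clamp the knob position
    return [(1.0 - knob) * c + knob * n for c, n in zip(conversation, noise)]
```

In the real installation this mixing would happen before the stream is handed to WebRTC, so listeners on the webpage only ever hear the already-blended signal.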