“In another reality I am half-human, half-machine. I can read Base64 and see you.”
Kim and I created self-portraits for the future.
The self-portraits are Base64 representations of images taken with a webcam. The ideal viewer is a human-machine hybrid, a cyborg capable of both decoding the etched Base64 and recognizing the human element of the artifact.
We went through several iterations with felt, honey, and rituals. Our initial idea of capturing a moment in time and etching or cutting it into an artifact morphed into an exploration of laser-cut materials combined with honey. We were interested in the symbolic meaning of honey as a natural healing material. Our goal was to incorporate it into a ritual to heal our digital selves, represented by etched Base64 portraits taken with a webcam and encoded. We chose felt because it is made of wool fibers that are not woven but stick to each other through heat and pressure; this chaotic structure seemed a great fit for honey and clean code.
Soon we dropped the idea of using felt, as it seemed too much of a conceptual add-on, and reduced the piece to honey and digital etchings: the healing would be applied directly to the material in a ritual.
After gathering feedback from peers and discussing the ritualistic meaning, we struggled to justify the honey as well: most of the people we talked to preferred the idea that only human-machine hybrids of a near or far future would technically be able to see the real human portrait behind the code. After a few discussions about both variations of our concept, each dealing with digital identities and moments in time, we favored the latter one and dropped the honey.
So we finally settled on etching timeless digital code onto a physical medium that ages over time: self-portraits for the future. Maybe we will look back at them in 20 years and see, through the code, our own selves from back in the day?
A little bit about the technical process: the image is taken with a Node.js chat app that I created for LiveWeb. It takes a picture with the user's webcam every three seconds and shows the Base64 code on the page, again an example of an interface for the human-machine hybrids or cyborgs of the future.
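The app itself is not reproduced here, but its core step, turning a captured frame into Base64 text, can be sketched in Node.js. In the real app the frame bytes would come from the browser (typically via getUserMedia and a canvas); in this sketch a tiny stand-in buffer takes their place, and all variable names are illustrative rather than taken from the actual code:

```javascript
// Minimal sketch of the Base64 step. "frameBytes" stands in for a webcam
// frame that the browser would capture and send to the server.
const frameBytes = Buffer.from([0xff, 0xd8, 0xff, 0xe0]); // first bytes of a JPEG, as a stand-in

// Encode the frame to Base64 — this text is what gets etched.
const base64Portrait = frameBytes.toString('base64');
console.log(base64Portrait); // "/9j/4A=="

// The future cyborg viewer would simply reverse the step:
const decoded = Buffer.from(base64Portrait, 'base64');
console.log(decoded.equals(frameBytes)); // true
```

The familiar `/9j/` prefix is what every Base64-encoded JPEG starts with, which is also the first clue a human reader has that a portrait hides in the wall of characters.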
After taking portraits with my web app, we copied the code for each of us into macOS TextEdit, exported it as a PDF, pasted its contents into Adobe Illustrator, and converted the type to outlines. We set it in Helvetica, which stays legible even at very small sizes; since our laser computer did not have Helvetica installed, we had to outline all the letters anyway.
The files were very large, as the portraits varied between 160,000 and 180,000 characters.
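Those character counts follow directly from Base64's 4:3 expansion: every 3 bytes of image data become 4 characters, so a 160,000-character portrait corresponds to roughly 120 KB of JPEG data. A quick sanity check (a sketch for illustration, not part of the original app):

```javascript
// Base64 encodes every 3 input bytes as 4 output characters (padded).
const base64Length = (byteCount) => Math.ceil(byteCount / 3) * 4;

// Roughly how many image bytes are behind a 160,000-character portrait?
const imageBytes = Math.floor(160000 * 3 / 4);
console.log(imageBytes);           // 120000 bytes, about 120 KB
console.log(base64Length(120000)); // 160000 characters
```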
These digital preparations were followed by two days of test etchings on Crescent medium gray and Crescent black mounting boards, and experiments with different laser settings (speed, power, and focus).
Getting the focus right proved to be difficult: the laser beam seems to weaken as it travels toward the right side of the bed, which made the etching illegible on that side while the far left looked fine. Manually focusing slightly deeper into the material fixed this on the white cardboard, but on the black cardboard the fonts then looked blurry on the left side because the laser etched too deep. Finding a focus that fit both sides equally well took a long time and a lot of trial and error.
Once we got the focus right, we started with the final etchings: speed 90, power 55, 600 dpi, and a focus slightly closer to the material than usual turned out the best results on our 75-watt laser.
Each portrait took 1 hour and 33 minutes to etch; in total we completed four portraits.
We see them as a starting point for further iterations, as "exploration stage one": the concept of using code and a laser to preserve personal digital moments in time as physical artifacts for the future is very appealing to us.
We will probably keep iterating on our portraits for the future for the final.