Digital Fabrication Midterm: Future Portraits

“In another reality I am half-human, half-machine. I can read Base64 and see you.”

final iteration

Kim and I created self-portraits for the future.

concept

The self-portraits are Base64 representations of images taken by a web-camera. The ideal viewer is a human-machine hybrid/cyborg that is capable of both decoding etched Base64 and recognizing the human element of the artifact.

process

After going through several iterations with felt, honey and rituals, we finally settled on etching timeless digital code onto a physical medium that ages over time.

The image is taken with a Node.js chat app that I created for LiveWeb: it takes a picture with the user's webcam every three seconds and shows the Base64 code on the page - again an example of an interface for human-machine hybrids or cyborgs of the future.
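For anyone curious what that etched text actually is: the app itself is a Node.js/browser piece, but as a minimal illustration in Python, this is roughly how an image file becomes the Base64 text (the file name portrait.jpg is just a stand-in):

import base64

# Minimal sketch (not the app's actual code): turn a saved webcam frame
# into the kind of Base64 text that ends up etched onto the board.
# "portrait.jpg" is a hypothetical file name.
with open("portrait.jpg", "rb") as f:
    b64_text = base64.b64encode(f.read()).decode("ascii")

# Browsers usually wrap this in a data URL, which is what the web app displays.
data_url = "data:image/jpeg;base64," + b64_text

print(len(b64_text), "characters")  # our portraits ran to roughly 160,000-180,000
print(b64_text[:80])                # the first 80 characters of the wall of code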

machine_facetime.gif

After taking portraits with my web app, we copied the code for each of us into macOS TextEdit, exported it as a PDF, pasted its contents into Adobe Illustrator and outlined the type in Helvetica. We chose Helvetica because it stays legible even at very small sizes. The laser computer did not have Helvetica installed, which is why we outlined all letters.

fullsizeoutput_310.jpeg

The files were very large, as the portraits varied between 160,000 and 180,000 characters. After two days of test etchings on Crescent medium gray and Crescent black mounting boards and experiments with different laser settings (speed, power and focus), we started the final etchings: speed 90, power 55, 600 dpi and a focus slightly closer to the material than usual produced the best results on our 75-watt laser. Each portrait took 1 hour 33 minutes to etch; in total we completed four portraits.

IMG_6999.JPG
IMG_2887.JPG
IMG_2864.JPG
IMG_7004.JPG
IMG_6998.JPG

LiveWeb: Midterm Project Idea

The rapid prototyping process that we celebrate here at ITP (and that can be really difficult to get used to, as I much prefer spending more time with a project before moving on to the next one …) has a great side effect: you go through a lot of different ideas, topics and digital narratives that could be told. Sometimes this means you end up finding a hidden gem that really sticks with you for a while.

Last semester it was the rock-project and its devotional aspects that kept me occupied (and still does).

This semester I am fascinated by creating apps for human-machine hybrids for a probably not so distant future.

For the last Live-Web class I developed an image-chat app that shows bare Base64 chat images instead of decoded (human-readable) images. This means only a human-machine hybrid can fully enjoy the chat: a machine by itself is limited to the non-conscious decoding of data and can't enjoy the visual image, while a human can't enjoy it either, as the other participants are hidden behind a wall of code (Base64). Only a human with machine-like capabilities could fully participate and see who else is in the chat.

So far so good.

But the prototype was very rushed and still lacks a few key features that are conceptually important:

  • real continuous live stream instead of image stream

  • hashing / range of the number of participants / pairing

  • audio

  • interface

  • further: reason to actually be in such a chat/topic/moderation

I would love to address these with the following questions for my midterm (and possibly the final as well, as this seems to be a huge task):

  • can this be live-streamed with webRTC (as code) instead of images every 3 seconds?

  • how and by whom can the stream be encoded/encrypted so that it is only readable by a select circle that possesses a key? Is the rock coming back as a possible (true random) moderator?

  • what would the audio side sound like? Or is there something in the future that is kind of like audio, just a pure data stream that opens up new sounds? And what does data sound like?

  • how to compose the arrangement of interfaces for the screen?

  • which design aesthetics can be applied to pure code?

  • a bit further out there: what would a chat look/feel/sound like in the future with human-machine hybrids? What lies beyond VR/AR/screen/mobile as interfaces?

Let’s get started!

Another iteration on this for Digital Fabrication:

IMG_2860.JPG


Digital Fabrication: Midterm Iterations on Healing, Honey and Felt

“In your backbone you feel a pointed something and it works its way up. The base of your spine is tingling, tingling, tingling, tingling. Then n|om makes your thoughts nothing in your head”

[Kxao ≠Oah - a healer from |Kae|kae area, quoted in Biesele, Katz & St Denis 1997:19]

(taken from JU|’HOANSI HEALING SONGS on NTS-radio)

IMG_6856.jpg



Screen Shot 2018-10-11 at 12.28.37 AM.png

For our midterm in Digital Fabrication, we iterated on the idea of swarm-based behavior. After reading more about Joseph Beuys' use of felt and honey in his Fluxus performances of the late 60s and about the JU|'HOANSI tribe and their use of healing through dance, we shifted our focus to the idea of a healing ritual for the digital age.

We are thinking about making a piece that uses three components in an interactive installation:

  • an image in Base64 code on paper (used for image encoding on the web)

  • felt with a perforated structure

  • honey

As shown in the image above, a depiction of the artists in a waterfall-like silhouette made from Base64 encoding is printed on paper. This is the foundation of the sculpture: a reflection of ourselves that we want to heal. Above the paper is a layer of felt cut into strips and shaped into a web. It filters the honey dripping from above, which acts as part of the healing ritual: Beuys saw honey as a healing material because it is gathered by bees, which to him represented a "peaceful" entity.

A coincidence we discovered while laser-cutting felt: it smells like honey afterwards.

IMG_2852.JPG

LiveWeb: MachineFaceTime

I created an image chat application that can be fully used or seen only by machines - or you need to be fluent in Base64 to see who you are connected to. Every three seconds an image from each chat participant is sent around via sockets and displayed. It is kept in Base64, the original data format for image encoding on the web, and shown in a very small font. The eye's focus shifts to the code as an entity, creating rhythmically restructuring patterns. The user ID is displayed on top of each received image.

To make it work I had to hide various elements on the page that still transmit data via the sockets. It also works on mobile, just in case you need some calming Base64 visuals on the go.
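The actual app is a Node.js build (linked below), so purely as an illustration of the relay idea, here is a minimal sketch in Python, assuming the third-party websockets package (version 10.1 or newer): clients send Base64-encoded frames, and the server forwards them, undecoded, to everyone else.

import asyncio
import websockets  # assumption: third-party "websockets" package, v10.1+

CLIENTS = set()

async def relay(ws):
    # Each client sends a Base64-encoded webcam frame every few seconds;
    # we forward it, untouched and undecoded, to all other connected clients.
    CLIENTS.add(ws)
    try:
        async for frame_b64 in ws:
            for other in CLIENTS - {ws}:  # snapshot of the set, minus the sender
                try:
                    await other.send(frame_b64)
                except websockets.ConnectionClosed:
                    pass  # that client dropped; its own handler cleans up
    finally:
        CLIENTS.discard(ws)

async def main():
    async with websockets.serve(relay, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())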

(code)



ezgif.com-video-to-gif.gif


Live Web: Collective Drawing in Red and Black

I modified our example from class a little bit … and added a button to change the collective drawing color: it is either red or black, and everybody is forced to adapt to the new color but can choose the canvas they want to work in. As the canvas is split between a red and a black background, the "pen" can be used either as an obscurer (black on black / red on red) or as a highlighter (red on black / black on red). The results look strangely beautiful, as the dots seem to have depth or resemble a point cloud:

Screen Shot 2018-10-02 at 4.57.17 AM.png

… and here is some code, and some more random drawings:

Screen Shot 2018-10-02 at 11.26.58 AM.png

Understanding Networks: Traceroute

assignment

Traceroute at least three of the sites you regularly visit. Do it from all of the locations you regularly connect from. Save the trace in a file, and make a map of the routes, indicating the network providers that show up every time. Identify who the major tier 1 providers are in your life. Identify the networks your traffic crosses in the course of your daily life. Figure out whose hands the data about your life goes through on a regular basis. Look for patterns from your network-browsing habits through analysis and graphing of your network traces.

setup

I traced four websites that I frequently visit: my own homepage (registered with Squarespace in Europe), nts.live, NYU and Stackoverflow.

I chose two networks/locations for tracing that I spend most of my time at:

  • university wifi (+ mobile network comparison in same location), at Washington Square/Manhattan

  • home wifi (+ mobile network comparison in same location) in Brooklyn

I wrote two custom tracerouting tools to automate the process as far as possible:

  • tracer.py: combines traceroute with an ISP lookup/GeoIP query for a given list of URLs and returns a JSON text file (a minimal sketch of this approach is shown after this list)

  • mapper.ipynb: a matplotlib-based tool that uses Basemap to create traceroute maps from the JSON text file
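My tracer.py is not reproduced here, but the core idea can be sketched roughly like this, assuming the system traceroute command, the requests package and the free ip-api.com lookup endpoint (hop parsing and field names are simplified):

import json
import re
import subprocess

import requests  # assumption: used for the ISP/GeoIP lookup via ip-api.com

URLS = ["nts.live", "stackoverflow.com"]  # hypothetical example targets

def trace(host):
    """Run the system traceroute (numeric output) and return the hop IPs in order."""
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    hops = []
    for line in out.splitlines()[1:]:             # skip the header line
        m = re.match(r"\s*\d+\s+([\d.]+)", line)  # "<hop>  <ip>  <rtt> ms ..."
        if m:
            hops.append(m.group(1))
    return hops

def lookup(ip):
    """Ask ip-api.com (free endpoint) for the ISP and coordinates of one hop."""
    r = requests.get(f"http://ip-api.com/json/{ip}", timeout=5).json()
    return {"ip": ip, "isp": r.get("isp"), "country": r.get("country"),
            "lat": r.get("lat"), "lon": r.get("lon")}

if __name__ == "__main__":
    traces = {url: [lookup(ip) for ip in trace(url)] for url in URLS}
    with open("traces.json", "w") as f:  # the mapping notebook reads this file
        json.dump(traces, f, indent=2)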

server footprint from home

 
 

I live in Brooklyn. This is where I spend the few hours that I am not on the floor at ITP.

Looking at the footprints, I observed that connecting to my homepage (which I registered in Europe with Squarespace) from my home wifi causes the trace to bounce back and forth between the EU and the US - until finally ending in the US - and the routes my signal travels on are owned by Time Warner Cable and Akamai Technologies:

The route looks similar when using a mobile connection, but with different providers: Zayo Bandwidth and Level 3 Communications stick out, and Akamai Technologies comes up again as the ISP in Europe:

Looking at another footprint (nts.live), Time Warner Cable dominates the servers in the US:

The same trace on mobile again highlights Zayo Bandwidth for mobile networks:

Connecting to NYU and Stackoverflow does not yield very interesting results; both stay more or less close to, or entirely within, New York. The only strange behavior comes from trying to traceroute Stackoverflow on mobile - the trace is not allowed through and gets stuck in a local private network (in all locations).

Here is the connection to Stackoverflow via wifi, which travels to Denver and then comes back to New York:

 

server footprint from ITP/NYU

 

ITP/NYU is situated at Washington Square in Manhattan. Here I spend most of my time, logged into NYU wifi or the mobile network.

Comparing the traces of the two wifi networks (home and university), the paths through the servers look different for my homepage - in the US the network provider is not clearly identifiable, and NYU provides the last visible hop before the trace goes to Europe, where GTT Communications comes up as a provider:

The trace for nts.live shows a connection to Europe that does not appear in the map from my home wifi; the network provider that pops up is Hibernia Networks Netherlands. Why? It might have to do with NTS having multiple IP addresses - maybe that server was easier to reach from NYU. I can only speculate at the moment. Anyway, here is the map, accessed from NYU wifi:

On mobile the connection stays in the US (and again in the hands of Zayo Bandwidth as the ISP):

Takeaways

To make it short - this is all very interesting! My online life is in the hands of very few network providers; they vary depending on which type of network I am connected to, and the routes sometimes vary substantially - the detours to Europe are not always explainable to me. I really enjoyed understanding much more about this physical layer of the internet and how every request from a laptop or phone travels around the globe at - literally - light speed.

I thoroughly enjoyed building custom tools in Python for this and diving a little into data visualization with matplotlib and Basemap - although I encountered quite a few challenges along the way: nested dictionaries are great but need a logical setup, and building tools takes far more time than actually collecting and visualizing the data.

Let's finish this blog post with a screenshot of parts of the JSON data (a little bit obscured):

Digital Fabrication: Felt_Laser Explorations

Our assignment this week took us into the unknown lands of laser-cutting (unknown at least for us …): I teamed up with Kimberly Lin and we went to Mood Fabrics in the NY fashion district to buy different shades of grey felt in a thicker quality (around 1/8 inch).

 

We opted for grey felt because of its iconic role in sculpture and art installations of the 20th century: Robert Morris, Joseph Beuys and Bianca Pratorious were our main inspirations for the choice of material. We were curious about how the material could be cut with the laser - and how the digital process could alter the artistic output and execution. So we decided to play!

IMG_2831.JPG
IMG_6596.JPG
 

We used Vectorworks to create simple slots and lines for our first prototype. After copying it into Adobe Illustrator we started laser-cutting - and to our surprise the felt turned out to be a great choice: it has a certain sturdiness and structural integrity that helps it maintain its form, while still allowing a lot of movement and room for re-shaping the object. The laser also cuts it quickly, efficiently and precisely. We did three rounds of cutting (to keep the material from burning too much) at conservative settings: 500 Hz frequency, speed 30 and power 10 worked well on the 60-watt laser.

We prototyped different shapes and arrangements of the fabric pieces as we plan to build a large-scale kinetic sculpture later in the semester.

IMG_6624.jpg
IMG_6620.jpg
IMG_6690.jpg
IMG_6677.jpg

After playing with different combinations we decided to keep a self-standing organic structure for now, and later experimented with how it would behave in motion.

Digital Fabrication: 2-D Object Drawing

I chose the Teenage Engineering OP-1 synthesizer for my Vectorworks object drawing assignment.

IMG_2818.JPG
 

It was challenging to get all the measurements correctly into the drawing, and after a while I became a bit detail-obsessed - Vectorworks gives you the opportunity to be very exact:

Screen Shot 2018-09-19 at 3.26.36 AM.png

It took me quite a while to get used to the basic tools in Vectorworks - but it was great fun, very meditative.

Here is my measured drawing:

op1_vectorworks1.png
 

And because it is such a beautiful object, here it is without measurements:

Live_Web: Censor_Chat

I collaborated with Aazalea Vaseghi on this chat project: a self-correcting/censoring chat application that only allows positive sentiments. It runs sentiment analysis on each user input and matches the darkness of the message background accordingly. When a message is too negative, it is censored with an entirely black background so that the negative message disappears. Code is on GitHub.
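The app itself is a node.js Live Web project (the actual code is on GitHub, as noted above); purely as an illustration of the scoring-to-darkness idea, here is a sketch in Python that assumes the third-party vaderSentiment package - the threshold and grayscale mapping are illustrative choices, not the values we used:

# Sketch of the censoring logic, assuming the "vaderSentiment" package.
# The threshold and grayscale mapping below are illustrative choices.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
CENSOR_THRESHOLD = -0.4  # compound scores range from -1 (negative) to +1 (positive)

def background_for(message):
    """Return a CSS-style grayscale background for a chat message."""
    compound = analyzer.polarity_scores(message)["compound"]
    if compound <= CENSOR_THRESHOLD:
        return "rgb(0, 0, 0)"  # too negative: fully black, the message disappears
    # Map [-1, 1] to a gray value: more negative -> darker background.
    gray = int((compound + 1) / 2 * 255)
    return f"rgb({gray}, {gray}, {gray})"

for text in ["What a lovely drawing!", "This is terrible and I hate it."]:
    print(text, "->", background_for(text))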

 


Summer Internship 2018 Recap: Havas New York

For my 2018 summer internship I worked at the advertising agency Havas New York as an AI researcher on the creative team for a corporate client: for two and a half months I explored machine learning techniques for rapid prototyping with existing neural networks.

As this was client-based work, I will add the results to my portfolio once they are publicly available.

As part of my internship I also worked as a creative technologist on an intern project for a non-profit.

I thoroughly enjoyed my work at Havas New York; it was a great learning experience that helped me clarify my focus for the next year in school: I want to explore machine learning on a deeper level, clearly understand the math behind it and start building my own models and networks.

I am currently enrolled in the Udacity course "AI programming in Python", which covers the math basics and combines them with the necessary Python libraries to build networks from scratch at a lower programming level. I hope this course will reinforce my existing Python knowledge, help me understand neural networks from a mathematical perspective and give me a solid grasp of the model mechanics when using higher-level APIs in TensorFlow or PyTorch.

Thanks Havas New York for giving me such a great opportunity to use my machine learning skills in an agency landscape and especially Ali Madad, Marc Maleh, Marc Blanchard, Joseph Delhommer and Nick Elliott for their guidance and mentorship.

Understanding Networks: Ball Drop Game - Controller

I created a game controller on the Raspberry Pi Zero W, using sockets in Python and the GPIOs connected to switches. I chose this setup because the Pi Zero offers a terminal for shell commands, can be accessed headless via ssh and can run Python scripts. It is also very portable and can be mounted on a small controller.

The connection to the game server via sockets was very smooth and reliable, and the GPIO connection with the switches worked well - after quite a few hiccups regarding the wiring of the switches: I initially wired all switches directly into one power source without separating them with resistors. As a result, all switches fired together once one of them was triggered. After a few hours of hardware debugging (and de-soldering …), I mounted a resistor between each power connection and its switch, and everything worked smoothly.

Finally, I added an LED to indicate a successful socket connection to the server.

 

Here is the code for the socket connection and GPIO wiring in Python:

import RPi.GPIO as GPIO  # Raspberry Pi GPIO library
import socket
import time

# TCP socket connection to the game server
# (placeholders: fill in the actual server IP and port)
SERVER_IP = 'Server-IP'
SERVER_PORT = 0
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((SERVER_IP, SERVER_PORT))

GPIO.setwarnings(False)   # ignore warnings for now
GPIO.setmode(GPIO.BOARD)  # use physical pin numbering
GPIO.setup(35, GPIO.OUT)  # LED that indicates a successful server connection
GPIO.output(35, GPIO.LOW)

# check the connection: light the LED once the server sends its first bytes
data = s.recv(1024)
if len(data) > 0:
    print('yay, we are connected')
    GPIO.output(35, GPIO.HIGH)

# single-character commands sent to the game server
byt1 = 'l'.encode()  # left
byt2 = 'r'.encode()  # right
byt3 = 'l'.encode()  # up   (currently sends the same character as left)
byt4 = 'r'.encode()  # down (currently sends the same character as right)

# optional test loop: alternate right/left commands every two seconds
# for i in range(50):
#     s.send(byt2)
#     time.sleep(2)
#     s.send(byt1)
#     time.sleep(2)

def button_callback_l(channel):
    print("go left!")
    s.send(byt1)

def button_callback_r(channel):
    print("go right!")
    s.send(byt2)

def button_callback_u(channel):
    print("go up!")
    s.send(byt3)

def button_callback_d(channel):
    print("go down!")
    s.send(byt4)

# one input pin per switch: internal pull-down plus debounced edge detection
GPIO.setup(11, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(11, GPIO.FALLING, bouncetime=500, callback=button_callback_l)
GPIO.setup(7, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(7, GPIO.FALLING, bouncetime=500, callback=button_callback_r)
GPIO.setup(13, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(13, GPIO.FALLING, bouncetime=500, callback=button_callback_u)
GPIO.setup(15, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.add_event_detect(15, GPIO.FALLING, bouncetime=500, callback=button_callback_d)

input("Press enter to quit\n\n")  # run until someone presses enter
GPIO.cleanup()
s.close()

Live_Web: Self Portrait Exercise

I used the video playback-speed functionality in HTML to make the most out of a very short moment of my life - waking up in the morning a few days ago. The user can wake me up again and again; the moment is played back at random speeds, either extended or shortened, and finally looped.

 
 

Here is the code (including the code sources that were used):

As part of our assignment we were also asked to look for interactive websites that offer live interactions to their users. I chose NTS.live, an online radio station with two live shows and the possibility to listen back to shows in an archive. The interesting interaction here is the live radio show that is broadcast over the internet: users can interact with the hosts via all the social networks, and these interactions are then re-told/narrated/moderated by the show hosts. This is a highly filtered interaction, as not every communication between user and show host will be re-broadcast or commented on - but it is a very charming mix of the old-fashioned live radio show with social media.

A Slimmed Down First and a Well Known Second Final

project solar-geiger-counter

This weekend I took my Energy final project out into the sun and it worked - at least most parts of it: I can power the geiger counter entirely with a 12 V / 3.4 W solar panel - no LiPo involved. I cannot get enough current out of the panel to also power a solenoid on the side. To avoid using another solar panel and to keep the project portable, I decided to slim this part down and accept the final outcome: a solar-powered geiger counter with an Arduino Nano as a data outlet. I cannot run my entire art installation from Project Development Studio on a small solar panel - but it is remarkable that the small panel can power the geiger counter and therefore produce voltages of up to 400 V (needed to drive the Geiger-Mueller tube)!

Here is the prototype test outside:

The mightyohm-geiger kit assembled and running on solar - a beauty!

And a live test:

project solar-blockchain-physical-streaming-interface

Thinking about the course and my projects in it so far, I have thought for a while that it might be worth revisiting my only partially finished midterm project.

It took me a few hours to successfully debug my hardware and code, and I gained three insights:

  • a bigger power supply (in my case a 6600 mAh Li-Ion pack) is great for the Raspberry Pi 3
  • don't copy/paste into GitHub from the terminal - check for tabs and spaces
  • always check that the parts of your project share a common ground

I got the wallet/blockchain connection and the servo part working on solar (with gentle support from the Li-Ion pack …). Here is the rough prototype (running just from the Li-Ion pack):

After compiling ffmpeg locally on my Raspberry Pi (which takes a long time …) I tested streaming to YouTube from the Pi and then embedded the stream in my website. Streaming from the Pi to YouTube following this tutorial worked, but at full resolution only when the Raspberry Pi is plugged into a standard power socket - the Li-Ion pack can only support a streaming resolution of 320x240 at the moment; maybe the bottleneck is the booster. Here is the battery-powered prototype:

After a lot of tweaking of the ffmpeg stream parameters I finally decided to re-solder some of the connections using thicker wires. And I finally got all parts working (digging into the blockchain and running the servo while streaming live to YouTube - on battery/solar power only). Very buggy - it crashed after a minute … but it worked!

I am super happy! And I know that I need to iterate on this: it is a temporary fix, as I would ideally like to use webRTC, the way I used it in the Mac version of the project.

This class really was a journey, with a lot of insights and learning moments - and I am very satisfied that I can run both projects independently on solar, one on a Raspberry Pi and one on an Arduino!

A Stone, finally! And Some Numbers on Randomness

I am super happy that my classmate Yen, who also works at ITP, heard about my project and gave me a beautiful, big natural stone that he kept in his office. Very likely it is granite and therefore a good source of natural radiation / random numbers.

radiation tests

So last night I sat down and did a few experiments measuring the number of decaying particles with and without the stone close to the geiger counter. I conducted three test runs each way: three times with the stone, three times without.

Here are the results, each measured over three minutes (counts with the stone vs. without):

round 1

(86 vs 67)

round 2

(76 vs 67)

round 3

(75 vs 62)

verdict

The stone from Yen's office seems to be slightly radioactive, measuring around 15-20% above "normal" conditions, in which the counter mainly detects gamma rays from cosmic radiation. That is still very low compared to ionization smoke detectors, big granite kitchen countertops or the luminescent hands of old watches, which emit many more decaying particles over the same amount of time.
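As a quick sanity check on that estimate (assuming that in each pair above the first count is with the stone and the second without):

# Quick sanity check of the three 3-minute runs listed above.
# Assumption: in each pair the first number is with the stone, the second without.
with_stone    = [86, 76, 75]
without_stone = [67, 67, 62]

cpm_with    = sum(with_stone) / (3 * 3)     # total counts / (3 runs * 3 minutes)
cpm_without = sum(without_stone) / (3 * 3)
increase = (sum(with_stone) / sum(without_stone) - 1) * 100

print(f"{cpm_with:.1f} vs {cpm_without:.1f} counts per minute")  # ~26.3 vs ~21.8
print(f"about {increase:.0f}% more counts with the stone")       # ~21% overall, 13-28% per run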

installation prototype setup

I decided to do a quick run to test parts of the setup. Here a solenoid knocks the random particle decay from inside the stone, as detected by the geiger counter, back into the stone - the heart of the stone talking to its audience:

 

So far so good - tomorrow I have to test the setup powered entirely by two solar panels. Let's hope the sun is out and the panels can generate enough current to power the whole installation!

More C4D experiments for Randomness_Project

I am currently looking for a digital translation of the physical interface of the random project (the stone and the knocks with a hand or hammer / the solenoid as the "hammer of randomness") that might become part of the installation piece and create a more immersive space for the audience. So I am exploring Cinema 4D shapes and materials to find the right fit for the "devotional device" the installation tries to create: the rhythm of the animation would be triggered by the particle decay inside the granite (and some gamma-ray noise, probably caused by solar flares). The materials of the digital objects should be reminiscent of devotional objects: precious stones, gold and fur. These become animated and "alive" through true randomness.

And I am quickly realizing the relationship between computational power, render time and animation/material complexity - it takes forever to render a short animation in high resolution …

Here are a few more screenshots and animations:

 
 

(animation above based on mograph-tutorial)

 
 

Merging "Energy" and "Project Development Studio" - Finals

For my final in my Energy class I decided to create a solar-powered version of my installation piece for Project Development Studio: in this iteration of the project, the audience can only "connect" to the true randomness of the granite when the sun is out.

I will build a more complex installation for the spring show that will not run on solar; this one, for my Energy final, will run entirely on it.

Here is a modified wiring schematic based on an Adafruit tutorial for using the piezo and the solenoid in combination:

 

solar_random_setup.jpg
 

I am using a logic level converter between the Arduino Nano running on 5 V and the geiger counter running on 3.3 V. The solar panel provides power for the Arduino and the solenoid. The piezo runs on a separate power circuit provided by the Arduino. This circuit also feeds into the logic level converter, which supplies power to the geiger counter (converting the voltage down) and makes it possible to listen to its pulse (converting the signal voltage up).

The code for the installation is still in the making; I still have to merge the solenoid trigger into the piezo and geiger counter code. The comparison of geiger counter beeps and knocks - used to identify whether a user's knocks are in sync with the randomness of the particle decay inside the granite - also needs to be improved. A rough sketch of that comparison idea is shown below.
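As a hardware-agnostic illustration only - the real code will live on the Arduino - here is one possible way to compare the rhythm of knocks and decay pulses, sketched in Python with made-up timestamps and tolerance:

# Illustrative only: compare knock timing against geiger-counter decay timing.
# Timestamps (in seconds) and the tolerance are made-up values for this sketch.
decay_times = [0.0, 0.9, 2.4, 2.9, 4.6]   # pulses from the geiger counter
knock_times = [0.1, 1.1, 2.5, 3.1, 4.7]   # knocks picked up by the piezo
TOLERANCE = 0.3                           # seconds of allowed drift per interval

def intervals(times):
    """Gaps between consecutive events."""
    return [b - a for a, b in zip(times, times[1:])]

def in_sync(decays, knocks, tolerance=TOLERANCE):
    """True if both event streams have the same rhythm within the tolerance."""
    d, k = intervals(decays), intervals(knocks)
    if len(d) != len(k):
        return False
    return all(abs(di - ki) <= tolerance for di, ki in zip(d, k))

if in_sync(decay_times, knock_times):
    print("user and stone are in sync -> trigger the solenoid pattern")
else:
    print("not in sync yet")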

Random Random Random ... and my first cinema4d attempts

Last week I took a few moments to play with Cinema 4D - I somehow felt like exploring animations for my random project. So far I am not sure exactly how it could fit in, but there is something to this motion … maybe three screens/panels on each of the side walls of the room, triggered by the randomness of the geiger counter underneath the black granite?

 
hairy_fruit1.gif

 

 

Random Devotion meets Physical Computation meets Stone-Carving

This past week I focused on sourcing a bigger piece of granite (granite is slightly radioactive by nature and will be the source for generating random numbers with the geiger counter) and on the circuit for the user interaction. I used an example from the Adafruit learn section as the basis for my code, which will interface with the geiger counter. The circuit will perform the following tasks:

  • listen to knocks of the users against the stone with a piezo element
  • listen to geiger counter (convert 3.3 V of geiger-counter pulse to 5 V of Arduino digital-in)
  • compare the knocks of the user with the inner true random decay pattern of the granite (measured with the geiger counter)
  • trigger a solenoid for 10s in the true random decay pattern of the granite if user and stone pattern align / if user and stone are "in sync"
  • repeat 
 
voltage conversion testing

voltage converter wiring detail

Random Devotion: Geiger-Counters and Pink Granite

Two weeks ago I had the idea to center the devotional piece around a granite rock as the source of randomness - with a geiger counter measuring the (random) decay of radioactive particles from the rock. Here is a chart tracing the decay of radon gas and an explanation of the truly random nature of this subatomic process:

 ( source )
 
 ( source )
IMG_2423.JPG
 

Last week I assembled a geiger-counter kit - lots of soldering and lots of fun:

IMG_2428.JPG
IMG_2433.JPG
IMG_2485.JPG
 

Now the setup can be finalized further:

devotional_object.png