Digital Fabrication Final: Clay VR Future Nudes

Continuing from our explorations with Base64 and clay, Kim and I created a combined VR/physical sculpture that imagines cyborg art of the future.

concept

We meditated the whole semester on the relationship between our physical and digital bodies, specifically on the space between the physical and the digital object. We tried to deconstruct the notion of sculpture as a form of human memory into the blueprint of code - only to re-imagine it in VR.

We called our piece “future nudes”, as only cyborgs in the future will be able to see the real body behind the sculpture without the help of VR goggles - we imagine them to be able to decipher code in real time.

process

A depiction of the artist’s body is laser-etched in binary code into wet clay, spread out over 12 clay tablets. These tablets are fired in a traditional kiln, like the earliest form of human written media, Babylonian cuneiform. Images of the tablets are then placed into an immersive VR environment: when the audience touches a real clay tablet, they see their hands interacting with the physical object - which is itself the depiction of a real person converted into a digital blueprint, binary code. In the final version the pixels of the body parts on each tablet pop up in the VR environment when the fingers touch it.

In a first step the highly pixelated image (pixelated to minimize the binary file size) is converted to binary via a python-script (here one part of the image is shown):

male_torso_illustrator_binary.png
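The conversion boils down to thresholding each pixel to a single bit. The actual script was written in Python; take this browser-side JavaScript sketch (and its brightness threshold) as an illustration only:

// Rough sketch of the image-to-binary idea; the real project used a python-script.
function imageToBinaryString(img, threshold = 128) {
  const canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);

  const { data } = ctx.getImageData(0, 0, canvas.width, canvas.height);
  let bits = '';
  for (let i = 0; i < data.length; i += 4) {
    // Average the RGB channels and reduce each pixel to a single bit.
    const brightness = (data[i] + data[i + 1] + data[i + 2]) / 3;
    bits += brightness > threshold ? '1' : '0';
  }
  return bits;
}

const img = new Image();
img.src = 'male_torso_illustrator_binary.png'; // file name from the image above
img.onload = () => console.log(imageToBinaryString(img));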

After that we prepared the low-firing raku clay for laser-etching and kiln-firing. After applying manual focus on the 75-watt laser we used the following settings:

  • 600 dpi raster etching

  • 90 % speed

  • 30 % power

We got the best results when using fresh, straight-out-of-the-package clay without adding any water to it. The clay fired well at cone 04 for the bisque fire.

Here are a few pics from our process:

IMG_7751.JPG
IMG_7827.JPG
IMG_8074.JPG
Oi%H9qYxSQ+WRE0a4f5V7A.jpg
IMG_8350.jpg

And finally the experimental and still unfinished port into VR using Unity VR, HTC Vive and Leap Motion (in the final version the pixels of the body parts pop up in the VR environment when the audience’s fingers touch the tablet):

IMG_8114.JPG


Live Web / Machine Learning for the Web Finals: Peer to Peer Meditations

For this final I combined the techniques I learned in “Live Web” with the ones from “Machine Learning for the Web” and built a peer-based meditation piece:

A user can experience the Base64 code from a remote web camera streaming through their own body.

All of this runs with webRTC, tensorflow.js and node.js as the main parts: peer.js handles the WebRTC peer connection with a remote peer-server, node.js organizes the automatic connection to a random user in the peer network, and tensorflow.js runs person-segmentation (now renamed to “body-pix”) on the local user’s webcam - all the canvas handling and pixel manipulation is done in JavaScript.
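As a rough sketch of the peer side (the /random-peer route and the video element id are placeholders for what my node backend and markup actually do):

// Sketch only - connect to the peer-server, answer calls, and call a random user.
const peer = new Peer(); // connects to the remote peer-server

peer.on('open', async (myId) => {
  const localStream = await navigator.mediaDevices.getUserMedia({ video: true });

  // Answer incoming calls with the local camera stream.
  peer.on('call', (call) => {
    call.answer(localStream);
    call.on('stream', showRemote);
  });

  // Ask the node backend for a random user in the peer network and call them.
  const { id } = await fetch(`/random-peer?me=${myId}`).then((res) => res.json());
  if (id) {
    const call = peer.call(id, localStream);
    call.on('stream', showRemote);
  }
});

function showRemote(remoteStream) {
  // The remote frames later get drawn to a canvas, read back as Base64 text
  // and masked by the local body-pix segmentation.
  const video = document.getElementById('remoteVideo');
  video.srcObject = remoteStream;
  video.play();
}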

The result is a meditation experience reminiscent of the calming effect of watching static noise on TV screens. As it is peer-based, the user’s body becomes a vessel for the blueprint and essential idea of something or somebody else. This hopefully creates a sense of connectedness and community while meditating.

Staring at a screen might not be everyone’s preferred way of meditating, but it is worth exploring from a futuristic perspective: we will probably be able to stream information directly onto our retinas one day, so an overlaid meditation experience might be worth considering.

Here is a short video walkthrough streaming video frames from my mobile phone camera as Base64 code through my own silhouette (the latter is picked up by the camera on my laptop and run through a body-detection algorithm). All web-based, with machine learning in the browser powered by tensorflow.js and WebRTC peer-to-peer streaming (code):

Understanding Networks: RESTful API and Controller

As outlined in the first blogpost, the assignment for our group was twofold:

  • build an API & server for our classmates Vidia and Lucas - they built an interactive limbo stick to our specifications

  • build a physical controller for our classmates Simon and Keerthana - they built a dance-step projector

The assignment proved to be a great challenge, especially the coordination that comes with working as a group with two other groups (doing the same) on two different projects over a longer period of time with no pre-defined work environment - but we finally succeeded!

Lucas and Vidia made a video of their limbo-stick machine, built to our API specs and running against our deployed server.

And here are a picture and the code of the physical controller we built for Simon and Keerthana (and their dance machine): laser-cut/etched acrylic, LCD screen, Arduino MKR1000 WiFi, potentiometers, bamboo box.

wVoNicRHSQiiz2rabdSpAA.jpg

And a little prototype video of the controller in action (controlling the node-based dance step controller built by Simon and Keerthana):

Machine Learning for the Web: Code Bodies with tensorflow.js

concept

We are code. At least in a great part of our lives. How can we relate to this reality/non-reality with our bodies in the browser?

tech setup

I experimented with tensorflow.js and its person-segmentation model running the smaller MobileNet architecture. I used multiple canvases to display the camera stream’s Base64 code in real time behind the silhouette of the person detected by the webcam. Given that I do pixel manipulation on two different canvases and run a tensorflow.js model at the same time, it still runs relatively fast in the browser - although the frame rate is visibly slower than a regular stream with just the model.
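A simplified sketch of the two-canvas idea (the segmentation call below follows the newer body-pix naming and may differ from the person-segmentation release, so treat it as an illustration rather than my exact code):

// Sketch: draw the frame's Base64 text offscreen, then mask it with the silhouette.
const video = document.getElementById('webcam');      // playing camera feed
const textCanvas = document.createElement('canvas');  // offscreen: Base64 text
const outCanvas = document.getElementById('output');  // visible result

async function run() {
  const net = await bodyPix.load();
  const frameCanvas = document.createElement('canvas');
  frameCanvas.width = textCanvas.width = outCanvas.width = video.videoWidth;
  frameCanvas.height = textCanvas.height = outCanvas.height = video.videoHeight;
  const charsPerLine = 120;

  async function tick() {
    // 1. Grab the current frame and turn it into a Base64 string.
    frameCanvas.getContext('2d').drawImage(video, 0, 0);
    const base64 = frameCanvas.toDataURL('image/jpeg', 0.3);

    // 2. Fill the offscreen canvas with the code in a tiny monospace font.
    const tctx = textCanvas.getContext('2d');
    tctx.fillStyle = 'black';
    tctx.fillRect(0, 0, textCanvas.width, textCanvas.height);
    tctx.fillStyle = 'white';
    tctx.font = '6px monospace';
    for (let line = 0; line * charsPerLine < base64.length; line++) {
      tctx.fillText(base64.slice(line * charsPerLine, (line + 1) * charsPerLine), 0, 8 + line * 7);
    }

    // 3. Segment the person and keep the text only inside the silhouette.
    const segmentation = await net.segmentPerson(video);
    const pixels = tctx.getImageData(0, 0, textCanvas.width, textCanvas.height);
    for (let i = 0; i < segmentation.data.length; i++) {
      if (segmentation.data[i] === 0) pixels.data[i * 4 + 3] = 0; // hide background
    }
    outCanvas.getContext('2d').putImageData(pixels, 0, 0);
    requestAnimationFrame(tick);
  }
  tick();
}
run();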

prototypes

A brief screen recording:

Another version with a bigger screen:

 


 

Digital Fabrication Final: Future Nudes

For our final project, Kim and I were experimenting with laser-etching wet clay this week - a poetic and exciting exploration!

To make it short - so far the results were surprisingly great. That said, we still need to fire the clay in a kiln; then we can give a final verdict.

Here is the basic setup for cutting the clay:

IMG_2948.JPG

We used low-firing white raku clay as we will use a kiln that fires at cone 06. It had a nice plasticity and was easy to work with. The tricky part was getting a consistent height for the clay slab. To achieve that, we used a kitchen roller and two pieces of plywood of equal height to roll it evenly over the clay. We then cut it to shape.

IMG_2951.JPG
IMG_2952.JPG

To keep clay particles from falling through the laser bed we used felt.

After applying manual focus on the 75 watt laser we used the following settings:

  • 600 dpi raster etching

  • 90 % speed

  • 30 % power

and it worked well in two passes. Now we just have to let it dry for two days, do a first firing, glaze one part and then fire it again. It’s still not clear how it will turn out - something to look forward to!

IMG_2957.JPG

Understanding Networks: RESTful API

update

assignment week1/2

Decide on your application, then describe and sketch the control surface and specify the REST interface. Describe your application and present your REST API. Don’t build the project, just describe it and its functions. Write a blog post detailing this description. You’ll be assigned someone else’s specification to build this week.

concept

Inspired by the playfulness and shared group experience of parties, Sandy, Kellee and I decided to create an interactive and RESTful Limbo-Stick.

context

A little bit of research on the origins of limbo showed that the dance is considered the unofficial national dance of Trinidad & Tobago, alongside steel drums and calypso as national heritage. In its original form the bar is raised slowly from the lowest level, resembling the entry of death into life. When the dance gained popularity this form was flipped - dancers now start high and the bar is slowly lowered.

system overview

 
 

physical interface (draft)

Usual limbo setups look like our rough sketch:

 
 

RESTful API

For the RESTful API for the dance/game we followed an example to define our five basic states: “off”, “on”, “height adjustment”, “idle”, “collision”.

POST-requests

//*On/Off Mode*
POST /switchMain/{bool}
//value: 1 to turn the limbo stick on, 0 to turn it off

//*Height Adjustment Mode*
POST /height/{bool}
//value: 1 to lower the limbo stick one step on the predefined scale, 0 to keep the current position

//*Collision Mode*
POST /collision/{bool}
//value: 1 if a collision is detected, 0 if no collision is detected

//*Alarm Mode*
POST /alarm/{bool}
//value: 1 to turn the alarm on, 0 to turn it off

//*Home Mode*
//If the POST /height counter > 6, go into home mode:
POST /home/{bool}
//value: 1 to home the stick to the highest position on the predefined scale, 0 to keep its current position

GET-request

GET /state/

//returns
{
  "switchMain": {0, 1},
  "height": {0, 1},
  "collision": {0, 1},
  "alarm": {0,1},
  "home": {0, 1}
}
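To illustrate how such a spec could be served, here is a minimal Express sketch - only an illustration, the server our classmates built against the spec differs in its details:

// Minimal Express sketch of the limbo-stick API spec above.
const express = require('express');
const app = express();

// The five values returned by GET /state/, each stored as 0 or 1.
const state = { switchMain: 0, height: 0, collision: 0, alarm: 0, home: 0 };
let heightCounter = 0; // how many times the stick has been lowered

// switchMain, collision, alarm and home simply store the posted bit.
for (const mode of ['switchMain', 'collision', 'alarm', 'home']) {
  app.post(`/${mode}/:bool`, (req, res) => {
    state[mode] = Number(req.params.bool);
    res.json(state);
  });
}

// Height additionally counts the lowering steps and triggers home mode.
app.post('/height/:bool', (req, res) => {
  state.height = Number(req.params.bool);
  if (state.height === 1) heightCounter += 1;
  if (heightCounter > 6) { // per the spec: more than 6 steps -> home mode
    state.home = 1;
    heightCounter = 0;
  }
  res.json(state);
});

app.get('/state', (req, res) => res.json(state));

app.listen(8080);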

physical interface/controller

 
Screen Shot 2018-11-19 at 8.08.17 PM.png
 

Digital Fabrication Final Proposal: Future Nudes

Kim and I want to create nude portraits/sculptures for the future (see the more detailed blogpost on Kim’s website).

After laser-etching our portraits in Base64 code on mat-board for our midterms, we decided to continue with this topic and play more with form and materials:


We want to use either Base64 or binary code for encoding the image. As base material we want to iterate with etching wet clay. It seems to be the best material as it preserves our portraits pretty much forever (the earliest form of human writing, cuneiform, was “hand-etched” into clay). It has the advantage that we could later form it into a sculpture or play with the shapes before firing. And it carries a lot of symbolic meaning regarding the human body in different contexts (Bible, Kabbalah, …).

Etching into clay and then forming it is pretty experimental; only very few sources online describe etching wet clay, with mixed success. So we gotta play!

Why nude portraits? We like the idea that the portraits of our bodies will only be visible in a distant future - once humans are able to decipher code naturally, maybe as human-machine hybrids. We abstract our bodies into code and preserve them for a time when our bodies will have changed a lot: we might be half human, half machine by then. This aspect of preservation for a distant future reminds us of the skor-codex from a few years ago.

For the looks of ceramics we are inspired by rough surfaces of dark stoneware, dark clay and playful sculptural explorations.

An interesting iteration would be collecting a couple of different nudes, then cutting up the pieces into a row of beaded curtains that people could walk through - so they could interact with our bodies in code.

Machine Learning for the Web Class 1: Playing with tensorflow.js

For our first assignment I played with the examples provided in the tensorflowjs-models repo. I used the posenet example to create a mini skateboarding game for the browser:

The user has to jump to avoid the rocks that are coming along the way. And that’s it! I simply changed the color, strokes and dots of the skeleton and attached a few shapes between the two ankle-points (plus some minor canvas & font additions).
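The core of it is just finding the two ankle keypoints in each pose and drawing a shape between them - roughly like this sketch (not the game’s actual code; the API shape follows a newer posenet release):

// Sketch: draw a "board" between the leftAnkle and rightAnkle keypoints.
async function drawBoard(video, ctx) {
  const net = await posenet.load();

  async function frame() {
    const pose = await net.estimateSinglePose(video, { flipHorizontal: true });
    const left = pose.keypoints.find((k) => k.part === 'leftAnkle');
    const right = pose.keypoints.find((k) => k.part === 'rightAnkle');

    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    if (left.score > 0.5 && right.score > 0.5) {
      // A thick line between the ankles stands in for the skateboard.
      ctx.lineWidth = 12;
      ctx.strokeStyle = '#222';
      ctx.beginPath();
      ctx.moveTo(left.position.x, left.position.y);
      ctx.lineTo(right.position.x, right.position.y);
      ctx.stroke();
    }
    requestAnimationFrame(frame);
  }
  frame();
}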

This construct does work somehow, as long as the light in the room is good. A lot of it is still pretty rough: the hit detection (board hits the rock) is not very accurate and needs audio feedback, the rocks appear on only one level of the canvas and there is no point counter. But so far so good for some first browser machine learning.

I tried to deploy it on my server but got lost in ES6 vs Node errors. So for now it is just a video, run locally.

For those wondering about the title: It is taken directly from an old NES game from 1988.

I had a lot of fun jumping around to test the game; I guess my neighbors downstairs were not really that amused … :


LiveWeb / Machine Learning for the Web - Final Proposal

concept

For my final I want to start building an interactive, decentralized and web-based devotional space for human-machine hybrids.

background

The heart of the project is the talking stone, a piece I have already been working on over the last semester and showed at the ITP Spring Show 2018. Now I would like to iterate on it in a virtual space, with a built-in decentralized structure as a form of “community” backbone via the blockchain, and with an interface geared towards future human entities.

system parts

The connection to universal randomness: a granite rock.

IMG_2739.JPG

Rough draft of user interface (website):

IMG_2936.jpg

And here is the full system as an overview (also a rough draft):

decentr_stone_schematics.png

iteration with tensorflowjs

As an iteration on the idea, I built a web interface that is fed by the camera input and displays it as Base64 code within the silhouette of the user’s body:

project outline

Here is the basic setup: The rock is displayed via sockets as a Base64 stream on the website for logged-in users. Users have to buy “manna” tokens with Ethereum (in our case on the Rinkeby testnet blockchain); then they get an audio-visual stream of decaying particles of the rock to start their meditation/devotion. The rhythm of the particle decay happens - according to quantum theory - in a truly random manner. Therefore the core of this devotional practice of the future is listening to an inherent truth. The duration of each “truth” session is 10 minutes.

I will have to see how “meditative” the random decay actually is - I remember it as pretty soothing to just listen to the particle decay that gets hammered into the rock with a solenoid, but I will have to find a more digital audio coloring for each decaying particle. That said, this piece is pure speculative design. I am interested in the future of devotional practice, especially for people who consider themselves non-religious. So trial and error is probably the way to go with this project as well.

If the user moves significantly from the prescribed meditation pose (Easy Pose — Sukhasana) in those 10 minutes, tokens get automatically consumed. The same happens if the user does not open a new ‘truth’ - session within 24 hours after finishing the last session.

On the screen, little dots display active users that are in a session as well. The size of the dots changes according to how much time they have left in their daily meditation practice.

The goal of this experimental devotional space is to give users an audio stimulation that is scientifically true and therefore easier to identify with - in a sense, a random rhythm of the universe to meditate with. Because users buy into the practice with tokens, only dedicated users occupy the space, and their motivation to actually perform the meditation is higher as they paid for it. They only have to pay again if they do not perform the practice regularly or interrupt a session.

The visuals will be dominated by code - as this is a devotional meeting place for future human-machine-hybrids that find peace and solitude in true randomness (the opposite of their non-random existence).
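As a sketch of how the decay events could reach the browser and be sonified (the ‘decay’ event name and the server address are placeholders, not parts of the actual build):

// Sketch: one short digital "tick" per particle decay, delivered over sockets.
const socket = io('https://example-meditation-server.net'); // hypothetical server
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();

socket.on('decay', () => {
  const osc = audioCtx.createOscillator();
  const gain = audioCtx.createGain();
  osc.frequency.value = 880; // arbitrary pitch for the tick
  gain.gain.setValueAtTime(0.2, audioCtx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, audioCtx.currentTime + 0.15);
  osc.connect(gain).connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.15);
});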

tech specs

  • granite rock with geiger-counter attached to it

  • raspberry pi with camera running sockets to display rock and send random decay patterns

  • remote server running node.js and sockets

  • tensorflow.js running posenet

  • ethereum contract (solidity based) running on Rinkeby-Testnet (dev blockchain)

  • MetaMask chrome extension (to access blockchain from browser with a secure wallet)

challenges

That sounds like a lot to do in 6 weeks, but I want to give it a try. I already experimented with blockchain and true randomness in two different projects last semester, and the current LiveWeb / Machine Learning for the Web classes seem a great framework to give this a virtual and AI-guided setting. I am still a bit uncertain about the blockchain backbone, as this is the part where I feel the least at home at the moment. I only remember fragments of Solidity, web3.js and MetaMask; connecting all the layers together was tricky and the documentation sparse. Well, we’ll see!

Understanding Networks: Packet Analysis

Assignment:

Capture and analyze traffic on your home network while going about your usual network activities. Present your results in summary form, using graphical analysis where appropriate.

background traffic analysis

I used Wireshark throughout the assignment on the ITP sandbox network.

After a few basic experiments with querying a very basic webpage and looking at the sent/received packets, I went back to zero: I measured traffic with all applications off over the course of 5 minutes. Surprisingly, there is still a lot of background network traffic, especially application data, without me actively engaging in any online activity.

OS X source traffic (my IP: 128.122.6.150)

OS X destination traffic (my IP: 128.122.6.150)

Ubuntu source traffic (my IP: 128.122.6.149)

Ubuntu destination traffic (my IP: 128.122.6.149)

I used two operating systems to compare the background traffic: OS X and Ubuntu. Ubuntu only shows one third of overall network traffic compared to OS X.

OS X shows a lot of connections to IP addresses starting with 17 - these are registered to Apple in Cupertino. A more detailed view shows a variety of activities that my OS X operating system is performing, sending data back and forth between Apple’s servers and my machine:

Screen Shot 2018-10-27 at 3.59.12 PM.png
Screen Shot 2018-10-27 at 4.43.15 PM.png
Screen Shot 2018-10-27 at 4.42.20 PM.png
Screen Shot 2018-10-27 at 4.37.04 PM.png



Some further research on the sent TCP packets showed that those processes belong to Apple cloud services (as I am using iCloud to sync my photos, calendar etc.) or programs like iTunes.

On my Ubuntu machine I do not use any cloud service that is tied to the operating system, hence only about a third of the background traffic.

As I got interested in this “hidden” traffic, I did some further research on the TLS layer and used Wireshark to go through each step of the TLS protocol:

 
 
tls.png
 
 

Looking at the change-cipher-spec protocol specifically, I did further research on the encryption part of TLS (here the negotiated cipher suite) - and finally understood Diffie-Hellman and RSA. That was worth the extra hours … ! I have to confess that trying to sketch all the components of RSA still gives me headaches; compared to that, Diffie-Hellman seems a bit more simple and elegant. It was not obvious to me which one to choose over the other; intuitively I would pick Diffie-Hellman over RSA. And I wondered why Apple is “downgrading” my (possible) SHA-384 to SHA-256 in the cipher suite negotiation.

Screen Shot 2018-10-27 at 7.25.07 PM.png

After reading the Diffie-Hellman vs RSA post on Stack Exchange, I was a bit less confused: non-ephemeral RSA key exchange seems to be the industry standard for now as it is generally faster to compute, while Diffie-Hellman is considered more secure but more expensive to compute.

I suspect Apple is using the lower standard (RSA with SHA-256) due to the sheer volume of traffic on their servers.
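To keep the idea straight for myself, here is a toy Diffie-Hellman exchange with tiny numbers (real TLS uses huge primes or elliptic curves, so this is purely a learning sketch):

// Toy Diffie-Hellman: both sides derive the same secret without sending it.
const p = 23n; // public prime modulus
const g = 5n;  // public generator

// Fast modular exponentiation with BigInt (square-and-multiply).
function modPow(base, exp, mod) {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

const a = 6n;              // Alice's secret
const b = 15n;             // Bob's secret
const A = modPow(g, a, p); // Alice sends g^a mod p = 8
const B = modPow(g, b, p); // Bob sends g^b mod p = 19

console.log(modPow(B, a, p)); // 2n - Alice's shared secret
console.log(modPow(A, b, p)); // 2n - Bob's shared secret, identical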

usual daily network traffic analysis

For my usual daily network traffic analysis I had a couple of browser windows open, plus Mail and Terminal. In one of the browser windows I was running an online radio station (nts.live). I captured data over the course of 5 minutes.

Here is the output looking at the source traffic (my IP: 128.122.6.150, running OS X):

And here is the destination side of the traffic:

Finding out on which data channel my online radio station is running proved to be difficult, as the stream is hosted not under the website’s IP address but on a different server. I suspected one of the two IP addresses below, as they showed continuously high traffic via TCP (initially I expected UDP, but as the stream is served over HTTPS, TCP makes sense). Both pointed to Amazon servers.

Screen Shot 2018-10-29 at 11.30.57 PM.png

To find out which one might be hosting the stream, I simply closed the tab running the online radio station. Here are the results:

no_radio.png

To my surprise both were now muted and didn’t appear in the traffic overview anymore. As the website hosts two streams at the same time, I guessed it might load both from different servers even though I can only listen to one. My fellow classmate Beverly pointed me in the right direction: I should check the sent packets in Chrome directly - and indeed, two streams are loaded at the same time! This is for sure eating into the bandwidth …

Screen Shot 2018-10-29 at 11.48.07 PM.png

Surprisingly, the connection to the Apple servers was quiet during multiple Wireshark captures while running Chrome and Mail. Server IPs starting with 17 (Apple’s server range) do not appear in the traffic overview at all. Why this is the case is not quite clear to me. Maybe background processes only run while no other traffic is using the bandwidth? I can only guess at this point.

Now enough of packet sniffing, TCP, TLS and UDP - I learned a lot and got a lot more interested in encryption, which will be the topic of a future class in a few weeks. Awesome!

Digital Fabrication Midterm: Future Portraits

“In another reality I am half-human, half-machine. I can read Base64 and see you.”

final iteration

Kim and I created self-portraits for the future.

concept

The self-portraits are Base64 representations of images taken by a web-camera. The ideal viewer is a human-machine hybrid/cyborg that is capable of both decoding etched Base64 and recognizing the human element of the artifact.

process

We went through several iterations with felt, honey and rituals: our initial idea of capturing a moment in time and etching or cutting it into an artifact morphed into an exploration of laser-cut materials with honey. We were interested in the symbolic meaning of honey as a natural healing material. Our goal was to incorporate it into a ritual to heal our digital selves, represented by etched Base64 portraits that were taken with a webcam and encoded. We used felt as it is made of wool fibers that are not woven but stick to each other through heat and pressure. This chaotic structure seemed a great fit for honey and clean code:

IMG_2855.JPG

Soon we dropped the idea of using felt as it seemed to be too much of a conceptual add-on and reduced it to honey and digital etchings - the healing should be applied directly onto the material in a ritual.

IMG_2854.JPG

After gathering feedback from peers and discussing the ritualistic meaning, we struggled to justify the honey as well: most of the people we talked to preferred the idea that only human-machine hybrids of a near or far future are technically able to see the real human portrait behind the code. After a few discussions about both variations of our concept dealing with digital identities and moments in time, we favored the latter and dropped the honey.

So we finally settled on etching timeless digital code onto a physical medium that ages over time - self-portraits for the future: Maybe we look back at them in 20 years and can see through the code our own selves from back in the day?

A little bit about the technical process: the image is taken with a node.js chat app that I created for LiveWeb. It takes a picture with the user’s webcam every three seconds and shows the Base64 code on the page - again an example of an interface for human-machine hybrids or cyborgs of the future.
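The capture loop boils down to something like this sketch (element ids and the socket event name are stand-ins, not necessarily what the app really uses):

// Sketch: snapshot the webcam every 3 seconds and expose the frame as Base64.
const socket = io(); // socket.io client, served by the node app
const video = document.getElementById('cam');
const canvas = document.createElement('canvas');
const codeDiv = document.getElementById('base64out');

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();

  setInterval(() => {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    canvas.getContext('2d').drawImage(video, 0, 0);

    // toDataURL returns the frame as a Base64 data URI ("data:image/jpeg;base64,...").
    const base64 = canvas.toDataURL('image/jpeg', 0.5);
    codeDiv.textContent = base64;  // show the raw code instead of the image
    socket.emit('frame', base64);  // send it to the other chat participants
  }, 3000);
});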

machine_facetime.gif

After taking portraits with my web app, we copy/pasted the code for each of us into macOS TextEdit, exported it as a PDF, copy/pasted its contents into Adobe Illustrator and outlined the font, Helvetica. We chose this font as it stays very legible even at a very small size. Our laser computer did not have Helvetica installed, therefore we outlined all the letters.

fullsizeoutput_310.jpeg

The files were very large, as portraits varied between 160,000 and 180,000 characters.

These digital preparations were followed by 2 days of test-etchings on crescent medium gray and crescent black mounting-boards and experiments with different laser settings (speed, power and focus).

IMG_2912.JPG

We discovered that getting the focus right proved to be difficult: the laser beam seems to get weaker once it travels all the way to the right of the bed. This makes the etching illegible on that side, whereas on the far left it is fine. Manually focusing a bit deeper into the material produced satisfying results on the white cardboard and fixed the issue, whereas on the black cardboard the fonts then looked blurry on the left side - it was etching too deep. Finding a focus that fit both sides equally well took a long time and a lot of trial and error.

IMG_2915.JPG

Once we got the focus right, we started with the final etchings: Speed 90, Power 55, 600 dpi and a focus slightly closer to the material than usual turned out the best results on our 75-watt laser.

Each portrait took 1:33 hours to etch; in total we completed 4 portraits.

We see them as a starting point for further iterations, as “exploration stage one”: The concept of creating physical artifacts for the future that preserve personal digital moments in time with code and laser is very appealing to us.

We will iterate on our portraits for the future probably for the final.

IMG_6999.JPG
IMG_2887.JPG
IMG_2864.JPG
IMG_7004.JPG
IMG_6998.JPG

LiveWeb: Midterm Project Idea

Quite often the rapid prototyping process that we celebrate here at ITP (and which can be really difficult to get used to, as I very much like to spend more time with a project before moving on to the next one …) has the great side effect that you go through a lot of different ideas, topics and digital narratives that can be told. This sometimes means you end up finding a hidden gem that really sticks with you for a while.

Last semester it was the rock-project and its devotional aspects that kept me occupied (and still does).

This semester I am fascinated by creating apps for human-machine hybrids for a probably not so distant future.

For the last Live-Web class I developed an image-chat app that shows bare Base64 chat images instead of decoded (human-readable) images. This means only a human-machine hybrid can fully enjoy the chat: a machine by itself is limited to the non-conscious decoding of data and can’t enjoy the visual image, and a human cannot enjoy it either, as the other participants are hidden behind a wall of code (Base64). Only a human with machine-like capabilities could fully participate and see who else is in the chat.

So far so good.

But the prototype was very rushed and lacks a few key features that are conceptually important:

  • real continuous live stream instead of image stream

  • hashing/range of number participants/pairing

  • audio

  • interface

  • further: reason to actually be in such a chat/topic/moderation

I would love to try to address these with the following questions for my midterm (and possibly the final as well, as this seems to be a huge task):

  • can this be live-streamed with webRTC (as code) instead of images every 3 seconds?

  • how and by whom can it be encoded so that the stream is only readable by a select circle that possesses a key? Is the rock coming back as a possible (truly random) moderator?

  • what would the audio side sound like? or is there something in the future that is kind of like audio, just a pure data stream that opens up new sounds? and what does data sound like?

  • how to compose the arrangement of interfaces for the screen?

  • which design aesthetics can be applied to pure code?

  • a bit further out there: what would a chat look/feel/sound like in a future with human-machine hybrids? what lies beyond VR/AR/screen/mobile as interfaces?

Let’s get started!

Another iteration on this for Digital Fabrication:

IMG_2860.JPG


Digital Fabrication: Midterm Iterations on Healing, Honey and Felt

“In your backbone you feel a pointed something and it works its way up. The base of your spine is tingling, tingling, tingling, tingling. Then n|om makes your thoughts nothing in your head”

[Kxao ≠Oah - a healer from |Kae|kae area, quoted in Biesele, Katz & St Denis 1997:19]

(taken from JU|’HOANSI HEALING SONGS on NTS-radio)

IMG_6856.jpg



Screen Shot 2018-10-11 at 12.28.37 AM.png

For our midterm in Digital Fabrication we iterated on the idea of swarm-based behavior. After reading more about Joseph Beuys’ use of felt and honey in his Fluxus performances in the late 60s and about the JU|’HOANSI tribe and their use of healing through dance, we shifted our focus to the idea of a healing ritual for the digital age.

We are thinking about making a piece that uses three components in an interactive installation:

  • an image in Base64 code on paper (used for image encoding on the web)

  • felt with a perforated structure

  • honey

As shown in the image above, a depiction of the artists in a waterfall-like silhouette made from Base64 encoding is printed on paper. This is the foundation of the sculpture, a reflection of ourselves that we want to heal. Above the paper is a layer of felt cut into strips and shaped into a web. It filters the honey dripping from above, which acts as part of the healing ritual: Beuys saw honey as a healing material because it is gathered by bees, which to him represented a “peaceful” entity.

A coincidence we found out while cutting felt with the laser: it smells like honey afterwards.

IMG_2852.JPG

LiveWeb: MachineFaceTime

I created an image chat application that can be fully used or seen only by machines. Or you need to be fluent in Base64 to see who you are connected to. Every three seconds an image from each chat participant is displayed and sent around via sockets - it is kept in the original data format for image encoding on the web, Base64, and shown in a very small font. The eye’s focus shifts to the code as an entity, creating rhythmically restructuring patterns. The user ID is displayed on top of each received image.

To make it work I had to hide various elements on the page that still transmit data via the sockets. It works on mobile as well - just in case you are in need of some calming Base64 visuals on the go.

(code)
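As a rough illustration of the relay on the node side (the linked code has the real details; event names here are stand-ins):

// server.js sketch: re-broadcast each Base64 frame to the other participants.
const express = require('express');
const http = require('http');
const socketIo = require('socket.io');

const app = express();
app.use(express.static('public')); // serves the chat page

const server = http.createServer(app);
const io = socketIo(server);

io.on('connection', (socket) => {
  socket.on('frame', (base64) => {
    // Tag the frame with the sender's socket id so it can be shown above the code.
    socket.broadcast.emit('frame', { id: socket.id, base64 });
  });
});

server.listen(8080);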



ezgif.com-video-to-gif.gif


Live Web: Collective Drawing in Red and Black

I modified our example from class a little bit … and added a button to change the collective drawing color: it is either red or black, and everybody is forced to adapt to the new color but can choose the canvas they want to work on. As the canvas is split between a red and a black background, the “pen” can be used either as an obscurer (black on black / red on red) or as a highlighter (red on black / black on red). The results look strangely beautiful, as the dots seem to have depth or resemble a point cloud:
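Stripped down, the shared color toggle looks roughly like this (event names are stand-ins for the class example’s):

// Client-side sketch of the collective color toggle and shared dots.
const socket = io();
const ctx = document.getElementById('drawCanvas').getContext('2d');
let currentColor = 'red'; // either 'red' or 'black', shared by everyone

// The button flips the collective drawing color for all connected users.
document.getElementById('toggleColor').addEventListener('click', () => {
  socket.emit('color', currentColor === 'red' ? 'black' : 'red');
});

// Everybody adopts the new color as soon as the server relays it.
socket.on('color', (color) => { currentColor = color; });

// Each mouse move sends a dot; incoming dots are drawn in the sender's color.
document.getElementById('drawCanvas').addEventListener('mousemove', (e) => {
  socket.emit('dot', { x: e.offsetX, y: e.offsetY, color: currentColor });
});

socket.on('dot', ({ x, y, color }) => {
  ctx.fillStyle = color;
  ctx.fillRect(x, y, 3, 3);
});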

Screen Shot 2018-10-02 at 4.57.17 AM.png

… and here is some code. And more random drawings:

Screen Shot 2018-10-02 at 11.26.58 AM.png

Understanding Networks: Traceroute

assignment

Traceroute at least three of the sites you regularly visit. Do it from all of the locations you regularly connect from. Save the trace in a file, and make a map of the routes, indicating the network providers that show up every time. Identify who the major tier 1 providers are in your life. Identify the networks your traffic crosses in the course of your daily life. Figure out whose hands the data about your life goes through on a regular basis. Look for patterns from your network-browsing habits through analysis and graphing of your network traces.

setup

I traced four websites that I frequently visit:

I chose two networks/locations for tracing that I spend most of my time at:

  • university wifi (+ mobile network comparison in same location), at Washington Square/Manhattan

  • home wifi (+ mobile network comparison in same location) in Brooklyn

I wrote two custom tracerouting tools to automate the process as far as possible:

  • tracer.py: combined traceroute and ISP lookup/GeoIP for a given list of URLs, returns a JSON text file

  • mapper.ipynb: matplotlib-based tool using Basemap to create traceroute maps from the JSON text file

server footprint from home

 
 

I live in Brooklyn. This is where I spend the few hours that I am not on the floor at ITP.

Looking at the footprints, I observed that connecting to my homepage (which I registered in Europe with Squarespace) from my home wifi causes the trace to bounce back and forth between the EU and the US - until finally reaching the US - and the routes my signal travels on are owned by Time Warner Cable and Akamai Technologies:

The route looks similar when using a mobile connection, but with different providers: Zayo Bandwidth and Level 3 Communications stick out, and Akamai Technologies comes up again as the ISP in Europe:

Looking at another footprint (nts.live), Time Warner Cable dominates the servers in the US:

The same trace on mobile again emphasizes Zayo Bandwidth for mobile networks:

Connecting to NYU and Stackoverflow does not yield many interesting results; both stay more or less close to or entirely in New York. The only strange behavior comes from trying to traceroute Stackoverflow on mobile - it does not allow a traceroute, and the signal gets stuck in a local private network (in all locations).

Here is the connection to Stackoverflow via wifi, which travels to Denver and then comes back to New York:

 

server footprint from ITP/NYU

 

ITP/NYU is situated at Washington Square in Manhattan. Here I spend most of my time, logged into NYU wifi or the mobile network.

Comparing the traces of the two wifi networks (home and university), the paths through the servers look different for my homepage - in the US the network provider is not clearly identifiable, and NYU provides the last visible trace before it goes to Europe, where GTT Communications comes up as a provider:

The trace for nts.live shows a connection to Europe that does not appear in the map from my home wifi; the network provider that pops up is Hibernia Networks Netherlands. Why? It might have to do with NTS having multiple IP addresses - maybe this server was easier to reach from NYU. Maybe. I can only speculate at the moment. Anyway, here is the map, accessed from the NYU wifi:

On mobile the connection stays in the US (and again in the hands of Zayo Bandwidth as ISP):

Takeaways

To make it short - this is all very interesting! My online life is in the hands of very few network providers; they vary depending on which type of network I am connected to, and the routes sometimes vary substantially - detours to Europe are not always explainable to me. I really enjoyed understanding much more of this physical layer of the internet and how every request from a laptop or phone travels around the globe at - literally - light speed.

I thoroughly enjoyed building custom tools in Python for it and diving a little bit into data visualization with matplotlib and Basemap - although I encountered quite a few challenges along the way: nested dictionaries are great but need a logical setup, and building tools takes way more time than actually collecting and visualizing the data.

Let’s finish this blogpost with a screenshot of parts of the JSON data (a little bit obscured):

Digital Fabrication: Felt_Laser Explorations

Our assignment this week took us into the unknown lands of laser-cutting (at least unknown for us …): I teamed up with Kimberly Lin and we went to Mood Fabrics in the NY fashion district to buy different shades of grey felt in a thicker quality (around 1/8 inch).

 

We opted for grey felt because of its iconic role in sculpture and art installations of the 20th century: Robert Morris, Joseph Beuys and Bianca Pratorious were our main inspirations for the choice of material. We were curious how the material would cut with the laser - and how the digital process could alter the artistic output and execution. So we decided to play!

IMG_2831.JPG
IMG_6596.JPG
 

We used Vectorworks to create simple slots and lines for our first prototype. After copying it into Adobe Illustrator we started laser-cutting - and to our surprise the felt turned out to be a great choice: it has a certain sturdiness and structural integrity that helps maintain the form, while there is still a lot of movement and room for re-shaping the object. And the laser cuts it fast, efficiently and precisely. We did three rounds of cutting (to prevent the material from burning too much) at conservative settings: 500 Hz frequency, 30 speed and 10 power worked well on the 60-watt laser.

We prototyped different shapes and arrangements of the fabric pieces as we plan to build a large-scale kinetic sculpture later in the semester.

IMG_6624.jpg
IMG_6620.jpg
IMG_6690.jpg
IMG_6677.jpg

After playing with different combinations we decided to keep a self-standing organic structure for now and later experimented with how it would behave in motion.

Digital Fabrication: 2-D Object Drawing

I chose the Teenage Engineering OP-1 synthesizer for my Vectorworks object drawing assignment.

IMG_2818.JPG
 

It was challenging to get all the measurements correctly into the drawing; after a while I became a bit detail-obsessed, as Vectorworks gives you the opportunity to be very exact:

Screen Shot 2018-09-19 at 3.26.36 AM.png

It took me quite a while to get used to the basic tools in Vectorworks - but it was great fun, very meditative.

Here is my measured drawing:

op1_vectorworks1.png
 

And because it is such a beautiful object, here it is without measurements:

Live_Web: Censor_Chat

I collaborated with Aazalea Vaseghi on this chat project: a self-correcting/censoring chat application that only allows positive sentiments. It runs sentiment analysis on each user input and adjusts the darkness of the message background accordingly. When a message is too negative, it gets censored with an entirely black background so that the negative message disappears. Code is on github.
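The mapping from sentiment score to background color boils down to something like this sketch (the npm “sentiment” package here stands in for whatever analysis we actually wired up; the thresholds are illustrative):

// Sketch: score a message and map negativity to background darkness.
const Sentiment = require('sentiment');
const sentiment = new Sentiment();

function backgroundFor(message) {
  const { comparative } = sentiment.analyze(message); // avg word score, roughly -5..5
  if (comparative <= -0.6) return '#000000'; // too negative: fully censored
  const darkness = Math.min(Math.max(-comparative, 0), 0.6) / 0.6; // 0..1
  const value = Math.round(255 * (1 - darkness));
  return `rgb(${value}, ${value}, ${value})`;
}

console.log(backgroundFor('what a lovely evening'));     // stays light
console.log(backgroundFor('I hate this terrible chat')); // black, message disappears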

 


Summer Internship 2018 Recap: Havas New York

For my summer internship 2018 I worked at the advertising agency Havas New York as an AI researcher in the creative team for a corporate client: for two and a half months I explored machine learning techniques for rapid prototyping with existing neural networks.

As this was client-based work, I will add the results to my portfolio once they are publicly available.

As part of my internship I also worked as a creative technologist on an intern project for a non-profit.

I thoroughly enjoyed my work at Havas New York; it was a great learning experience that helped me clarify my focus for the next year in school: I want to explore machine learning on a deeper level, clearly understand the math behind it, and start building my own models and networks.

I am currently enrolled in the Udacity course “AI Programming in Python”, which covers the math basics and combines them with the necessary Python libraries to build networks from scratch at a lower programming level. I hope that this course will reinforce my existing Python knowledge, help me understand neural networks from a mathematical perspective and give me a solid grasp of the model mechanics when using higher-level APIs in TensorFlow or PyTorch.

Thanks Havas New York for giving me such a great opportunity to use my machine learning skills in an agency landscape and especially Ali Madad, Marc Maleh, Marc Blanchard, Joseph Delhommer and Nick Elliott for their guidance and mentorship.