PCOMP FINAL: KNOB @ ITP WINTER SHOW 2017

After a few weeks of fabrication, coding and iteration, our team (Anthony, Brandon and me) showed our KNOB project to the public at the ITP Winter Show 2017. Here is a little update on the latest developments and iterations since our submission to the pcomp finals.

code / pcomp

The code/pcomp setup stayed more or less the same for this last iteration: We used an ESP8266 nodeMCU wifi module in combination with a Raspberry Pi inside the knob, which picked up the data from the rotary encoder at the bottom of the metal pole and counted the rotations of the knob. The rotations were sent to a remote server (in our case a Heroku server) via a Node script (which proved to be easier than Python for this use). On the server side we mapped the rotations so that 1000 rotations correspond to full brightness of the LED. The PWM data was sent to another Pi inside a white pedestal with the LED on top of it; this Pi controlled the brightness directly. For all remote connections we used itp-sandbox as the wifi network.
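For illustration, here is a minimal sketch of what the server-side mapping could look like in Node - not our actual Heroku code, just the idea of clamping the accumulated rotations and scaling them to a duty cycle:

// minimal sketch (assumed names, not the actual server code):
// map accumulated knob rotations to an LED duty cycle of 0-100 %
const MAX_ROTATIONS = 1000;   // full brightness after 1000 rotations

function rotationsToPwm(count) {
  const clamped = Math.max(0, Math.min(count, MAX_ROTATIONS));
  return Math.round((clamped / MAX_ROTATIONS) * 100);
}

// example: after 250 rotations the LED runs at 25 % brightness
console.log(rotationsToPwm(250)); // -> 25

The pedestal Pi then only has to poll this value and write it to its PWM pin.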

fabrication

For the fabrication part of the project we iterated mostly on the outer materials of the knob and the resistance of the rotational movement of the bearings. After priming it with white primer we initially wanted to paint it in rose gold - as a nod to Jeff Koons and Apple product fetishism. Then we realized that this might be too distracting and cynical a take on a generally playful concept and decided to use felt as the main outer material of the knob. The audience should want to touch it and immediately feel comfortable with the soft material. It proved to be a good choice - the feedback from the audience was great, everybody liked the softness and feel of the felt. We chose black as it seemed to show the wear from many hands less quickly than grey or a brighter color. It also accented the general black and white color scheme of the arrangement.

IMG_2253.JPG
IMG_2307.JPG
IMG_2256.JPG
IMG_2259.JPG

For the LED we built a wooden pedestal to house the raspberry pi and battery pack and painted it white.

IMG_2249.JPG

 

To add more resistance to the movement we added 4 casters to the bottom of the rotating part of the knob and padded the wooden rail in the base with thick foam. The casters rolled on the foam, and the compression of the foam produced a slight resistance for a single caster. Multiplied by four, the resistance was big enough to keep the rotating part from spinning freely when a lot of force was applied. We were initially worried about the creaking noise of one caster, but during the show this was irrelevant as the general noise of the audience covered it.

IMG_2250.JPG
IMG_2254.JPG

concept & show feedback

We changed our concept fundamentally: On Sunday, the first day of the show, we turned the LED on to full brightness with 1000 rotations clockwise on the knob. On Monday, the second show day, we reversed this tedious process and turned it slowly off with 1000 rotations counterclockwise. 

On a separate iPad screen running a webpage the audience could keep track of the number of rotations. Why a thousand? We just felt it was the right number. The direction of the rotation for the day was printed on a simple black foamboard suspended from the ceiling - it was meant to look as simple, intuitive and familiar as a daily menu in a restaurant.
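The counter page itself can stay very simple - roughly along these lines (a hedged sketch; the /rotations endpoint and the element id are placeholders, not our actual route):

// hypothetical client-side sketch for the rotation counter page
const display = document.getElementById('count');   // assumed element id

async function updateCount() {
  try {
    const res = await fetch('/rotations');           // assumed endpoint, e.g. { "count": 412 }
    const data = await res.json();
    display.textContent = data.count;
  } catch (err) {
    console.error('could not reach the server', err);
  }
}

setInterval(updateCount, 1000);  // poll once per second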

IMG_2300.JPG

We felt that this scaling of the interaction itself was a natural fit for the scaling of the knob: not only the physical scale changed, but also the procedural one. This focused the audience's perception more strongly on the core of the concept: to enjoy an interaction for the sake of the interaction itself - to invoke a meditative and communal state of action, as the knob is usually turned by a group of people.

In the show this iteration was well received - not only because of its conceptually balanced approach towards the timing of the reward of the interaction; mostly the audience described a feeling of comfort in talking to strangers while performing a repetitive manual task together. One group compared the experience to a fidget spinner for groups that could be used for brainstorming activities in a boardroom. Another participant recounted childhood memories of sorting peas together.

While a few participants, mostly children, tried to raise the number of rotations - and therefore also watched the iPad showing the current count - the LED as the main output was generally perceived as a minor part of the process of creating a communal experience.

Our installation definitely hit an important aspect of technology: an interaction can create a meaningful and satisfying experience in itself when it helps create a sense of belonging and community - even without instant gratification or a short-term purpose. [link to video]

We decided not to use AR as an output because the user would still need a device. This shifts the focus of the audience to the output and distracts from the physicality of the object and the interaction with it - something we wanted to avoid. AR is still conceptually stronger as an output, as it is in itself non-existent and weightless. It was a difficult decision, but in the end a simple output like an LED and the exaggerated scale of the interaction over 1000 rotations proved to be stronger in the context of the winter show and its abundance of interactive pieces in a small space.

We felt very honored to be part of the show and the ITP community. Thanks to the whole team behind it, especially the curators Gabe and Mimi. And big thanks to our PComp and IFab professors Jeff Feddersen and Ben Light for the great guidance and feedback during the whole process.

Here are a few impressions from the show:

PCOMP FINAL: KNOB_AR / SIMPLE UNREALITIES SUMMARY

 
knob_pyramid.png
 

concept

A giant physical knob controls a tiny virtual light sculpture in AR. Both are separated from each other; the audience has to make an intellectual connection between the two.

keywords

weight, presence, physicality, virtuality, space, augmentation, contrast, minimalism, reduction, haptic interaction, visual perception

technical setup

glossolalia_schematics (1) (1).png

code

We used a GitHub repo to share our code - I had never used it for sharing before, only as a backup for my own code. It worked pretty well for collaborative coding. The code was written in C++ (Arduino) and JavaScript (NodeJS, threeJS, ARJS).

user feedback

The term "user" might be misleading, we think more of an audience rather than users for our project. Key takeaways from two play-tests:

  • audience likes to turn the piece
  • sheer physicality/scale of the input object is regarded as a plus
  • piece is turning very fast - this can dilute the original concept of an oversized knob
  • audience does not make an immediate connection between AR-object and physical object
  • AR object is very generic, no clear formal connection to the physical object
  • separation of the two objects into different spaces is necessary to avoid overshadowing one piece with the other
  • if the audience is initially confused and only later understands the connection between the two objects, the whole piece is stronger, as the impact of delayed gratification is more powerful
  • a physical output is generally welcomed as well, though it is seen more as an ironic statement than a concept - AR as output fits better conceptually but is formally more difficult to execute and understand (AR is itself so new that its formal language is not set yet)
  • the audience initially expects a greater variety in the output and understands the contrast between the two objects only after the concept is explained

process

key takeaways

  1. Wood is an organic material, it does not compute.
  2. A change in scale affects pre- and post-production of a piece. They mutually influence each other.
  3. Stick to the original concept, keep iterations in mind for future projects.
  4. Collaboration works best when everybody is on the same page and knows what to do.
  5. Browser based AR is still in its early stages but already very promising.

Our project was at a bigger scale than we were used to and involved a lot of fabrication. We learned a lot in terms of project management, sourcing materials and fabrication (e.g. using the CNC router). The tiny and fragile PCOMP parts had to be integrated into the large-scale mechanics of the knob and read its movements correctly. Server-side programming and the JS frontend took a lot less time than fabrication.

Our goal was to create an interactive installation piece for a gallery environment at a larger scale that raises questions about physicality, technology and our joy of play. 

The collaboration proved to be the best way to create such a piece in a limited timeframe. We plan to exhibit the piece at the ITP Winter Show 2017 - there are still a few improvements to make for this event: re-do the CNC cuts in a hexagon shape, re-engineer the outer layer of the knob in wood, create a backup mechanism for the rotary encoder in case of mechanical failure, slow down the movement of the knob and improve the AR model.

 

 

 

PCOMP FINAL: KNOB-AR

concept

Users turn a giant knob on the 4th floor of ITP and control the brightness of a virtual LED in AR positioned on a physical column in the Tisch Lounge. 

electronics

Inside the knob will be a rotary encoder to track the exact position of the element. These values are read and continuously sent to a remote AWS server from a nodeMCU 8266 wifi module inside the knob via serial/python/websockets. On the server they are stored in a txt file and read into a publicly accessible HTML file serving the AR LED in an AR version of three.js. We will have to test how much of a delay this communication introduces. A public server seems to be the only way for the audience to access the AR site from outside the local NYU network. The delay might still be acceptable as the knob and the AR column are situated on different floors. With iOS 11, AR on mobile is now possible on all platforms using getUserMedia/camera access.
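To give an idea of the output side, here is a rough three.js sketch (without the AR.js marker setup) that reads the stored value and uses it as the emissive intensity of a virtual LED - the endpoint and the scaling constant are assumptions, not the final AWS setup:

// hedged sketch of the virtual-LED side in plain three.js
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 2;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// the "LED": a small sphere whose emissive intensity stands in for brightness
const led = new THREE.Mesh(
  new THREE.SphereGeometry(0.2, 32, 32),
  new THREE.MeshStandardMaterial({ color: 0x222222, emissive: 0xffffaa })
);
scene.add(led);
scene.add(new THREE.AmbientLight(0xffffff, 0.2));

async function updateBrightness() {
  // assumes the server exposes the stored encoder value as plain text
  const res = await fetch('/rotations.txt');           // assumed endpoint
  const value = parseInt(await res.text(), 10) || 0;
  led.material.emissiveIntensity = Math.min(value / 1000, 1); // 1000 = assumed max
}
setInterval(updateBrightness, 500);

renderer.setAnimationLoop(() => renderer.render(scene, camera));

In the AR.js version the sphere would simply be attached to the marker root instead of the bare scene.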

Here is a quick test with the GitHub example library, a target displayed on a screen and an iPhone running iOS 11:

IMG_2101.PNG

 

mechanics

We found a 3ft long rotating steel arm with bearings and attachment ring in the shop junk-shelf. We will use it as the core rotating element inside the knob. 

IMG_8318.JPG
IMG_8319.JPG

This rotating element will be mounted on a combination of wooden (plywood) bases that will stabilize the knob when it is being moved. The weight of the knob will rest on wheels running in a rail/channel in the wide base; the channel features a wave structure so that the 'click' of the knob feels more accurate. Inside the knob we will build a structure with 2 solid wooden rings that match the diameter of the knob and are attached to the rotating element. On the outside we will cover the knob with lightweight wooden planks or cardboard tubes.

IMG_2103.JPG

fabrication

We worked on the wooden base for the knob/metal pole using multi-layered plywood to keep the rotary encoder within the same wooden element as the pole - this prevents damage to the electronics/mechanics if the knob is pushed or tilted to the side.

IMG_2116.JPG
IMG_2114.JPG

collaboration

In collaboration with Brandon Newberg & Anthony Bui.

BOM

Screenshot from 2017-11-20 11-13-35.png

 

 

timeline

  • Tue. 21 Nov. 2017: tech communication finished, fabrication parts ordered

work on automation + fabrication (knob), work on AR script

  • Tue. 28 Nov. 2017: all communications work, knob basic fabrication finished

work on fabrication finish, column, code cleaning, documentation, 

  • Tue. 5 Dec. 2017: final presentation

PCOMP FINAL UPDATE: AN INSIGHT, A FINISHED PROJECT & NEW IDEAS

An Insight

After play-testing the original concept, my focus for my pcomp final has changed a lot - and the project looks entirely different.

Two things came to my mind after the feedback session:

  • Is this a project that engages the audience in a direct interaction?
  • Is this a project that engages me in direct interaction with somebody else while building it? 

The answer in both cases tends towards a "no" - I am doing this pretty much solo, and there is no direct interaction with the audience. While this is still a PComp project, I thought about switching the topic completely: I would like it to be interactive and to build it in collaboration with somebody else. I think I can learn more from this experience for my studies than from flying solo with a non-interactive piece - no matter how great it is.

A Finished Project

In my cuneiform project I realized that my original plan to laser-cut the neural-network-generated tablets was conceptually the strongest: The journey from human-made cuneiform on clay tablets would be completed with the creation of a contemporary physical object that is also an artifact of the digital, machine-generated cuneiform. Both versions of the cuneiform are now physical manifestations of dreams - one human, the other artificial. Cuneiform was never more alive than now.

Finishing the project with this physical representation of the output of a neural network made much more sense than trying to squeeze in some human interaction / intervention. 

Here are a few results:

IMG_2090.JPG
IMG_2091.JPG

New Ideas

I spoke to my classmates Anthony and Brandon about my latest insights and offered to collaborate on a project. We came up with the idea of an oversized (bigger than human) knob situated on the 4th floor adjusting the brightness of a tiny LED installed in the Tisch window downstairs. The audience member has to use her full body to turn the knob. While this interaction requires a lot of effort, the result is minuscule: The interaction itself becomes the main focus. It’s a classic pcomp interaction in an entirely different setting. We would use serial via local wifi for the communication. Brandon would help with the fabrication part, as he is already involved in another project; the core team would be Anthony and me.

turn_knob_led.png

 

While thinking about this more over the past few days, I tweaked the idea a bit and came up with a possible modification / iteration:

What if the big knob were a piece of furniture for a few people to sit on? And what if the output were not an LED but a live audio broadcast of the conversation in this room - the knob would regulate how “public” it is by mixing in more or less audio noise (text-to-speech of the latest tweets)? The focus of the audience would balance between the object itself (a giant knob to sit/lounge on) and the question of how public we want to make our conversations. The Twitter “noise” should highlight the fragmentation of conversations online - if we make something public, how public is it when everybody is public? By focusing on audio the audience would not be distracted by playing with a video feed; the visual stimulus comes from the physical object of the giant furniture knob instead. We would use WebRTC for streaming the audio on a webpage.
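A rough sketch of the mixing idea with the Web Audio API - white noise stands in for the text-to-speech tweets here, and the knob value is assumed to arrive as a number between 0 and 1:

// crossfade between the live conversation and a noise layer
const ctx = new AudioContext();

async function startMix() {          // call from a user gesture so the AudioContext can start
  // live conversation from the microphone
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const voice = ctx.createMediaStreamSource(stream);
  const voiceGain = ctx.createGain();

  // simple looping white-noise buffer as the "public noise" layer (stand-in for TTS tweets)
  const buffer = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
  const data = buffer.getChannelData(0);
  for (let i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
  const noise = ctx.createBufferSource();
  noise.buffer = buffer;
  noise.loop = true;
  const noiseGain = ctx.createGain();

  voice.connect(voiceGain).connect(ctx.destination);
  noise.connect(noiseGain).connect(ctx.destination);
  noise.start();

  // knobValue = 1 -> fully public (all voice), 0 -> fully private (all noise)
  return function setKnob(knobValue) {
    voiceGain.gain.value = knobValue;
    noiseGain.gain.value = 1 - knobValue;
  };
}

The WebRTC part (actually broadcasting the mixed signal) would sit on top of this.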

turn_knob.png

PCOMP & ICM: FINAL PROJECT OUTLINE

update:


laser cut tablet of neural network generated cuneiform

IMG_2091.JPG
 
glossolalia setup.png
 

title

glossolalia / speaking in tongues

question

Are neural networks our new gods?

concept 

My installation piece aims to create a techno-spiritual experience with a neural network trained on ancient cuneiform writing for an exhibition audience in an art context. 

context

The title 'glossolalia / speaking in tongues' refers to a phenomenon where people seem to speak in a language unknown to them - mostly in a spiritual context. In the art piece both "speaker" (machine) and "recipient" are unaware of the language of their communication: The unconscious machine is constantly dreaming up new pieces of cuneiform tablets that the audience cannot translate. Two things are worth mentioning: First, after 3000 years, one of the oldest forms of human writing (i.e. the encoding of thoughts) becomes "alive" again with the help of a neural network. Second, it is difficult to evaluate how accurate the new cuneiform is - only a few scholars can fully decode and translate cuneiform today.


original cuneiform tablet (wikimedia)


part of original Sumerian cuneiform tablet (paper impression)


cuneiform generated by neural network

Observing the machine creating these formerly human artifacts in its "deep dream" is in itself a spiritual experience: It is, in a way, a time machine. By picking up on the thoughts of the existing cuneiform writing corpus, the machine breathes new life into the culture of Sumerian times. The moment the machine finished training on about 20,000 tablets (paper impressions) and dreamed up its first new tablet, the past 2000-3000 year hiatus became irrelevant - for the neural network, old Babylon is the only world that exists.

In the installation piece, the audience gets the opportunity to observe the neural network the moment they kneel down on a small bench and look at the top of an acrylic pyramid. This activates the transmission of generated images from the network in the cloud to the audience, hovering as an extruded hologram over the pyramid.

cunei_bstract [Recovered].png

side, above: digital abstractions of generated cuneiform

The audience can pick up headphones with ambient sounds that intensify the experience (optional).

It is important to mention that the network is not activated by the audience: The audience gets the opportunity to observe its constant and ongoing dream. The network is not a slave of the audience; it is regarded as a new form of entity in itself. When an audience member gets up from the bench, the transmission stops - the spiritual moment is over.

 

BOM

-- "H" for "have", "G" for "get" --

PCOMP parts:

  • raspberry pi or nvidia jetson tx2 -- G
  • if raspberry pi: phat DAC for audio out -- G
  • 8266 wifi module -- G
  • 2 x buttons / switches -- H
  • headphones -- G
  • audio cables -- G
  • screen -- G
  • hdmi cable -- H
  • local wifi-network (phone) -- H

ICM parts: 

  • GPU cloud server trained on 20 000 cuneiform tablets (scientific drawings, monochrome) -- H
  • processing sketch -- H

Fabrication parts:

  • wood for kneeling bench -- G
  • wood for column -- G
  • acrylic for pyramid -- H
  • wooden stick for headphone holder -- G
 

system diagram

glossolalia_schematics.png

 

timeline

  • Tue. 7 Nov. 2017: playtesting with paper prototype / user feedback

work on server-local comm, processing sketch, order/buy fab parts

  • Tue. 14 Nov. 2017: server-local comm works, all fabrication parts received

work on input-local comm, processing sketch, fabrication (peppers ghost)

  • Tue. 21 Nov. 2017: processing sketch finished, input-local comm works

work on automation, fabrication (bench, column)

  • Tue. 28 Nov. 2017: all comms work, basic fabrication finished

work on fabrication finish, code cleaning, documentation, 

  • Tue. 5 Dec. 2017: final presentation

 

 

 

 

PCOMP & ICM: IDEAS FOR FINALS

I have a few things in PCOMP and ICM I am playing with at the moment and would like to take further:

- KI: virtual bonsai (open in safari, chrome wrecks the script ...) living on the screen, needs to be watered with real water to grow

Screen Shot 2017-10-30 at 10.41.53 PM.png

 

- KOI: a virtual koi living on the screen (somehow a theme here); its pond can be made bigger with more screens put together (using openCV for screen location detection?) - it will cross the screens. OR the audience can open the sketch on their phones, put their phones on special floats and let their virtual kois swim in a real pond (koi-phone-swarm). For the latter version I would need to control a gentle current in the water to keep the phone-kois floating.

- ELEVATOR PITCH: 2 X 8266 modules with gyro-sensors located in the 2 ITP elevators create music together (maybe seeding that into tensorflow magenta for more variation), pitch is matched to the floor number.

IMG_2075.JPG

I am experimenting with autonomous cuneiform generation with a DCGAN as well, I would need some PCOMP for automation and physical representation of the cuneiform (ideally automatic laser engraving into acrylic). 

cunei123.png

One of my fields of interest at the moment is the blockchain protocol. I am looking for physical implementations regarding hashing/mining/proof of work - maybe this could be an angle for a PCOMP project. 

PCOMP: MIDTERM PART II

continued from first blogpost

The plan this week was to ensure everything works in sequence and to stitch all the parts of the project together. For this we worked on the following:

-> Serially communicate spinning wheel result to p5

 

-> Test solenoid valve using Arduino.

The basic circuit:

IMG_3754.JPG

-> Ensure water drops only after the artist is selected on the wheel.

 

-> Run animation and sound on p5 after water drops on the screen.

After the first round of testing with fellow students Isa, Simon and Terrick, we realized that users get lost after spinning the wheel, but understand the concept once we tell them.

IMG_8628.JPG

We then decided to narrate the story before the user starts interacting with the piece. We had a voice-over that would start once the headphones were lifted off the stand. We first used a proximity sensor to detect the headphones, then switched to a light sensor because the results were more accurate. This was the end of constructing the whole piece - and time to get feedback from users. We got students Hadar, Amythab, Daniel and Akmyrat to test this second prototype. Inputs from user testing:

  1. Users did not comprehend why there was a water drop. This means that they don’t pay much attention to the narration.

  2. They love playing with the spinning wheel and would keep spinning it.

  3. A few of them would actually put up their hand for the drop and some wouldn’t. This again meant that the instructions weren’t clear.

  4. Users DO NOT like to be instructed.

  5. Lighting affected their mood and hence the message that was intended.

  6. Users reported that they enjoyed mixing the music with the wheel, the "haunted" versions we produced especially for our installation piece made some of them feel uneasy and even scared. Others felt positively influenced by the music.

  7. They liked watching the water drop and even playing with it on the glass, but some did not fully understand the connection to the rest of the installation - although the voice narration explained it to them. 

  8. Some were overwhelmed by the sensory inputs and stimulations - they wished less was happening and they could focus more on music, playing with the wheel and watching the water as they felt strongest for these parts.

Keeping all of this in mind, we realized that displaying the title of the piece with a little description of the story of the drop and a hint at its spiritual roots in India can have a larger impact than narration just at the beginning. Users do not have to hold the drop in their hand. We also realized that the drop on the screen actually works out, because it looks like the drop is carrying the artist from heaven (spinning wheel) to earth (screen).

We then modified our final midterm piece based on these inputs and thoughts. 

Interaction:

  • wheel first touch -> water drop, plays one track, displays visuals
  • wheel further touches -> layering more tracks and water drops on screen
  • visual communication: p5 sketch
  • audio: p5 sketch

The user is acting and reacting to audiovisual stimuli and can create his own listening and viewing experience; the water drop serves as a unique, non-interactive physical stimulation.

IMG_2040.JPG

Our Arduino code:

/*     Arduino Rotary Encoder Tutorial
 *      
 *  by Dejan Nedelkovski, www.HowToMechatronics.com
 *  
 */
 
 #define outputA 2
 #define outputB 3
 #define reset_button 5

 
 int counterLast = 1;
 int wait_loops = 0;
 int counter = 0;
 int star = 0;
 int aState;
 int aLastState; 
 int ValveOpen = 700;
 int ValvePause = 1000;
 int milk = 0;
 int lifted = 0;
 int audio_start = 0;
 
 const int SolenoidPin = 4;  // Valve control pin; connects to a transistor and a 9V battery

 int sensorPin = A5;
 int sensorValue = 0;


 
 void setup() { 
   
   pinMode (outputA,INPUT);
   pinMode (outputB,INPUT);
   pinMode (reset_button, INPUT);
   pinMode(SolenoidPin, OUTPUT);  // Set the solenoid pin as an output

   
   Serial.begin (115200);
   // Reads the initial state of the outputA
   aLastState = digitalRead(outputA);   
 } 
 void loop() {
  sensorValue = analogRead (sensorPin);
  //delay(10);
  //Serial.write(sensorValue);
  
   if (sensorValue > 800){
      lifted = 0;
      audio_start = 0;
      //Serial.write(sensorValue);
      delay (100);
    }
    
   if (sensorValue < 800){
   lifted = 1;
   }
   if (lifted == 1){
     if(audio_start == 0){
     star = 6;
     Serial.write(star);
     audio_start = 1;
     delay (100);
     }
   }
   aState = digitalRead(outputA); // Reads the "current" state of the outputA
   // If the previous and the current state of the outputA are different, a pulse has occurred
   if (aState != aLastState){ 
    wait_loops = 0;    
     // If the outputB state is different to the outputA state, that means the encoder is rotating clockwise
     if (digitalRead(outputB) != aState) { 
       counter ++;
     } else {
       counter --;
     }
     if (counter == 40 || counter == - 40){
      counter = 0;
     }
     if (counter > -6 && counter <= 4){
      star = 1;
      //Serial.println("Minnie");
     }
     if (counter > 4 && counter <= 14){
      star = 4;
      //Serial.println("John");
     }
     if (counter > 14 && counter <= 24){
      star = 3;
      //Serial.println("Amy");
     }
     if (counter > 24 && counter <=34){
      star = 2;
      //Serial.println("David");
     }
          if (counter < -6 && counter >= - 16){
      star = 2;
      //Serial.println("David");
     }
     if (counter < - 16 && counter >= - 26){
      star = 3;
      //Serial.println("Amy");
     }
     if (counter < - 26 && counter >= - 36){
      star = 4;
      //Serial.println("John");
     }
     if (counter > 34 && counter < 40){
      star = 1;
      //Serial.println("Minnie");
     }
     if (counter < - 36 && counter > -40){
      star = 1;
      //Serial.println("Minnie");
     }
     
     //Serial.print("Position: ");
     //Serial.println(counter);
     Serial.write(star);
     milk = 1;
     delay(1);
   }
   aLastState = aState; // Updates the previous state of the outputA with the current state
//   if (digitalRead(reset_button) == HIGH){
//   }
//   else {
//   }
   wait_loops = wait_loops + 1;
   if ( milk == 1){
    if (wait_loops > 100000){
      Serial.write(5);
       digitalWrite(SolenoidPin, HIGH); // makes the valve open    
       delay(ValveOpen); 
       digitalWrite(SolenoidPin, LOW);
       delay(ValvePause);
       wait_loops = 0;     
       milk = milk + 1;
    }
    }
   }
 

Our p5 code:

var portName = '/dev/cu.usbmodem1411';  // fill in your serial port name here
var options = { baudrate: 115200}; // change the data rate to whatever you wish

var serial;          // variable to hold an instance of the serialport library
var inData;                             // for incoming serial data

var star = 8;

let star_select;

var sample;

var inData_last;

var wait;

var frame_count = 0;

var plays = 0;

let john;
let amy;
let david;
let minnie;

function preload(){
  soundFormats('mp3', 'ogg');
  sample0 = loadSound('barack_vocal.mp3');
  sample1 = loadSound('sounds/minnie_vocal.mp3');
  sample2 = loadSound('sounds/john_vocal.mp3');
  sample3 = loadSound('sounds/amy_vocal.mp3');
  sample4 = loadSound('sounds/bowie_vocal.mp3');
  sample5 = loadSound('barack_vocal2.mp3');
}

function setup() {
  imageMode(CENTER);
  john = loadImage('pics/johncoltrane.png');
  amy = loadImage('pics/amyw.png');
  david = loadImage('pics/davidbowie.png');
  minnie = loadImage('pics/minnier.png');
  title = loadImage('pics/title.png');
  
  
  createCanvas(windowWidth, windowHeight);
  serial = new p5.SerialPort();       // make a new instance of the serialport library
  serial.on('list', print);  // set a callback function for the serialport list event
  serial.on('connected', serverConnected); // callback for connecting to the server
  serial.on('open', portOpen);        // callback for the port opening
  serial.on('data', serialEvent);     // callback for when new data arrives
  serial.on('error', serialError);    // callback for errors
  serial.on('close', portClose);      // callback for the port closing
 
  serial.list();                      // list the serial ports
  serial.open(portName, options);
}

function serverConnected() {
  print('connected to server.');
}
 
function portOpen() {
  print('the serial port opened.')
}
 
function serialEvent() {
 inData = Number(serial.read());
}
 
function serialError(err) {
  print('Something went wrong with the serial port. ' + err);
}
 
function portClose() {
  print('The serial port closed.');
}

function draw() {
  frame_count ++;
  background(0);
  fill(255);
  //textSize(60);
  if (star == 8){
  push();
  translate(windowWidth / 2, windowHeight / 2);
  scale(0.23);
  image(title, 0, 0)
  pop();
  }
  //text("sensor value: " + inData, 30, 30);
  inData_last = inData;
  
  if (inData == 1){
      //text("Star: Minnie", 30, 100);
    star = 1;
    star_select = "minnie";
    frame_count = 0;
  }
  if (inData == 2){
      //text("Star: John", 30, 100);
    star = 2;
    star_select = "john";
    frame_count = 0;
  }
  if  (inData == 3){
      //text("Star: Amy", 30, 100);
    star = 3;
    star_select = "amy";
    frame_count = 0;
  }
  if (inData == 4){
      //text("Star: David", 30, 100);
    star = 4;
    star_select = "david";
    frame_count = 0;
  }
  if (inData == 5){
  //text("drop", 30, 100);
  //frame_count = 0;
  }
  if (inData == 6){
    star = inData;
    //frame_count = 0;
    inData = "none"
    if (star == 6){
      sample0.play(6);
      star = "none";
      plays = 1;
    }
  }
 
  if (inData_last == inData){
      if (frame_count > 20){
      if (star == 1){
              sample1.play();
        //sample5.play(sample1.duration() + 2);
        print("wait");
            star = "none";
          }
      if (star == 2){
              sample2.play();
                //sample5.play(sample2.duration() + 2);
        print("wait");
            star = "none";
          }
      if (star == 3){
              sample3.play();
                //sample5.play(sample3.duration() + 2);
        print("wait");
            star = "none";
          }
      if (star == 4){
        sample4.play();
                //sample5.play(sample4.duration() + 2);
        print("wait");
            star = "none";
          }
        if (star_select == "minnie" && sample1.isPlaying()){
        //push();
        translate(windowWidth / 2, windowHeight / 2);
        //scale(1.0)
        scale(0.4);
        rotate(radians(frameCount));
        image(minnie, 0, 0);
        //pop();
      }
      if (star_select == "john" && sample2.isPlaying()){
        //push();
        translate(windowWidth / 2, windowHeight / 2);
        scale(0.4);
        rotate(radians(frameCount));
        image(john, 0, 0);
        //pop();
      }
      if (star_select == "amy" && sample3.isPlaying()){
        //push();
        translate(windowWidth / 2, windowHeight / 2);
        scale(0.4);
        rotate(radians(frameCount));
        image(amy, 0, 0);
        //pop();
      }
      if (star_select == "david" && sample4.isPlaying()){
        //push();
        translate(windowWidth / 2, windowHeight / 2);
        scale(0.4);
        rotate(radians(frameCount));
        image(david, 0, 0);
        //pop();
      }
            if (sample1.isPlaying() != true && sample2.isPlaying() != true){
        if (sample3.isPlaying() != true && sample4.isPlaying() != true){
        push();
                translate(windowWidth / 2, windowHeight / 2);
        scale(0.23);
        image(title, 0, 0);
        pop();
        }
      }
  }
  }
}

function windowResized() {
  resizeCanvas(windowWidth, windowHeight);
}

 



 

IFAB 6: MOTORS, SERVOS, STEPPERS, MOVEMENT

Assignment

For the last assignment you will be mounting a motor/servo/stepper (one or more) to something as well as mounting something to that motor/servo/stepper.  It can be completely DIY, or off-the-shelf components, or a combination of the two.

The last assignment in IFAB was again a journey: I started off with a rough idea of working with magnets, cord and geometrical patterns and finished our 7-week introduction with a three-legged robo-dog that will mine Dogecoin.

But let's start with the first attempt to capture movement. The idea was to have a system of cords with magnets attached to them running behind an acrylic screen. The servos were supposed to move the magnets; the change in position would move magnetic material on the front of the acrylic plate. Here is a very rough first sketch:

IMG_1994.JPG

After trying to prototype everything with cardboard, servos, fishing wire and magnets, I realized that this form of prototyping reaches its limits with moving elements. I thought a lot about how I could move the magnets on the strings in the back without them interfering with each other - and came to the conclusion that this is not possible in our limited timeframe. Here are parts of the failed cardboard prototype.

IMG_1949.JPG

Unfortunately I had already bought all the parts for this project before prototyping - something to consider next time.

So I went back to a blank page and thought about my second idea - a robot arm. I like robotics and tried to come up with a fresh idea that would go a bit further than building an arm. Robots are always considered perfect machines - I set out to build a non-perfect robot, a robot that is obviously not just a mere slave for humans but has some imperfect characteristics: a three-legged robot dog. And as imperfection is a great source for creativity (and randomized operations), it should mine Bitcoin (or Dogecoin) while walking around. Its functionality should be considered secondary; the character should dominate the appearance of the robot.

IMG_1992.JPG

The three legs should be powered by three digital metal-gear servos that were once part of a mini-plotter. I wanted to keep the shape as minimal as possible - but still organic.

Now I had to tackle the mounting of the servos to the acrylic baseplate and of the legs to the servo horns. The horns were solid metal, so I knew they would not wear out quickly. I decided to try to laser-cut the horn shape directly into the upper part of each leg.

I took a picture of the spindle of the servo horn, hand-traced it in Illustrator and tried out different diameters after measuring the horn to achieve a tight fit.

Screen Shot 2017-10-17 at 12.42.50 PM.png
IMG_1954.JPG
Screen Shot 2017-10-17 at 2.56.50 PM.png

I was very surprised that, after laser-cutting in 3mm cast acrylic, the last opening fit perfectly onto the servo horn and seemed sturdy enough to withstand light movement. My hand-tracing was far from perfect or symmetric, but this did not seem to be a real issue.

IMG_1955.JPG

After that I tried to come up with a concept for mounting the servo between two acrylic sheets like a sandwich. Initially I wanted to cut acrylic clamps to hold the servos on the baseplate, then decided on zip ties. This proved to be only partly effective, as the servos still had a bit of play - I fixed this with glue in the end. I am not very happy with this mounting, as I could not really glue them in perfectly after I had attached the zip ties. I should have used glue and zip ties in combination from the beginning, but I wanted to avoid glue as much as possible for mounting the servos.

IMG_1993.JPG

After conceptualizing the mounting for the servo and the leg attachment to the servo, I created the shape of the dog in Illustrator and laser-cut all parts. I made one mistake: I copied one side of the dog to the other, not realizing that only one front side would need a leg opening - as it's a three-legged dog. Too late.

Screen Shot 2017-10-17 at 2.38.11 PM.png

Before assembling the parts, I cut the two acrylic baseplate "sandwiches" that would sit in the middle of the side panels on the bandsaw. Laser-cutting was not needed here, and the rougher surface from the bandsaw cut took the glue very well.

I used the laser-cutter again for the openings for the latches of the zip ties.

IMG_1966.JPG
IMG_1968.JPG

After that I put a spacer into the non-leg corner, attached the two side panels with acrylic glue and the help of clamps to the two baseplates, drilled holes for the node-mcu / ESP8266 controller that would control the servos and finally mounted the legs.

IMG_1971.JPG
IMG_1973.JPG
IMG_1976.JPG
IMG_1979.JPG
IMG_1980.JPG

It stood by itself, but I realized that any movement of the servos would tilt the construction towards the side without the leg. To prevent the dog from falling I built a little stand to stabilize it and reduce the weight on the legs.

IMG_1986.JPG

I kept the wiring and code very basic - the three-legged dog will not mine Bitcoin with its random movements yet; I will have to re-wire and re-program it using a Python mining library. Here is the Arduino code, specifically for the 8266 module, that I found online and modified to my needs:

/* Sweep
 by BARRAGAN <http://barraganstudio.com> 
 This example code is in the public domain.
 modified 28 May 2015
 by Michael C. Miller
 modified 8 Nov 2013
 by Scott Fitzgerald
 http://arduino.cc/en/Tutorial/Sweep
*/ 

#include <Servo.h> 
 
Servo myservo_front;  // create servo object to control a servo 
                // twelve servo objects can be created on most boards
Servo myservo_b1;
Servo myservo_b2;

void setup() 
{ 
  myservo_front.attach(13);  // attaches the front-leg servo on GPIO 13 to the servo object
  myservo_b1.attach(12);
  myservo_b2.attach(14);
} 
 
void loop() 
{ 
  int pos;
  // front leg
  for (pos = 60; pos <= 100; pos += 1)  // sweeps from 60 to 100 degrees
  {                                     // in steps of 1 degree
    myservo_front.write(pos);           // tell servo to go to position in variable 'pos'
    delay(5);                           // waits 5ms for the servo to reach the position
  }
  for (pos = 100; pos >= 60; pos -= 1)  // sweeps back from 100 to 60 degrees
  {
    myservo_front.write(pos);
    delay(5);
  }
  // back leg 1
  for (pos = 60; pos <= 100; pos += 1)
  {
    myservo_b1.write(pos);
    delay(5);
  }
  for (pos = 100; pos >= 60; pos -= 1)
  {
    myservo_b1.write(pos);
    delay(5);
  }
  // back leg 2
  for (pos = 60; pos <= 100; pos += 1)
  {
    myservo_b2.write(pos);
    delay(5);
  }
  for (pos = 100; pos >= 60; pos -= 1)
  {
    myservo_b2.write(pos);
    delay(5);
  }
} 

And here is the three-legged dog in motion. An imperfect robot with character. I am very happy with the result!

PCOMP 6: SERIAL COMMUNICATION I / MIDTERM PROTOYPING

This week the focus was on prototyping for the midterms: after ideation, my project partner Keerthana and I prototyped all parts of our idea and tested the electronic components.

Topic

Halloween

Idea

Bring back the souls of musicians that have passed away for a few brief moments in a modern form of ritual. 

Execution

IMG_2355.JPG

The audience picks up headphones situated in front of them. Audio will guide them throughout the ritual.

They are asked to turn a wheel of fortune with graphics of four musicians etched on it. The wheel stops at a certain position - a musician is selected by chance. The artist wants to come back from the dead into our world for a few moments.

Now the participant puts her hands, palms up, on a screen below the wheel. A drop of water - symbolizing the soul of the musician - drops from underneath the wheel onto the hand of the participant. The moment it hits her palm, a graphic depicting the artist starts rotating on the screen situated underneath the participant's hand, and a song of the artist is played in the headphones.

After this audio-visual and tactile stimulation, the participant listens to the song until it finishes, then takes the drop of water left in her palm and puts it into a little plant on top of the wheel. The soul of the musician has briefly visited earth and is now taken back to heaven. The plant is watered by the souls of those great artists who have passed away and symbolizes the eternal cycle of creation.

IMG_1276.JPG

We chose 4 musicians: Amy Winehouse, David Bowie, Minnie Ripperton and John Coltrane. Both female and male artists from across different genres. 

To prototype the wheel we measured the rotary encoder (used to track the exact position of the wheel to identify the selected musician), constructed all parts in Illustrator and laser-cut them in black cardboard. The musicians are etched as outlines of their silhouettes.

IMG_1923.JPG
IMG_1925.JPG
IMG_1927.JPG

Unfortunately the outlines of Minnie Ripperton were cut instead of etched - something to fix for the final version.

After the laser-cutting we worked on the Arduino code to operate the rotary encoder and the connection to p5. p5 will run the main code with the graphics for the screen; Arduino will control the rotary encoder and the water-drop system (solenoid valve).

We used a code-example for a rotary-encoder and customized it to our needs:

 
// based on a script by Dejan Nedelkovski (www.HowToMechatronics.com)

#define outputA 2
#define outputB 3
#define reset_button 5

int counter = 0;
int star = 0;
int aState;
int aLastState;

void setup() {
  pinMode(outputA, INPUT);
  pinMode(outputB, INPUT);
  pinMode(reset_button, INPUT);

  Serial.begin(115200);
  // Reads the initial state of outputA
  aLastState = digitalRead(outputA);
}

void loop() {
  aState = digitalRead(outputA); // Reads the "current" state of outputA
  // If the previous and the current state of outputA are different, a pulse has occurred
  if (aState != aLastState) {
    // If the outputB state is different from the outputA state, the encoder is rotating clockwise
    if (digitalRead(outputB) != aState) {
      counter++;
    } else {
      counter--;
    }
    if (counter == 40 || counter == -40) {
      counter = 0;
    }
    if (counter > -6 && counter <= 4) {
      star = 1;
      //Serial.println("Minnie");
    }
    if (counter > 4 && counter <= 14) {
      star = 4;
      //Serial.println("John");
    }
    if (counter > 14 && counter <= 24) {
      star = 3;
      //Serial.println("Amy");
    }
    if (counter > 24 && counter <= 34) {
      star = 2;
      //Serial.println("David");
    }
    if (counter < -6 && counter >= -16) {
      star = 2;
      //Serial.println("David");
    }
    if (counter < -16 && counter >= -26) {
      star = 3;
      //Serial.println("Amy");
    }
    if (counter < -26 && counter >= -36) {
      star = 4;
      //Serial.println("John");
    }
    if (counter > 34 && counter < 40) {
      star = 1;
      //Serial.println("Minnie");
    }
    if (counter < -36 && counter > -40) {
      star = 1;
      //Serial.println("Minnie");
    }
    //Serial.print("Position: ");
    //Serial.println(counter);
    Serial.write(star);
    delay(1);
  }
  aLastState = aState; // Updates the previous state of outputA with the current state
  if (digitalRead(reset_button) == HIGH) {
  } else {
  }
}

We already included the serial communication with p5 in the code above. In p5 we used a basic script for reading the serial port and displaying names of the musicians to test the functionality. We will change this to rotating graphics depicting the musicians later this week.

var portName = '/dev/cu.usbmodemFA131';  // fill in your serial port name here
var options = { baudrate: 115200}; // change the data rate to whatever you wish

var serial;          // variable to hold an instance of the serialport library
var inData;                             // for incoming serial data

var star;

function setup() {
  createCanvas(600, 600);
  serial = new p5.SerialPort();       // make a new instance of the serialport library
  serial.on('list', print);  // set a callback function for the serialport list event
  serial.on('connected', serverConnected); // callback for connecting to the server
  serial.on('open', portOpen);        // callback for the port opening
  serial.on('data', serialEvent);     // callback for when new data arrives
  serial.on('error', serialError);    // callback for errors
  serial.on('close', portClose);      // callback for the port closing
 
  serial.list();                      // list the serial ports
  serial.open(portName, options);
}

function serverConnected() {
  print('connected to server.');
}
 
function portOpen() {
  print('the serial port opened.')
}
 
function serialEvent() {
 inData = Number(serial.read());
}
 
function serialError(err) {
  print('Something went wrong with the serial port. ' + err);
}
 
function portClose() {
  print('The serial port closed.');
}

function draw() {
  background(0);
  fill(255);
  textSize(60);
  //text("sensor value: " + inData, 30, 30);
  if (inData == 1){
      text("Star: Minnie", 30, 100);
  }
  if (inData == 2){
      text("Star: David", 30, 100);
  }
  if (inData == 3){
      text("Star: Amy", 30, 100);
  }
  if (inData == 4){
      text("Star: John", 30, 100);
  }
}

We also prototyped the water drop mechanism with the solenoid valve and a 9V battery.

In the next few days we will work on automating everything with p5 / Arduino and fabrication.

PCOMP 5: ANALOG / DIGITAL I/O REFRESHER, WIFI-CONTROL

After a lot of audio synthesis with the Arduino tone library, I went back to the labs, played with servos and peak detection, and explored wifi control.

I mounted a thumb-piano on the servo and controlled the movements with an analog pressure sensor.

I got excited by this kind of control (servos are really, really exciting!), so I rewrote a wifi-control script to control everything from my ESP8266 nodeMCU module via wifi (locally).

Here is my (pretty messy ...) re-edit of the code:

#include <ESP8266WiFi.h>
#include <WiFiClient.h>
#include <ESP8266WebServer.h>
#include <ESP8266mDNS.h>

#include <Servo.h>       // include the servo library
 
Servo servoMotor;       // creates an instance of the servo object to control a servo
int servoPin = 13;       // Control pin for servo motor
 
MDNSResponder mdns;
 
ESP8266WebServer server(80);
String webPage;
const char* ssid     = "enter SSID here";      //wifi name
const char* password = "enter your pw here";  //wifi password
 
void setup() {
  servoMotor.attach(servoPin);  // attaches the servo on pin 2 to the servo object

 
  pinMode(16, OUTPUT);  //led pin 16
 
  webPage += "<h1>ESP8266 Web Server</h1><p>Socket #1 ";
  webPage += "<a href=\"socket1On\"><button>ON</button></a>&nbsp;";
  webPage += "<a href=\"socket1Off\"><button>OFF</button></a></p>";
  
  Serial.begin(74880);
  delay(100);
 
 
  Serial.println();
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);
  
  WiFi.begin(ssid, password);
  
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
 
  Serial.println("");
  Serial.println("WiFi connected");  
  Serial.println("IP address: ");
  Serial.println(WiFi.localIP());
  
  if (mdns.begin("esp8266", WiFi.localIP())) 
    Serial.println("MDNS responder started");
 
  server.on("/", [](){
    int servoAngle = 180;
    server.send(200, "text/html", webPage);
  });
  server.on("/socket1On", [](){
    int servoAngle = 180;
    server.send(200, "text/html", webPage);
    // move the servo to 90 degrees
    servoMotor.write(servoAngle - 90);
    delay(1000);
  });
  server.on("/socket1Off", [](){
    int servoAngle = 180;
    server.send(200, "text/html", webPage);
    // move the servo back to 180 degrees
    servoMotor.write(servoAngle);
    delay(1000); 
  });

  server.begin();
  Serial.println("HTTP server started");
}
 
void loop() {
  server.handleClient();
  
}

... and here we go:

After this I completed the last labs with state change, threshold and peak detection. I liked the possibility of cleaning up a signal by adding a noise variable to it.

On the weekend an ITP alum showed me how to send control values from p5 and write them into a txt file via PHP on my server. This is awesome! I can check the contents of this file from my ESP module via web sockets and control a servo in MicroPython with it. I need to put all the parts together this week, but that means I can write a sketch that lets the user control physical devices from anywhere. I am excited!
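As a first sketch of the p5 side (the PHP script and the ESP/MicroPython part are not shown; 'save_value.php' is just a placeholder name), posting a value could look like this:

// hedged p5 sketch: a slider value is sent to the server on mouse release
let slider;

function setup() {
  createCanvas(300, 100);
  slider = createSlider(0, 180, 90);   // e.g. a servo angle
}

function draw() {
  background(220);
  text('angle: ' + slider.value(), 10, 20);
}

function mouseReleased() {
  // POST the current value; the PHP script is assumed to write it to a txt file
  httpPost('save_value.php', 'text', { angle: slider.value() }, (result) => {
    console.log('server replied: ' + result);
  });
}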

 

 

PCOMP 4 / IFAB 4 / ICM 4: ENCLOSURES, ARDUINO IO, P5 SERIAL

I had planned to create a sound / visual controller for Arduino tone and p5 that uses fruits as an input element. The controller would sit on a steel basket that houses the fruits; it would feature several knobs and faders and would be connected to p5 via the serial port.

IMG_1827.JPG

To accompany the cleanliness of the steel I wanted to combine plywood and acrylic for the top in two layers: plywood cutouts of hands sitting on top of white translucent acrylic. The goal was to avoid any glue.

After going through the Arduino tone lab I started writing the code and ended up using a different library for sound that gave me additional volume and wave-type control in combination with the tone library. I also tested the p5 serial library with a potentiometer after following the steps in the serial lab and running a node server on my local machine.

#include "volume2.h"
Volume vol;
const int numReadings = 10;

int readings[numReadings];      // the readings from the analog input
int readIndex = 0;              // the index of the current reading
int total = 0;                  // the running total
int average = 0;                // the average

int inputPin = A0;
int inputPin1 = A4;

int analogPin = 3;
int raw = 0;
int Vin = 5;
float Vout = 0;
float R1 = 10000;
float R2 = 0;
float buffer = 0;

//int val1;
int encoder0PinA = 2;
int encoder0PinB = 4;
int encoder0Pos = 0;
int encoder0PinALast = LOW;
int n = LOW;

void setup() {
  pinMode (encoder0PinA, INPUT);
  pinMode (encoder0PinB, INPUT);
  Serial.begin(9600);
    for (int thisReading = 0; thisReading < numReadings; thisReading++) {
    readings[thisReading] = 0;
  }
}


void loop() {
    int n = digitalRead(encoder0PinA);
    if ((encoder0PinALast == LOW) && (n == HIGH)) {
    if (digitalRead(encoder0PinB) == LOW) {
      encoder0Pos--;
    } else {
      encoder0Pos++;
    }
    //Serial.println (encoder0Pos);
    //Serial.print ("/");
  }
  encoder0PinALast = n;

  
    raw= analogRead(analogPin);
  if(raw) 
  {
  buffer= raw * Vin;
  Vout= (buffer)/1024.0;
  buffer= (Vin/Vout) -1;
  R2= R1 * buffer;
  
  Serial.print("Vout: ");
  Serial.println(Vout);
  Serial.print("R2: ");
  Serial.println(R2);
  //
  }
  //int Sound = analogRead(analogPot);
  //float frequency = map(Sound, 0, 40000, 100, 1000);
  //tone(8, frequency, 10);
  //Serial.println(frequency);

  
    // subtract the last reading:
  total = total - readings[readIndex];
  // read from the sensor:
  readings[readIndex] = analogRead(inputPin);
  // add the reading to the total:
  total = total + readings[readIndex];
  // advance to the next position in the array:
  readIndex = readIndex + 1;

  // if we're at the end of the array...
  if (readIndex >= numReadings) {
    // ...wrap around to the beginning:
    readIndex = 0;
  }

  // calculate the average:
  average = total / numReadings;
  // send it to the computer as ASCII digits
  //Serial.println(average);
  delay(1);        // delay in between reads for stability
  int freq = analogRead(A0);
  int volume = map(analogRead(A1), 0, 1023, 0, 255);
  
  //Serial.println(freq);
  Serial.println(encoder0Pos);

  
  
  vol.tone(encoder0Pos * 10, SQUARE, volume); // pitch from the rotary encoder, volume mapped from A1
  vol.delay(analogRead(A3));
  vol.tone(freq, SQUARE, volume);             // pitch from the A0 reading
  vol.delay(analogRead(A3));
  vol.tone(freq, SQUARE, volume);
  vol.delay(analogRead(A3));
  vol.tone(freq, SQUARE, volume);
  vol.delay(analogRead(A3));
  vol.tone(freq, SQUARE, volume);
  vol.delay(analogRead(A3));
}
IMG_1760.JPG

Having sorted out the basic functionality of the code, thought about the parts I would need for the controller and sourced them all, I constructed the building files for the laser-cutter in Illustrator. This took a fair amount of time, as I had to measure all parts for a perfect fit. I used an analog caliper for most of the measurements.

IMG_1771.JPG
IMG_1768.JPG

Especially parts like the fader needed extra engineering / construction before laser-cutting.

Screen Shot 2017-09-30 at 10.48.34 PM.png

I used the image-tracing technique to get the positions of the controller elements ergonomically right, and the sandwich technique of etching and cutting in combination with the two different top layers to get a seamless fit. I traced my hands for the shapes around the controller elements.

Screen Shot 2017-10-04 at 12.46.19 AM.png

The etching layer would create a rim around the cutouts for washers and speakers to fit them underneath the top-plate.

Screen Shot 2017-10-04 at 12.51.45 AM.png

After cutting both layers, the etching took too long - the laser was not strong enough to etch the required amount of material (around 3mm) out of the acrylic. With the help of one of my fellow students, Itay, I managed to use a drill and a Dremel to deepen the rims around both speakers and each of the cutouts for the potentiometers.

IMG_1789.JPG
IMG_1791.JPG
IMG_1802.JPG
IMG_1806.JPG

I used knobs from the junk shelf as they fit the white/light brown colors of wood and acrylic. 

After that I started screwing in all electronic parts - thanks to the time I spent measuring the controller elements they fit perfectly. I was relieved! 

IMG_1803.JPG

I still had to drill the screw holes for the standoffs for Arduino and fader. 

IMG_1811.JPG
IMG_1812.JPG

The last part of the hardware build was the wiring. I used a perf-board to organize the multiple wires of all the elements and soldered each of them. It took me a full afternoon, but thanks to a soldering workshop earlier in my PComp class I could avoid the biggest mistakes. Still - there is plenty of room for me to improve in this particular technique.

I ended up not attaching wires to the fruits in the basket as I wanted to keep the controller as open as possible - it should be usable for sound creation and for controlling visuals alike.

IMG_1818.JPG

I wanted to use magnets to keep the top secured to the steel basket but didn't attach them to the acrylic yet, as I first wanted to test whether I could screw them into the acrylic as well - something to be done later this week. The same goes for the top plate consisting of two layers: they are secured and stick together because of the knobs, but I still need to figure out how to attach them safely without drilling or glue. So far all elements are functional and can be assigned different functions. I used it mainly for sound production.

So far I have not tested the sound libraries a lot, so most of the sounds are very experimental - but fun! 

I have tested the P5 serial communication with a potentiometer - so far it is a very satisfying feeling to have a physical control element instead of a trackpad.

I would also like to put lights underneath each hand / controller element to communicate with plants: the plant listens to the music, a sensor measures its surface conductivity, which is mapped to the controller elements, they light up according to this conductivity, and I can react to this interaction with the plant. Sounds a bit lofty - but worth exploring!

So far no fruits in the basket below, but plenty of ideas on how to continue with my first controller.

IMG_1823.JPG

Parts used:

  • 3mm plywood
  • 6mm white translucent acrylic
  • steel basket
  • magnets
  • Arduino Uno
  • crossfader / slider
  • 2 x potentiometer
  • rotary encoder
  • 2 x 8 Ohm speaker
  • USB cable

Tools used:

  • lasercutter
  • drill-press with hole-saw drillbit
  • Dremel
  • soldering iron

PCOMP 3: ANALOG / DIGITAL IO

Labs

After going through the basics of a circuit we discussed analog and digital inputs/outputs as part of the introduction to micro-controllers. I did the labs and played a bit with the idea of using a light sensor to augment storytelling - in this case the approach and impact of a big comet millions of years ago, which supposedly didn't exactly augment the lives of dinosaurs ...

But let's look at the labs first! 

I managed to map the range of the light sensor to the LED brightness but had issues with the force sensor. Its values on the serial monitor were not constant, so even after mapping there was still visible flicker on the other LED.
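If I revisit this lab, a simple running average might calm the force-sensor readings down before mapping them. A rough sketch of that idea, assuming the force sensor on A1 and the LED on pin 9 (both placeholders):

const int fsrPin = A1;        // force sensor (placeholder pin)
const int ledPin = 9;         // LED on a PWM pin (placeholder)
const int numReadings = 10;   // size of the smoothing window

int readings[numReadings];
int readIndex = 0;
long total = 0;

void setup() {
  pinMode(ledPin, OUTPUT);
  for (int i = 0; i < numReadings; i++) readings[i] = 0;
}

void loop() {
  total -= readings[readIndex];             // drop the oldest reading
  readings[readIndex] = analogRead(fsrPin); // take a new one
  total += readings[readIndex];
  readIndex = (readIndex + 1) % numReadings;

  int average = total / numReadings;                 // smoothed value
  int brightness = map(average, 0, 1023, 0, 255);    // scale to PWM range
  analogWrite(ledPin, brightness);                   // should flicker less
  delay(10);
}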

Mini-application using digital-analog IO: Augmented Storytelling

I liked the idea of a light sensor adjusting the speed of a servo - translating something intangible (light) into something very tangible (movement, force). While looking for parts on the shop's trash shelf I found a bright-yellow rubber lizard arm and hand. I pictured the rest of the animal and immediately associated the arm with a yellow T-Rex. I mounted the arm on a servo and connected it to a light sensor.

#include <Servo.h>

Servo myservo;

int analog2 = 0;

void setup() {
  // initialise serial communication and attach the servo to pin 2
  Serial.begin(9600);
  myservo.attach(2);
}

void loop() {
  // read the light sensor on analog pin A0
  analog2 = analogRead(A0);
  int brightness2 = map(analog2, 550, 950, 0, 20);
  Serial.println(brightness2);
  // keep reading and mapping the light sensor while sweeping the servo;
  // the mapped value shortens the delay and therefore speeds up the sweep
  int pos;
  for (pos = 0; pos <= 180; pos += 1) {
    analog2 = analogRead(A0);
    brightness2 = map(analog2, 550, 950, 0, 20);
    Serial.println(brightness2);
    if (brightness2 > 2) {
      myservo.write(pos);
      delay(20 - brightness2);
    }
  }
  for (pos = 180; pos >= 0; pos -= 1) {
    analog2 = analogRead(A0);
    brightness2 = map(analog2, 550, 950, 0, 20);
    Serial.println(brightness2);
    if (brightness2 > 2) {
      myservo.write(pos);
      delay(20 - brightness2);
    }
  }
}

I tested it with a first prototype and adjusted the brightness values slightly with an if-statement to exclude out-of-range readings that had blocked the sketch from executing.
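A cleaner alternative to the if-statement might be constrain(), which clamps the mapped value so the delay can never leave its intended range. Reusing the variables from the sketch above, the upward sweep could look like this (just a sketch of the idea, not tested on the prototype):

// possible drop-in for the upward sweep; the downward sweep would change the same way
for (pos = 0; pos <= 180; pos += 1) {
  analog2 = analogRead(A0);
  brightness2 = map(analog2, 550, 950, 0, 20);
  brightness2 = constrain(brightness2, 0, 19);  // keep the delay between 1 and 20 ms
  myservo.write(pos);
  delay(20 - brightness2);
}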

After that I constructed the shape of a T-Rex in Illustrator, laser-cut a test version from cardboard and finally the two sides from acrylic. I made a mistake while mounting the servo: I put the shorter leg of the T-Rex on the same side as the arm, so the dino tilted to the right-hand side and could not stand by itself anymore.

I fixed it by attaching a baseplate to the T-Rex.

IMG_1720.JPG

In this example the story of a planetary catastrophe, a meteor impacting earth millions of years ago which probably led to the extinction of the dinosaurs, gets augmented by a light sensor: the T-Rex waves a white flag to hold the meteor back; the closer the meteor gets, the slower the movements become, until finally they stop completely.

Assignment

Pick a piece of interactive technology in public, used by multiple people. Write down your assumptions as to how it’s used, and describe the context in which it’s being used. Watch people use it, preferably without them knowing they’re being observed. Take notes on how they use it, what they do differently, what appear to be the difficulties, what appear to be the easiest parts. Record what takes the longest, what takes the least amount of time, and how long the whole transaction takes. Consider how the readings from Norman and Crawford reflect on what you see.

I observed automated cashiers / self-checkout machines at a pharmacy & supermarket chain in Manhattan. These machines replace the human cashier and are supposed to speed up the process of paying for goods before leaving the store. The machines are located in the entrance / exit area facing a wall. They look like a mix of packaging machine and scale, with a credit card reader and a screen attached. They are supervised by a security guard / human cashier who guides customers towards using these machines instead of the human cashiers, making sure that the customers encounter no issues during checkout and use the machines correctly.

Notes: 

  • Customers get asked about their customer card
  • General procedure: pick up items on the right-hand side, scanner is in the middle, place them in the bag on the left-hand side, pay
  • Sometimes the scanner doesn't work and customers have to rescan - customers constantly check the screen to see whether an item got scanned or not
  • During the whole process customers get constant guidance and price information from a robot voice
  • After that customers select a payment option on the central screen, then move to the card reader on the right-hand side
  • Customers have to wait until the payment is confirmed; they get visual updates on the screen
  • After that customers grab their bag and take their credit card
  • Timeframe: depending on the number of items, around 3s per item plus checkout time at the end (about 10s) - the usual process is slower than with the human cashier
  • Observation: a lot of stimuli mixed together (visual, audio, manual), confusing graphics (some users search the screen with their fingers)

Summary: Customers show very different speeds and levels of comfort in using the machines; training may have an effect on this. Older people generally avoid them and pay at the human cashier. The credit card transaction / final payment involves a certain waiting time and takes the longest. Scanning is usually without issues, and the vocal guidance seems to reinforce the guidance on the screen. Regarding the location of the four areas (item tray, scanning area with screen, bag area, payment area) I felt reminded of Norman and Crawford, who both highlighted the important role of the spatial dimension of operating elements. In this case it was unclear why the movement from right to left was interrupted by the final movement to the right for the payment. This seemed counterintuitive for customers.

PCOMP 2: Electricity

The main focus of this class was electricity: Volts, Amps and Ohms. I did all the labs - great fun and great learning!

Labs

Switch Idea: Pulse Lights

I added stronger magnets to my IFAB project (flashlight) and redid a few solder connections - now it works reliably as a proper switch: the lights start pulsating organically once they are close to each other.

 

 

PCOMP 1: INTRODUCTIONS

It all started! A few days ago we had our first class at ITP, Intro to Physical Computing, and we are all very excited to peek into all the little corners of this amazing world of physical computing.


drawing by Azelia for our first team assignment: fantasy interaction device connecting wild animals with humans via brainwave scanning and touch-emotions. 

 

Assignment

After this class’ discussion and exercise, and reading Chris Crawford’s definition and Bret Victor’s rant, how would you define physical interaction? What makes for good physical interaction? Are there works from others that you would say are good examples of digital technology that are not interactive? 

 

I will give it a try:

A high degree of physical interaction characterizes events between living or non-living entities that are based mainly on non-verbal/non-visual, body-based forms of processed communication within a mostly limited timeframe that is perceivable by all parties.

Writing down this description with the two texts in my head, I immediately had the feeling that I took the easy way out by starting with "A high degree of ...". I took that from Chris Crawford's definition as I liked its flexibility. It probably gets a higher degree of accuracy through that (pun intended). "Characterizes" offers the same extra space in this definition - we are talking about what main features physical interaction should show. I am then talking about "events" in my definition to highlight the fact that these interactions can happen by chance or can be planned. "Living or non-living entities" refers to the agents in this event: the action can happen between animals (including humans), plants or any form of living entity that shows a sign of life (microbes, genes, etc. as well), and everything that is not alive in a strict sense: machines, mixed-agent systems, networks, ecosystems (now it is getting a bit broad).

I am to an extent excluding verbal and visual interaction to focus on the physical side of things: physical here includes any physically perceivable way of communication. The human voice and even light (and therefore sight) can have a physical dimension as well, but in this limited definition the focus is intentionally on the mostly (technologically) neglected parts of interaction. Here I follow Bret Victor and his call for body sensations/stimulations instead of mainly visual/audible ones. The term "processed" points out that a form of information operation should go on after each interaction on each agent's side. All agents should ideally be able to initiate those events. Furthermore, operations should be understandable by each agent - this includes the dimension of time: if the interaction spans thousands of years and one agent is human, it has a lower degree of instant physical interaction. Although climate change could be seen as a sort of slow-motion physical interaction as well.

Referring to the second question on what could be seen as "good" interaction, I would say that the points listed above attempt to provide a flexible framework for meaningful physical interactions in our contemporary environment - with time and body-based physicality as its main parts.

Thinking of good examples of non-interactive works, something tech-related came to my mind: pretty much all of our "interactive" technology is interactive in a very one-sided way. The (mostly) human user initiates the majority of actions and clearly dominates the communication. This could be seen as good design - you don't wanna be dominated by a machine. But it lowers the degree of interactivity by definition. If the machine mainly follows your orders, what implications does this have in a broader sense on our behavior, our best and our worst? I am very curious about how evolution will deal with this. How will our genome be altered by these interactions and these new environments?

I got carried away - but there is something very inspiring about this assignment. I hope I can follow up on these thoughts later in class with meaningful and really interactive physical computing projects.

Coming back to the question: if we define interactivity as I tried to above, the degree of interactivity of a lot of tech is very limited. As Bret Victor pointed out, most of our tools neglect 90 percent of our senses and focus exclusively on audio-visual interaction. Every social network, take Instagram or Facebook, is therefore in a strict sense only a low-level interactive technology. If we narrow it down to physical interaction, even a vacuum cleaner offers more technologically driven physical interactivity than those networks. Plus, technically speaking, they merely translate (and reduce) the complexity of human interactions into arguably limited representations of the latter. So the real agents here are humans; this is not even interactive technology, it is a translation tool. Siri, Alexa or Cortana are first attempts to break out of this "translation" definition as they tend to offer a more meaningful and active processing part. But they, too, still address only a fraction of our interactive potential - the audio and visual communication channels. Therefore it might be questionable to regard them as highly interactive technology that is stimulating and meaningful for all agents.

We probably have to differentiate between interactive technologies and technologies that enable interaction. New developments in machine learning could help us move beyond the notion of technology as a mere tool for interaction and lead to experiences with a higher degree of interactivity as defined earlier in this post.

I am excited!