After going through the basics of a circuit, we discussed analog and digital inputs/outputs as part of the introduction to microcontrollers. I did the labs and played a bit with the idea of using a light sensor to augment storytelling - in this case the approach and impact of a big comet millions of years ago, which supposedly didn't exactly augment the lives of dinosaurs ...

But let's look at the labs first! 

I managed to map the range of the light sensor to the LED brightness, but had issues with the force sensor. Its values on the serial monitor were not constant, so even after mapping, a flicker was still visible on the other LED.

Mini-application using digital-analog IO: Augmented Storytelling

I liked the idea of a light sensor adjusting the speed of a servo, translating something intangible (light) into something very tangible (movement, force). While looking for parts on the shop's trash shelf I found a bright-yellow rubber lizard arm and hand. I pictured the rest of the animal and immediately associated the arm with a yellow T-Rex. I mounted the arm on a servo and coupled its speed to a light sensor.

#include <Servo.h>

Servo myservo;

int analog2 = 0;

void setup() {
  // initialise serial communication and attach the servo to its GPIO pin
  Serial.begin(9600);
  myservo.attach(9);  // servo signal pin (9 in my setup)
}

void loop() {
  // sweep from 0 to 180 degrees, re-reading the light sensor on every step
  for (int pos = 0; pos <= 180; pos += 1) {
    // read brightness of light sensor, map it to the 0-20 range
    analog2 = analogRead(A0);
    int brightness2 = map(analog2, 550, 950, 0, 20);
    myservo.write(pos);
    // match brightness to the delay: more light, shorter delay, faster sweep
    if (brightness2 > 2) {
      delay(20 - brightness2);
    }
  }
  // sweep back from 180 to 0 degrees the same way
  for (int pos = 180; pos >= 0; pos -= 1) {
    analog2 = analogRead(A0);
    int brightness2 = map(analog2, 550, 950, 0, 20);
    myservo.write(pos);
    if (brightness2 > 2) {
      delay(20 - brightness2);
    }
  }
}

I tested it with a first prototype and added an if-statement to exclude mapped brightness values below zero: passed to delay(), which expects an unsigned number, a negative value turns into an enormous wait and the sketch appears to freeze.

After that I constructed the shape of a T-Rex in Illustrator, laser-cut a cardboard version to test, and finally cut the two sides from acrylic. I made a mistake while mounting the servo: I put the shorter leg of the T-Rex on the same side as the arm, so the dino tilted to the right-hand side and could no longer stand by itself.

I fixed it by attaching a baseplate to the T-Rex.


In this example, the story of a planetary catastrophe - a meteor impacting Earth millions of years ago, which probably led to the extinction of the dinosaurs - is augmented by a light sensor: the T-Rex waves a white flag to hold the meteor back; the closer the meteor gets, the slower the movements become, until finally they stop completely.


Pick a piece of interactive technology in public, used by multiple people. Write down your assumptions as to how it’s used, and describe the context in which it’s being used. Watch people use it, preferably without them knowing they’re being observed. Take notes on how they use it, what they do differently, what appear to be the difficulties, what appear to be the easiest parts. Record what takes the longest, what takes the least amount of time, and how long the whole transaction takes. Consider how the readings from Norman and Crawford reflect on what you see.

I observed automated cashiers / self-checkout machines at a pharmacy & supermarket chain in Manhattan. These machines replace the human cashier and are supposed to speed up the process of paying for goods before leaving the store. The machines are located in the entrance / exit area, facing a wall. They look like a mix of packaging machine and scale, with a credit card reader and a screen attached. They are supervised by a security guard / human cashier who guides customers towards using these machines instead of the human cashiers, making sure that customers encounter no issues during checkout and use the machines correctly.


  • Customers are asked about their customer card
  • General procedure: pick up items on the right-hand side, scan them in the middle, place them in the bag on the left-hand side, pay
  • Sometimes the scanner doesn't register and customers have to rescan - they constantly check the screen to see whether an item was scanned or not
  • During the whole process customers get constant guidance and price information from a robot voice
  • After that, customers select a payment option on the central screen, then move to the card reader on the right-hand side
  • Customers have to wait until the payment is confirmed; they are updated visually on the screen
  • After that, customers grab their bag and take their credit card
  • Timeframe: depending on the number of items, around 3 s per item plus checkout time at the end (about 10 s) - the usual process is slower than with a human cashier
  • Observation: a lot of stimuli mixed together (visual, audio, manual), confusing graphics (some users search the screen with their fingers)

Summary: Customers show very different speeds and levels of comfort in using the machines; training may have an effect on this. Older people generally avoid them and pay at the human cashier. The credit card transaction / final payment involves a waiting period and takes the longest. Scanning usually works without issues, and the vocal guidance seems to reinforce the guidance on the screen. Regarding the location of the four areas (item tray, scanning area with screen, bag area, payment area), I was reminded of Norman and Crawford, who both highlight the important role of the spatial arrangement of operating elements. In this case it was unclear why the right-to-left movement was interrupted by a final movement back to the right for the payment. This seemed counterintuitive for customers.