Project Anders

Here is the code for Project Anders.

Code was initially developed by a friend of mine:


Using the Ping library available from, also have a look at the tutorial:

Note: I am using the cheaper HC-SR04 ultrasonic sensors; to use the Ping library and code you can bridge the Trig and Echo pins. If you’d rather use them separately, use this library: from

Initial code:


// Written by Anders from

// These arrays are looped through, so make sure your pins and motors match:
// myPins[1] should correspond to myMotors[1], and so on.
int myPins[] = {6};   // map the pins for the Ping sensors
int myMotors[] = {9}; // map the pins for the vibration motors
int howmany = 1;      // number of sensors and motors

void setup() {
  // initialize serial communication:
  Serial.begin(9600); // this just means you can output to the serial panel
}

void loop() {
  // establish variables for the duration of the ping
  // and the distance result in centimeters:
  long duration, cm;

  // loop through the pins array; since you know how many sensors there
  // are, "howmany" sets the limit of the count loop
  int i; // "i" is used as the count variable
  for (i = 0; i < howmany; i = i + 1) {
    // print out which pin
    // Serial.println(myPins[i]);

    // The PING))) is triggered by a HIGH pulse of 2 or more microseconds.
    // Give a short LOW pulse beforehand to ensure a clean HIGH pulse:
    pinMode(myPins[i], OUTPUT);
    digitalWrite(myPins[i], LOW);
    delayMicroseconds(2);
    digitalWrite(myPins[i], HIGH);
    delayMicroseconds(5);
    digitalWrite(myPins[i], LOW);

    // The same pin is used to read the signal from the PING))): a HIGH
    // pulse whose duration is the time (in microseconds) from the sending
    // of the ping to the reception of its echo off of an object.
    pinMode(myPins[i], INPUT);
    duration = pulseIn(myPins[i], HIGH);

    // convert the time into a distance
    cm = microsecondsToCentimeters(duration);

    // Serial.print(inches);
    // Serial.print("in, ");
    // inches are for Americans, they're silly.

    // within range: drive the motor at a strength based on distance
    if (cm < 100) {
      analogWrite(myMotors[i], returnfeedback(cm));
    } else {
      analogWrite(myMotors[i], 0);
    }
  } // end of the pin loop
}

int returnfeedback(int cm) {
  if (cm < 5) {          // distance
    return 255;          // strength
  } else if (cm < 10) {
    return 220;
  } else if (cm < 20) {
    return 190;
  } else if (cm < 40) {
    return 160;
  } else if (cm < 80) {
    return 130;
  } else if (cm < 100) {
    return 100;
  } else {
    return 0;
  }
}

long microsecondsToCentimeters(long microseconds) {
  // The speed of sound is 340 m/s, or 29 microseconds per centimeter.
  // The ping travels out and back, so to find the distance to the
  // object we take half of the distance travelled.
  return microseconds / 29 / 2;
}



120605 Research Journal

I’m beginning to understand how my project lacked design, and more so lacked any form of interactive design.

Just because I was using “multi-modality” didn’t mean I was heading in a direction to create an “interactive” outcome.

I am now sorting through papers and evaluating what will be most useful to read based on a few things:
– Participatory Design
– Multi-Modality
– Low Vision
– Assessing if articles are creating interactive design solutions

While there seem to be many solutions that help people with low vision in their day-to-day situations, these solutions appear to be passive rather than interactive. Yes, they may engage other modes or senses, but they aren’t really interactive.

It has taken me a while to arrive at this point. I now need to begin searching for opportunities to create an interactive product.

Even my assessment of existing assistive technologies, however small, is showing me that these products are purely that: products. They do not engage a high level of interactivity in an “interactive design” sense. Yes, they react to inputs and give you feedforward to initialise an interaction, but the depth of interaction is shallow. For example, magnification technology is purely that: it magnifies things. Whether the magnification is digital or analogue, it doesn’t have the depth of interaction that I’m looking for.

Why am I doing this project?

How can I blend depth of interaction with something that is positive and can contribute towards the independence of a person living with low vision or AMD?

Is this even possible?

Is a navigation system for no/low sighted people even considered interactive? I understand that it creates feedback loops, allowing the person to engage with their direct surroundings and with the software/device itself.

This is something I feel I’ll need to investigate further.