Arduino Uno

For what I am hoping to achieve, I am going to need an Arduino board to create my installation. Arduino is an open-source electronics platform with simple hardware and software. The board I have access to at university is the Arduino Uno, which looks like this:

[Image: Arduino Uno R3]

The Arduino Uno allows you to upload sketches onto the board, and “receives input from many sensors and affects its surroundings by controlling lights, motors and other actuators”. The Arduino software uses a language similar to Processing's, telling the Arduino what to do by programming the code and uploading it onto the board.

This is going to be the best option for me, as I want to make a free-standing installation using ferrofluid and magnets. I have been looking into the kinds of things I can use with the Arduino to make it interactive, and I think a PING sensor would best suit my idea.


Use a PING sensor on the Arduino Uno to determine how close or far away someone is. Run an electromagnetic current through the magnet and other screw-like objects to create ferrofluid sculptures. The sensor would determine the strength of the current running through the magnet: if no one is near, nothing happens, but when someone engages with the piece the current kicks in and the sculpture forms.
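The control logic I have in mind can be sketched out before touching any hardware. This is a plain Java sketch of the distance-to-strength mapping only; the distance thresholds and the 0–255 strength range are placeholder assumptions, not measured values:

```java
// Sketch of the distance-to-current mapping (all thresholds are hypothetical).
public class MagnetControl {
    static final int MAX_DISTANCE_CM = 100; // beyond this, magnet stays off (assumed)
    static final int MIN_DISTANCE_CM = 10;  // at or closer than this, full strength (assumed)

    // Returns a PWM-style strength 0..255 from a PING distance reading in cm.
    public static int strength(int distanceCm) {
        if (distanceCm >= MAX_DISTANCE_CM) return 0;   // nobody near: nothing happens
        if (distanceCm <= MIN_DISTANCE_CM) return 255; // right up close: full current
        // Linear interpolation in between: the closer the person, the stronger the field.
        return 255 * (MAX_DISTANCE_CM - distanceCm) / (MAX_DISTANCE_CM - MIN_DISTANCE_CM);
    }

    public static void main(String[] args) {
        System.out.println(strength(150)); // far away -> 0
        System.out.println(strength(55));  // midway -> 127
        System.out.println(strength(5));   // close -> 255
    }
}
```

On the actual board, the returned value would go to something like Arduino's analogWrite to drive the electromagnet's current, but that part I still need to test on real hardware.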



Image credit: SparkFun Electronics – Arduino Uno R3


Open Libraries for Processing – Bouncing Face idea

After researching my idea of having a bouncing face from the camera capture, I realised I needed to use a face detection/recognition example. These are not included in the built-in examples, but they are available in the Open Library on the Processing website and in the software's Library Manager section. They can be installed and used by everyone as they are open source.

[Screenshot: the Processing Library Manager]

For what I want to achieve I downloaded the OpenCV library, which is imported with:

import gab.opencv.*;
import java.awt.*;

For my next step I am going to have to get the face recognition working on its own and capture an image from it; once that is working I will be able to merge the OpenCV code with the bouncing-ball code to hopefully get a working outcome.

Here is the OpenCV face recognition working:

[Screenshot: OpenCV face detection running]

That is it detecting one face; now to see if it recognises two faces…

[Screenshot: OpenCV detecting two faces]

It does!

[Screenshot: OpenCV detecting David Bowie's face]

…It even detects David Bowie’s face!

This is helpful if I want to advance my idea to more than one bouncing ball, with a different face on each. So my next step is to make sure my bouncing-ball sketch is working; then I will be able to combine and merge the elements of the two codes to create the bouncing face.
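Before merging the two sketches, the bouncing part can be worked out on its own. This is a rough plain-Java sketch of the physics I will need, with the idea that the rectangle OpenCV returns for a face would be drawn at (x, y); all the numbers here are placeholder values, not the final sketch:

```java
// Minimal bounce physics that a captured face image could be drawn at (hypothetical values).
public class BouncingFace {
    double x = 50, y = 0;        // position where the face would be drawn
    double vx = 2, vy = 0;       // velocity
    final double gravity = 0.5;  // constant downward force per frame
    final int width = 640, height = 480; // camera-sized canvas

    // One animation step: apply gravity, move, bounce off the floor and walls.
    void update() {
        vy += gravity;
        x += vx;
        y += vy;
        if (y > height) { y = height; vy = -vy * 0.9; } // lose a little energy each bounce
        if (x < 0 || x > width) vx = -vx;               // reflect off the side walls
    }

    public static void main(String[] args) {
        BouncingFace b = new BouncingFace();
        for (int i = 0; i < 3; i++) {
            b.update();
            System.out.println(b.x + ", " + b.y);
        }
    }
}
```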



Installations and Interactivity

For many years I have researched and been interested in installations and the interactivity used with them. I am researching them because I want to make an installation rather than something just on a screen: my idea is to use ferrofluid in a container, with a sensor controlling the interaction by adjusting the position of the magnet. Todd Winkler (2000) explored the idea that “participants may become part of the work itself, as others outside of the sensing area view the spectacle of their interaction while seeing the results”. This is what I am hoping to achieve: people will interact with my design, and others will stand and watch but want to engage too when they can.

These are some examples of interactive projects made using Processing:

These are from the exhibition archives on the Processing website. I find the plastic bags example very interesting as it takes the simple idea of camera interaction with pixels and turns it into a fully immersive installation using simple materials like plastic bags. The process of making it would not have been simple: it required coding, mapping the pixels, creating an air supply to inflate and deflate the bags, and linking it all to the camera or sensor. But the outcome is simple yet effective for both the audiences watching and participating, as it puts the audience into the piece, making them a part of the art.

The fluid example is also very interesting, as simply touching the surface creates a reaction. As they explain in the summary of the video, it also creates collaboration and teamwork when people come together to see what can be created by numerous different people interacting with it. I also find the scientific reaction very interesting, as it is very similar to the effect of ferrofluid, which I am hoping to use for my final outcome.

“Interactive media itself is part of the exhibition ‘content’ that visitors should experience and engage with; interaction with an installation is part of its ‘message’” (Hornecker and Stifter, 2006). The main idea I am getting from researching installations is that there are numerous levels of interactivity, requiring different levels of audience participation to create the installation's message. Some designs only need the audience to walk past to interact, whereas others require the audience to fully immerse themselves within the installation using many different senses. For my project I want the user to do more than just walk past to be part of the design, which is why I have decided to build an installation rather than simply graphics on a screen.

Both of these, and many other interactive designs, were made using Arduino and Processing combined. So I am going to research Arduino boards and the possibilities of what I could make with one.


Hornecker, E. and Stifter, M. 2006. Learning from Interactive Museum Installations About Interaction Design for Public Settings [online]. OzCHI, NZ. [Accessed 15 December 2014].

Winkler, T. 2000. Audience Participation and Response in Movement-Sensing Installations [online]. Brown University, USA. [Accessed 15 December 2014].

Black to white

In Processing we learnt how to code a basic camera interaction design: when the background of the capture screen is dim, the screen is black, and when the background is bright, the screen turns white. We were also set the task of changing it so that the opposite happened. The video shows it in action, and I have chosen to show the code below as it is the part that determines the shade.

[Screenshot: the section of code that determines the shade]
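The logic in that code comes down to averaging the brightness of the camera pixels and comparing it to a threshold. Here is a plain-Java sketch of the same idea, including the inverted version we were set as a task; the method names and the threshold are my own, not the workshop code:

```java
public class BrightnessSwitch {
    // Average brightness of a grey-scale pixel array (values 0..255).
    public static double averageBrightness(int[] pixels) {
        long sum = 0;
        for (int p : pixels) sum += p;
        return (double) sum / pixels.length;
    }

    // Original behaviour: bright scene -> white screen (255), dim scene -> black (0).
    public static int screenShade(int[] pixels, double threshold) {
        return averageBrightness(pixels) > threshold ? 255 : 0;
    }

    // The set task, the opposite: bright scene -> black, dim scene -> white.
    public static int invertedShade(int[] pixels, double threshold) {
        return 255 - screenShade(pixels, threshold);
    }

    public static void main(String[] args) {
        int[] brightScene = {200, 220, 210};
        System.out.println(screenShade(brightScene, 127));   // 255
        System.out.println(invertedShade(brightScene, 127)); // 0
    }
}
```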

This is a good way to start learning about camera interaction, as it is not too complicated but many ideas can be built from it. Although it only requires a low level of interaction from the user and gives very little back, many other simple ideas can stem from it that require a higher level of interaction. For example, when a user walks across or into the camera's view, there could be bars of colour that change according to what they are wearing and where they are on screen.

Camera Interaction in Processing

Within Processing there are numerous examples of camera interaction that can be used and edited to create various forms of camera interaction. Below are the video examples that are given:

[Screenshot: the built-in video example sketches]

This could be a good basis for ideas for my interactive design, which I could then build on.

[Screenshot: the Capture setup code]

The most important part of camera interaction is the code above: this loads the camera that the information is captured from. The possibilities with camera interaction are almost endless; see my Interactive Design Ideas post for the ideas I have come up with.

For example, this is what the Mirror 2 example looks like when run:


[Screenshot: the Mirror 2 example running]

But it can be altered very easily. In the image below I changed the colour by changing the background colour value, and I changed the 'rect' to an 'ellipse', making the pixels circles instead of squares.

[Screenshot: the altered Mirror sketch with ellipses]
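From reading the Mirror example, each square (or circle) gets its shade by averaging the camera pixels inside one cell of a grid. A plain-Java sketch of just that averaging step, with names of my own choosing:

```java
public class MirrorGrid {
    // Average the grey values inside one cellSize x cellSize block of a
    // row-major image, the way the Mirror example samples the camera feed.
    public static int cellAverage(int[] grey, int imgWidth, int cellX, int cellY, int cellSize) {
        int sum = 0;
        for (int y = 0; y < cellSize; y++) {
            for (int x = 0; x < cellSize; x++) {
                // Convert the 2D coordinate inside this cell into a 1D pixel index.
                sum += grey[(cellY * cellSize + y) * imgWidth + (cellX * cellSize + x)];
            }
        }
        return sum / (cellSize * cellSize);
    }

    public static void main(String[] args) {
        int[] grey = new int[16];          // a tiny 4x4 "camera frame"
        java.util.Arrays.fill(grey, 100);
        System.out.println(cellAverage(grey, 4, 0, 0, 2)); // 100
    }
}
```

The sketch then draws one rect (or, after my edit, one ellipse) per cell, shaded with that average.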


From this, I am going to go through the examples already given and edit them to try to create a really interesting and different example. One of my ideas is to capture a face and link it to my bouncing ball example, so it is essentially a bouncing face; this will be my next Processing task.

Using Vectors in Processing

In our workshop we learnt how to code simulations of different forces, including velocity and gravity. We learnt how to create a simple 'bouncing ball' simulation using PVectors for position, velocity, gravity, force and centre. Vectors simulate the forces using direction and magnitude, in a sense getting something from A to B. The image below shows how vectors work in 3D, using x, y and z co-ordinates.


In our workshop we simulated real forces like gravity and velocity so objects react like they would in a real-life environment. This is perfect for our project of making an interactive graphic, as it makes it more real for the users in the environment and therefore heightens the interaction. Below is the code we used to simulate a bouncing ball; it is a good example as it uses numerous different forces but is simple to understand. We learnt new language through this example, including .get, .set, .sub, .mult and .add.
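Those PVector calls are really just component-wise arithmetic. This plain-Java sketch mimics the few we used, plus the frame-by-frame update from the bouncing ball; it is a rough equivalent for understanding, not the actual PVector class:

```java
public class Vec2 {
    double x, y;
    public Vec2(double x, double y) { this.x = x; this.y = y; }

    public Vec2 get() { return new Vec2(x, y); }        // copy, like PVector.get()
    public void add(Vec2 v) { x += v.x; y += v.y; }     // like PVector.add()
    public void sub(Vec2 v) { x -= v.x; y -= v.y; }     // like PVector.sub()
    public void mult(double n) { x *= n; y *= n; }      // like PVector.mult()

    public static void main(String[] args) {
        Vec2 position = new Vec2(0, 0);
        Vec2 velocity = new Vec2(1, 0);
        Vec2 gravity = new Vec2(0, 0.2);
        // Ten "frames" of the bouncing-ball update: gravity changes the
        // velocity, and the velocity changes the position.
        for (int i = 0; i < 10; i++) {
            velocity.add(gravity);
            position.add(velocity);
        }
        System.out.println(position.x + ", " + position.y);
    }
}
```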

[Screenshot: the bouncing-ball code]

Processing Research – Spirograph

Some of the examples we have made with the language we have learnt in Processing look very similar to a spirograph, so I wanted to search for a code to make one myself. The first one I came across was by Sam Brenner. It was very different to what I was expecting to find, but I was fascinated by it nonetheless. Try it out:

It is interactive in that every pixel is a different element of the spirograph, and when the mouse moves onto each pixel it changes in a fluid movement. I am intrigued by the use of colour in this example, and I also feel it links well with the concept of abstraction, which I am going to research further as an idea to find inspiration for my final outcome. I also like the idea of linking abstraction with science and maths to create something that you would maybe not associate with art and design.

The webpage shows the code used to create the spirograph, and there is actually very little of it. However, it includes trigonometry, which we have not yet been taught, making it look more complicated than it probably is. I am going to learn more of the language to do with maths and physics so I am able to write more advanced and varied code.
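From what I have read, a spirograph curve (a hypotrochoid) comes down to two sine/cosine formulas: a big fixed ring, a smaller wheel rolling inside it, and a pen offset from the wheel's centre. Here is a rough plain-Java sketch of it; the radii and pen offset are example values I picked, not Sam Brenner's:

```java
public class Spirograph {
    // One point of a hypotrochoid (the classic spirograph curve):
    // R = fixed ring radius, r = rolling wheel radius, d = pen offset, t = angle.
    public static double[] point(double R, double r, double d, double t) {
        double k = (R - r) / r;
        double x = (R - r) * Math.cos(t) + d * Math.cos(k * t);
        double y = (R - r) * Math.sin(t) - d * Math.sin(k * t);
        return new double[]{x, y};
    }

    public static void main(String[] args) {
        // Step through the angle the way a draw() loop would, one point per frame.
        for (double t = 0; t < Math.PI * 2; t += Math.PI / 2) {
            double[] p = point(100, 60, 40, t);
            System.out.printf("%.1f, %.1f%n", p[0], p[1]);
        }
    }
}
```

In Processing each point would be plotted relative to the centre of the canvas, e.g. at (width/2 + x, height/2 + y).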

[Screenshot: Sam Brenner's spirograph code]



Adding a clock on Processing

In our workshops we were taught how to add images into Processing and how to select individual pixels of an image, so we can change a single pixel or a large group of pixels.

New language I learnt in this task:

  • PImage
  • loadImage()
  • loadPixels()
  • text()
  • textSize()
  • String

We were then set the task to go home and put a clock on our images. We were given the code for the clock and had to add it to the images we had created in the workshop.

[Screenshot: the clock added to my image]


This is what the code looked like:

[Screenshot: the clock code]
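The part of the clock code that builds the time text can be sketched in plain Java. The Processing version uses hour(), minute() and second() instead, so this is just the same idea with standard Java time, and the names are my own:

```java
import java.time.LocalTime;

public class ClockText {
    // Format a time as the zero-padded HH:MM:SS string the sketch draws with text().
    public static String clockString(LocalTime t) {
        return String.format("%02d:%02d:%02d", t.getHour(), t.getMinute(), t.getSecond());
    }

    public static void main(String[] args) {
        // In the sketch this string is rebuilt and redrawn every frame in draw().
        System.out.println(clockString(LocalTime.now()));
    }
}
```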

I will be playing around with this more to see what different effects I can get and make it look a bit more interesting!

References: CC BY-SA 3.0

Playing about with Processing

In workshops and in my spare time I have been playing about with Processing more and learning more of the language to broaden my Processing horizons.
In the workshop we started off with our usual 'void setup' and 'void draw' functions and then used triangles (triangle()) and rectangles (rect()) to create an image of a house. We then played with this so that it drew a house on each frame the mouse was held down. I was being very imaginative and drew my name…

[Screenshot: houses drawn to spell my name]

We then went on to create rows of houses and to rotate them. To create the blur effect we reduced the opacity of the objects, and this was the outcome:

[Screenshot: rotated rows of houses with reduced opacity]

In the workshop we also learnt about arrays so we could make a colour palette. After the workshop I went home and played around with the sketch, and I was amazed at what I could create by simply changing a few small things. For example, instead of having a house I simply drew an ellipse, I changed the opacity of the objects, and I changed or added colours to my array.
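The colour-palette idea is just an array that each new shape indexes into, wrapping back to the start when it runs out. A tiny plain-Java sketch of it, where the colour values are examples of my own:

```java
public class Palette {
    // A small colour palette held in an array, as in the workshop;
    // each new shape picks the next colour, wrapping round at the end.
    static final int[] COLOURS = {0xFF0000, 0x00FF00, 0x0000FF, 0xFFFF00};

    public static int colourFor(int shapeIndex) {
        return COLOURS[shapeIndex % COLOURS.length];
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            // The sketch would call fill(colourFor(i)) before drawing shape i.
            System.out.printf("shape %d -> #%06X%n", i, colourFor(i));
        }
    }
}
```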

[Images: my spiral and house experiments]

We were also taught how to export our sketch as a video, so I did this with one of my creations:

Introduction to Processing

Processing is an open-source programming language. It is widely used and can be the base for creating all kinds of things. Its language is very similar to Java, so it is very useful for teaching programming. There is so much information available about Processing on the internet, including tutorials and open code, that it makes it a lot easier to gather information and learn.

In our first workshop we looked at the exhibition of all the different examples of media that had been made using Processing. One of the first I recognised was the digital petting zoo, because it was one of the installations at the Digital Revolution exhibition (Barbican, London) which I went to see in the summer.

I had not realised how much could be, and had been, made with Processing. So when we started to learn the code it got me excited about the possibilities of what I could make. In our first workshop we started out with the basics of Processing, looking at the 'void setup' and 'void draw' functions and what to put in each section.

[Screenshot: the first workshop's code]

This is the code we learnt in the first workshop, and this is what it looks like when you run it:

[Screenshot: the sketch running]

I will be continuing to play about with Processing and posting my creations.