Be a Fugio beta tester!

Screenshot 2015-02-07 11.15.42

Fugio is almost out of alpha development, which means I’ve completed all the features I planned for the first release and tested it until I’m confident about its general stability and performance.

Now it’s time for you to try it!

I’ve done a call out for beta testers, who will be the first people to get their hands on the software and will be instrumental in the next stage of its development.

If you are interested in applying, please see the online form.


Fugio Update Feb 13

Screenshot 2015-02-12 17.02.00

For the past two weeks I’ve been chasing down bugs and writing documentation for Fugio, neither of which is my favourite pastime.

I’ve done 152 code commits and added a great deal of helper features, like a menu of examples that shows what each node does in a nice, friendly way. I’m also trying to standardise what things are called and how they operate as much as possible, to make it as easy as it can be to get past the initial hurdle of approaching a new piece of software.

It’s amazing how much time that side of things takes.  While I’m most excited about the timeline system (and the upcoming audio, video, and OpenGL plugins), I’ve spent almost as long working on making sure you can undo and redo things, that copy and paste works, and that MIDI synchronisation is good and tight.

I’m having to temper my enthusiasm for showing it to the world with the knowledge that if it’s not documented and tested as well as I can manage, people just aren’t going to bother with it.

I still have about half the documentation to write, which I want to have done over the next week.

Qt Multimedia in Fugio

Screenshot 2015-01-26 23.54.32

Fugio (and Painting With Light) are both written in C++ and built using the Qt Project, mainly because it offers a (mostly) consistent API across multiple platforms.  It also offers a wide range of low- and high-level functionality, which is often great fun to play with.

Take the Qt Multimedia module, for example.  It’s so high level that I couldn’t resist adding in a couple of nodes that interface with it, so above we have the new SoundEffect node that can load and quickly play WAV audio files when triggered.

Taking MIDI input (or one of a multitude of other options, a Makey Makey for instance) and linking it up to SoundEffect nodes would rapidly create a simple sound board, but, y’know, one triggered by touching various fruits.

You can also see the new Filename Node, which is another small helper: click the button and a file open dialog appears.

Screenshot 2015-01-26 02.13.47

And here we have the Multimedia Player Node, which can play back more complex media formats such as MP3s, and also video!

While I’ve been putting a lot of development time into an ffmpeg-based, timeline-controlled media playback node, sometimes you just need a simple way to play media, and these new nodes fit the bill nicely.

Sequencing Resolume Arena with Fugio

Screenshot 2015-01-25 23.26.14

As I’m working on streamlining the MIDI control workflow, I hooked Fugio up to Resolume to see how easy it was to get them talking.

Screenshot 2015-01-25 23.27.23

Here I’m using a colour timeline, breaking it into RGB components and passing them through one of the new MIDI Helper nodes, which takes floating point values from 0.0 to 1.0 and converts them to MIDI values from 0 to 127, then outputs these values to Resolume to control its RGB parameters.
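The scaling that helper node performs is simple enough to sketch. This is a hypothetical `floatToMidi`/`midiToFloat` pair illustrating the idea, not Fugio’s actual code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Hypothetical helpers (not Fugio's actual code): convert a floating point
// control value in the range 0.0 - 1.0 to a 7-bit MIDI value (0 - 127),
// clamping out-of-range input, and back again.

inline uint8_t floatToMidi( float pValue )
{
    float v = std::max( 0.0f, std::min( 1.0f, pValue ) );

    return( static_cast<uint8_t>( std::lround( v * 127.0f ) ) );
}

inline float midiToFloat( uint8_t pValue )
{
    return( float( pValue ) / 127.0f );
}
```

Clamping first means a slightly out-of-range timeline value still produces a legal CC byte rather than wrapping around.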

Everything works, but the process of mapping controls was more fiddly than it should be: I couldn’t send a single CC value at a time, so I couldn’t take full advantage of Resolume’s MIDI mapping listen functionality.  I’m thinking about how to add a way of doing that…

MIDI listen is a nice, fast way to map controls so I added it into Fugio’s MIDI Input node to automatically add pins on receipt of MIDI messages.

Fugio MIDI synchronisation

Screenshot 2015-01-23 17.39.12

Today I’ve been looking at synchronising Fugio with various applications over MIDI.  First, using the ever useful MIDIOX and loopMIDI, I was able to get Fugio synchronised to MIDI Time Code.
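The conversion at the heart of MTC sync is straightforward: MTC carries a time as hours:minutes:seconds:frames, so turning it into a single seconds value just needs the frame rate in use. A minimal sketch (hypothetical helper name, not Fugio’s API):

```cpp
// MIDI Time Code carries hours:minutes:seconds:frames; converting that to a
// single seconds value needs the frame rate (24, 25, or 30 fps are the
// common MTC rates). Hypothetical helper, not Fugio's actual code.

double mtcToSeconds( int pHours, int pMinutes, int pSeconds, int pFrames, double pFPS )
{
    return( pHours * 3600.0 + pMinutes * 60.0 + pSeconds + double( pFrames ) / pFPS );
}
```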

Screenshot 2015-01-23 18.30.26

I also got MIDI clock working, synchronising Fugio to Ableton Live.  The main difference is that Fugio works in time, not measures/beats, so I had to manually set the BPM value to match the setting in Live, after which it all matched up.

My plan for this is to feed the MIDI clock into a grid track, allowing direct and flexible translation between song positions and time.
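MIDI clock runs at 24 pulses per quarter note, so once the BPM is known a running pulse count maps directly onto seconds, which is the translation Fugio needs because it works in time rather than beats. A sketch of that conversion (hypothetical helper, not Fugio’s actual code):

```cpp
// MIDI clock sends 24 pulses per quarter note (PPQN); given a manually set
// BPM, a running pulse count converts directly to seconds.
// Hypothetical helper, not Fugio's actual code.

const double MIDI_CLOCK_PPQN = 24.0;

double clockPulsesToSeconds( long pPulseCount, double pBPM )
{
    double beats = double( pPulseCount ) / MIDI_CLOCK_PPQN;

    return( beats * ( 60.0 / pBPM ) );
}
```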

I still have to finish off sending clock/MTC from Fugio, but it’s almost there.


Fugio Keyboard Support

I’ve been away for a couple of days running a Painting With Light video mapping workshop at Bournemouth University, so today I only managed a little Fugio coding: I added a keyboard node that catches any keyboard sequence, from simply pressing R to combinations like CTRL+7, and generates a trigger.

Screenshot 2015-01-21 12.45.12

In this patch pressing R generates a new random number.

Fugio: Timeline Recording

This morning I added timeline track data recording into Fugio.

Screenshot 2015-01-15 10.50.42

For now it can record numeric values and also colours (I have an idea for this) over time, and then play them back. I need to add some punch in/out control, and I’m thinking of putting in loop recording support that would incorporate the functionality I was aiming for in my old app MIDILoop.

At some point I guess I need to do raw MIDI and OSC data tracks too, for fine control. There’s now the possibility of recording data from one source into a timeline, outputting it through other processing stages, and re-recording it all in-app.
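The core of recording values over time is an ordered set of timestamped keys that can be read back with interpolation. A minimal sketch of the idea (illustrative only, not Fugio’s actual track code):

```cpp
#include <iterator>
#include <map>

// Minimal sketch of a recordable value track: timestamped values stored in
// an ordered map, read back with linear interpolation. Illustrative only,
// not Fugio's actual track code.

class ValueTrack
{
public:
    void record( double pTime, float pValue )
    {
        mKeys[ pTime ] = pValue;
    }

    float value( double pTime ) const
    {
        if( mKeys.empty() )
        {
            return( 0.0f );
        }

        auto it = mKeys.lower_bound( pTime );

        if( it == mKeys.begin() )       // before the first key
        {
            return( it->second );
        }

        if( it == mKeys.end() )         // after the last key
        {
            return( std::prev( it )->second );
        }

        auto p = std::prev( it );

        double t = ( pTime - p->first ) / ( it->first - p->first );

        return( float( p->second + t * ( it->second - p->second ) ) );
    }

private:
    std::map<double,float> mKeys;
};
```

Punch in/out would then just gate calls to `record()`, and loop recording would wrap `pTime` back to the loop start.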

Screenshot 2015-01-15 10.50.48

Apologies for the horrible timeline colours – I was testing some stylesheet stuff… 🙂


Fugio: Colour to MIDI notes

I managed to get the colour timeline controls working pretty well (there’s still some finessing to do), so I thought I’d try a little experiment: feeding the Hue, Saturation, and Lightness of the colour being generated in the colour timeline to a MIDI output, creating musical notes depending on the levels. There is a grand tradition of linking colours and musical pitch (see Isaac Newton’s 1704 book Opticks), so this provides a way of playing about with that idea.
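The mapping itself is just a rescale of each 0.0–1.0 colour component onto a note range. Something like this (hypothetical function name, not Fugio’s actual node code):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Map a 0.0 - 1.0 colour component (hue, saturation, or lightness) onto a
// MIDI note number within a chosen range. A sketch of the experiment with a
// hypothetical function name, not Fugio's actual node code.

uint8_t componentToNote( float pValue, uint8_t pLowNote, uint8_t pHighNote )
{
    float v = std::max( 0.0f, std::min( 1.0f, pValue ) );

    return( uint8_t( pLowNote + std::lround( v * float( pHighNote - pLowNote ) ) ) );
}
```

Quantising the result to a scale afterwards would make the output more musical, but the raw mapping is enough to play with.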

Atmospheric Scattering Shader


Just found this rather nice Atmospheric Scattering rendering C++ code over at and thought I’d do a quick conversion to a GLSL shader as a test for the Timeline software I’m working on. Works rather nicely…


My (non-optimised) fragment shader conversion is:

#version 150

#define M_PI 3.1415926535897932384626433832795

uniform float TimeOfDay; // range 0.0 -> 1.0 (0.0 = Midnight, 0.5 = Midday, etc)

const float RADIUS_EARTH = 6360e3;
const float RADIUS_ATMOSPHERE = 6420e3;
const float RAYLEIGH_SCALE_HEIGHT = 7994;
const float MIE_SCALE_HEIGHT = 1200;
const float SUN_INTENSITY = 20;

const float g = 0.76;

const vec3 betaR = vec3( 5.5e-6, 13.0e-6, 22.4e-6 );    // Rayleigh scattering coefficients at sea level
const vec3 betaM = vec3( 21e-6 );                       // Mie scattering coefficients at sea level

vec3 sunDirection = vec3( 0, 1, 0 );

const int numSamples = 16;
const int numSamplesLight = 8;

struct Ray
{
    vec3 o; // origin
    vec3 d; // direction (should always be normalized)
};

struct Sphere
{
    vec3 pos;   // centre of sphere position
    float rad;  // radius
};
const Sphere SPHERE_EARTH      = Sphere( vec3( 0 ), RADIUS_EARTH );
const Sphere SPHERE_ATMOSPHERE = Sphere( vec3( 0 ), RADIUS_ATMOSPHERE );

bool intersect( in Ray ray, in Sphere sphere, out float t0, out float t1 )
{
    vec3 oc = ray.o - sphere.pos;
    float b = 2.0 * dot( ray.d, oc );
    float c = dot( oc, oc ) - sphere.rad * sphere.rad;
    float disc = b * b - 4.0 * c;

    if( disc < 0.0 )
    {
        return( false );
    }

    // numerically stable form of the quadratic solution

    float q;

    if( b < 0.0 )
    {
        q = ( -b - sqrt( disc ) ) / 2.0;
    }
    else
    {
        q = ( -b + sqrt( disc ) ) / 2.0;
    }

    t0 = q;
    t1 = c / q;

    // make sure t0 is smaller than t1

    if( t0 > t1 )
    {
        // if t0 is bigger than t1 swap them around

        float temp = t0;

        t0 = t1;
        t1 = temp;
    }

    // if t1 is less than zero, the object is in the ray's negative direction
    // and consequently the ray misses the sphere

    if( t1 < 0.0 )
    {
        return( false );
    }

    if( t0 < 0.0 )
    {
        t0 = 0.0;
    }

    return( true );
}

vec3 computeIncidentLight( in Ray r )
{
    float       t0, t1;

    if( !intersect( r, SPHERE_ATMOSPHERE, t0, t1 ) )
    {
        return( vec3( 1 ) );
    }

    float segmentLength = ( t1 - t0 ) / numSamples;
    float tCurrent = t0;

    vec3 sumR = vec3( 0 );
    vec3 sumM = vec3( 0 );

    float opticalDepthR = 0;
    float opticalDepthM = 0;

    float mu = dot( r.d, sunDirection );
    float phaseR = 3 / ( 16 * M_PI ) * ( 1 + mu * mu );
    float phaseM = 3 / (  8 * M_PI ) * ( ( 1 - g * g ) * ( 1 + mu * mu ) ) / ( ( 2 + g * g ) * pow( 1 + g * g - 2 * g * mu, 1.5 ) );

    for( int i = 0; i < numSamples ; i++ )
    {
        vec3    samplePosition = r.o + r.d * ( tCurrent + 0.5 * segmentLength );
        float   height = length( samplePosition ) - RADIUS_EARTH;

        // compute optical depth for this segment

        float hr = exp( -height / RAYLEIGH_SCALE_HEIGHT ) * segmentLength;
        float hm = exp( -height / MIE_SCALE_HEIGHT      ) * segmentLength;

        opticalDepthR += hr;
        opticalDepthM += hm;

        // light optical depth

        Ray lightRay = Ray( samplePosition, sunDirection );

        float lmin, lmax;

        intersect( lightRay, SPHERE_ATMOSPHERE, lmin, lmax );

        float segmentLengthLight = lmax / numSamplesLight;
        float tCurrentLight = 0;
        float opticalDepthLightR = 0;
        float opticalDepthLightM = 0;
        int j = 0;

        for( ; j < numSamplesLight ; j++ )
        {
            vec3 samplePositionLight = lightRay.o + lightRay.d * ( tCurrentLight + 0.5 * segmentLengthLight );

            float heightLight = length( samplePositionLight ) - RADIUS_EARTH;

            // stop if the light ray dips below the planet surface

            if( heightLight < 0 )
            {
                break;
            }

            opticalDepthLightR += exp( -heightLight / RAYLEIGH_SCALE_HEIGHT ) * segmentLengthLight;
            opticalDepthLightM += exp( -heightLight / MIE_SCALE_HEIGHT      ) * segmentLengthLight;

            tCurrentLight += segmentLengthLight;
        }

        // only add the contribution if the light ray made it through the atmosphere

        if( j == numSamplesLight )
        {
            vec3 tau = betaR * ( opticalDepthR + opticalDepthLightR ) + betaM * 1.1 * ( opticalDepthM + opticalDepthLightM );
            vec3 attenuation = exp( -tau );

            sumR += hr * attenuation;
            sumM += hm * attenuation;
        }

        tCurrent += segmentLength;
    }

    return( SUN_INTENSITY * ( sumR * phaseR * betaR + sumM * phaseM * betaM ) );
}

void main()
{
    const int width = 512;
    const int height = 512;

    float a = mod( TimeOfDay - 0.5, 1 ) * 2.0 * M_PI;

    sunDirection = normalize( vec3( 0, cos( a ), sin( a ) ) );

    float x = 2 * ( gl_FragCoord.x + 0.5 ) / ( width  - 1 ) - 1;
    float y = 2 * ( gl_FragCoord.y + 0.5 ) / ( height - 1 ) - 1;

    float z2 = x * x + y * y;

    if( z2 <= 1 )
    {
        float phi   = atan( y, x );
        float theta = acos( 1 - z2 );

        vec3 dir = vec3( sin( theta ) * cos( phi ), cos( theta ), sin( theta ) * sin( phi ) );
        vec3 pos = vec3( 0, RADIUS_EARTH + 1, 0 );

        gl_FragColor = vec4( computeIncidentLight( Ray( pos, normalize( dir ) ) ), 1 );
    }
    else
    {
        gl_FragColor = vec4( 0 );
    }
}

Timeline Development – 3rd August 2014

Screenshot-2014-08-03-12.13.08

It’s been a while since my last update, though not from lack of action; rather, I’ve been struggling with my latest project for the past few months, and I felt it’s time to pull back the curtain a bit and show what I’ve been working on.

My original design for the Timeline software was a nice open-ended sequencer that could manipulate all manner of types of data, from single values (for MIDI or OSC control of parameters) to colours, audio, and even video, combined with flexible (possibly too flexible) control over how each track played back, with repeating sections, random markers, and all manner of tricks that I was getting really excited about using.

I’d spent almost a year working on it and had a pretty nice media playback engine, and everything seemed to be heading towards a 1.0 release back in June 2014.  But then I hit a wall, which I have to say is pretty rare in my software development experience, as I’ve always had a clear idea about the role and function of each system I’m developing.

The problem was the growing complexity of visually managing the relationships between the different tracks of data, and how these related to other applications and devices through the various input and output interfaces.  I was also toying with the idea of applying real-time effects to video and audio (also data), and these did not comfortably fit into the design I had come up with.

I’ve also slowly been working on another application called PatchBox that uses a node-based interface to visually build connections between blocks of functionality, so I took a deep breath, ripped the code apart, and put in a new interface:

Screenshot-2014-06-25-21.49.30

The node interface went some way towards solving the problem of presenting the relationship between tracks and devices, but there was a major problem: the core code for the node system (it’s actually the code that drives several of my art installations, such as Shadows of Light) was rather incompatible with the core code of the Timeline application, and a hard decision had to be made:

  1. Release Timeline and PatchBox separately and fix the interface issue over time.
  2. Combine the two applications, which would require taking a massive step back equivalent to months of development time.

Not an easy one to make, compounded by the fact that, as a freelance artist, until I get a product on sale I’m basically paying for all the development time out of my own pocket, so the latter option was not to be taken lightly.

After a couple of weeks of chin stroking, frantic diagrams scratched in notebooks, thinking about what configuration would be most commercially viable, and false starts, I came to a final thought:

“Make the tool that you need for your art”

It’s not that I don’t want it to be a useful tool that other people will want to use and buy at some point (that would be lovely) but I’m not a software design company, and this is primarily an “art platform” for my own work so I have to listen to what feels right to me.

So, I chose the latter (of course), and I’ve been working on it for at least a few hours a day, pretty much every day, for the past few months.  The screenshot at the top of this post is the latest, showing a colour timeline track feeding into an OpenGL shader.

There is still much to be done and it’s pretty gruelling at times as I’m having to go over old ground repeatedly, but I feel like it’s heading in the right direction, and I’m already creating new artworks using it that wouldn’t have previously been possible.

Realistically, a 1.0 release isn’t now going to happen until 2015.  With a long solo project like this it’s easy to find yourself on the long slide into a quiet madness of complexity and introspection, so I’m planning more regular updates to at least keep my progress in check by “real people”.  To this end, if you have any comments, questions, or general messages of encouragement, I’d be happy to hear them.