Fugio is almost out of alpha development, which means that I’ve completed all the features I have planned for the first release, and tested it until I’m confident about its general stability and performance.
Now it’s time for you to try it!
I’ve put out a call for beta testers, who will be the first people to get their hands on the software and will be instrumental in the next stage of its development.
If you are interested in applying, please see the online form.
For the past two weeks I’ve been chasing down bugs and writing documentation for Fugio, neither of which is my favourite pastime.
I’ve done 152 code commits, added a great deal of helper features like a menu of examples that show what each node does in a nice, friendly way, and am generally trying to standardise what things are called and how they operate as much as possible to make it as easy as it can be to get past the initial hurdle of approaching a new piece of software.
It’s amazing how much time that side of things takes. While I’m most excited about the timeline system (and the upcoming audio, video, and OpenGL plugins), I’ve spent almost as long working on making sure you can undo and redo things, that copy and paste works, and that MIDI synchronisation is good and tight.
I’m having to temper my enthusiasm for wanting to show it to the world with the knowledge that if it’s not documented and tested as best that I can, people just aren’t going to bother with it.
I still have about half the documentation to write, which I want to finish over the next week.
Fugio (and Painting With Light) are both written in C++ and built using the Qt framework, mainly because it offers a (mostly) consistent API across multiple platforms, along with a wide range of low- and high-level functionality that is often great fun to play with.
Take the Qt Multimedia module, for example. It’s so high level that I couldn’t resist adding in a couple of nodes that interface with it, so above we have the new SoundEffect node that can load and quickly play WAV audio files when triggered.
You can also see the new Filename Node, which is another small helper: click the button and a file open dialog appears.
And here we have the Multimedia Player Node that can play back more complex media formats such as MP3s, and also video!
While I’ve been putting a lot of development time into an FFmpeg-based, timeline-controlled media playback node, sometimes you just need a simple way to play media, and these new nodes fit the bill nicely.
As I’m working on streamlining the MIDI control workflow, I hooked Fugio up to Resolume to see how easy it was to get them talking.
Here I’m using a colour timeline, breaking it into RGB components and passing them through one of the new MIDI Helper nodes, which takes floating point values from 0.0 to 1.0, converts them to MIDI values of 0–127, and outputs these values to Resolume to control its RGB controls.
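The conversion itself is simple enough: clamp the normalised value, then scale it onto the 7-bit range that MIDI CC uses. Here’s a quick Python sketch of the idea (Fugio itself is C++/Qt, and the function names here are mine, purely for illustration):

```python
def float_to_midi(value: float) -> int:
    """Map a normalised 0.0-1.0 float onto the 7-bit 0-127 range used by MIDI CC."""
    clamped = max(0.0, min(1.0, value))  # guard against out-of-range input
    return round(clamped * 127)

def colour_to_cc(r: float, g: float, b: float) -> list[int]:
    """Split a colour into RGB components and convert each one,
    roughly what the MIDI Helper node does for the colour timeline."""
    return [float_to_midi(c) for c in (r, g, b)]
```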
Everything works, but the process of mapping controls was more fiddly than it should be: I couldn’t send a single CC value at a time, so I couldn’t take full advantage of Resolume’s MIDI mapping listen functionality. I’m thinking about how to add a way of doing that…
MIDI listen is a nice, fast way to map controls so I added it into Fugio’s MIDI Input node to automatically add pins on receipt of MIDI messages.
Today I’ve been looking at synchronising Fugio with various applications over MIDI. First, using the ever useful MIDIOX and loopMIDI, I was able to get Fugio synchronised to MIDI Time Code.
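MIDI Time Code ultimately encodes an absolute hours:minutes:seconds:frames position at one of a handful of frame rates (24, 25, or 30 fps, plus drop-frame), so once the quarter-frame messages have been assembled, turning the position into the time value Fugio works in is straightforward. A Python sketch of that last step (the function name is mine, not Fugio’s, and drop-frame handling is left out):

```python
def mtc_to_seconds(hours: int, minutes: int, seconds: int, frames: int,
                   fps: float = 25.0) -> float:
    """Convert an assembled MIDI Time Code position (hh:mm:ss:ff) into seconds."""
    return hours * 3600 + minutes * 60 + seconds + frames / fps
```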
I also got MIDI clock working, synchronising Fugio to Ableton Live. The main difference is that Fugio works in time, not measures/beats, so I had to manually set the BPM value to match the setting in Live; then it all matched up.
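The reason the BPM has to be set manually is that MIDI clock carries no tempo information at all, just a steady stream of pulses at 24 per quarter note; the receiver has to know (or measure) the tempo to turn pulses into time. A small sketch of that conversion, with names of my own choosing:

```python
MIDI_CLOCK_PPQN = 24  # MIDI clock sends 24 pulses per quarter note

def clock_ticks_to_seconds(ticks: int, bpm: float) -> float:
    """Convert a count of received MIDI clock pulses into elapsed seconds,
    given a tempo the receiver already knows."""
    seconds_per_quarter = 60.0 / bpm
    return ticks * seconds_per_quarter / MIDI_CLOCK_PPQN
```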
My plan for this is to feed the MIDI clock into a grid track, allowing direct and flexible translation between song positions and time.
I still have to finish off sending clock/MTC from Fugio, but it’s almost there.
I’ve been away for a couple of days running a Painting With Light video mapping workshop at Bournemouth University so today I managed to do a little Fugio coding and added a keyboard node to catch any keyboard sequence such as simply pressing R or combinations like CTRL+7 and generate a trigger.
In this patch pressing R generates a new random number.
This morning I added timeline track data recording into Fugio.
For now it can record numeric values and also colours (I have an idea for this) over time and then play them back. I need to add some punch in/out control, and I’m thinking of adding loop recording support that would incorporate the functionality I was aiming for in my old app MIDILoop.
At some point I guess I need to do raw MIDI and OSC data tracks too for fine control. There now exists the possibility of recording data from one source into a timeline, outputting that through other processing stages, and re-recording it all in-app.
Apologies for the horrible timeline colours – was testing some stylesheet stuff… 🙂
I managed to get the colour timeline controls working pretty well (still some finessing to do), so I thought I’d try a little experiment and feed the Hue, Saturation, and Lightness from the colour being generated in the colour timeline to a MIDI output, creating musical notes depending on the levels. There is a grand tradition of linking colours and musical pitch (see Isaac Newton’s 1704 book Opticks), so this provides a way of playing about with this data.
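The mapping itself can be as simple or as elaborate as you like; here’s a minimal Python sketch of the basic idea, scaling a normalised colour component linearly onto a two-octave note range. The range and the linear scaling are illustrative choices of mine, not Fugio’s actual mapping:

```python
def level_to_note(level: float, low_note: int = 48, high_note: int = 72) -> int:
    """Map a 0.0-1.0 colour component (hue, saturation, or lightness)
    onto a MIDI note number; 48-72 is C3 to C5."""
    clamped = max(0.0, min(1.0, level))
    return low_note + round(clamped * (high_note - low_note))
```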
It’s been a while since my last update, though not from lack of action; rather, I’ve been struggling with my latest project for the past few months, and I felt it’s time to pull back the curtain a bit and show what I’ve been working on.
My original design for the Timeline software was a nice open-ended sequencer that could manipulate all manner of types of data from single values (for MIDI or OSC control of parameters) to colours, audio, and even video, combined with a flexible (possibly too flexible) control over how each track played back with repeating sections and random markers, and all manner of tricks that I was getting really excited about using.
I’d spent almost a year working on it and had a pretty nice media playback engine, and everything seemed to be heading towards a 1.0 release back in June 2014. But then I hit a wall, which I have to say is pretty rare for me in my software development experience, as I’ve always had a clear idea about the role and function of each system I’m developing.
The problem was the growing complexity of visually managing the relationship between the different tracks of data and how these related to other applications and devices through the various input and output interfaces. I was also toying with the idea of being able to apply real-time effects to video and audio (also data) and these did not comfortably fit into the design I had come up with.
I’ve also slowly been working on another application called PatchBox that uses a node based interface to visually build connections between blocks of functionality, so I took a deep breath and ripped the code apart and put in a new interface:
The node interface went some way towards solving the problem of presenting the relationship between tracks and devices, but there was a major problem, in that the core code for the node system (it’s actually the code that drives several of my art installations such as Shadows of Light) was rather incompatible with the core code of the Timeline application, and a hard decision had to be made:
1. Release Timeline and PatchBox separately, and fix the interface issue over time.
2. Combine the two applications, which would require taking a massive step back equivalent to months of development time.
Not an easy one to make, compounded by the fact that as a freelance artist, until I get a product on sale, I’m basically paying for all the development time out of my own pocket so the latter option was not to be taken lightly.
After a couple of weeks of chin stroking, frantic diagrams scratched in notebooks, thinking about what configuration would be most commercially viable, and false starts, I came to a final thought:
“Make the tool that you need for your art”
It’s not that I don’t want it to be a useful tool that other people will want to use and buy at some point (that would be lovely) but I’m not a software design company, and this is primarily an “art platform” for my own work so I have to listen to what feels right to me.
So, I chose the latter (of course) and I’ve been working on it at least a few hours a day, pretty much every day for the past few months. The screenshot at the top of this post is the latest showing a colour timeline track feeding into an OpenGL shader.
There is still much to be done and it’s pretty gruelling at times as I’m having to go over old ground repeatedly, but I feel like it’s heading in the right direction, and I’m already creating new artworks using it that wouldn’t have previously been possible.
Realistically, a 1.0 release isn’t now going to happen until 2015. With a long solo project like this, it is easy to find yourself on the long slide into a quiet madness of complexity and introspection, so I’m planning more regular updates to at least keep my progress in check by “real people”. To this end, if you have any comments, questions, or general messages of encouragement, I’d be happy to hear them.