Monday 7 March 2016

TOPLAP Leeds: First public outing of "FoxDot"

Outside of my PhD research into nonverbal communication in musicians, I also have a passion for Live Coding music and for programming my own music-making system, FoxDot. I'm quite new on the Live Coding "scene" and have been wanting to get more involved for the last year or so, but never really plucked up the courage to do something about it. Luckily for me, I was asked by the Live Coding pioneer Dr. Alex McLean to help set up TOPLAP Leeds, a node extension of the established Live Coding group TOPLAP, along with a few other students.

What this means, I don't really know, but Alex encouraged me to put on a small demonstration of the code I've been working on to get some feedback from my peers. It was the first time I had used FoxDot in a public setting that wasn't YouTube, and I was actually really nervous. I started working on the system last year for a module in composition as part of my Computer Music MA and from there it grew - but I had never really thought I would use it to perform. The feedback was really positive and the group gave me some great ideas to put into practice. My PhD comes first, though, so I'm trying to limit the time I spend on FoxDot to weekday evenings and Sundays - but it's just so fun!

FoxDot is a pre-processed, Python-based language that talks to a powerful sound synthesis engine called SuperCollider. It lets you create music-playing objects that work together or independently to make music, and I'm really hoping to promote it amongst the Live Coding community over the next few years. It's generally in a working state, but I have so many ideas for it that it seems like it's still very much in its infancy. The reason for writing this blog post is to try and make it all a bit more real. For so long I've been working on it in a private way, only letting the public see it in brief clips on YouTube - it's time I actually started putting it out there and letting it loose on the world. If you're interested in Live Coding, I urge you to check out http://toplap.org and also my FoxDot website https://sites.google.com/site/foxdotcode/ and see what you can do. I'm performing at the ODI Leeds on Friday 29th April and you can get tickets here.
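To give you a taste of what making music in FoxDot looks like, here's a small sketch based on its public releases (the exact synth names and syntax may differ between versions):

    from FoxDot import *

    # Set the tempo of the scheduling clock
    Clock.bpm = 120

    # 'p1' is a music-playing object: it repeats a pattern of scale
    # degrees on the 'pluck' synth, alternating two note durations
    p1 >> pluck([0, 2, 4, 7], dur=[0.5, 0.25])

    # A second player running independently alongside the first
    b1 >> bass([0, -2], dur=2)

    # Silence a player when you're done with it
    p1.stop()

Each player keeps going until it's told otherwise, so you build up a piece by editing and re-evaluating lines like these while the music plays.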

I hope this is the first step of a long and rewarding journey.

Tuesday 1 March 2016

How to "iterate" over Kinect files and extract RGB video stream

If you have ever tried to extract data from a Microsoft Kinect Extended Event File (XEF), you may have had some trouble getting your hands on the RGB video stream that it contains, like some of the users in this MSDN thread. As mentioned in some of my previous blog posts, I've found it possible to play an XEF file in Kinect Studio and then record the data as it is being played, as if it were a live stream. This makes it relatively easy to collect data on the bodies and their joint positions, but much harder for the other stream types because of the amount of processing each frame requires.
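A minimal sketch of that body-recording loop, using the PyKinect2 bindings rather than my exact application, looks something like this (it assumes the Kinect Service is running and Kinect Studio is playing the file back):

    from pykinect2 import PyKinectV2, PyKinectRuntime

    # Listen for body frames from the Kinect Service while Kinect
    # Studio plays the XEF file back as if it were a live stream
    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)

    while True:
        if kinect.has_new_body_frame():
            frame = kinect.get_last_body_frame()
            if frame is not None:
                for i in range(kinect.max_body_count):
                    body = frame.bodies[i]
                    if body.is_tracked:
                        # e.g. grab the 3D position of the head joint
                        head = body.joints[PyKinectV2.JointType_Head].Position
                        print(head.x, head.y, head.z)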

Incoming RGB video frames need to be written directly to file; holding them in memory heavily increases the resources you need and risks crashing your computer. But not all machines can perform enough I/O operations to keep up with the 30 fps playback rate of the XEF file, which makes extracting this data very difficult for most users. Being able to extract the RGB video data would be really useful when trying to synchronise the data with audio I am recording from a different source; something that is integral to the data analysis in my PhD studies and hard to achieve with the skeleton data alone. It wasn't until recently that I tried pressing the "step file" button in Kinect Studio to see whether each frame could be individually sent to the Kinect Service and, consequently, to my data extraction Python applications. To my surprise: it worked!
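The direct-to-file part of the colour recording might look something like the sketch below (again using PyKinect2, with a hypothetical output file name):

    from pykinect2 import PyKinectV2, PyKinectRuntime

    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Color)

    # Append each raw colour frame straight to disk so that
    # frames never accumulate in memory
    with open("colour_stream.raw", "wb") as outfile:
        while True:
            if kinect.has_new_color_frame():
                frame = kinect.get_last_color_frame()  # flat numpy uint8 array
                frame.tofile(outfile)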

Close up of the "step file" button

It seems like it would be possible to manually iterate over the file by clicking the "step file" button, but for long files this would be very time consuming (a 5 minute file at 30 fps contains around 9,000 frames). Using the PyAutoGUI module, I was able to set up an automated click on the "step file" button every 0.5 seconds - its position is captured by hovering over it and pressing the Return key. This iterated over the file automatically and let me extract and store the RGB video data successfully; the sketch below shows the gist of it. I tried to trigger the click as soon as each frame was processed, but got some Windows errors; I will hopefully fix this in future to make the process faster, but right now it's at least a bit easier!
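Here is a minimal sketch of that automation (the frame count is just an estimate for a 5 minute file):

    import time
    import pyautogui

    # Hover the mouse over Kinect Studio's "step file" button,
    # then press Return in this console to capture its position
    input("Hover over the 'step file' button and press Return ")
    x, y = pyautogui.position()

    # Click the button every half second to advance the file one
    # frame at a time (around 9,000 clicks for 5 minutes at 30 fps)
    for _ in range(9000):
        pyautogui.click(x, y)
        time.sleep(0.5)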

I am also hoping to find a bit of time to write up a simple README for the application, which is available on my GitHub here: https://github.com/FoxDot/PyKinectXEF

Please feel free to use it and give feedback - bad or good - and I look forward to hearing any suggestions!

- Ryan