So I've spent a good part of the last month working on a couple of Python applications to extract joint coordinates from the skeletons generated by a Version 2 Kinect, and I finally have something to show for it. When an XEF file is played back in Kinect Studio, the Kinect Service treats it as if the data were being captured live, so you can read it out through the BodyFrame object, which contains data on all the bodies in the current frame and the locations of their joints. The only problem is that the files are gigabytes in size and take up quite a bit of hard drive space. Instead, I've managed to store the location of each body's joints in a database with a concise schema.
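To give a flavour of the capture-and-store loop, here's a simplified sketch rather than my actual code: it assumes the pykinect2 Python bindings and an SQLite backend for illustration, and the table layout, file name and `performance_id` value are all just placeholders.

```python
import sqlite3
import time

from pykinect2 import PyKinectV2, PyKinectRuntime

# One row per joint per frame keeps the schema concise:
# (performance ID, timestamp, body index, joint type, x, y, z).
db = sqlite3.connect("performances.db")
db.execute("""
    CREATE TABLE IF NOT EXISTS joints (
        performance_id TEXT,
        frame_time     REAL,
        body_index     INTEGER,
        joint_type     INTEGER,   -- 0-24, matching PyKinectV2.JointType_*
        x REAL, y REAL, z REAL    -- camera-space coordinates in metres
    )
""")

kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)
performance_id = "pilot-001"  # the unique ID assigned at recording time

while True:  # in practice, stop when the Kinect Studio playback finishes
    if not kinect.has_new_body_frame():
        continue
    now = time.time()
    bodies = kinect.get_last_body_frame()
    for i in range(kinect.max_body_count):
        body = bodies.bodies[i]
        if not body.is_tracked:
            continue
        db.executemany(
            "INSERT INTO joints VALUES (?, ?, ?, ?, ?, ?, ?)",
            [(performance_id, now, i, j,
              body.joints[j].Position.x,
              body.joints[j].Position.y,
              body.joints[j].Position.z)
             for j in range(PyKinectV2.JointType_Count)],
        )
    db.commit()
```

One row per joint per frame means a whole performance is just a long, flat table, which is what makes the storage so small compared with the raw XEF file.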
I can easily pull out whichever "performance" I want using a unique ID that I assign at recording time, and the database can be accessed from a variety of platforms while using about 0.1% of the original hard drive space. At the moment I'm only storing motion capture data for the 25 joints the Kinect can automatically detect, with no RGB or sound data, but I want to find some sort of compromise early in the new year. With this data I can play a recording back in a much more human-understandable way whenever I need to analyse it visually or quantitatively, or just to make sense of it. I've included a screenshot of how this looks below.
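For the playback side, here's a simplified sketch of the idea (not the actual viewer): it reads one performance back out of a table like the one above and redraws the joints frame by frame as a crude 2D stick figure with matplotlib.

```python
import sqlite3

import matplotlib.pyplot as plt
import numpy as np

# Pull every frame for one performance, ordered by time (schema as above).
db = sqlite3.connect("performances.db")
rows = db.execute(
    "SELECT frame_time, joint_type, x, y FROM joints "
    "WHERE performance_id = ? AND body_index = 0 ORDER BY frame_time",
    ("pilot-001",),
).fetchall()

# Group the 25 joints of each frame by their shared timestamp.
frames = {}
for t, j, x, y in rows:
    frames.setdefault(t, []).append((x, y))

# Redraw the skeleton's joints frame by frame.
fig, ax = plt.subplots()
scatter = ax.scatter([], [])
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
for t in sorted(frames):
    scatter.set_offsets(np.array(frames[t]))
    plt.pause(1 / 30)  # roughly the Kinect's 30 fps body frame rate
plt.show()
```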
The other great thing about having this data in a database is that I don't need to iterate over the XEF files and extract joint coordinates every time I want to create a graph, for example. I can just read the data out with a simple SELECT query and plot it. Already I'm able to generate graphs tracking joint movement, velocity, and acceleration over time - not something that was particularly easy with motion capture data even a decade ago.
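As a sketch of that workflow (joint numbering and table layout as in the earlier sketches), the following pulls one joint's trajectory out with a SELECT and derives speed and acceleration by finite differences with NumPy:

```python
import sqlite3

import matplotlib.pyplot as plt
import numpy as np

HAND_RIGHT = 11  # JointType_HandRight in the Kinect v2 joint enumeration

db = sqlite3.connect("performances.db")
rows = db.execute(
    "SELECT frame_time, x, y, z FROM joints "
    "WHERE performance_id = ? AND body_index = 0 AND joint_type = ? "
    "ORDER BY frame_time",
    ("pilot-001", HAND_RIGHT),
).fetchall()

t = np.array([r[0] for r in rows])
pos = np.array([r[1:] for r in rows])  # (n_frames, 3) camera-space positions

# Finite differences give velocity and acceleration; take their magnitudes.
vel = np.gradient(pos, t, axis=0)
acc = np.gradient(vel, t, axis=0)
speed = np.linalg.norm(vel, axis=1)
accel = np.linalg.norm(acc, axis=1)

fig, (ax1, ax2) = plt.subplots(2, sharex=True)
ax1.plot(t - t[0], speed)
ax1.set_ylabel("speed (m/s)")
ax2.plot(t - t[0], accel)
ax2.set_ylabel("acceleration (m/s^2)")
ax2.set_xlabel("time (s)")
plt.show()
```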
I can't wait to start recording ensembles, using the kit to gather some real data and seeing what we can find! I've been getting some ideas for studies after my initial exploratory pilot test - watch this space!
Ryan