Hello all! I’m currently looking at ways I can use the Kinect to create interactive installations. I’ve had the following three projects in mind when considering it:

[three example videos were embedded here]
Put simply, I want the participant to be able to interact with some sort of projected image (or be the cause of one), as in the first two videos, or at the very least to achieve something similar to the last one. I’ve tried the source code provided by the user in the final video, but unfortunately I’ve run into numerous problems while trying to debug it. I figured that starting with something basic – using the Kinect to drive a simple drawing application – would be a good place to begin.
I’ve installed SimpleOpenNI (as well as a few other libraries) and have been working through the bundled examples to teach myself. However, I’m having a lot of trouble going anywhere from there. I can get the example sketches to run, but beyond confirming that the Kinect can recognize my skeleton or track my hands, I have no idea what to do next. I think my problem is not knowing how to apply these examples to new programs: how can I use the examples provided to create different applications? Am I supposed to read out certain data, like the position of my body within the space, and use that? If so, how? Basically, how do I take these sketches and actually do something with them?
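To make the question concrete, here is the kind of thing I imagine is happening under the hood, written as plain Java: take a tracked joint position and map it into window coordinates so it can drive drawing. The coordinate ranges and the sample hand position below are made up purely for illustration (I gather SimpleOpenNI supplies real values through calls like getJointPositionSkeleton, though I may be wrong about the exact API):

```java
// Hypothetical sketch: map a tracked hand position from the Kinect's
// real-world coordinate range into a drawing window. The working range
// (-1000..1000 mm) and the sample point are made-up illustration values,
// not real SimpleOpenNI output.
public class KinectDrawSketch {

    // Linearly map a value from one range to another
    // (the same idea as Processing's built-in map() function).
    static float map(float value, float inLo, float inHi,
                     float outLo, float outHi) {
        return outLo + (value - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    public static void main(String[] args) {
        // Pretend the Kinect reported a hand at x = 250 mm, y = -100 mm.
        float handX = 250f, handY = -100f;

        // Map into a 640x480 window (flip y so "up" stays up on screen).
        float screenX = map(handX, -1000f, 1000f, 0f, 640f);
        float screenY = map(handY, -1000f, 1000f, 480f, 0f);

        // In a real sketch this is where I'd draw, e.g. ellipse(screenX, screenY, 10, 10).
        System.out.println("draw at (" + screenX + ", " + screenY + ")");
    }
}
```

Is this roughly the right mental model – read a joint position each frame, remap it into the sketch window, and draw there?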
I apologize for my lack of technical knowledge on the subject; I love learning about this but programming is probably the most abstract thing I’ve ever had to do, so I would appreciate any answers or pointers, or even questions if further clarification is needed. Thanks!