Help creating interactive projects with the Kinect


#1

Hello all! I’m currently looking at ways I can use the Kinect to create interactive installations. I’ve had the following three projects in mind while considering it:



Put simply, I want the participant to be able to interact with some sort of projected image (or be the cause of one), as in the first two videos, or at the very least to achieve something similar to the last one. I’ve tried using the source code provided by the user in the final video, but unfortunately ran into numerous problems trying to debug it. I figured that starting off with something basic – using the Kinect to enable a drawing application – would be a good place to begin.

I’ve installed SimpleOpenNI (as well as a few other libraries) in an attempt to teach myself from the available examples. However, I’m having a lot of trouble getting anywhere from there. I can get the sketches to work, but beyond confirming that the Kinect can recognize my skeleton or track my hands, I’ve no idea what to do next. I think my problem is not knowing how to apply these examples to new programs; that is, how can I use the examples provided to create different applications? Am I supposed to obtain certain data, like the position of my body within the space, and use that? If so, how? Basically, how do I take these sketches and actually do something with them?
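
To make it concrete, this is roughly the kind of minimal drawing sketch I’ve been trying to get to, pieced together from the SimpleOpenNI examples (I’m assuming the newer SimpleOpenNI with auto-calibration, so I can’t promise the callback is right):

```java
// Pieced together from the SimpleOpenNI examples; assumes SimpleOpenNI 1.96+
// (auto-calibration, no "Psi" pose needed).
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableUser();    // turn on skeleton tracking
  background(255);         // never cleared, so marks accumulate
}

void draw() {
  context.update();
  for (int userId : context.getUsers()) {
    if (context.isTrackingSkeleton(userId)) {
      PVector hand = new PVector();
      // confidence runs 0..1; skip low-confidence readings
      float conf = context.getJointPositionSkeleton(
        userId, SimpleOpenNI.SKEL_RIGHT_HAND, hand);
      if (conf > 0.5) {
        PVector screen = new PVector();
        context.convertRealWorldToProjective(hand, screen);
        fill(0);
        noStroke();
        ellipse(screen.x, screen.y, 10, 10);   // "draw" with the hand
      }
    }
  }
}

// called by SimpleOpenNI when a new person appears
void onNewUser(SimpleOpenNI curContext, int userId) {
  curContext.startTrackingSkeleton(userId);
}
```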

I apologize for my lack of technical knowledge on the subject; I love learning about this but programming is probably the most abstract thing I’ve ever had to do, so I would appreciate any answers or pointers, or even questions if further clarification is needed. Thanks!


#2

This is quite ambitious for someone new to programming, but achievable!

I would highly recommend the book “Making Things See”.

You really will take five steps back, and break some toes, if you try to get from A to Z without the rest of the alphabet :smile: In other words… read the entire book.

You are trying to develop for a piece of hardware.

Do you know what the hardware is doing?
What is a 3D camera?
What are these sketches doing with the hardware?

These are some crucially important questions to answer before you can begin making software for hardware :slight_smile:

By the way… the book is geared toward the Processing IDE (which, from the sound of it, is your IDE of choice).


#3

In addition to Making Things See (or perhaps before that) I’d give this shorter summary of computer vision a read:

http://www.flong.com/texts/essays/essay_cvad/

It provides a lot of background and context that will help you set more realistic expectations, which will take you farther :slight_smile:


#4

Thank you both! I’ve read through the essay and am currently going through Borenstein’s book (I’m on the interviews). Hopefully it’ll help me get going on things.


#5

Super. Keep us posted. :smile: Looking forward to seeing what you make!


#6

I recently posted video from David Rokeby’s “Very Nervous System”, and @bakercp pointed out that the hardware Rokeby used was made up of an array of photoresistors. What’s important to understand is that by the time Rokeby had a working system, he had gone through multiple prototypes, both hardware and software. He had, I imagine but can’t say for sure, thought about why he wanted the system and what, exactly, he needed it to be able to do. He built a tool that became both a work and an interface. A photoresistor is a fairly simple component compared to a laptop or Kinect, but even a photoresistor is a complex tool with a history tied into electronics, politics, and economics.

I would not discourage you from continuing to work with the Kinect, but I think it’s a very loaded tool to begin with when working with computer vision. It’s a consumer, mass-marketed tool, and it seems to me that it will be much harder to deconstruct the data coming out of it than it would be to begin with a simpler computer vision approach, assuming your goal is a broader understanding of what is happening and what you can do with it. I would recommend the OpenCV library combined with whatever programming approach you prefer. You can make a lot happen with Processing and OpenCV, and, in my experience, it’s much easier to understand what is happening, make small changes, and see their effect. The piece in which falling objects collect on a body in front of the camera (and the many pieces that have used that construct over two or more decades) could be achieved with OpenCV without any need to track skeletal points.
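
To sketch what I mean (a rough example using Greg Borenstein’s OpenCV for Processing wrapper; the background-subtraction parameters are the library example’s defaults, not tuned values): background subtraction hands you the moving silhouette as contours, and a silhouette is all a falling-objects piece needs for its collisions.

```java
// Assumes the "OpenCV for Processing" library (gab.opencv) and a webcam;
// swap the Capture for Kinect input later if you like.
import gab.opencv.*;
import processing.video.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  // history, number of gaussian mixtures, background ratio
  opencv.startBackgroundSubtraction(5, 3, 0.5);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  image(video, 0, 0);
  opencv.loadImage(video);
  opencv.updateBackground();
  opencv.dilate();   // clean up the foreground mask a little
  opencv.erode();
  noFill();
  stroke(255, 0, 0);
  strokeWeight(2);
  // each contour outlines something that moved (usually a body)
  for (Contour contour : opencv.findContours()) {
    contour.draw();
  }
}
```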

Programming-wise, I’m not sure how much experience you have, but do you understand how arrays work? The basics of object-oriented programming? These concepts are vital to what the Kinect is doing, and it’s going to be very difficult to understand and use the data that comes out of the Kinect (or the OpenNI libraries) without at least a basic grasp of them. Once again, the OpenCV library would be a good start: look at blob detection, understand how the computer sees blobs, change the parameters in the blob detection library, and examine the blob instances the library constructs.
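
Something like this is the level I mean (again a sketch assuming the OpenCV for Processing library; the threshold and minimum blob area are arbitrary starting values, there to be played with):

```java
// Assumes the "OpenCV for Processing" library (gab.opencv) and a webcam.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;
int threshold = 100;   // press +/- to change it and watch the blobs react

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  opencv.gray();
  opencv.threshold(threshold);
  image(opencv.getOutput(), 0, 0);

  // each Contour is one "blob" object you can inspect
  for (Contour blob : opencv.findContours()) {
    if (blob.area() > 500) {   // ignore tiny specks
      Rectangle box = blob.getBoundingBox();
      noFill();
      stroke(0, 255, 0);
      rect(box.x, box.y, box.width, box.height);
    }
  }
}

void keyPressed() {
  if (key == '+') threshold = min(threshold + 5, 255);
  if (key == '-') threshold = max(threshold - 5, 0);
}
```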

Conceptually, I’d also suggest looking at artwork outside the maker-demo community. I think Ernesto Klar’s work is much more conceptually rich than the other two videos, with much more thought given to human motion. It’s worth considering the history of drawing with the body in dance and performance, with an eye toward what the piece you’re moving toward could add to that history or how it could deviate from it. There’s also a politics to tracking human information, from the Bureau of Inverse Technology’s Suicide Box to Martha Rosler’s Vital Statistics of a Citizen, Simply Obtained.

I am sure you have the ability to learn whatever you need to in regard to the Kinect and making a drawing piece but I think a broader exploration with tools you can more easily understand might be more enriching. If you’ve considered all this, I apologize for my assumptions.


#7

+1 to all of that @atrowbridge.

Also, one more classic example: Text Rain, a poetic computer vision piece.

It’s now a beginner project in some courses and just requires a webcam, so it’s probably worth exploring some of these approaches.

http://golancourses.net/2013/category/project-1/text-rain/

The nice thing about that example is that the students in that course explain what they are doing in words and in code.
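
If it helps, the core of a Text-Rain-style piece really is small. Here’s a stripped-down sketch of the idea in plain Processing (my own simplification, with an arbitrary brightness threshold and fall speed): letters fall until the webcam pixel beneath them is dark, i.e. until they land on a person.

```java
// A bare-bones Text-Rain-style sketch; thresholds are arbitrary choices.
import processing.video.*;

Capture video;
String letters = "a poem falls here";
float[] xs = new float[letters.length()];
float[] ys = new float[letters.length()];

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  video.start();
  textSize(20);
  for (int i = 0; i < letters.length(); i++) {
    xs[i] = random(width);
    ys[i] = random(-height, 0);   // start above the screen
  }
}

void draw() {
  if (video.available()) video.read();
  image(video, 0, 0);
  fill(255, 0, 0);
  for (int i = 0; i < letters.length(); i++) {
    float below = ys[i] + 2;      // where the letter would move to
    // fall while still off-screen, or while the pixel below is bright
    if (below < 0 ||
        brightness(video.get(int(xs[i]), int(min(below, height - 1)))) > 100) {
      ys[i] = below;
    }
    if (ys[i] > height) ys[i] = random(-50, 0);  // recycle at the top
    text(letters.charAt(i), xs[i], ys[i]);
  }
}
```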


#8

I appreciate the advice, @atrowbridge and @bakercp. I hadn’t actually considered OpenCV. I did think about how whatever project I made would carry connotations associated with the materials I used, and I understood that most of the history of the hardware and software I planned on using would be unknown to me. I had a vague understanding of computer vision’s ties to information, politics, surveillance, and so on, but nothing thorough; it was all mostly superficial. Normally I would ground my projects within their contextual tendencies, but I suppose my motivation for using the Kinect was that, as an avid gamer, it was a readily available resource for me, and with its active community I didn’t consider learning how to use it to be too problematic.

The suggestion to use OpenCV is something I’m strongly considering. Although I have so many resources for the Kinect, I’m definitely finding it hard to grasp the technicalities, and I would love the opportunity to demystify the tools I’m using as I step into this field. Text Rain is a wonderful example of something I’d like to be able to do myself, so I appreciate the link. Are there any other resources you might recommend to help me learn OpenCV?


#9

Regarding OpenCV – it can be used with the Kinect, static images, etc. It’s basically just a big utility library for common image-processing tasks, with some extra bits for tracking, feature detection (like faces), and so on.
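
As a taste of those extra bits: with the OpenCV for Processing library (Greg Borenstein’s gab.opencv wrapper, which bundles a frontal-face cascade), face detection is only a few lines. A sketch, assuming that wrapper:

```java
// Assumes the "OpenCV for Processing" library (gab.opencv) and a webcam.
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);  // bundled face detector
  video.start();
}

void draw() {
  if (video.available()) video.read();
  opencv.loadImage(video);
  image(video, 0, 0);
  noFill();
  stroke(0, 255, 0);
  for (Rectangle face : opencv.detect()) {   // one Rectangle per face found
    rect(face.x, face.y, face.width, face.height);
  }
}
```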

If you’re just looking to explore some computer vision processes, I might recommend the cv.jit library with Max/MSP/Jitter. I ported some OpenCV algorithms to that package many years ago, and Jean-Marc has done a great job wrapping up a bunch of basic computer vision algorithms. Jean-Marc also has some great examples of using the Kinect alongside the cv.jit package with his jit.freenect.grab object.

Anyway, to get started with that, you’d just download the MaxMSP trial and install the cv.jit package according to the instructions.

While I don’t personally use MaxMSP much anymore, I’ve spent a lot of time with it and still use it to do quick mockups and explore/introduce basic computer vision concepts. I’d be happy to assist with any package you choose … be it MaxMSP, Processing or openFrameworks … or some other one :smile:


#10

Seconding all @bakercp and @atrowbridge have mentioned! OpenCV has a hefty toolkit of the “essential” algorithms commonly found throughout computer vision applications. The Kinect, as a piece of hardware, has those nifty calibrated IR lasers that give you depth info.

There are numerous methods of obtaining that information (depth) in 2D space, depending on your environment (measuring, say, bounding-box dimensions around a tracked object, pixel density from the histogram, or whatever other elements make for a more accurate system). Whatever your hardware tool of choice, the accuracy of your vision algorithm will rely heavily on your environment, lighting, hardware… and… well… your algorithm, to name a few things in terms of machine vision.
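
For example, reading the raw depth map and tracking the closest point takes only a few lines with SimpleOpenNI. A minimal sketch (the same caveats about environment and noise apply):

```java
// Assumes a Kinect and the SimpleOpenNI library.
import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  int[] depth = context.depthMap();   // one distance in mm per pixel
  int closest = Integer.MAX_VALUE;
  int closestX = 0;
  int closestY = 0;
  for (int y = 0; y < context.depthHeight(); y++) {
    for (int x = 0; x < context.depthWidth(); x++) {
      int d = depth[y * context.depthWidth() + x];
      if (d > 0 && d < closest) {     // 0 means "no reading"
        closest = d;
        closestX = x;
        closestY = y;
      }
    }
  }
  fill(255, 0, 0);
  noStroke();
  ellipse(closestX, closestY, 20, 20); // mark the nearest thing to the camera
}
```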