Frame differencing: or what the hell do I do next?


#1

Does anyone have experience with frame differencing and some helpful resources they could point me to? I'm trying to work something out but have never done anything with motion detection before.
Thanks!


#2

Are you looking for something more than percent of pixels changed per frame like we did briefly on Monday?


#3

The first example in *Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers* is a frame differencing example in Processing.

If two consecutive frames are identical (meaning nothing has changed in the scene), subtracting the last frame from the current frame yields a zero value for every pixel (i.e. a black frame). Generally after frame differencing (or before, depending on your strategy), you threshold the “difference” frame, turning every non-black (0) pixel into a one (1). Then you can count the 1 pixels to get a sense of how many pixels have “motion” in them. It can be useful to normalize this count by dividing it by the total number of pixels (width x height), giving you a floating point number between 0 and 1, where 1.0 means every pixel changed and 0.0 means no pixels changed.

You then have to decide what your “movement detected” threshold is – somewhere between 0 and 1.

All of that said, the above assumes your image stream is grayscale. The example in the link deals with this conversion.
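For what it's worth, here's a minimal sketch of that pipeline in Processing. It's a rough sketch, not a definitive implementation: it uses brightness() as a stand-in for a true grayscale conversion, a small tolerance (20) instead of a strict non-zero test to ignore camera noise, and `motionThreshold` is a made-up cutoff you'd tune yourself.

```java
import processing.video.*;

Capture video;
PImage prevFrame;              // previous frame, compared against the current one
float motionThreshold = 0.05;  // hypothetical "movement detected" cutoff (0-1); tune to taste

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
  prevFrame = createImage(width, height, RGB);
}

void captureEvent(Capture video) {
  video.read();
}

void draw() {
  video.loadPixels();
  prevFrame.loadPixels();
  loadPixels();

  int changedPixels = 0;
  for (int i = 0; i < video.pixels.length; i++) {
    // difference the grayscale brightness of current and previous frame
    float diff = abs(brightness(video.pixels[i]) - brightness(prevFrame.pixels[i]));
    // threshold: a small tolerance instead of strict non-zero, so sensor noise is ignored
    if (diff > 20) {
      pixels[i] = color(255);  // white = motion
      changedPixels++;
    } else {
      pixels[i] = color(0);    // black = no change
    }
  }
  updatePixels();

  // normalize: 0.0 = nothing changed, 1.0 = every pixel changed
  float motion = (float) changedPixels / video.pixels.length;
  if (motion > motionThreshold) {
    println("movement detected: " + motion);
  }

  // store the current frame for the next comparison
  prevFrame.copy(video, 0, 0, video.width, video.height, 0, 0, width, height);
}
```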


#4

@brannondorsey I just don’t think I know what I’m doing. As in: I don’t fully understand how I can get something to track the movement based on the difference. Maybe we can go over it the next Monday you’re in oLab.

Unless you have time, @bakercp, for me to send you an email explaining what I’m doing?

Thanks, dudes


#5

Can you post the description on the forum?


#6

I just have a big grid of eyeballs and I want them to follow movement but I have zero idea about all that. :eyes:


#7

Are they mechanically controlled or video eyeballs?


#8

video eyeballs. (side note: strangest question I’ve been asked today)


#9

This was an old issue that Max/MSP + Jitter had handy tools for, called jit.scissors and jit.glue. Essentially, if you want the eyes to appear to track motion in certain areas of the captured video, you can break the video up into a grid and have the eyes point toward the cell with the greatest difference, much the way @bakercp described above (see the sketch below). It's not the most accurate method, but it's probably the easiest.

Another strategy would be blob tracking using OpenCV; I'm pretty certain Processing has an OpenCV library. The problem with that is multiple blobs: how do you determine which one to point the eyes at, etc.? But it would give you the option of tracking things more accurately. This method could allow different eyes on the grid to point at different areas, and thankfully you can set up the blob tracking for specific sizes and weights so that noise in the video is ignored.
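To make the grid idea concrete, here's a rough helper you could drop into a Processing sketch like the one above. It's only a sketch under assumptions: it expects a thresholded difference image (white = motion, black = none), and `busiestCell`, `gridCols`, and `gridRows` are made-up names you'd match to your eyeball grid.

```java
// Find the grid cell with the most motion in a thresholded difference image.
// Returns the cell's center in pixel coordinates; each eye can lerp toward it.
PVector busiestCell(PImage diff, int gridCols, int gridRows) {
  int[] counts = new int[gridCols * gridRows];
  diff.loadPixels();
  int cellW = diff.width / gridCols;
  int cellH = diff.height / gridRows;
  for (int y = 0; y < diff.height; y++) {
    for (int x = 0; x < diff.width; x++) {
      // any non-black pixel counts as motion in its cell
      if (brightness(diff.pixels[y * diff.width + x]) > 0) {
        int cx = min(x / cellW, gridCols - 1);
        int cy = min(y / cellH, gridRows - 1);
        counts[cy * gridCols + cx]++;
      }
    }
  }
  // pick the cell with the highest motion count
  int best = 0;
  for (int i = 1; i < counts.length; i++) {
    if (counts[i] > counts[best]) best = i;
  }
  int bx = best % gridCols;
  int by = best / gridCols;
  return new PVector(bx * cellW + cellW / 2, by * cellH + cellH / 2);
}
```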

Here's an issue, though: if you want the eyes to appear to be looking at a person, you need to figure out some kind of mapping so that the field of view appears right. That's harder without some kind of depth information, but not impossible.