Averaging color of video pixels


#1

Hello everyone,

I would like to take an area (50x50px) of a video and calculate its average color (in order to send it to Philips Hue bulbs later on). What’s the most straightforward way to do this?

At the moment I’m trying to grab a frame of the video into an ofPixels object, crop it, and then use a while loop to resize the image down to 1x1px so I can use that single pixel’s color. It sort of works, but not as expected (the averaged color doesn’t match the area I think I’m sampling). I’m also thinking of averaging the color over 5-10 frames to smooth it out (especially as the video will have color noise in it). Is there an averaging function that could take care of all this?

ofApp.cpp

void ofApp::update(){
    video.update();
        
    if(video.isFrameNew()) {
        
        videoFrame.setFromPixels(
                                 video.getPixels(),
                                 video.getWidth(),
                                 video.getHeight(),
                                 OF_IMAGE_COLOR);
        
        sample1.sampling(videoFrame);       
    }
    
    sample1.x = mouseX;
    sample1.y = mouseY;   
}

sample.cpp

void Sample::sampling(ofPixels& frame) {
    
    frame.crop(x, y, w, h);
    
    int newW = w;
    int newH = h;
    
    while(newH > 1) {
        newH = newH/2;
        if(newW > 1) {
            newW = newW/2;
        }
        frame.resize(newW, newH);
    }
    
    color_sample = frame.getColor(0, 0);   
}

The full code is on GitHub.
Any help appreciated, thanks!
Tobias

PS: this is my first attempt at openFrameworks without the immediate help of Mr. Baker, so feel free to point out all the other weird things in my code. :smile:


#2

hey @twobeers83 - incrementally resizing an image down to a single pixel isn’t the same as calculating the average color of a region of an image. What you want to do is get the pixels from the video and loop over them to calculate the average. Below is an ofSketch example that calculates the average color of a 50 x 50 rectangle at the top left of the webcam feed. The averaging region is hardcoded in the example, so of course you would want to make a nicer version that allows you to move/resize that averaging region.

ofPixels pixels;
ofVideoGrabber grabber;
ofColor averageColor;

void setup() {
    ofSetWindowShape(1024, 768);
    grabber.initGrabber(640, 480);
}

void update() {
    grabber.update();

    if(grabber.isFrameNew()) {
        
        // We have a new frame from the webcam feed, so it's time to recalculate
        // the average color of our desired region
        
        // Get the pixels from the video grabber
        pixels = grabber.getPixelsRef();
        
        // Set up variables to calculate the sum of the r, g and b channels
        int rSum = 0;
        int gSum = 0;
        int bSum = 0;

        // Loop through a region of the video feed to get the color sums
        for(int x = 0; x < 50; x++) {
            for(int y = 0; y < 50; y++) {
                ofColor pixelColor = pixels.getColor(x, y);
                rSum += pixelColor.r;
                gSum += pixelColor.g;
                bSum += pixelColor.b;
            }
        }
        int samples = 50 * 50; // The number of pixels we are averaging 
        
        // Update the average color
        averageColor.r = rSum / samples;
        averageColor.g = gSum / samples;
        averageColor.b = bSum / samples;
    }
    
}

void draw() {
    ofBackground(0);
    
    // Draw the webcam feed
    ofSetColor(ofColor::white);
    grabber.draw(0, 0);
    
    // Draw an outline over the webcam feed to show the region we are averaging
    ofSetColor(ofColor::red);
    ofNoFill();
    ofRect(0, 0, 50, 50);
    
    // Draw a rectangle to show the average color of the region
    ofFill();
    ofSetColor(averageColor);
    ofRect(grabber.getWidth() + 25, 0, 200, 200);
}

That’s hopefully a good starting point for your averaging in your piece. From there you can assess performance and figure out whether you need a faster averaging solution.


#3

wonderful! I’m still easily confused by the capabilities of ofPixels (and getPixels vs getPixelsRef), ofImage and ofColor, and how/when to use and access them, but this makes sense. thanks so much, Mike!


#4

No problem - glad it helped.

I haven’t been following the 0.9 release stuff very closely, but here’s my understanding. In 0.8.4, getPixels() returns a raw pointer to an array of unsigned chars and getPixelsRef() returns a nice ofPixels object.

When dealing with the raw pointer, you have to do a little math to get/set a particular pixel’s color. That’s because the image is being represented as a 1-dimensional array of color channel values:

Pixel 1’s R value, Pixel 1’s G value, Pixel 1’s B value, Pixel 2’s R value, Pixel 2’s G value, …

So if you’ve got an RGB image and you want pixel (x, y), you need to do:

  1. ( x + y * imageWidth ) * 3 to get the R value
  2. ( x + y * imageWidth ) * 3 + 1 to get the G value, and
  3. ( x + y * imageWidth ) * 3 + 2 to get the B value.

If you’ve got a different image format, your formulae change. E.g. RGBA is ( x + y * imageWidth ) * 4.
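
To make that concrete, here’s a minimal sketch of the raw pointer version (assuming the 0.8.4 API and the ofVideoGrabber named grabber from the example above):

// Raw pointer access, 0.8.4-style: getPixels() returns unsigned char*
unsigned char* rawPixels = grabber.getPixels();
int imageWidth = grabber.getWidth();

int x = 10; // some pixel position we want to read
int y = 20;

// RGB means 3 channel values per pixel, laid out one after another
int index = (x + y * imageWidth) * 3;
unsigned char r = rawPixels[index];     // R value
unsigned char g = rawPixels[index + 1]; // G value
unsigned char b = rawPixels[index + 2]; // B value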

The ofPixels object from getPixelsRef() is a nicer way to interact with the pixel color information. The ofPixels object knows about its color mode (RGB vs RGBA vs grayscale vs …), so it can abstract away the math that you’d need to do to get a particular pixel. It also nicely wraps up the various color channels into an ofColor object, so you don’t have to access the individual channels like you would with the unsigned char array. (ofPixels has a bunch of helpful methods, but I would say abstracting away that math is one of the key things that it does.)

In 0.9, getPixelsRef() is being deprecated. In the future, getPixels() will return an ofPixels object (and getPixelsRef() will disappear). But for 0.8.4 and your code, getPixelsRef() would be the way to go since it gives you the niceties of ofPixels. (And since ofVideoGrabber gives you access to the current frame’s pixels via getPixelsRef(), you don’t have to create an ofImage, like you started out doing.)
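
For comparison, the same lookup via ofPixels is basically a one-liner, since getColor() does the channel math for you (again assuming the grabber from the example above):

// ofPixels handles the indexing and the color mode internally
ofPixels& pixels = grabber.getPixelsRef();
ofColor pixelColor = pixels.getColor(x, y);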


#5

that was super helpful! I’m starting to see through the maze. It was honestly only a few weeks ago that I realized there can be arrays of objects (i.e. an ofPixels contains lots of pixels, which each contain a number of channel values). I’m working my way through the “Programming Interactivity” book we started with in Christopher’s class two years ago (a bit outdated by now, but it helps). Grad school had to be over before I had time for this. Thanks!


#6

Averaging the color worked out really well. But now I’m stuck on averaging the samples over multiple frames. I need a buffer array for 9 different samples (9 positions on the screen get sampled), with 5 frames each, which themselves have 3 color values. What’s the best way to do that? I tried to sum up the color values in an array of ofVec3f's, but my program crashes when I try to add or extract values from it.

here’s what I’m trying to do:

int sampleNum = 9;
int sampleRate = 5;

vector<ofVec3f> buffer;
vector<ofColor> sampleColor;
vector<ofColor> averageColor;

if(video.isFrameNew()) {

    sampleColor.clear();

    // Get each sample per frame
    for (int i = 0; i < sampleNum; i++) {
        ofColor color = sample(samplePos[i].x, samplePos[i].y, sampleW, sampleH, video.getPixelsRef());
        sampleColor.push_back(color);
    }

    // Add the RGB value of each sample to buffer
    if(frameCount < sampleRate) {
        for (int i = 0; i < sampleNum; i++) {
            ofVec3f color(sampleColor[i].r, sampleColor[i].g, sampleColor[i].b);

            buffer[i] += color; // <-- program crashes
        }

        frameCount++;

    } else {    // if enough samples are collected, put average of each frame into averageColor

        averageColor.clear();

        for (int i = 0; i < sampleNum; i++) {
            averageColor[i].r = buffer[i].x / sampleRate; // <-- where it crashes if previous error is commented out
            averageColor[i].g = buffer[i].y / sampleRate;
            averageColor[i].b = buffer[i].z / sampleRate;
        }

        frameCount = 0;
        buffer.clear();
    }
}

#7

Is your buffer vector empty when you are trying to do: buffer[i] += color? That would explain why it is crashing at runtime in both of those places - it’s reaching for a memory location that is out of the bounds of the std::vector.
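
A quick sketch of one fix: give the vector a size before indexing into it (e.g. once in setup()), so buffer[i] always points at a real element:

// Pre-fill the buffer with sampleNum zero vectors so buffer[i] is valid
buffer.assign(sampleNum, ofVec3f(0, 0, 0));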

Approach-wise, I would probably do something slightly different. Right now you are:

  1. Accumulating a buffer of color information from 5 frames.
  2. Calculating the average of that buffer.
  3. Clearing that buffer and restarting back at step 1.

So on frame 5, you would have a full buffer and a valid average color. But then you clear your buffer and have to wait until frame 10 to have a full buffer again, and hence, have to wait 5 frames for an updated average color.

I would suggest doing a continuously moving average, where you always hold on to the color samples of the past 5 frames. Each time you get a new sample, you jettison the oldest sample in your vector and add the newest sample. So on frame 6, you would have the samples from frames 2 - 6. On frame 7, you’d have frames 3 - 7. Etc. This way you can recalculate your average on every frame.

Also - the math you have at the moment might be slightly off from what you are expecting. Right now you are adding 9 samples to your buffer (buffer[i] += color), but when you calculate the average you are dividing by 5 (averageColor[i].r = buffer[i].x / sampleRate). Actually, you don’t have to store all 9 samples in an array at all. You can collapse your nine samples down into a single average before storing anything into your buffer.

To help explain all that, here’s something vaguely pseudocode-like:

vector buffer
int numSamples

If the frame is new,
    Sample the colors at the 9 positions in the frame.
    Find the average color of those 9 samples and store it in a variable, frameColor.
    If the buffer's size is equal to numSamples,
        We've got a full set of samples, so delete the oldest value (the first element) from the buffer to make room for the new sample.
    Add frameColor to the buffer (i.e. add it as the last value in the vector).
    If the buffer's size is greater than 0,
        Calculate the average of the samples in buffer, the moving average
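
And in rough C++, that could look something like this (a sketch only; it reuses your sample() function and the samplePos, sampleW, sampleH and sampleNum variables from your snippet, and the first three declarations would live at class scope so they persist between frames):

vector<ofVec3f> buffer;   // one entry per frame: that frame's average color
int numSamples = 5;       // how many frames the moving average spans
ofColor movingAverage;

if(video.isFrameNew()) {
    // Collapse the 9 samples for this frame into a single average
    ofVec3f frameColor(0, 0, 0);
    for (int i = 0; i < sampleNum; i++) {
        ofColor c = sample(samplePos[i].x, samplePos[i].y, sampleW, sampleH, video.getPixelsRef());
        frameColor += ofVec3f(c.r, c.g, c.b);
    }
    frameColor /= sampleNum;

    // Keep only the most recent numSamples frames
    if((int) buffer.size() == numSamples) {
        buffer.erase(buffer.begin()); // jettison the oldest sample
    }
    buffer.push_back(frameColor);

    // Average everything currently in the buffer: the moving average
    ofVec3f sum(0, 0, 0);
    for (int i = 0; i < (int) buffer.size(); i++) {
        sum += buffer[i];
    }
    sum /= (float) buffer.size();
    movingAverage.set(sum.x, sum.y, sum.z);
}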

Hope that all makes sense. As an aside, if you end up needing something less computationally expensive, you could instead do an exponential smoothing operation. It’s not the same kind of smoothing, but you wouldn’t need to hold on to an array of values. For that, you would do this general pattern:

currentAverage = s * currentAverage + (1 - s) * newValue

s is your smoothing factor, a decimal between 0 and 1. When s = 0, your average is always just the most recent value; the closer s gets to 1, the more slowly the average responds, i.e. the smoother it becomes. (With s = 0.9, for example, each new value only nudges the average by 10%.)


#8

Thanks! Yes, the buffer was indeed empty on startup, and it crashed because I was indexing into an empty vector.

The reason I wanted to take a sample every 5 frames is that I will send the data to the Philips Hue bulbs via HTTP requests, and the bulbs themselves have a built-in transition fade from one color to the next. So I didn’t want to send data on every frame. But I decided to build in the smoothing, which is a great way to do it, and I’ll pick out every 5th frame to send later on. With the smoothing function it actually got quite simple, and there was no need for ofVec3f:

float smoothing;
vector<ofColor> sampleColor;
vector<ofColor> averageColor;

if(video.isFrameNew()) {

    sampleColor.clear();

    for (int i = 0; i < sampleNum; i++) {
        ofColor color = sample(samplePos[i].x, samplePos[i].y, sampleW, sampleH, video.getPixelsRef());
        sampleColor.push_back(color);
    }

    if(averageColor.size() == 0) {
        for (int i = 0; i < sampleNum; i++) {
            averageColor.push_back(sampleColor[i]);
        }
    } else {
        for (int i = 0; i < sampleNum; i++) {
            averageColor[i].r = smoothing * averageColor[i].r + (1 - smoothing) * sampleColor[i].r;
            averageColor[i].g = smoothing * averageColor[i].g + (1 - smoothing) * sampleColor[i].g;
            averageColor[i].b = smoothing * averageColor[i].b + (1 - smoothing) * sampleColor[i].b;
        }
    }
}

#9

Ohhhh, I see what you are trying to do now. That makes sense about the HTTP requests - no need to spam the device. And yeah, exponential smoothing definitely makes the code simpler. Glad that part worked.

I probably should have asked before - are you using the samples to drive a single bulb or to drive nine separate bulbs? I assumed from your original post that there was just one and that you were trying to smooth both temporally (across frames) and spatially (within frames). That’s why I suggested collapsing your nine frame samples down into a single average before storing anything into your buffer. But now I’m thinking that you’ve got nine bulbs and that advice is null and void.


#10

Yeah, it’s 9 samples for 9 bulbs, so I couldn’t collapse the samples into one. But no worries, it’s all working so far! :wink: