Why does an ofMesh point cloud look different from the "front" and the "back"?


#1

A follow-up to the previous follow-up, @bakercp and @mikewesthad

I’m making big, dense ofMesh objects in which each point has a low opacity (between 30 and 40 out of 255), and I’m curious why the color/opacity of the mesh looks significantly different from the front versus the back. ofEnableAlphaBlending() doesn’t seem to make much of a difference.

Here’s my basic setup:

// mesh is an ofMesh member of ofApp
void ofApp::setup() {
    ofSetVerticalSync(true);
    ofEnableDepthTest();
    ofEnableAlphaBlending();

    mesh.setMode(OF_PRIMITIVE_POINTS);

    glEnable(GL_POINT_SMOOTH); // use circular points instead of square points
    glPointSize(2);            // make the points bigger
}

Thoughts?


#2

We had some offline discussion, so I’m migrating it into the forums so that hopefully others can jump in with corrections or suggestions.

I believe the fundamental issue here is transparency and the order in which things are being drawn, which is a real and open challenge in real-time graphics. (Especially true for your situation, with many, many translucent surfaces.)

Here’s the isolated issue. If you draw two translucent objects in OpenGL, the order in which they are blended into the screen matters. This ofSketch gist code draws two transparent squares - one red and one blue. On the left, the red is drawn on top of the blue. On the right, the blue is drawn on top of the red. The color of the area of overlap changes depending on the order of drawing:
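(For reference, here’s a minimal reconstruction of that gist. It isn’t the original code, and the sizes and positions are made up, but the effect is the same:)

// Two 50%-alpha squares, drawn in opposite orders on each side.
void ofApp::draw() {
    ofBackground(ofColor::black);
    ofEnableAlphaBlending();

    // Left: blue first, then red on top.
    ofSetColor(0, 0, 255, 128);
    ofDrawRectangle(50, 50, 100, 100);
    ofSetColor(255, 0, 0, 128);
    ofDrawRectangle(100, 100, 100, 100);

    // Right: red first, then blue on top.
    ofSetColor(255, 0, 0, 128);
    ofDrawRectangle(350, 100, 100, 100);
    ofSetColor(0, 0, 255, 128);
    ofDrawRectangle(400, 50, 100, 100);
}

The two overlap regions come out as different colors, even though the same two squares are drawn.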

So when alpha blending things in OpenGL, the order matters. After the GPU figures out vertices and fragment colors, it starts blending everything together in a buffer that represents the current state of the screen. (For a better overview of the GPU pipeline, see this article.) For alpha blending, the colors are blended into that frame buffer one fragment at a time, eventually resulting in the pixels you see on the screen.

The alpha blending math looks like this:

DestinationColor = SourceColor * SourceAlpha + DestinationColor * (1 - SourceAlpha)

When each point is processed by the GPU, it is blended into the frame buffer using that equation (DestinationColor is whatever is currently in the buffer). Because the source and destination colors are weighted asymmetrically, with the destination scaled by (1 - SourceAlpha), that math isn’t commutative, and the order matters: blending A over B doesn’t give the same result as blending B over A. For more details, check out this post.
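To make that concrete, here’s a tiny standalone program (not openFrameworks, just the blend math) that blends 50% red and 50% blue over a black background in both orders:

#include <cstdio>

struct Color { float r, g, b; };

// One step of the standard "over" blend:
// dest = src * srcAlpha + dest * (1 - srcAlpha)
Color blendOver(Color src, float srcAlpha, Color dest) {
    return {
        src.r * srcAlpha + dest.r * (1.0f - srcAlpha),
        src.g * srcAlpha + dest.g * (1.0f - srcAlpha),
        src.b * srcAlpha + dest.b * (1.0f - srcAlpha)
    };
}

int main() {
    Color black {0, 0, 0};
    Color red   {1, 0, 0};
    Color blue  {0, 0, 1};

    // Blue first, then red on top.
    Color a = blendOver(red, 0.5f, blendOver(blue, 0.5f, black));
    // Red first, then blue on top.
    Color b = blendOver(blue, 0.5f, blendOver(red, 0.5f, black));

    printf("blue then red: %.2f %.2f %.2f\n", a.r, a.g, a.b);
    printf("red then blue: %.2f %.2f %.2f\n", b.r, b.g, b.b);
}

The two orders give (0.50, 0, 0.25) versus (0.25, 0, 0.50): a reddish purple in one case and a bluish purple in the other.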

So when this gets applied to the spiral of points, you end up with the color changing depending on which side you are viewing from (top image is the front, bottom image is the back).

Note: ofEnableDepthTest() is what makes this view-dependent. With depth testing on, any point that lands behind an already-drawn point fails the depth test and is discarded, so which points get blended (and what is already in the buffer when they do) changes with the camera. With it turned off, the points are always blended in the order in which the vertices were added, so there are no view-dependent color changes. But then the vertices added last always composite over the ones added first, so the bottom will always be drawn over the top (left, view from above; right, view from below):
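In code, the trade-off is just which state you set before drawing. A sketch, assuming cam and mesh are ofEasyCam and ofMesh members of ofApp:

// With depth testing, blending depends on the view; without it,
// blending always happens in vertex order.
ofEnableDepthTest();    // view-dependent colors, but points occlude correctly
// ofDisableDepthTest(); // view-independent colors, but last-added points always win

cam.begin();
mesh.draw();
cam.end();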

So right, as far as I understand things, this issue you are hitting is a real challenge in real-time graphics (e.g. a recent Nvidia slideshow on an advanced approach). There are simple tricks, like using a commutative blending operation (like the add or multiply modes mentioned in that article), but they aren’t going to give a nice-looking solution here. There are more advanced approaches, like the order-independent transparency in that Nvidia slideshow, that blend in a view-independent way and look better, but of course are harder to implement and memory intensive.
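As a sketch of the simplest of those tricks in openFrameworks terms (additive blending is commutative, so the order stops mattering, but everything washes out toward white):

ofDisableDepthTest();                // no depth discards, so order is irrelevant
ofEnableBlendMode(OF_BLENDMODE_ADD); // dest = dest + src * srcAlpha (commutative)

cam.begin();
mesh.draw();
cam.end();

ofDisableBlendMode();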


#3

Another thing to try might be pre-multiplied alpha. I don’t think it will fully solve things, but it should help mitigate the “shadowing.”

You’d do something like this:

ofEnableBlendMode(OF_BLENDMODE_DISABLED); // reset any blend state OF has set
ofBackground(ofColor::black);

glEnable(GL_BLEND);
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
// Pre-multiplied alpha: dest.rgb = src.rgb + dest.rgb * (1 - src.a)
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ZERO);
// Draw the mesh

(Source)
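For that blend func to look right, the mesh colors themselves need their RGB multiplied by alpha when you build the mesh. Something like this (premultiply is a made-up helper here, not an openFrameworks function):

// Made-up helper: premultiply RGB by alpha before adding the color,
// since GL_ONE / GL_ONE_MINUS_SRC_ALPHA expects premultiplied sources.
ofFloatColor premultiply(ofFloatColor c) {
    c.r *= c.a;
    c.g *= c.a;
    c.b *= c.a;
    return c;
}

// e.g. a dim red point at ~35/255 opacity:
mesh.addColor(premultiply(ofFloatColor(1.0f, 0.2f, 0.2f, 35.0f / 255.0f)));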