WebGL Lesson 8 – the depth buffer, transparency and blending

<< Lesson 7 | Lesson 9 >>

Welcome to number eight in my series of WebGL tutorials, based on number 8 in the NeHe OpenGL tutorials. In it, we’ll go over blending, and as a useful side effect run through roughly how the depth buffer works.

Here’s what the lesson looks like when run in a browser that supports WebGL (though sadly the transparency doesn’t show up all that well in the video):

Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t. You should see a semi-transparent slowly-spinning cube, apparently made of stained glass. Lighting should be the same as it was for the last lesson.

You can use the checkbox underneath the canvas to switch off or on the blending code, and thus the transparency effect. You can also adjust an alpha scaling factor (which we’ll explain later), and of course the lighting.

More on how it all works below…

The usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on in the code, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the differences between the code for lesson 7 and the new code.

There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.

There are two ways you can get the code for this example; just “View Source” while you’re looking at the live version, or if you use GitHub, you can clone it (and the other lessons) from the repository there.

But before we start on the code, there’s a little bit of theory to go over. To start off with, I probably need to explain what blending actually is! And in order to do that, I should explain something about the depth buffer first.

The Depth Buffer

When you tell WebGL to draw something, as you may remember from lesson 2, it goes through a pipeline of stages. From a high level, it:

  1. Runs the vertex shader on all of the vertices to work out where everything is.
  2. Linearly interpolates between the vertices, which tells it which fragments (which for the moment you can treat as being the same as pixels) need to be painted.
  3. Runs the fragment shader on each fragment to work out its colour.
  4. Writes the fragment to the frame buffer.
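In sketch form, treating each stage as a JavaScript function, the flow looks something like this. It’s just an illustration: the real stages run inside the graphics hardware, and vertexShader, rasterise and fragmentShader here are hypothetical stand-ins, not real functions:

    // For each primitive (e.g. triangle) being drawn:
    var transformedVertices = vertices.map(vertexShader);    // step 1
    var fragments = rasterise(transformedVertices);          // step 2
    fragments.forEach(function (fragment) {
      var colour = fragmentShader(fragment);                 // step 3
      frameBuffer[fragment.y][fragment.x] = colour;          // step 4
    });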

Now, ultimately the frame buffer is what is displayed. But what happens if you draw two things? For example, what if you draw a square with its centre at (0, 0, -5) and then another of the same size at (0, 0, -10)? You would not want the second square to overwrite the first, because it is clearly further away and should be hidden.

The way WebGL handles this is to use the depth buffer. When a fragment is written to the frame buffer after the fragment shader has finished with it, as well as the normal RGBA colour values, WebGL also stores a depth value, which is related to, but not exactly the same as, the Z value associated with the fragment. (Unsurprisingly, the depth buffer is often also referred to as the Z buffer.)

What do I mean by “related to”? Well, WebGL likes all of the Z values to be in the range from 0 to 1, with 0 closest and 1 furthest away. This is all hidden from us by the projection matrix we create with our call to perspective at the start of drawScene. For now all you need to know is that the larger the Z-buffer value, the further away something is; this is the opposite of our normal coordinates.
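If you’re curious about the exact relationship, here’s a minimal sketch (mine, not part of the lesson’s code) of how an eye-space Z value comes out as a 0-to-1 depth-buffer value under a standard perspective projection with given near and far clipping planes:

    // Sketch only: zEye is negative in front of the camera; near and far
    // are the clipping planes we pass to perspective.
    function depthBufferValue(zEye, near, far) {
      // Clip-space z and w, from the third and fourth rows of the standard
      // perspective projection matrix.
      var zClip = (-(far + near) / (far - near)) * zEye - (2 * far * near) / (far - near);
      var wClip = -zEye;
      var zNdc = zClip / wClip;  // normalised device coordinate, -1 to 1
      return 0.5 * zNdc + 0.5;   // the default depth range maps that to 0..1
    }

Try it with a few values and you’ll see that the mapping is very non-linear: most of the 0-to-1 range is used up by Z values close to the near clipping plane, which is why far-away objects are the ones most prone to depth-buffer glitches.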

OK, so that’s the depth buffer. Now, you may remember that in the code that we’ve used to initialise our WebGL context ever since lesson 1, we’ve had the following two lines:

    gl.enable(gl.DEPTH_TEST);
    gl.depthFunc(gl.LEQUAL);

These are instructions to the WebGL system about what to do when writing a new fragment to the frame buffer. The first one says “pay attention to the depth buffer”, and the second says “if our fragment has a depth value that is less than or equal to the one that is currently there, use our new one rather than the old one”. This test on its own is enough to give us sensible behaviour: things that are close up obscure things that are further away. (You can also set the depth function to various other values, though I suspect they’re more rarely used.)
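To make that concrete, here’s the depth test written out as a sketch in JavaScript. This is just the logic; the real test happens in the graphics hardware, not in code you write:

    // What gl.depthFunc(gl.LEQUAL) means for each incoming fragment at (x, y):
    if (fragmentDepth <= depthBuffer[y][x]) {
      frameBuffer[y][x] = fragmentColour;  // keep the new fragment...
      depthBuffer[y][x] = fragmentDepth;   // ...and remember its depth
    }
    // The other depth functions (gl.LESS, gl.GREATER, gl.ALWAYS, and so on)
    // just change the comparison operator used here.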

Blending

Blending is simply an alternative to this process. With depth-testing, we use the depth function to choose whether or not to replace the existing fragment with the new one. When we’re doing blending, we instead use a blend function to combine the colours from both the existing and the new fragments, making an entirely new fragment, which we then write to the buffer.

Let’s take a look at the code now. Almost all of it is exactly the same as it was for lesson 7, and almost all of the important stuff is in a short segment in drawScene. Firstly, we check whether the “blending” checkbox is checked.

    var blending = document.getElementById("blending").checked;

If it is, we set the function that will be used to combine the two fragments:

    if (blending) {
      gl.blendFunc(gl.SRC_ALPHA, gl.ONE);

The parameters to this function define how the blending is done. This is fiddly, but not difficult. Firstly, let’s define two terms: the source fragment is the one that we’re drawing right now, and the destination fragment is the one that’s already in the frame buffer. The first parameter to gl.blendFunc determines the source factor, and the second the destination factor; these factors are numbers used in the blending function. In this case, we’re saying that the source factor is the source fragment’s alpha value, and the destination factor is a constant value of one. There are other possibilities; for example, if you use SRC_COLOR as a factor, you wind up with separate factors for the red, green, blue and alpha values, each equal to the corresponding component of the source colour.

Now, let’s imagine that WebGL is trying to work out the colour of a fragment when it has a destination with RGBA values of (Rd, Gd, Bd, Ad) and an incoming source fragment with values (Rs, Gs, Bs, As).

In addition, let’s say that we’ve got RGBA source factors of (Sr, Sg, Sb, Sa) and destination factors of (Dr, Dg, Db, Da).

For each colour component, WebGL will calculate the result as follows:

  • Rresult = Rs * Sr + Rd * Dr
  • Gresult = Gs * Sg + Gd * Dg
  • Bresult = Bs * Sb + Bd * Db
  • Aresult = As * Sa + Ad * Da

So, in our case, we’re saying (just giving the calculation for the red component, to keep things simple):

  • Rresult = Rs * As + Rd
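Written out as JavaScript, the arithmetic that gl.blendFunc(gl.SRC_ALPHA, gl.ONE) sets up looks something like this sketch, for a single pixel with colour components from 0 to 1 (the hardware also clamps the results, as the sketch does):

    // Sketch of blending with a source factor of SRC_ALPHA and a
    // destination factor of ONE.
    function clamp01(x) { return Math.min(1, Math.max(0, x)); }

    function blendSrcAlphaOne(src, dst) {
      return {
        r: clamp01(src.r * src.a + dst.r),
        g: clamp01(src.g * src.a + dst.g),
        b: clamp01(src.b * src.a + dst.b),
        a: clamp01(src.a * src.a + dst.a)  // the factors apply to alpha too
      };
    }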

Normally this wouldn’t be an ideal way of creating transparency, but it happens to work very nicely in this case when the lighting is switched on. And this point is well worth emphasising: blending is not the same as transparency, it’s just a technique that can be used (among others) to get transparent-style effects. This took a while to percolate into my thick head when I was working my way through the NeHe lessons, so forgive me if I’m overemphasising it now :-)

OK, let’s move on:

      gl.enable(gl.BLEND);

A pretty simple one — like many things in WebGL, blending is disabled by default, so we need to switch it on.

      gl.disable(gl.DEPTH_TEST);

This is a little more interesting; we have to switch off depth testing. If we don’t do this, then the blending will happen in some cases and not in others. For example, if we draw a face of our cube that happens to be at the back before one at the front, then when the back face is drawn it will be written to the frame buffer, and then the front one will be blended on top of it, which is what we want. However, if we draw the front face first and then the back one, the back one will be discarded by the depth test before we get to the blending function, so it will not contribute to the image. This is not what we want.

Sharp-eyed readers will have noticed from this (and from the blend function above) that when blending there’s a strong dependency on the order in which you draw things, one that we haven’t encountered in previous lessons. More about this later; let’s finish with this bit of code first:

      gl.uniform1f(shaderProgram.alphaUniform, parseFloat(document.getElementById("alpha").value));

Here we’re loading an alpha value from a text field in the page and pushing it up to the shaders. This is because the image we’re using for the texture doesn’t have an alpha channel of its own (it’s just RGB, so has an implicit alpha value of 1 for every pixel) so it’s nice to be able to adjust the alpha to see how it affects the image.
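(For completeness: just as with the lesson’s other uniforms, shaderProgram.alphaUniform will have been looked up once at shader-initialisation time, with something along these lines:)

    shaderProgram.alphaUniform = gl.getUniformLocation(shaderProgram, "uAlpha");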

The remainder of the code in drawScene is just that which is necessary to handle things in the normal fashion when blending is switched off:

    } else {
      gl.disable(gl.BLEND);
      gl.enable(gl.DEPTH_TEST);
    }

There’s also a small change in the fragment shader to use that alpha value when processing the texture:

    #ifdef GL_ES
    precision highp float;
    #endif

    varying vec2 vTextureCoord;
    varying vec3 vLightWeighting;

    uniform float uAlpha;

    uniform sampler2D uSampler;

    void main(void) {
      vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
      gl_FragColor = vec4(textureColor.rgb * vLightWeighting, textureColor.a * uAlpha);
    }

That’s everything that’s changed in the code!

So let’s get back to that point about the drawing order. The transparency effect we get with this example is pretty good; it really does look like stained glass. Now try looking at it again, but change the directional lighting so that it’s coming from a positive Z direction: just remove the “-” from the appropriate field. It still looks pretty cool, but the realistic “stained glass” effect is gone.

The reason for that is that with the original lighting, the face of the cube at the back is always dimly lit. This means that its R, G and B values are low, so when the equation

  • Rresult = Rs * As + Rd

…is calculated, they show less strongly. To put it another way, we’ve got lighting that means that the stuff that is at the back is less visible. If we switch the lighting around so that stuff at the front is less visible, then our transparency effect works less well.

So how would you get “proper” transparency? Well, the OpenGL FAQ says that you need to use a source factor of SRC_ALPHA and a destination factor of ONE_MINUS_SRC_ALPHA. But we still have the problem that the source and the destination fragments are treated differently, and so there’s a dependency on the order in which stuff is drawn. And this point finally gets us to what I think of as the dirty secret of transparency in Open-/WebGL; again, quoting the OpenGL FAQ:

When using depth buffering in an application, you need to be careful about the order in which you render primitives. Fully opaque primitives need to be rendered first, followed by partially opaque primitives in back-to-front order. If you don’t render primitives in this order, the primitives, which would otherwise be visible through a partially opaque primitive, might lose the depth test entirely.
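In code, the FAQ’s recipe might look roughly like the sketch below. drawObject, opaqueObjects, transparentObjects and distanceFromCamera are all hypothetical helpers, not part of this lesson’s code:

    // 1. Fully opaque things first, with normal depth testing.
    gl.disable(gl.BLEND);
    gl.enable(gl.DEPTH_TEST);
    opaqueObjects.forEach(function (obj) { drawObject(obj); });

    // 2. Then the partially opaque things, sorted back to front, with the
    // FAQ's recommended blend function.  We keep the depth *test* on (so
    // glass behind a wall stays hidden) but stop *writing* depth values,
    // so transparent things don't block each other out.
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
    gl.depthMask(false);
    transparentObjects
      .sort(function (a, b) { return b.distanceFromCamera - a.distanceFromCamera; })
      .forEach(function (obj) { drawObject(obj); });
    gl.depthMask(true);

With those factors, the red component, for example, comes out as Rresult = Rs * As + Rd * (1 - As), so an alpha of one gives a completely opaque surface, and an alpha of zero a completely invisible one.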

So, there you have it. Transparency using blending is tricky and fiddly, but if you control enough other aspects of the scene, like we controlled the lighting in this lesson, you can get the right effect without too much complexity. You can do it properly, but you’ll need to take care and draw stuff in a very specific order to get it looking good.

Luckily, blending is also useful for other effects, as you’ll see in the next lesson. But for now, you know all there is to learn from this lesson: you have a solid foundation for understanding the depth buffer, and you know how blending can be used to provide simple transparency.

If you have any questions, comments, or corrections, please do leave a comment below, particularly for this lesson. I personally found this the hardest part of the NeHe lessons to understand properly when I first worked through it, and I hope I’ve managed to make everything as clear as, if not clearer than, the original.

Next time, we’ll start improving the structure of the code, so that we can support a large number of different objects in the scene without all these messy global variables.

<< Lesson 7 | Lesson 9 >>

Acknowledgments: talking to Jonathan Hartley (no mean OpenGL coder himself) made the depth buffer and blending much clearer to me, and Steve Baker’s description of the Z buffer was also very useful. As always, I’m deeply in debt to NeHe for his OpenGL tutorial for the script for this lesson.


14 Responses to “WebGL Lesson 8 – the depth buffer, transparency and blending”

  1. murphy says:

    Strangely enough, I had problems with the LEQUAL function for the depth buffer. The effect was this: far away things were shining through other things that should obscure them. Nearby things weren’t affected as much, although sometimes a line of pixels flickered through them as well. I only had the problem in Firefox; it seems the z-buffer “resolution” is somehow better in Safari.

    So, to solve it, replacing it with the LESS function worked for me.

  2. giles says:

    Interesting! I would have thought the depth buffer size would be per-card rather than per-browser. What do you get if you call gl.getParameter(gl.DEPTH_BITS) ?

  3. Shy says:

    you should probably use parseFloat() when reading the alpha value for good manners.

  4. Shy says:

    How come when the alpha is 1 there is still transparency?

  5. Shy says:

    Oh ok, that’s because the gl.ONE. If you switch it to ONE_MINUS_SRC_ALPHA then alpha=1 would give you an opaque surface.

  6. giles says:

    @Shy — Re: parseFloat — good point, I’ll fix that.

  7. IlluminaZero says:

    I think you might want to consider modifying the blender function like such that when disabled it reasserts the alpha value to be 1, so that you do not have diluted colors:

    if (blending) {
      gl.blendFunc(gl.SRC_ALPHA, gl.ONE);
      gl.enable(gl.BLEND);
      gl.disable(gl.DEPTH_TEST);
      gl.uniform1f(shaderProgram.alphaUniform, document.getElementById("alpha").value);
    } else {
      gl.uniform1f(shaderProgram.alphaUniform, 1);  // <-- added line
      gl.disable(gl.BLEND);
      gl.enable(gl.DEPTH_TEST);
    }

    I do not think that is intentional, and if it is I do not see why a user would want to inadvertently get that effect through transparency value modifications.

  8. IlluminaZero says:

    When I modified my code with ONE_MINUS_SRC_ALPHA I had an opaque image, but the depth buffer (right term?) was screwed up, such that background images would supersede the front images.

    The best way to get an Opaque image seems to be to disable the gl.Blend and enable the gl depth test:

    gl.uniform1f(shaderProgram.alphaUniform, 1);
    gl.disable(gl.BLEND);
    gl.enable(gl.DEPTH_TEST);

    (Yes, same as last post.)

    —–

    This is from the perspective of a Minefield user btw. Spent quite a bit of time to get the desired transparency effects on multiple objects. ;)

  9. Lindsay says:

    Cheers for this article Giles!

    Trying to work out how to perform transparency with WebGL for the case in which an outer transparent object encloses an inner one.

    I’m a little naive, and I’d appreciate a seasoned graphics pro to set me right here. Traditionally (with fixed function pipeline), I would decompose those objects into a list of faces, depth-sort them, then render them in far-to-near order while enabling/disabling blend/depth on a per-face basis (arg! state changes!).

    But how to do transparency in this case using WebGL, where we load entire geometries onto the GPU as monolithic VBOs?

    Anyone have any technical hints or links?

  10. giles says:

    @IlluminaZero — good point about resetting the alpha when there’s no blending, I’ll change that.

    @Lindsay — that’s a great question — no ideas from me, but if you do find anything out I’d also love to know the answer!

  11. Erik says:

    Can’t seem to get this to work with the latest chromium nightly (65205).

  12. giles says:

    Erik — thanks for the heads-up, I see it doesn’t work for me either. I seem to remember something about this on the mailing list the other day, I’ll investigate and fix it if it’s a bug on my side.

  13. giles says:

    Ah, got it — it was nothing complicated, I was just failing to convert the alpha value from string to float before pushing it to the graphics card. Chrome used to auto-convert this but I guess they’ve stopped doing that. Fixed it, and it looks like it works now.

  14. jakub says:

    I’m unable to make this example work AFTER rewriting it on my local drive (everything is OK when running the online version). The texture is invisible; in order to see the cube (all white) I need to switch blending off. The texture file is in the same directory (gif 500×500)

