WebGL Lesson 14 – specular highlights and loading a JSON model

<< Lesson 13 | Lesson 15 >>

Welcome to number fourteen in my series of WebGL tutorials! In it, we’ll introduce the last piece of the Phong reflection model, which we first met in lesson 7: specular highlights, the “glints” on a shiny surface which make a scene look that little bit more realistic.

Here’s what the lesson looks like when run in a browser that supports WebGL:

Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t. You’ll see a spinning teapot, and as it spins you’ll see a constant glint to the mid-left and on the handle of the lid. There will also be occasional glints from the spout and the handle when they hit just the right angle to catch the light. You can switch these specular highlights on and off using the checkbox below, and you can also disable lighting completely, and switch between three options for the texture used: none, the “galvanized” texture that is used by default (and which is a Creative Commons-licensed sample from the excellent Arroway Textures), and, just for fun, a texture showing the planet Earth (courtesy of the European Space Agency/Envisat), which looks quite fun on a teapot :-)

You can also control the teapot’s shininess from the text fields below — larger numbers mean a smaller, sharper highlight — and you can adjust the specular reflection’s colour, and, as before, the position and diffuse colour of the point light that’s causing all of these effects. More about that below.

Before we wade into the code, the usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the new stuff. The lesson is based on lesson 13, so you should make sure that you understand that one (and please do post a comment on that post if anything’s unclear about it!)

There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.

There are two ways you can get the code for this example: just do a “View Source” while you’re looking at the live version, or, if you use GitHub, you can clone it (and the other lessons) from the repository there.

Once you have it, open it up in an editor. We’ll start at the top and work our way down, which has the great advantage that we can see the fragment shader pretty much right away — this is where the most interesting changes are. Before we encounter that, there’s one minor difference between this code and lesson 13’s: we don’t have the shaders for per-vertex lighting. Per-vertex lighting doesn’t really handle specular highlights very well (as they get smeared out over an entire face), so we don’t bother with them.

So, the first shader you’ll see in the file is the fragment shader for per-fragment lighting. It starts off with the usual precision stuff and declarations of varying and uniform variables. A few of these (uMaterialShininess, uShowSpecularHighlights and uPointLightingSpecularColor) are new, and one, the uniform that used to hold the point light’s colour, has been renamed to uPointLightingDiffuseColor, as the point light now has separate specular and diffuse components:

  #ifdef GL_ES
  precision highp float;
  #endif

  varying vec2 vTextureCoord;
  varying vec4 vTransformedNormal;
  varying vec4 vPosition;

  uniform float uMaterialShininess;

  uniform bool uShowSpecularHighlights;
  uniform bool uUseLighting;
  uniform bool uUseTextures;

  uniform vec3 uAmbientColor;

  uniform vec3 uPointLightingLocation;
  uniform vec3 uPointLightingSpecularColor;
  uniform vec3 uPointLightingDiffuseColor;

  uniform sampler2D uSampler;

These shouldn’t need much explanation; they’re just where the values that you can change from the HTML code are fed into the shader for processing. Let’s move on to the body of the shader; the first thing is to handle the case where the user has lighting switched off, and this is the same as it was before:

  void main(void) {
    vec3 lightWeighting;
    if (!uUseLighting) {
      lightWeighting = vec3(1.0, 1.0, 1.0);
    } else {

Now we handle the lighting, and of course this is where it gets interesting:

      vec3 lightDirection = normalize(uPointLightingLocation - vPosition.xyz);
      vec3 normal = normalize(vTransformedNormal.xyz);

      float specularLightWeighting = 0.0;
      if (uShowSpecularHighlights) {

So, what’s going on here? Well, we calculate the direction of the light just as we did for normal per-fragment lighting. We then normalise the fragment’s normal vector, once again just as before — remember, when the per-vertex normals are linearly interpolated to create per-fragment normals, the results aren’t necessarily of length one, so we normalise to fix this — but this time we’ll be using it a little more so we store it in a local variable. Next, we define a weighting for the amount of extra brightness that is going to come from the specular highlight; this is of course zero if the specular highlight is switched off, but if it’s not, we need to calculate it.
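Since we’ll be using the normalised normal a little more, it’s worth being clear about what normalize actually does. Here’s the same operation sketched in plain JavaScript; the normalize helper below is my own, purely for illustration:

```javascript
// Sketch of GLSL's normalize(): scale a vector to length one.
// Linearly interpolating unit normals across a face gives vectors
// that are generally shorter than one, so the shader has to
// renormalise per fragment.
function normalize(v) {
  const len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

const n = normalize([0, 3, 4]); // length 5, so this scales by 1/5
```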

So, what determines the brightness of a specular highlight? As you might remember from the explanation of the Phong Reflection Model in lesson 7, specular highlights are created by that portion of the light from a light source that bounces off the surface as if from a mirror:

The portion of the light that is reflected this way bounces off the surface at the same angle as it hit it. In this case, the brightness of the light you see reflected from the material depends on whether or not your eyes happen to be in the line along which the light was bounced — that is, it depends not only on the angle at which the light hit the surface but on the angle between your line of sight and the surface. This specular reflection is what causes “glints” or “highlights” on objects, and the amount of specular reflection can obviously vary from material to material; unpolished wood will probably have very little specular reflection, and highly-polished metal will have quite a lot.

The specific equation for working out the brightness of a specular reflection is this:

  (Rm · V)^α

…where Rm is the (normalised) vector along which a perfectly-reflected ray of light from the light source would travel when it bounced off the point on the surface under consideration, V is the (also normalised) vector pointing in the direction of the viewer’s eyes, and α is a constant describing the shininess: the higher the value, the shinier the surface. You may remember that the dot product of two normalised vectors is the cosine of the angle between them; this means that this part of the equation produces a value that is 1 if the light from the light source would be reflected directly at the viewer (that is, Rm and V are parallel, so the angle between them is zero, and the cosine of zero is one) and then fades off fairly slowly as the light is less directly reflected. Raising this value to the power of α has the effect of “compressing” it: the result is still one when the vectors are parallel, but it drops off more rapidly to each side. You can see this in action if you set the shininess constant in the demo page to something large, like (say) 512.
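To make that “compression” effect concrete, here’s a small JavaScript sketch of the (Rm · V)^α calculation; the specularWeight helper is hypothetical, but the arithmetic matches what the shader will do:

```javascript
// (Rm . V)^alpha for normalised Rm and V, clamped so that light
// reflected away from the viewer contributes nothing.
function specularWeight(rm, v, alpha) {
  const d = rm[0] * v[0] + rm[1] * v[1] + rm[2] * v[2];
  return Math.pow(Math.max(d, 0.0), alpha);
}

// Looking straight along the reflected ray gives full brightness,
// whatever the shininess...
const parallel = specularWeight([0, 0, 1], [0, 0, 1], 32.0);

// ...but a few degrees off-axis, a larger alpha makes the highlight
// fall away much faster; that's the smaller, sharper glint.
const offAxis = [0, Math.sin(0.2), Math.cos(0.2)]; // about 11.5 degrees away
const soft = specularWeight([0, 0, 1], offAxis, 8.0);
const sharp = specularWeight([0, 0, 1], offAxis, 512.0);
```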

So, given all this, the first things we need to work out are the direction of the viewer’s eyes, V, and the direction of a perfectly-reflected ray of light, Rm. Let’s look at V first, because it’s easy! Our scene is constructed in eye space, which you may remember from lesson 10; in effect, this means that we’re drawing the scene as if there were a camera at the origin, (0, 0, 0), looking down the negative Z axis, with X increasing to the right and Y increasing upwards. The direction of any point from the origin is, of course, just its coordinates expressed as a vector; likewise, the direction of the viewer’s eyes at the origin from any point is just the negative of that point’s coordinates. We have the coordinates of the fragment, linearly interpolated from the vertex coordinates, in vPosition, so we negate it, normalise it to make it of length one, and that’s it!

        vec3 eyeDirection = normalize(-vPosition.xyz);

Now let’s look at Rm. This would be a bit more involved, if it weren’t for a very convenient GLSL function called reflect, which is defined as:

reflect (I, N): For the incident vector I and surface orientation N, returns the reflection direction

The incident vector is the direction from which a ray of light hits the surface at the fragment, which is the opposite of the direction of the light from the fragment (which we already have in lightDirection). The surface orientation is called N because it’s just the normal, which we also already have. Given all that, it’s easy to work out:

        vec3 reflectionDirection = reflect(-lightDirection, normal);

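If you’re curious about what reflect does under the hood, it’s just the standard mirror-reflection formula R = I - 2(N · I)N. Here’s a plain-JavaScript sketch; this reflect function is my own re-implementation, for illustration only:

```javascript
// Sketch of GLSL's reflect(I, N): reflect incident vector I about
// the unit surface normal N, using R = I - 2 * dot(N, I) * N.
function reflect(i, n) {
  const d = n[0] * i[0] + n[1] * i[1] + n[2] * i[2];
  return [i[0] - 2 * d * n[0],
          i[1] - 2 * d * n[1],
          i[2] - 2 * d * n[2]];
}

// A ray travelling straight down onto an upward-facing surface
// bounces straight back up:
const r = reflect([0, -1, 0], [0, 1, 0]);
```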
So, now that we have all that, the final step is very easy indeed:

        specularLightWeighting = pow(max(dot(reflectionDirection, eyeDirection), 0.0), uMaterialShininess);
      }

That’s all we need to do to work out the contribution of the specular component to the fragment’s lighting. The next step is to work out how much the diffuse light contributes, using the same logic as before (though we can now use our local variable for the normalised normal):

      float diffuseLightWeighting = max(dot(normal, lightDirection), 0.0);

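The diffuse term is the familiar Lambertian cosine weighting from the earlier lessons. Sketched in JavaScript, with a hypothetical diffuseWeight helper mirroring the shader line above:

```javascript
// Lambert's cosine law: the diffuse brightness is the dot product of
// the unit normal and unit light direction, clamped at zero so that
// light arriving from behind the surface contributes nothing.
function diffuseWeight(normal, lightDirection) {
  const d = normal[0] * lightDirection[0] +
            normal[1] * lightDirection[1] +
            normal[2] * lightDirection[2];
  return Math.max(d, 0.0);
}

// Light hitting the surface head-on gives full weight; light from
// directly behind gives zero:
const headOn = diffuseWeight([0, 0, 1], [0, 0, 1]);
const behind = diffuseWeight([0, 0, 1], [0, 0, -1]);
```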
Finally, we use both weightings, the diffuse and specular colours, and the ambient colour to work out the overall amount of lighting at this fragment for each colour component; this is a simple extension of what we were using before:

      lightWeighting = uAmbientColor
        + uPointLightingSpecularColor * specularLightWeighting
        + uPointLightingDiffuseColor * diffuseLightWeighting;
    }
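So the final weighting is just a component-wise sum over the red, green and blue channels. In JavaScript terms, with made-up example values, it looks something like this (the lightWeighting helper is hypothetical):

```javascript
// Combine ambient, specular and diffuse contributions per colour
// channel, exactly as the shader's vec3 arithmetic does.
function lightWeighting(ambient, specularColor, specWeight,
                        diffuseColor, diffWeight) {
  return ambient.map((a, i) =>
    a + specularColor[i] * specWeight + diffuseColor[i] * diffWeight);
}

// e.g. grey ambient light plus a half-strength specular glint and a
// quarter-strength diffuse contribution:
const w = lightWeighting([0.2, 0.2, 0.2],
                         [0.8, 0.8, 0.8], 0.5,
                         [0.8, 0.8, 0.8], 0.25);
```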

Once that’s all done, we have a value for the light weighting which we can just use in identical code to lesson 13’s to weight the colour as specified by the current texture:

    vec4 fragmentColor;
    if (uUseTextures) {
      fragmentColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    } else {
      fragmentColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
    gl_FragColor = vec4(fragmentColor.rgb * lightWeighting, fragmentColor.a);
  }

That’s it for the fragment shader!

Let’s move a bit further down. If you’re looking for differences from lesson 13, the next one you’ll notice is that initShaders is back to its earlier simple form, just creating one program, though of course it now also initialises a few more uniform locations for the new specular lighting settings. A little further down from that, initTextures now loads textures for the Earth and the galvanised steel effects instead of the Moon and the crate. Down a little bit more, and setMatrixUniforms is, like initShaders, once again designed for a single program — and then we reach something a little more interesting.

Instead of having initBuffers to create the WebGL buffers containing the various per-vertex attributes that define the appearance of the teapot, we have two functions: handleLoadedTeapot and loadTeapot. The pattern will be familiar from lesson 10, but it’s worth going over again. Let’s take a look at loadTeapot (though it’s the second one to appear in the code):

  function loadTeapot() {
    var request = new XMLHttpRequest();
    request.open("GET", "Teapot.json");
    request.onreadystatechange = function() {
      if (request.readyState == 4) {
        handleLoadedTeapot(JSON.parse(request.responseText));
      }
    }
    request.send();
  }

The overall structure should be familiar from lesson 10; we create a new XMLHttpRequest and use it to load the file Teapot.json. This will happen asynchronously, so we attach a callback function that will be triggered when the process of loading the file reaches various stages, and in the callback we do some stuff when the load reaches a readyState of 4, which means that it is fully loaded.
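The readyState check is the only subtle bit. Here’s the same callback logic sketched in plain JavaScript, with a hypothetical makeHandler helper and a fake request object standing in for the real XMLHttpRequest, just so the flow is visible outside a browser:

```javascript
// The callback fires repeatedly as loading progresses, but only acts
// once the request is fully loaded (readyState 4), at which point the
// response text is parsed from JSON and handed on.
function makeHandler(request, onDone) {
  return function () {
    if (request.readyState == 4) {
      onDone(JSON.parse(request.responseText));
    }
  };
}

let loaded = null;
const fakeRequest = { readyState: 4, responseText: '{"indices": [0, 1, 2]}' };
makeHandler(fakeRequest, function (data) { loaded = data; })();
```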

The interesting bit is what happens then. The file that we’re loading is in JSON format, which basically means that it’s already written in JavaScript; have a look at it to see what I mean. The file describes a JavaScript object containing lists that hold the vertex positions, normals, texture coordinates, and a set of vertex indices that completely describe the teapot. We could, of course, just embed this code directly into the index.html file, but if you were building a more complex model, with many separately-modelled objects, you’d want them all in separate files.

(Just which format you should use for pre-built objects in your WebGL applications is an interesting question. You might be designing them in any of a plethora of different programs, and these programs can output models in many different formats, ranging from .obj to 3DS. In the future, it looks like at least one of them will be able to output models in a JavaScript-native format, which I suspect may look a bit like the JSON model I’ve used for the teapot. For now, you should treat this tutorial as an example of how you might go about loading a JSON-format pre-designed model, and not as an example of best practice :-)

So, we have code that loads up a file in JSON format, and triggers an action when it’s loaded. The action converts the JSON text into data we can use; we could just use the JavaScript eval function to convert it into a JavaScript object, but this is generally frowned upon, and so instead we use the built-in function JSON.parse to parse the object. Once that’s done, we pass it up to handleLoadedTeapot:

  var teapotVertexPositionBuffer;
  var teapotVertexNormalBuffer;
  var teapotVertexTextureCoordBuffer;
  var teapotVertexIndexBuffer;
  function handleLoadedTeapot(teapotData) {
    teapotVertexNormalBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexNormalBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexNormals), gl.STATIC_DRAW);
    teapotVertexNormalBuffer.itemSize = 3;
    teapotVertexNormalBuffer.numItems = teapotData.vertexNormals.length / 3;

    teapotVertexTextureCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexTextureCoordBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexTextureCoords), gl.STATIC_DRAW);
    teapotVertexTextureCoordBuffer.itemSize = 2;
    teapotVertexTextureCoordBuffer.numItems = teapotData.vertexTextureCoords.length / 2;

    teapotVertexPositionBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexPositionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexPositions), gl.STATIC_DRAW);
    teapotVertexPositionBuffer.itemSize = 3;
    teapotVertexPositionBuffer.numItems = teapotData.vertexPositions.length / 3;

    teapotVertexIndexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, teapotVertexIndexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(teapotData.indices), gl.STATIC_DRAW);
    teapotVertexIndexBuffer.itemSize = 1;
    teapotVertexIndexBuffer.numItems = teapotData.indices.length;

    document.getElementById("loadingtext").textContent = "";
  }

There’s not really anything worth highlighting in that function: it just takes the various lists from the loaded JSON object, puts them into typed arrays (Float32Arrays for the per-vertex attributes, a Uint16Array for the indices), and pushes them over to the graphics card in newly-allocated buffers. Once all of this is done, we clear out a div in the HTML document which was previously telling the user that the model was being loaded, just like we did in lesson 10.
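One small thing worth spelling out is the itemSize/numItems bookkeeping: each buffer holds one flat list of numbers, and we record how many numbers make up each item so that drawScene can describe the buffer layout to WebGL later. The arithmetic is just this (the describeAttribute helper is hypothetical, for illustration):

```javascript
// A flat array of floats holds numItems vectors of itemSize
// components each, so numItems = length / itemSize.
function describeAttribute(flatArray, itemSize) {
  return { itemSize: itemSize, numItems: flatArray.length / itemSize };
}

// e.g. six floats with an itemSize of 3 describe two vertex positions:
const positions = describeAttribute([0, 0, 0, 1, 1, 1], 3);
```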

So, that’s the model loaded. What else? Well, there’s drawScene, which now needs to draw the teapot at an appropriate angle (after checking that it’s loaded), but there’s nothing really new there; take a look at the code and make sure you know what’s going on (and please do leave a comment if anything’s unclear), but I doubt you’ll find anything to surprise you.

After that, animate has a few trivial changes to make it spin the teapot rather than make the Moon and the crate orbit, and webGLStart has to call loadTeapot instead of initBuffers. And finally, the HTML code has the DIV to display the “Loading world…” text while the teapot is being loaded up, along with its associated CSS style, and of course it has new input fields for the new specular highlight parameters.

And after that, that’s it! You now know how to write shaders to show specular highlights, and how to load pre-built models that are stored in JSON format. Next time, we’ll look at something a bit more advanced: how to use textures in a slightly different and more interesting way than we do now, with specular maps.


Acknowledgments: the galvanised pattern is a Creative Commons sample from Arroway Textures, and the Earth texture is courtesy of the European Space Agency/Envisat.


27 Responses to “WebGL Lesson 14 – specular highlights and loading a JSON model”

  1. murphy says:

    Hooray for the teapot! I wondered when it would finally make its inevitable appearance in your tutorials.

    And maybe your code can be made even simpler: actually, you don’t need the JSON library any more: http://en.wikipedia.org/wiki/JSON#Native_JSON.

  2. murphy says:

    Here’s the link a second time for stupid WordPress markup parsers: http://en.wikipedia.org/wiki/JSON#Native_JSON

  3. giles says:

    Thanks for the link! You’re right, it looks like any browser that supports WebGL will also support Native JSON, so I can get rid of the separate library; excellent news.

  4. Shy says:

    Font turned to fixed-width at some point in the middle of the post. Annoying.

  5. Shy says:

    What is the name of the program that is (will be?) able to save files as JSON?

  6. giles says:

    @Shy — thanks for the heads up about the fixed-width font; I must have broken it while updating the other day. Fixed now. There is a JavaScript export module for Blender, which I’m (slowly!) extending to support JSON; you can get the current (non-JSON) version here: http://code.google.com/p/blender-webgl-exporter/

  7. Theo says:

    Hi Giles

    Anything I can do to get the Blender to JSON Exporter working? Or help with testing?

    Having the exporter working would really help your lessons to be more useful. There’s nothing like seeing one’s own model being rendered to spur people on to do more…

    Theo

  8. giles says:

    Hi Theo — thanks for the prod ;-) No, I just need to make time to get my version working. Perhaps I should just get that done regardless of lesson 16 — I kind of feel I’m swapping between the two and getting neither done…

    I’ll put an announcement on the blog as soon as I have something working.

  9. Theo says:

    Giles

    I found an older alternative JSON exporter here:

    http://jslibs.googlecode.com/svn/trunk/src/jstrimesh/blender_json_export.py

    Looks like it might be simpler to work with – and it just provides an alternative view point on things.

    My vote is for getting the Blender to JSON converter going – so we can make cool things that will use your lesson 16 ideas…

    Looking forward to seeing you on Friday at the WebGL Camp…

    Theo

  10. giles says:

    Interesting, thanks for the link! I’ll have a good look at it, maybe it would be better to spend my time contributing to that rather than working on a reinvention of the wheel.

    I’m looking forward to Friday, just sad that I can’t be there in person :-(

  11. Caesar says:

    Hey Giles, i have a question.

    You recommend us to use a a JavaScript export module for Blender. I have tried it, but the content in files which export by this module is quite different from your tutorial’s example’s json file.

    I wonder if i can use this json files in your code?

    Caesar

  12. giles says:

    Hi Caesar,

    I’m hoping to get time to add my kind of JSON to Dennis Ippel’s Blender exporter, so I’ll change this lesson when I’ve done that. In the meantime, you’re quite welcome to use the JSON from this lesson.

    Giles

  13. Anthony says:

    Hi

    Some of the lessons have stopped working on webkit (mac) version r64451. I guess there have been some changes in webkit, as they were working fine on older versions.

    The simple one still ok but the box with lights and teapot failing!

    Regards

  14. giles says:

    Hi Anthony — thanks for the heads-up! I’ve just updated everything to the latest WebGL spec, and hopefully that will fix the problem.

  15. problem with vertexTextureCoords says:

    why you don’t use the textures index to build your model with the teapotData.vertexTextureCoords ?
    it doesn’t work on my side with your method i can load my model but not the texture
    please help me thanks

  16. Corey Altman says:

    Very helpful as always. Thanks for being so detailed with all your tutorials.

  17. giles says:

    @”problem” — sorry, I don’t quite understand what you mean. Do you have an example page I could look at?

    @Corey Altman — thanks! Glad to help.

  18. problem with vertexTextureCoords says:

    i just want to know how you make the correspondence between the vertex coordinates of each triangles of the model and the textures coordinates
    in the buffers. you only use the teapotVertexIndexBuffer with teapotVertexPositionBuffer to draw triangles with the gl.drawElement method but what about teapotVertexTextureCoordBuffer? how u place the textureccordinates at the good place? i don’t understand how you manage to place the good texture coordinate with the good vertex coordinate

    teapotVertexTextureCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexTextureCoordBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexTextureCoords), gl.STATIC_DRAW);
    teapotVertexTextureCoordBuffer.itemSize = 2;
    teapotVertexTextureCoordBuffer.numItems = teapotData.vertexTextureCoords.length / 2;

    teapotVertexPositionBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, teapotVertexPositionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(teapotData.vertexPositions), gl.STATIC_DRAW);
    teapotVertexPositionBuffer.itemSize = 3;
    teapotVertexPositionBuffer.numItems = teapotData.vertexPositions.length / 3;

    teapotVertexIndexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, teapotVertexIndexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(teapotData.indices), gl.STREAM_DRAW);
    teapotVertexIndexBuffer.itemSize = 3;
    teapotVertexIndexBuffer.numItems = teapotData.indices.length;

  19. alain says:

    forget my last question i could fix my problem
    but now i want yo know if its possible to import bones animation in webgl
    thanks for your tutorials!!!
    i’m a flash developer but adobe is very lazy with 3D so i prefer javascript and webgl now lol especially with the 3D models importation its simpler to parse with a json format than a dae. what is your opinion about that?

  20. Sumez says:

    So far in your lessons you’ve used an itemsize of 1 for the index buffer, but this time around you made it 3, why? I find it especially confusing when you use the length of the array as the length of the buffer (implying a size of 1)

  21. Sumez says:

    Nevermind, it’s 1 in the demo, so I guess it’s just a typo? :)
    You may want to fix it, unless there’s something I’m not getting.

  22. alain says:

    i try to make a tangent space bump mapping but it doesn’t work why?

    here is my code :
    vertex shader :

    attribute vec3 aVertexPosition;
    attribute vec2 aTextureCoord;
    attribute vec3 aVertexNormal;
    attribute vec3 aVertexTangente;

    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;
    uniform mat4 uNMatrix;
    uniform vec3 uLightingDirection;

    varying vec2 vTextureCoord;
    varying vec3 lightVec;

    void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;

    vec4 n = normalize(uNMatrix * vec4(aVertexNormal, 1.0));
    vec4 t = normalize(uNMatrix * vec4(aVertexTangente, 1.0));
    vec3 b = cross(n.xyz, t.xyz);

    lightVec.x = dot(uLightingDirection, t.xyz);
    lightVec.y = dot(uLightingDirection, b.xyz);
    lightVec.z = dot(uLightingDirection, n.xyz);

    }

    fragment shader :

    #ifdef GL_ES
    precision highp float;
    #endif

    uniform sampler2D color_texture;
    uniform sampler2D normal_texture;
    uniform vec3 uLightingDirection;

    varying vec2 vTextureCoord;
    varying vec3 lightVec;

    //uniform vec3 uDirectionalColor;
    //uniform vec3 uAmbientColor;

    void main() {

    vec3 normal = normalize(texture2D(normal_texture, vec2(vTextureCoord.s, vTextureCoord.t)).rgb * 2.0 - 1.0);
    vec3 light_pos = normalize(lightVec);
    float diffuse = max(dot(normal, light_pos), 0.0);
    vec3 color = diffuse * texture2D(color_texture, vec2(vTextureCoord.s, vTextureCoord.t)).rgb;
    gl_FragColor = vec4(color, 1.0);
    }

  23. giles says:

    @Sumez — you’re quite right, it was a typo — thanks for pointing it out! Fixed now.

    @alain — yup, I’d go for JSON over DAE. COLLADA is a great idea when people want to interchange 3D models between systems and organisations — for example, if I was building a virtual world where people could upload content, I reckon I’d make it the preferred format — but within your own application it probably makes more sense to pre-parse models into whatever internal data structures you need on the server, and then pass them out to the browser as JSON.

    re: the bump mapping — I’ve not really looked into that myself yet, sorry!

  24. Jules says:

    Is there a bank of .JSON 3D (image) files that exist anywhere?

    Anyone know if it is (or would be) possible to convert Google warehouse 3D images to JSON?

    Thanks and regards,
    Jules345

  25. jules says:

    Hi,

    So I’ve taken a simple COLLADA file of a chair model from here: http://sketchup.google.com/3dwarehouse/details?mid=66b29a0b56c2b69d25af37f812eb1801&prevstart=0

    I’ve converted it to JSON using o3d’s converter, the resultant file is here: http://cid-1f608f69f170fa85.office.live.com/self.aspx/Public/scene.json

    All i want to do is get this model displayed in WebGL, like has been done with the teapot JSON file in this example – but I can’t do so easily as the teapot JSON file is in a totally different JSON structure, and i cant tell what is vertexTextureCoords, vertexPositions, indices etc. Can anyone point me in the right direction?

    It would be amazing if there was a tutorial for handling a JSON file of this format, i.e. the format that is produced as a result of converting a COLLADA dae file to JSON using o3d’s collada to JSON converter

    Thanks in advance,
    Julian

  26. Jian says:

    Great tutorial! I’ve done quite some 3D programming but the problem has always been how to deliver the content to the client-end in a hassle-free way. Now, with WebGL, no more need to require the user to install this and that dll or ActiveX. The only sad thing is that the stubborn Microsoft still refuses to embrace open standards such as WebGL — so IE users would still have to install something… Very sad situation for a developer….

  27. giles says:

    Thanks, Jian. Re: IE — very true. Hopefully Chrome Frame will make it less of a problem, though.
