WebGL Lesson 5 – introducing textures

<< Lesson 4 | Lesson 6 >>

Welcome to number five in my series of WebGL tutorials, based on number 6 in the NeHe OpenGL tutorials. This time we’re going to add a texture to a 3D object — that is, we will cover it with an image that we load from a separate file. This is a really useful way to add detail to your 3D scene without having to make the objects you’re drawing incredibly complex. Imagine a stone wall in a maze-type game; you probably don’t want to model each block in the wall as a separate object, so instead you create an image of masonry and cover the wall with it; a whole wall can now be just one object.

Here’s what the lesson looks like when run in a browser that supports WebGL:

Click here and you’ll see the live WebGL version, if you’ve got a browser that supports it; here’s how to get one if you don’t.

More on how it all works below…

The usual warning: these lessons are targeted at people with a reasonable amount of programming knowledge, but no real experience in 3D graphics; the aim is to get you up and running, with a good understanding of what’s going on in the code, so that you can start producing your own 3D Web pages as quickly as possible. If you haven’t read the previous tutorials already, you should probably do so before reading this one — here I will only explain the differences between the code for lesson 4 and the new code.

There may be bugs and misconceptions in this tutorial. If you spot anything wrong, let me know in the comments and I’ll correct it ASAP.

There are two ways you can get the code for this example: just “View Source” while you’re looking at the live version, or, if you use GitHub, you can clone it (and the other lessons) from the repository there. Either way, once you have the code, load it up in your favourite text editor and take a look.

The trick to understanding how textures work is that they are a special way of setting the colour of a point on a 3D object. As you will remember from lesson 2, colours are specified by fragment shaders, so what we need to do is load the image and send it over to the fragment shader. The fragment shader also needs to know which bit of the image to use for the fragment it’s working on, so we need to send that information over to it too.

Let’s start off by looking at the code that loads the texture. We call it right at the start of the execution of our page’s JavaScript, in webGLStart at the bottom of the page; the new code is the call to initTexture:

  function webGLStart() {
    var canvas = document.getElementById("lesson05-canvas");
    initGL(canvas);
    initShaders();
    initTexture();

    gl.clearColor(0.0, 0.0, 0.0, 1.0);

Let’s look at initTexture — it’s about a third of the way from the top of the file, and is all new code:

  var neheTexture;
  function initTexture() {
    neheTexture = gl.createTexture();
    neheTexture.image = new Image();
    neheTexture.image.onload = function() {
      handleLoadedTexture(neheTexture);
    };

    neheTexture.image.src = "nehe.gif";
  }

So, we’re creating a global variable to hold the texture; obviously in a real-world example you’d have multiple textures and wouldn’t use globals, but we’re keeping things simple for now. We use gl.createTexture to create a texture reference to put into the global, then we create a JavaScript Image object and put it into a new attribute that we attach to the texture, yet again taking advantage of JavaScript’s willingness to set any field on any object; texture objects don’t have an image field by default, but it’s convenient for us to have one, so we create one. The obvious next step is to get the Image object to load up the actual image it will contain, but before we do that we attach a callback function to it; this will be called when the image has been fully loaded, so it’s safest to set it first. Once that’s set up, we set the image’s src property, and we’re done. The image will load asynchronously — that is, the code that sets the src of the image will return immediately, and a background thread will load the image from the web server. Once it’s done, our callback gets called, and it calls handleLoadedTexture:

  function handleLoadedTexture(texture) {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, true);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, texture.image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.bindTexture(gl.TEXTURE_2D, null);
  }

The first thing we do is tell WebGL that our texture is the “current” texture. WebGL texture functions all operate on this “current” texture instead of taking a texture as a parameter, and bindTexture is how we set the current one; it’s similar to the gl.bindBuffer pattern that we’ve looked at before.

Next, we tell WebGL that all images we load into textures need to be flipped vertically. We do this because of a difference in coordinate conventions: our texture coordinates, like the coordinates you would normally use in mathematics, increase as you move up the vertical axis, which is consistent with the X, Y, Z coordinates we’re using to specify our vertex positions. By contrast, most other computer graphics systems — for example, the GIF format we use for the texture image — use coordinates that increase as you move down the vertical axis. (The horizontal axis is the same in both systems.) This difference means that, from the WebGL perspective, the GIF image we’re using for our texture is already flipped vertically, and we need to “unflip” it. (Thanks to Ilmari Heikkinen for clarifying that in the comments.)
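
An alternative, suggested by rotoglup in the comments below, is to leave the image alone and flip the texture coordinates instead, which would also let you drop the pixelStorei call; here’s a minimal sketch, assuming the textureCoords array of (s, t) pairs that we’ll set up in initBuffers shortly:

    for (var i = 1; i < textureCoords.length; i += 2) {
      textureCoords[i] = 1.0 - textureCoords[i];  // flip each t coordinate
    }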

The next step is to upload our freshly-loaded image to the texture’s space in the graphics card using texImage2D. The parameters are, in order, what kind of image we’re using, the level of detail (which is something we’ll look at in a later lesson), the format in which we want it to be stored on the graphics card (repeated twice for reasons we’ll also look at later), the size of each “channel” of the image (that is, the datatype used to store red, green, or blue), and finally the image itself.
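
To tie that description to the code, here’s the same call again with each parameter labelled:

    gl.texImage2D(
      gl.TEXTURE_2D,     // the kind of image we're using
      0,                 // the level of detail
      gl.RGBA,           // the format for storage on the graphics card...
      gl.RGBA,           // ...repeated, for reasons we'll look at later
      gl.UNSIGNED_BYTE,  // the datatype of each channel: one byte each
      texture.image      // the image itself
    );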

On to the next two lines: these specify special scaling parameters for the texture. The first tells WebGL what to do when the texture is filling up a large amount of the screen relative to the image size; in other words, it gives it hints on how to scale it up. The second is the equivalent hint for how to scale it down. There are various kinds of scaling hints you can specify; NEAREST is the least attractive of these, as it just says you should use the original image as-is, which means that it will look very blocky when close-up. It has the advantage, however, of being really fast, even on slow machines. In the next lesson we’ll look at using different scaling hints, so you can compare the performance and appearance of each.
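
If you’d like a preview, swapping NEAREST for LINEAR gives smoother (if slightly blurrier) scaling at a small cost in speed:

    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);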

Once this is done, we set the current texture to null; this is not strictly necessary, but is good practice; a kind of tidying up after yourself.
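
As an aside, in a real-world page with multiple textures you might wrap this loading pattern up in a helper; here’s a minimal sketch (the loadTexture function and the textures lookup object are my own inventions, not part of this lesson’s code):

  var textures = {};
  function loadTexture(name, url) {
    var texture = gl.createTexture();
    texture.image = new Image();
    texture.image.onload = function() {
      handleLoadedTexture(texture);
    };
    texture.image.src = url;
    textures[name] = texture;
  }

With that in place, initTexture would boil down to loadTexture("nehe", "nehe.gif"), and drawScene could look textures up by name.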

So, that’s all the code required to load the texture. Next, let’s move on to initBuffers. This has, of course, lost all of the code relating to lesson 4’s pyramid, which we’ve now removed; the more interesting change is the replacement of the cube’s vertex colour buffer with a new one — the texture coordinate buffer. It looks like this:

    cubeVertexTextureCoordBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexTextureCoordBuffer);
    var textureCoords = [
      // Front face
      0.0, 0.0,
      1.0, 0.0,
      1.0, 1.0,
      0.0, 1.0,

      // Back face
      1.0, 0.0,
      1.0, 1.0,
      0.0, 1.0,
      0.0, 0.0,

      // Top face
      0.0, 1.0,
      0.0, 0.0,
      1.0, 0.0,
      1.0, 1.0,

      // Bottom face
      1.0, 1.0,
      0.0, 1.0,
      0.0, 0.0,
      1.0, 0.0,

      // Right face
      1.0, 0.0,
      1.0, 1.0,
      0.0, 1.0,
      0.0, 0.0,

      // Left face
      0.0, 0.0,
      1.0, 0.0,
      1.0, 1.0,
      0.0, 1.0,
    ];
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(textureCoords), gl.STATIC_DRAW);
    cubeVertexTextureCoordBuffer.itemSize = 2;
    cubeVertexTextureCoordBuffer.numItems = 24;

You should be pretty comfortable with this kind of code by now, and can see that all we’re doing is specifying a new per-vertex attribute in an array buffer, and that this attribute has two values per vertex. What these texture coordinates specify is where, in Cartesian (x, y) coordinates, the vertex lies in the texture. The texture’s size is normalised so that it’s 1 unit wide and 1 unit high, so (0, 0) is at the bottom left and (1, 1) is at the top right.
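
If it helps to see the arithmetic, here’s a hypothetical helper that converts a pixel position in the image (measured from the top left, as image editors usually do) into texture coordinates; it assumes the image has been flipped on load as described earlier, so that t increases upwards:

  function pixelToTextureCoord(x, y, imageWidth, imageHeight) {
    // s runs left to right and t runs bottom to top, both from 0.0 to 1.0.
    return [x / imageWidth, 1.0 - y / imageHeight];
  }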

That’s the only change in initBuffers, so let’s move on to drawScene. The most interesting changes in this function are, of course, the ones that make it use the texture. Before we go through those, though, there are a number of changes related to really simple stuff, like the removal of the pyramid and the fact that the cube now spins around in a different way. I won’t describe these in detail, as they should be pretty easy to work out; they’re the new xRot, yRot and zRot variables and the matching mvRotate calls in this snippet from the top of the drawScene function:

  var xRot = 0;
  var yRot = 0;
  var zRot = 0;
  function drawScene() {
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0);
    loadIdentity();

    mvTranslate([0.0, 0.0, -5.0]);

    mvRotate(xRot, [1, 0, 0]);
    mvRotate(yRot, [0, 1, 0]);
    mvRotate(zRot, [0, 0, 1]);

    gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexPositionBuffer);
    gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, cubeVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);

There are also matching changes in the animate function to update xRot, yRot and zRot, which I won’t go over.

So, with those out of the way, let’s look at the texture code. In initBuffers we set up a buffer containing the texture coordinates, so here we need to bind it to the appropriate attribute so that the shaders can see it:

    gl.bindBuffer(gl.ARRAY_BUFFER, cubeVertexTextureCoordBuffer);
    gl.vertexAttribPointer(shaderProgram.textureCoordAttribute, cubeVertexTextureCoordBuffer.itemSize, gl.FLOAT, false, 0, 0);

…and now that WebGL knows which bit of the texture each vertex uses, we need to tell it to use the texture that we loaded earlier, then draw the cube:

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, neheTexture);
    gl.uniform1i(shaderProgram.samplerUniform, 0);

    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, cubeVertexIndexBuffer);
    setMatrixUniforms();
    gl.drawElements(gl.TRIANGLES, cubeVertexIndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

What’s happening here is somewhat complex. WebGL can deal with up to 32 textures during any given call to functions like gl.drawElements, and they’re numbered from TEXTURE0 to TEXTURE31. What we’re doing is saying in the first two lines that texture zero is the one we loaded earlier, and then in the third line we’re passing the value zero up to a shader uniform (which, like the other uniforms that we use for the matrices, we extract from the shader program in initShaders); this tells the shader that we’re using texture zero. We’ll see how that’s used later.
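
By the way, I don’t show initShaders in this lesson, but for reference the additions there look something like this sketch, based on the attribute and uniform names you’ll see in the shaders below:

    shaderProgram.textureCoordAttribute = gl.getAttribLocation(shaderProgram, "aTextureCoord");
    gl.enableVertexAttribArray(shaderProgram.textureCoordAttribute);

    shaderProgram.samplerUniform = gl.getUniformLocation(shaderProgram, "uSampler");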

Anyway, once those three lines in drawScene have been executed, we’re ready to go, so we just use the same code as before to draw the triangles that make up the cube.

The only remaining new code to explain is the changes to the shaders. Let’s look at the vertex shader first:

  attribute vec3 aVertexPosition;
  attribute vec2 aTextureCoord;

  uniform mat4 uMVMatrix;
  uniform mat4 uPMatrix;

  varying vec2 vTextureCoord;

  void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vTextureCoord = aTextureCoord;
  }

This is very similar to the colour-related stuff we put into our vertex shader in lesson 2; all we’re doing is accepting the texture coordinates (again, instead of the colour) as a per-vertex attribute, and passing them straight out in a varying variable.

Once this has been called for each vertex, WebGL will work out values for the fragments (which, remember, are basically just pixels) between vertices using linear interpolation — just as it did with the colours in lesson 2. So, a fragment halfway between vertices with texture coordinates (1, 0) and (0, 0) will get the texture coordinates (0.5, 0), and one halfway between (0, 0) and (1, 1) will get (0.5, 0.5). Next stop, the fragment shader:

  #ifdef GL_ES
  precision highp float;
  #endif

  varying vec2 vTextureCoord;

  uniform sampler2D uSampler;

  void main(void) {
    gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
  }

So, we pick up the interpolated texture coordinates, and we have a variable of type sampler2D, which is the shader’s way of representing the texture. In drawScene, our texture was bound to gl.TEXTURE0, and the uniform uSampler was set to the value zero, so this sampler represents our texture. All the shader does is use the function texture2D to get the appropriate colour from the texture using the coordinates. Textures traditionally use s and t for their coordinates rather than x and y, and the shader language supports these as aliases; we could just as easily have used vTextureCoord.x and vTextureCoord.y.
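
Incidentally, because vTextureCoord is already a vec2, wrapping it in vec2(...) is redundant; this shorter line would do exactly the same thing:

    gl_FragColor = texture2D(uSampler, vTextureCoord);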

Once we have the colour for the fragment, we’re done! We have a textured object on the screen.

So, that’s it for this time. Now you know all there is to learn from this lesson: how to add textures to 3D objects in WebGL by loading an image, telling WebGL to use it for a texture, giving your object texture coordinates, and using the coordinates and the texture in the shaders.

If you have any questions, comments, or corrections, please do leave a comment below!

Otherwise, check out the next lesson, in which I show how you can get basic key-based input into the JavaScript that animates your 3D scene, so that we can start making it interact with the person viewing the web page. We’ll use that to allow the viewer to change the spin of the cube, to zoom in and out, and to adjust the hints given to WebGL to control the scaling of textures.

<< Lesson 4 | Lesson 6 >>

Acknowledgments: Chris Marrin’s WebKit-only spinning box was a great help when writing this, as was Jacob Seidelin’s port of Chris’ demo to Firefox. As always, I’m deeply in debt to NeHe for his OpenGL tutorial for the script for this lesson.


46 Responses to “WebGL Lesson 5 – introducing textures”

  1. Ilmari Heikkinen says:

    Re: coordinate flipping, OpenGL uses the math-style y-grows-up-from-bottom-left system, while everything else uses y-grows-down-from-top-left.

  2. giles says:

    Thanks! That makes perfect sense, I’ll update the lesson accordingly.

  3. KrisHoood says:

    Why is it that if we use textures, we can’t use colors? Is it possible to use both at once?

  4. giles says:

    You can definitely use textures on one part of your scene and colours on another.

    You could also specify both a texture and a colour for one object, using different shader attributes for the texture coordinates and for the colour. If you did that, you’d need to write shaders that knew how to combine the two, perhaps using the colour to tint the texture; I don’t think it would be all that hard to do.
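
    For example, the fragment shader might look something like this sketch, assuming the vertex shader passes the colour out in a vColor varying as in lesson 4:

    #ifdef GL_ES
    precision highp float;
    #endif

    varying vec2 vTextureCoord;
    varying vec4 vColor;

    uniform sampler2D uSampler;

    void main(void) {
      // Tint the texel with the interpolated vertex colour.
      gl_FragColor = texture2D(uSampler, vTextureCoord) * vColor;
    }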

  5. KrisHoood says:

    Ok. I will try. Thank You.

  6. rotoglup says:

    Note that the preferred way to flip the texture should be to adjust ‘textureCoords’ accordingly, or perhaps to do it in the vertex shader. Having one operation per fragment to handle this is not a very ‘real world’ use :P

  7. giles says:

    @rotoglup — thanks for that! I think that it’s nice to have the texture coords in the same “frame of reference” as the vertex positions (to stretch a metaphor — I mean, to keep things so that +ve y is upwards in both kinds of coords), so I’d like to keep them the same as they are right now. Flipping everything in the vertex shader is a clear improvement, so I may make that change if that’s the best option… however, perhaps the most efficient way, if I want to keep the texture coords as they are, would be to flip the image before I call gl.texImage2D — I’m sure I’ve seen code that does that in OpenGL, but I can’t find anything right now. If you’ve any thoughts on how to do it that way I’d love to hear about it — otherwise I’ll just do yet another retrospective change and move to the vertex shader solution…

  8. George Slavov says:

    For some reason I get awful performance on this demo. It seems to run at no more than 2 frames per second. I’m using the Firefox nightly from December 8, 2009. I get similarly awful performance on the lessons where both lighting and texturing are on. At least those seem to be the uniting characteristics. Just thought you should know. Thanks for your work!

  9. giles says:

    Hi George — thanks for letting me know. Are you using software rendering? I’ve found that MESA has big performance problems once textures are introduced.

  10. Hi,
    I really appreciate your effort in delivering this content on this blog.
    I am experiencing a problem with RGBA images: it seems (both from visual feedback and from looking at the browsers’ source code) that Chrome and Firefox *always* assume RGBA textures are passed with premultiplied alpha, and that the fourth optional parameter of glTexImage2D (the one which flags the source texels as premultiplied alpha) is ignored. Unfortunately, one cannot premultiply the alpha, because this is a lossy procedure whenever Alpha x Channel exceeds 255.
    Am I missing something or is it a bug in current implementations? Any robust workaround?

    Thank you so much,
    Marco.

  11. giles says:

    Hi Marco,

    I wish I could help but I don’t know the answer. My guess would be that the functionality’s just not finished yet, but I could well be wrong. Hopefully someone will give a better answer in response to your post on the WebGL forums (which, for anyone else reading this, is here: http://www.khronos.org/message_boards/viewtopic.php?f=35&t=2320).

    Cheers,

    Giles

  12. Rob says:

    First of all thanks for these brilliant tutorials, very helpful in getting up to speed using webGL!

    About textures: normally in OpenGL you’d handle the flipping (and RGB swapping if necessary) in your texture loader function, where you have access to the pixels. You can’t do that in WebGL, apparently, as we’re using the Image object, and that doesn’t give access to the pixels.

    Firefox however has the imgITools interface, which might just do the trick (but obviously only helps if you’re working exclusively with Firefox).

  13. giles says:

    Thanks Rob! That makes sense, I had a vague memory of seeing some kind of “flip the texture upside down” code in a Python port of the NeHe lessons, but hadn’t been able to find it anywhere. Good to know I’m not imagining things.

    But more importantly, thanks for prodding me on this: I’ve just read the spec for texImage2D, which (the spec, that is) didn’t exist when I first wrote this lesson, and it has the following parameters:

    texImage2D(target, level, image, flipY, asPremultipliedAlpha)

    Presumably the last two default to false, and if I set flipY to true then everything will work! I’ll give that a go and update the lesson accordingly if it does.

  14. giles says:

    Yup, adding the “true” to my calls to texImage2D allows me to get rid of the “1.0 – ” in the fragment shader, which is excellent news. Thanks again for raising this! I’ll update the tutorials now.

  15. giles says:

    Grr, flipY doesn’t work in Safari on the Mac. Backing it out again…

  16. Andy Jackson says:

    Awesome tutorials.
    https://bugs.webkit.org/show_bug.cgi?id=34459
    It looks like Safari for Mac has that resolved.

  17. giles says:

    Thanks, Andy — that does look promising! Unfortunately now that WebKit for the Mac only works in Snow Leopard, I can’t test it myself, so I’ve posted asking whether any 10.6 users out there can confirm the bug is fixed: http://learningwebgl.com/blog/?p=1923

  18. You say this right before a code example: (emphasis mine)

    This has, of course, lost all of the code relating to the pyramid that we had in lesson 4 but have now removed, but a more interesting change is the replacement of the cube’s vertex colour buffer with a new one — the vertex coordinate buffer.

    But since it’s actually a buffer of texture coordinates and your code (correctly) calls it “cubeVertexTextureCoordBuffer”, you probably meant to say:

    This has, of course, lost all of the code relating to the pyramid that we had in lesson 4 but have now removed, but a more interesting change is the replacement of the cube’s vertex colour buffer with a new one — the texture coordinate buffer.

  19. giles says:

    You’re quite right, I did mean that. I’ve fixed it — many thanks for pointing it out!

  20. tribadelics says:

    Hi Giles!!! Can you explain why the example continues working if you delete this line: gl.uniform1i(shaderProgram.samplerUniform, 0);? I tried the same in lesson 7 with light… but it all works too. It’s really weird to me… any comments??? Thanks in advance.

  21. giles says:

    That’s interesting, I’ve no idea! Perhaps the default value for the uniform is zero? I suspect that if that’s the case, it might differ from graphics card to graphics card.

  22. tribadelics says:

    Thanks Giles for your response. Well, finally I have deduced it. Here http://nehe.gamedev.net/data/articles/article.asp?article=21 it says that:

    Textures are not directly passed to GLSL, but we pass the texture unit where our texture is bound to OpenGL.

    Conclusion: if you don’t pass a sampler2D to the fragment shader, it’s the fixed pipeline that renders the texture.

  23. giles says:

    @tribadelics — not sure I understand that…

    Here’s how I think of it: we’ve bound the texture to texture unit zero with these lines:

    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, neheTexture);

    Now, in our GLSL code, we say:

    gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));

    This means “make the fragment colour the value from location vTextureCoord.s, vTextureCoord.t in the texture unit defined by uSampler”.

    Now, if you include the JavaScript code

    gl.uniform1i(shaderProgram.samplerUniform, 0);

    …then you’re explicitly setting things up so that uSampler is zero when you get into the GLSL. If you don’t include that line, then uSampler will have some kind of default value. If the default value is zero (for a particular graphics card — I’ve no idea if default values are defined in the GLSL specification) then you can get away with omitting the code to set the uniform.

    One experiment to try is using texture unit 1 instead of zero. To do this, change the JavaScript to this:

    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, neheTexture);
    gl.uniform1i(shaderProgram.samplerUniform, 1);

    This will have exactly the same effect as the original code, but if you comment out the last line, you’ll see that the texture disappears.

    Hope that all makes sense!

  24. fazeaction says:

    Definitely you are right… not the fixed pipeline at all. I have tried your suggestion with 2 textures and it makes sense now. If you do not set the sampler2D uniform in the fragment shader, it always gets the last active texture, at least on NVIDIA graphics cards. Thanks again!!!!

  25. jojo says:

    Hi guys! This blog is awesome!
    I have a little question about uv coords.
    I’m writing an obj parser, but in the obj format vertices and uv coords aren’t aligned. Can I use an index buffer for the uv coords, or must I align them in the parser?

    Thanks for all! Great work!

  26. giles says:

    @fazeaction — aha, that would make sense.

    @jojo — thanks! Not sure what you mean by “aren’t aligned”…? Do you mean that the third vertex location might not refer to the same vertex as the third texture coord? If so, then yes — you’ll need to align it in the parser :-(

    BTW there’s a basic obj parser in this Google WebGL demo which you might find interesting: https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/sdk/demos/webkit/TeapotPerVertex.html

  27. [...] it to work, I started by stripping Processing.js from my test case.  I went with Learning WebGL lesson 5.  This was my initial test case and my objective at the beginning was to use another canvas as the [...]

  28. john says:

    How would the code change if you had two cubes that use different shader programs? For instance, suppose you had the cube from lesson 4, with its vertex and fragment shaders to handle drawing the colored cube, and another cube drawn beside it that is textured using the vertex and fragment shaders from lesson 5. How do you handle the switching of the shader programs when the two cubes are rendered? Would you use two different shaderProgram variables, e.g. shaderProgram1 and shaderProgram2, initialize them in the initShaders() routine, and then call gl.useProgram(shaderProgram1) when drawing the first cube and gl.useProgram(shaderProgram2) when drawing the textured cube in the drawScene() routine? I was just wondering how this is done in general when you have different objects in a scene that require different shader programs.

  29. john says:

    Here is my implementation with 2 cubes using different shaders. Is this the correct way to implement it or is there a better way to do it?

    http://dl.dropbox.com/u/5095342/X3D/mixedShaders.html

  30. giles says:

    @john — that looks exactly right to me :-)
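
    For anyone else reading along, the heart of it is just calling gl.useProgram before each object’s draw, along these lines (a sketch only; the buffer and program names here are made up):

    gl.useProgram(shaderProgram1);  // lesson 4's colour shaders
    // ...bind cube 1's position, colour and index buffers, set its matrix uniforms...
    gl.drawElements(gl.TRIANGLES, cube1IndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);

    gl.useProgram(shaderProgram2);  // this lesson's texture shaders
    // ...bind cube 2's position, texture-coordinate and index buffers, its texture, and so on...
    gl.drawElements(gl.TRIANGLES, cube2IndexBuffer.numItems, gl.UNSIGNED_SHORT, 0);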

  31. pion says:

    The code as-is works as expected — a spinning textured cube. This is running on a Chrome Canary build.

    I replaced the “setInterval(tick, 15);” with “drawScene();”.

    The background becomes black, but the cube (and its texture) does not show up.

    Why is that?

  32. pion says:

    I looked at the example on http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL#Loading_Images

    image.onload = function() {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, image, true);
    checkGLError();
    draw();
    };

    I modified the lesson05.html to the following:
    neheTexture.image.onload = function() {
    handleLoadedTexture(neheTexture);
    drawScene(); // added this new line
    }

    Now it works/draws without calling setInterval();

  33. giles says:

    That sounds right. The reason you didn’t see it the first time was because you were calling drawScene before the texture was loaded.

    The example creates 24 vertices, while a cube would normally just need 8; some of these vertex positions are identical.

    I’m now trying to import a cube (exported from blender) which defines 8 vertices and 12 faces (triangles). However, it defines a texture/uv coordinate for each face-index (total of 36), not 1 per vertex.

    Do you guys know of a way to tell gl to use a texture coord per face-index instead of per vertex?

  35. WarrenFaith says:

    I read it twice but can’t find a reference in the tutorial to the changes in the initShaders() method. As I am not using GitHub to get the code but making the changes by hand, I missed the changes in initShaders(). They are not mentioned here.

  36. giles says:

    @Joost — in OpenGL ES 2.0 and thus WebGL, a vertex isn’t so much a point in space as it is a bundle of attributes, some of which might (and usually will) describe its location. By that definition (which I agree isn’t necessarily the usual one), if you have N texture coordinates at a single point in space, you need to have N vertices.

    @WarrenFaith — I don’t describe every change, especially in the later lessons. This is because people would wind up having to read large amounts of stuff that basically duplicated earlier lessons. The best way to follow through the code, if you’re not taking it from github, is to go to the example page and do a “View source”.

  37. Kai says:

    The nehe.gif is broken, so when running the program I just got a black screen. After a while I figured out that it was the broken texture file (downloaded from your repository) and replaced it … and it worked like a charm ;)

    Btw: some scenes don’t look right on my GTX480 (latest driver) in Firefox 4 beta 9 – but they look fine in Chrome (latest build). Actually I feel Chrome is the best browser for showing WebGL.

    I am using Dreamweaver CS5 with the HTML5 extension to develop, but the pure WebGL stuff (OpenGL ES 2.0) doesn’t get syntax highlighting (code hints) – anyone know of a really good IDE for JavaScript/WebGL/HTML5?

    And does anybody know of any frameworks for WebGL that resemble the OpenGL desktop syntax and code structure (like GLUT for OpenGL)?

    Thanks, your site is really good! And the tutorials very nice ;)
    Keep it up! :-)

  38. Vik says:

    @ Giles-

    Hi

    Maybe it’s just a naive question, but I would really appreciate it if you could help me here. You said that the texture is normalized, i.e. its coords change to (0, 0) and so on. Now my question is: how did the texture get normalized? I mean, which part or function of the program did that?

    Thanks
    Vik

  39. giles says:

    @Kai — glad you like the tutorials! Have you installed the DirectX runtime as per this post on Vlad Vukicevic’s blog? It might help.

    @Vik — none of the WebGL code did it. What I’m saying is that while the texture image might be 512×256 pixels (or whatever), once you’re in the fragment shader, it’s considered to be 1.0×1.0, and you address it with floating-point numbers. Does that make more sense?

  40. Vik says:

    @Giles- yes totally…. thanks… :)

  41. mel says:

    Hi, thanks for this awesome tutorial!
    I’m sure this is a stupid question, but this lesson only works if I load an image that is exactly 256×256 in dimensions (such as the nehe.gif or other files with those dimensions).

    So why can’t I load, say, a 255×255 image? I’m puzzled.
    If there is a way to load other-sized files, I would very much like to know.
    Thanks.

  42. Q. says:

    Hi,

    Great tutorial and thanks.

    In “gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));”, why don’t you just use “gl_FragColor = texture2D(uSampler, vTextureCoord)” because vTextureCoord is declared as vec2?

    -Q.

  43. giles says:

    @mel — Glad you like the tutorial! Images for textures have to have power-of-two dimensions — that is, each side must be 1px, 2px, 4px, 8px, 16px, 32px, and so on. More here. I should mention that in this tutorial, thanks for the heads-up.
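
    For what it’s worth, the WebGL spec does allow non-power-of-two textures as long as you clamp the wrap mode and avoid mipmapped filters; whether that works for you will depend on how complete your browser’s implementation is. A sketch:

    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);  // any non-mipmap filter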

  44. giles says:

    @Q — glad you like the tutorial too :-) Not sure why I split the texture coordinates out like that — perhaps it was just to showcase the .s- and .t-style addressing.

  45. Stephane says:

    Great lessons!
    Now I have a question. As far as I understand, we define 24 vertices in order to match the 24 texture coordinate pairs.
    Now, what can I do if I want to use a different texture per face?
    Is it possible to do that without changing the way we define the cube (using 24 vertices)?

  46. giles says:

    Hmmm, tricky. I think you’d need to have another per-vertex attribute specifying which texture they used, which you’d then use to select the appropriate sampler in the fragment shader.
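
    Something like this fragment shader sketch is what I have in mind; the vTextureIndex varying (fed from a hypothetical per-vertex attribute) and the two samplers are made-up names:

    #ifdef GL_ES
    precision highp float;
    #endif

    varying vec2 vTextureCoord;
    varying float vTextureIndex;

    uniform sampler2D uSampler0;
    uniform sampler2D uSampler1;

    void main(void) {
      // Choose a sampler based on the index interpolated from the vertices.
      if (vTextureIndex < 0.5) {
        gl_FragColor = texture2D(uSampler0, vTextureCoord);
      } else {
        gl_FragColor = texture2D(uSampler1, vTextureCoord);
      }
    }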
