
In this article, we will be adding a texture to the triangle. This will involve adding new variables to the vertex and fragment shaders, creating and using texture objects, and learning a bit about texture units and texture coordinates.

As with every article, we will be adding to the small library of reusable code in the tdogl namespace. This article introduces two new classes: tdogl::Bitmap and tdogl::Texture. These two new classes will allow us to load an image from a jpg, png, or bmp file into video memory, for use in the shaders. Also, the tdogl::Program class has some new methods for setting shader variables.

Accessing The Code

Download all lessons as a zip from here: https://github.com/tomdalling/opengl-series/archive/master.zip

Setup instructions are available in the first article: Getting Started in Xcode, Visual C++, and Linux.

This article builds on the code from the previous article.

All the code in this series of articles is available from github: https://github.com/tomdalling/opengl-series. You can download a zip of all the files from that page, or you can clone the repository if you are familiar with git. The code for this article can be found in the windows/02_textures, osx/02_textures, and linux/02_textures directories.

Uniform vs Attribute Shader Variables

All the variables in the previous article were attribute variables. In this article we will be introducing the other kind of variable: uniform variables.

There are two kinds of shader variables: uniform variables and attribute variables. Attribute variables can have a different value for each vertex. Uniform variables keep the same value for multiple vertices. For example, if you want to set a colour for a whole triangle, you would use a uniform variable. If you want each corner of a triangle to be a different color, you would use an attribute variable. I will just refer to them as “uniforms” and “attributes” from now on.

Uniforms can be accessed from any shader, but attributes must enter the vertex shader first, not the fragment shader. The vertex shader can pass the value into the fragment shader if necessary. This is because uniforms are like constants – they don’t change so they can be accessed from any shader. However, attributes are not constant. The vertex shader can change the value of an attribute before it gets to the fragment shader. The output of the vertex shader is the input to the fragment shader.

To set the value of a uniform, we use one of the glUniform* functions. To set the value of an attribute, we store the values in a VBO and send them to the shader with a VAO and glVertexAttribPointer like we saw in the previous article. It is also possible to set the value of an attribute using one of the glVertexAttrib* functions, if you are not storing the values in a VBO.

Textures

Textures are basically 2D images that you can apply to your 3D objects. They have other uses, but displaying a 2D image on 3D geometry is the most common use. There are 1D, 2D and 3D textures, but we will only be looking at 2D textures in this article. For a more in-depth look at textures, see the Textures are not Pictures chapter of the Learning Modern 3D Graphics Programming book.

Textures live in video memory. That is, you upload the data for the texture to the graphics card before you can use it. This is similar to how we saw VBOs working in the previous article – VBOs are used to store data in video memory before that data gets used.

The pixel width and height of a texture should be a power of two, for example: 16, 32, 64, 128, 256, 512. In this article we will use the 256×256 image “hazard.png” as a texture, which is shown below.

We will use the tdogl::Bitmap class to load the raw pixel data from “hazard.png” into memory, with the help of stb_image. Then we will use tdogl::Texture to upload the raw pixel data into an OpenGL texture object. Fortunately, texture creation in OpenGL has not really changed since it was first introduced, so there are plenty of good articles online that will show you how to create a texture in OpenGL. The way that texture coordinates are sent to the graphics card has changed, but the creation of the texture is the same.

Below is the constructor code for tdogl::Texture, which handles the creation of an OpenGL texture.

Texture::Texture(const Bitmap& bitmap, GLint minMagFiler, GLint wrapMode) :
    _originalWidth((GLfloat)bitmap.width()),
    _originalHeight((GLfloat)bitmap.height())
{
    glGenTextures(1, &_object);
    glBindTexture(GL_TEXTURE_2D, _object);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, minMagFiler);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, minMagFiler);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, wrapMode);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, wrapMode);
    glTexImage2D(GL_TEXTURE_2D,
                 0, 
                 TextureFormatForBitmapFormat(bitmap.format()),
                 (GLsizei)bitmap.width(), 
                 (GLsizei)bitmap.height(),
                 0, 
                 TextureFormatForBitmapFormat(bitmap.format()), 
                 GL_UNSIGNED_BYTE, 
                 bitmap.pixelBuffer());
    glBindTexture(GL_TEXTURE_2D, 0);
}

Texture Coordinates

Texture coordinates are, unsurprisingly, coordinates on a texture. The strange thing about texture coordinates is that they are not in pixels. They range from zero to one, where (0,0) is the bottom left and (1,1) is the top right. If you load the image into OpenGL upside down, then (0,0) will be the top left, not the bottom left. To turn pixel coordinates into texture coordinates, you must divide by the width and height of the texture. For example, in a 256×256 image, the pixel coordinates (128,256) become (0.5, 1) in texture coordinates.

Texture coordinates are commonly referred to as UV coordinates. You could call them XY coordinates, but XYZ is commonly used to represent a vertex, and we don’t want to confuse texture coordinates with vertex coordinates.

Texture Image Units

Texture image units, or just “texture units” for short, are a slightly weird part of OpenGL. You can’t just send a texture straight to a shader. First you bind the texture to a texture unit, then you send the index of the texture unit to the shader.

There are a limited number of texture units. On less-powerful devices such as phones, there might only be two texture units. In that case, even though we could have dozens of textures, we could only use two of them at the same time in the shaders. We will only be using one texture in this article, so we only need one texture unit, but it is possible to blend multiple different textures together inside the fragment shader.

Implementing Textures

First, let’s make a new global for the texture.

tdogl::Texture* gTexture = NULL;

We’ll make a new function to load “hazard.png” into the global. This gets called from the AppMain function.

static void LoadTexture() {
    tdogl::Bitmap bmp = tdogl::Bitmap::bitmapFromFile(ResourcePath("hazard.png"));
    bmp.flipVertically();
    gTexture = new tdogl::Texture(bmp);
}

Next we will give each vertex of the triangle a texture coordinate. If you compare the UV coords to the image above, you will see that the coordinates represent (middle,top), (left,bottom), and (right,bottom) in that order.

GLfloat vertexData[] = {
    //  X     Y     Z       U     V
     0.0f, 0.8f, 0.0f,   0.5f, 1.0f,
    -0.8f,-0.8f, 0.0f,   0.0f, 0.0f,
     0.8f,-0.8f, 0.0f,   1.0f, 0.0f,
};

Now we need to modify the fragment shader so that it takes a texture and a texture coordinate as input. The new fragment shader looks like this:

#version 150
uniform sampler2D tex; //this is the texture
in vec2 fragTexCoord; //this is the texture coord
out vec4 finalColor; //this is the output color of the pixel

void main() {
    finalColor = texture(tex, fragTexCoord);
}

The uniform keyword indicates that tex is a uniform variable. The texture is a uniform because all the vertices of the triangle will have the same texture. sampler2D is the variable type, indicating that it holds a 2D texture.

The fragTexCoord is an attribute variable, because each vertex of the triangle will have a different texture coordinate.

The texture function finds the color of the pixel at the given texture coordinate. In older versions of GLSL, you would use the texture2D function to do this.

We can’t pass an attribute straight into the fragment shader, because attributes must first go through the vertex shader. Here is the modified vertex shader:

#version 150
in vec3 vert;
in vec2 vertTexCoord;
out vec2 fragTexCoord;

void main() {
    // Pass the tex coord straight through to the fragment shader
    fragTexCoord = vertTexCoord;
    
    gl_Position = vec4(vert, 1);
}

This vertex shader takes vertTexCoord as input, and passes it straight into the fragTexCoord attribute of the fragment shader without modifying it.

The shaders now have two variables we need to set: the vertTexCoord attribute and tex uniform. Let’s start by setting the tex variable. Open up main.cpp (main.mm on OSX) and find the Render() function. We will set the tex uniform just before we draw the triangle:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gTexture->object());
gProgram->setUniform("tex", 0); //set to 0 because the texture is bound to GL_TEXTURE0

The texture can’t be used until it is bound to a texture unit. glActiveTexture tells OpenGL which texture unit we want to use. GL_TEXTURE0 is the first texture unit, so we will just use that.

Next, we use glBindTexture to bind our texture into the active texture unit.

Then we set the tex uniform of the shaders to the index of the texture unit. We used texture unit zero, so we set the tex uniform to the integer value 0. The setUniform method just calls the glUniform1i function.

The final step is to get the texture coordinates into the vertTexCoord attribute. To do this, we will modify the VAO inside the LoadTriangle() function. This is what the code used to look like:

// Put the three triangle vertices into the VBO
GLfloat vertexData[] = {
    //  X     Y     Z
     0.0f, 0.8f, 0.0f,
    -0.8f,-0.8f, 0.0f,
     0.8f,-0.8f, 0.0f
};

// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 0, NULL);

And this is what we need to change it to:

// Put the three triangle vertices (XYZ) and texture coordinates (UV) into the VBO
GLfloat vertexData[] = {
    //  X     Y     Z       U     V
     0.0f, 0.8f, 0.0f,   0.5f, 1.0f,
    -0.8f,-0.8f, 0.0f,   0.0f, 0.0f,
     0.8f,-0.8f, 0.0f,   1.0f, 0.0f,
};

// connect the xyz to the "vert" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vert"));
glVertexAttribPointer(gProgram->attrib("vert"), 3, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), NULL);
    
// connect the uv coords to the "vertTexCoord" attribute of the vertex shader
glEnableVertexAttribArray(gProgram->attrib("vertTexCoord"));
glVertexAttribPointer(gProgram->attrib("vertTexCoord"), 2, GL_FLOAT, GL_TRUE,  5*sizeof(GLfloat), (const GLvoid*)(3 * sizeof(GLfloat)));

We’ve added a second call to glVertexAttribPointer, and we’ve also modified the first call. The most important arguments to look at are the last two.

The second last argument to both glVertexAttribPointer calls is 5*sizeof(GLfloat). This is the “stride” argument. This argument wants to know how many bytes are between the start of one value, and the start of the next value. In both cases, each value is five GLfloats away from the next value. For example, if we start at the “X” value and count forward five values, we will be at the next “X” value. The same is true if we start at a “U” value, and count forward five. This argument is in bytes, not floats, so we must multiply the number of floats by the number of bytes per float.

The last argument to glVertexAttribPointer is the “offset” argument. This argument wants to know how many bytes from the start is the first value. The first XYZ value is right at the beginning, so the offset is set to NULL, which means “zero bytes from the start”. The first UV value is not at the beginning – it is three floats away from the beginning. Once again, this argument is in bytes, not floats, so we must multiply the number of floats by the number of bytes per float. We must cast the number of bytes to a const GLvoid*, because in older versions of OpenGL this argument was a pointer to vertex data in application memory, rather than the byte offset it is now.

Now, when you run the program, you should see a textured triangle like the one shown at the top of this article.

Future Article Sneak Peek

In the next article we will learn a bit about matrix math, and use matrices to spin a cube, move the camera, and add a perspective projection. We will also learn about depth buffering, and how a typical program does time-based updates such as animation.

Comments

  • Casey

    I like the simplicity and clarity of your articles. Your code samples are well explained and make me understand the content. Can’t wait for the next one!

  • http://www.tomdalling.com/ Tom Dalling

    Thanks!

  • Christian

    I enjoy your tutorials. Great stuff! bookmarked and waiting for more.

  • Mecahi

    Great tutorial. I like it when basic and important stuff is explained with simplicity. Very useful both for a new beginner and for ones like me who want to fresh up his/her knowledge. Keep them coming please.

  • Nope

    Great article! When do you think the next will be up? 

  • http://www.tomdalling.com/ Tom Dalling

    Hopefully I will publish the next article today. The code is done, but the article is taking a while.

  • michael

    Your tutorial is nice. I would like to suggest something that occurred to me while reading part 1 and 2:
    in part 1, a vertex is defined as vec4. also in this part, you say a vertex is represented by X,Y,Z,W but nowhere do you explain what the 4th component is used for. i was puzzled at first. you might place a short explanation cause i think one usually expects 3 components…

  • http://www.tomdalling.com/ Tom Dalling

    Yeah, there is a little bit about W in the next article, but I might have to move that back to an earlier article.

  • Peter Fors

    I have a little suggestion for a change to clear some things up for new people,

    you wrote: There are a limited number of texture units, which means there are a limited number of textures that your shaders can use.

    My suggestion is that you add “at any given time.” at the end, so people understand that it’s not limited. Love the tutorials, I’m coming here from OpenGL 2.x, I really need to learn using 3.x+ :) Thanks for your great work!

  • http://www.tomdalling.com/ Tom Dalling

    Thanks! Good suggestion. I’ll reword it a bit.

  • Pingback: opengl | Annotary

  • Peter

    Hi Tom,

    I really like your articles – they are the best I’ve seen so far!
    I use them as a base for my project but I’ve got a problem with multiple textures. When I use 1 texture everything seems ok, but when I load and use a second texture it overrides the previous texture. Any idea why that is happening? If you have time I can give you my source code, just tell me :)

    Regards,
    Peter

  • http://www.tomdalling.com/ Tom Dalling

    Hi Peter,

    It could be caused by lots of things. If you put your project into a github repository and send me a link to it, I’ll have a quick look at it.

  • http://www.tomdalling.com/ Tom Dalling

    I’ve sent you a pull request on github with a couple of changes, and some notes in the commit messages.

  • Peter

    Thank you very much Tom! I really appreciate your help, because there are not many people who would agree to do this!
    For the future I plan to implement ray tracing, height maps, skybox and model loading. I hope you can make articles on some of these topics because you are doing it great so far !

  • ralph

    Can one use an index based vbo with textures?
    How would one, for example, create a cube with a different symbol on each face using a VAO, vertex buffer, normal buffer, vertex index list and a single texture?
    Can a texture coordinate be assigned to each index instead of each vertex?

    Other examples on the net do not have an index array for the object and duplicate the vertex data.

  • http://www.tomdalling.com/ Tom Dalling

    The vertex data has to be duplicated because you have to set texture coords per-vertex – you can’t set them per-index. I don’t think it’s possible to put all six sides of a cube into a single 2D texture without having gaps between any of the corners. You could put all the faces into a single texture, but it would have to be a GL_TEXTURE_CUBE_MAP or a GL_TEXTURE_2D_ARRAY, and the texture coordinates would be 3D instead of 2D. Even if you did that, the texture coordinates would still be different for each face, so you would still have to duplicate the vertices.

    It’s all compatible with index-based drawing, but textured cubes just don’t have any shared vertices because of the texture coordinates.

  • ralph

    Our concern was with the memory usage by the duplicated vertices data. Is there a good reference, guidance, web page for how to reduce the video GPU memory used for vertex, normal, texture coordinates, and extra non-visual vertex attributes (fluid specific data)?
    Our application has a much more complicated geometry consisting of a fixed set of overlapping/concentric 3d pipeline tubes; and a set of time steps having fluid data samples moving through the cylinders.
    This will have around 400,000 vertices each with 3 non-visual 4 byte floating point attributes on an OpenGL 3.x+ environment.
    Thank you for the tutorials.

  • http://www.tomdalling.com/ Tom Dalling

    Well, textured cubes are an unusual case due to the texture coords. Most models (including cylinders) will have fully duplicated vertices, so you will reduce memory by using indexed drawing in those cases.

    Other than using indexed drawing, and reducing bytes-per-vertex, I’m not sure how to reduce memory usage further. You could trying using compressed textures: https://www.opengl.org/wiki/Image_Format#Compressed_formats

  • Sajjadul

    Hi Tom,

    Thanks for these tutorials. I am using couple of your classes for one of my tiny projects. I am trying create a rippling effect over the mesh plane and i am using the glfwGetTime() and the value is sent to the vertex shader as uniform. Unfortunately i am not getting any rippling effect at all.

    I think that program->setUniform("time", time); should be fine enough?

    Any idea?

  • Sajjadul

    The issue is solved. Sorry for polluting the forum with the trivial issue. You can delete this post.

  • http://www.tomdalling.com/ Tom Dalling

    Don’t worry about it! I’m glad you got it working.