Feb 13, 2014

Future of Intro to Shader Programming Book

I have been posting the English version of my Intro to Shader Programming book on this blog. I intended to publish it some years ago, but more important things kept me occupied. I thought it was a bit too late to publish this book, but I got lucky this time.

Luckily, a friend of mine who owns an e-book-only publisher wants to publish it. I'm one of those people who want all programming books available exclusively online, removing the need for any non-standard, paper-specific layout design. So I decided to give it to him. This means I can't post all the chapters on my blog, but I managed to negotiate a bit. As a result, I will be posting up to Chapter 6 on this blog, which is about half of the book. I hope this new "DLC scheme" won't piss you off, and that people who enjoyed it on my blog will buy it.

I still have the rights to post the rest of the chapters on my blog. But as a gesture of goodwill toward the publisher, I won't post them unless the book becomes unavailable for some reason in the future.

The book will be sold on Amazon, iBooks and Google Play Books, starting sometime in March or April 2014.

Feb 12, 2014

Learning New Knowledge is Easy

Learning new knowledge is easy, but changing old habits is hard.

Feb 11, 2014

[Intro to Shader] 04.Basic Lighting Shader - Part 2


Where to buy:
Amazon
iBooks

Source Code: GitHub / Zip

Chapter 4: Basic Lighting Shaders - Part 2

Specular Light
Background
Specular light is different from diffuse light in that it reflects in only one direction, and the angle of incidence is the same as the angle of reflection. So if you want to observe specular light in action, you have to look at the surface from the direction the reflected rays point toward. Have you ever turned your head away because the sun's glare was too bright on your monitor? If you tilt the monitor a bit, it becomes bearable, right? That is specular light.

Then let’s add specular light to Figure 4.2, which had only diffuse light.

Figure 4.8 Diffuse and specular light

Just like diffuse light, there are many different specular lighting models. In this book, we will use the Phong model, which is widely used in video games. To calculate specular lighting, the Phong model finds the cosine of the angle between the reflect vector (the light vector reflected off the surface) and the camera vector (a vector from the camera position to the current position), and raises the result to the power of some exponent. Look at the picture below for a better understanding.

Figure 4.9 An example of specular lighting

Finding the cosine of the angle between the reflect vector, R, and the camera vector, V, is no different from what we did for diffuse lighting, except that R and V are used instead of the normal and light vectors. By the way, why do we raise the cosine to the power of an exponent? You will find the answer in Figure 4.10.

Figure 4.10 As the exponent gets bigger, the cosine graph falls faster.
You see the graph falls faster as the exponent grows, right? If you observe specular light in the real world, you will notice that the radius of the highlight is very tight, unlike diffuse light, which is rather wide. This is why we use the power function: to mimic that tightness.[1] Then, what exponent should we use? It depends on the material of the surface. Rougher surfaces have a less tight specular highlight, so a smaller exponent should be used. As a general rule, start from an exponent of 20 and experiment with bigger and smaller numbers.
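
In shader terms, the whole Phong specular term boils down to something like the sketch below (illustrative only; R and V stand for the unit-length reflect and camera vectors from Figure 4.9, and the exact sign conventions are sorted out when we write the real shader later in this chapter):

  float cosAngle = saturate(dot(R, V));    // cosine of the angle between R and V
  float specular = pow(cosAngle, 20.0f);   // 20 is the exponent; bigger means a tighter highlight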

Now let’s write some shader code.

Initial Step-by-Step Setup
Let’s add specular light to the diffuse light shader we wrote earlier in this chapter. After all, we need both diffuse and specular to get a “correct” light effect.

What were the new things added in Figure 4.9? They were the reflect and camera vectors, right? The reflect vector is the light vector reflected off the surface, and the angle between the normal and light vectors is the same as the angle between the normal and reflect vectors. This means the reflect vector can be found from information we already have. Then, what about the camera vector? Just like how we found the light vector, we can draw a line from the camera position to the current position, right? Therefore, the camera position will be a global variable. Go to RenderMonkey, and right-click on the Lighting effect to add a new float4 variable. gWorldCameraPosition should be a good name. Now right-click on it and assign the ViewPosition semantic.

Now we have everything we need. Let's look at the vertex shader.

Vertex Shader
Just like before, the full source code is listed first.

float4x4 gWorldMatrix;
float4x4 gViewMatrix;
float4x4 gProjectionMatrix;

float4 gWorldLightPosition;
float4 gWorldCameraPosition;

struct VS_INPUT 
{
   float4 mPosition : POSITION;
   float3 mNormal: NORMAL;
};

struct VS_OUTPUT 
{
  float4 mPosition : POSITION;
  float3 mDiffuse : TEXCOORD1;
  float3 mViewDir: TEXCOORD2;
  float3 mReflection: TEXCOORD3;
};

VS_OUTPUT vs_main( VS_INPUT Input )
{
  VS_OUTPUT Output;

  Output.mPosition = mul( Input.mPosition, gWorldMatrix );

  float3 lightDir = Output.mPosition.xyz - gWorldLightPosition.xyz;
  float3 lightDirUnnorm = lightDir;
  lightDir = normalize(lightDir);
   
  Output.mViewDir = Output.mPosition.xyz - gWorldCameraPosition.xyz;
   
  Output.mPosition = mul( Output.mPosition, gViewMatrix );
  Output.mPosition = mul( Output.mPosition, gProjectionMatrix );
   
  float3 worldNormal = mul( Input.mNormal, (float3x3)gWorldMatrix );
  worldNormal = normalize(worldNormal);

  Output.mDiffuse = dot(-lightDir, worldNormal);
  Output.mReflection = reflect(lightDirUnnorm, worldNormal);

  return Output;
}

Global Variables and Input Data of Vertex Shader
Let's take a look at the input data to the vertex shader. Do we need any extra vertex information? I can't think of any, so there must be none. :P Let's just use the same input structure we used earlier in this chapter.

Then how about global variables? We have to declare gWorldCameraPosition that we just added to the RenderMonkey project, right? Add the following line:

float4 gWorldCameraPosition;

Output Data from Vertex Shader
Let's look at the vertex shader's output data. As we did with diffuse light, can we calculate the specular light in the vertex shader and pass the result to the pixel shader? Unfortunately, no. To calculate specular light, we have to raise the cosine value to the power of an exponent, but doing so before the interpolation step produces a wrong result. This is because the power function is not linear: for example, pow(0.5, 20) is practically 0, while averaging pow(0, 20) and pow(1, 20) gives 0.5. This means we need to calculate specular light in the pixel shader, so we will find the two direction vectors, R and V, in the vertex shader and pass them to the pixel shader. Please add the following lines to the VS_OUTPUT structure.

   float3 mViewDir: TEXCOORD2;
   float3 mReflection: TEXCOORD3;

Vertex Shader Function
K, now let's find these two vectors. Remember how we can find the camera vector? It's simple. Just draw a line from the camera position to the current position. This is no different from finding the light vector at all. Let's add the camera vector code right below where we calculated the light vector.

  Output.mViewDir = Output.mPosition.xyz - gWorldCameraPosition.xyz;

Now it's time to find the reflect vector. Then what is the math formula for vector reflection? Guess what? I don't even remember! But don't worry. There is another magic HLSL function for this. It's called reflect(). This function takes two parameters: the light vector and the surface normal vector. Add the following line before the Output structure is returned.

  Output.mReflection = reflect(lightDirUnnorm, worldNormal);

In the above code, we used a variable that is not defined yet: lightDirUnnorm. This is the unnormalized light vector, which we don't have yet. It is best not to normalize vectors that will be passed to the pixel shader, to avoid visual artifacts on large triangles. That's why we did not normalize mViewDir, either. You might still see some artifacts because we calculated the reflection vector in the vertex shader: in rare cases, the reflection vector can become 0 during interpolation. If you see this symptom, calculate the reflection vector in the pixel shader instead. Anyway, add the following line right below where lightDir was defined to remember the unnormalized light vector:

  float3 lightDirUnnorm = lightDir;

Now that we have found both vectors we need, there is nothing more to do in the vertex shader.
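
By the way, if you are curious what reflect() actually did for us, it boils down to a one-liner. Here is a sketch reusing the variable names from the shader above:

  // reflect(i, n) returns i - 2 * n * dot(i, n), where n is assumed to be unit length
  float3 manualReflection = lightDirUnnorm - 2.0f * worldNormal * dot(worldNormal, lightDirUnnorm);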

Pixel Shader
Let’s see the full pixel shader code first.

struct PS_INPUT
{
  float3 mDiffuse : TEXCOORD1;
  float3 mViewDir: TEXCOORD2;
  float3 mReflection: TEXCOORD3;
};

float4 ps_main(PS_INPUT Input) : COLOR
{
  float3 diffuse = saturate(Input.mDiffuse);
  
  float3 reflection = normalize(Input.mReflection);
  float3 viewDir = normalize(Input.mViewDir); 
  float3 specular = 0;
  if ( diffuse.x > 0 )
  {
    specular = saturate(dot(reflection, -viewDir ));
    specular = pow(specular, 20.0f);
  }

  float3 ambient = float3(0.1f, 0.1f, 0.1f);
  
  return float4(ambient + diffuse + specular, 1);
}

First, add the following two vectors to the PS_INPUT structure. These are exactly the same as what we added to the VS_OUTPUT structure.

  float3 mViewDir: TEXCOORD2;
  float3 mReflection: TEXCOORD3;

We will add some new code right after where we calculated diffuse lighting earlier in this chapter. First, normalize mReflection and mViewDir.

  float3 reflection = normalize(Input.mReflection);
  float3 viewDir = normalize(Input.mViewDir);

Then, find the dot product of these two vectors and raise it to the 20th power.

  float3 specular = 0;
  if ( diffuse.x > 0 )
  {
    specular = saturate(dot(reflection, -viewDir ));
    specular = pow(specular, 20.0f);
  }

In the above code, we calculate specular light only when the diffuse light is greater than 0%. That is because if there is no diffuse light, no light is hitting the surface, so specular light cannot exist there, either. Also, you must have noticed that -viewDir is used when calculating the dot product, right? As with diffuse light, the two vectors' tails must meet to calculate specular light correctly.

Also please note that pow() is used to raise the value to the 20th power. The exponent 20 would be different for different objects.[2]  So declaring it as a global float variable would be a good idea if you need different specular tightness for different objects. I will leave this task to readers.
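
If you want to try it, a minimal sketch would look like this (gSpecularPower is just a name I made up; you would also add a matching float variable in RenderMonkey, or set it from your framework):

// declared next to the other global variables (set it to 20.0 to match the current look)
float gSpecularPower;

  // then, inside ps_main(), instead of the hard-coded 20:
  specular = pow(specular, gSpecularPower);
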
Now it’s time to return the result. Let’s return only specular light first. Replace the return statement with the following line.

  return float4(specular, 1);

Once you compile and run the shader, you will see specular light, as shown in Figure 4.11.

Figure 4.11 Specular light has a much stronger and tighter highlight than diffuse light
Now you know what specular light looks like. If we add diffuse light to this, the result will be more complete. Please change the return statement like this:

  return float4(diffuse + specular, 1);

There are cases where the sum of diffuse and specular becomes bigger than 1. Luckily, you don't need to worry about this because the result is automatically clamped to 1.[3]

If you compile both the vertex and pixel shaders and look at the preview window, you will see a nice-looking sphere with both diffuse and specular light.

Figure 4.12 Diffuse + Specular

This is already pretty good, but the bottom-left part of the sphere is too dark. In fact, it is almost invisible. As mentioned before, indirect light usually illuminates these dark areas in the real world. So why don't we just add a simple ambient light to brighten the dark area? Let's declare the ambient light as 10%.

  float3 ambient = float3(0.1f, 0.1f, 0.1f);

Then, add this ambient amount to the final return value.

  return float4(ambient + diffuse + specular, 1);

After this, you will see the result like Figure 4.13.

Figure 4.13 Ambient + Diffuse + Specular


(Optional) DirectX Framework
This is an optional section for readers who want to use shaders in a C++ DirectX framework.

First, make a copy of the framework used in Chapter 3 and save it into a new folder. Next, save the shader and 3D model that we used in RenderMonkey into Lighting.fx and Sphere.x files, respectively, so that they can be used in the DirectX framework.

Then, open the solution file in Visual C++. We will look at the global variables first. Since we don’t use any texture in this chapter, delete the texture variable declared in the last chapter. Its name was gpEarthDM. Now change the name of the shader variable from gpTextureMappingShader to gpLightingShader.
Now it is time to declare new variables for the light and camera positions. Both of them are in the world space. First, we will reuse the same light position used in RenderMonkey.

// world position of the light
D3DXVECTOR4 gWorldLightPosition(500.0f, 500.0f, -500.0f, 1.0f);

For the camera position, we are using the same values defined in RenderScene() function in the last chapter.

// world position of the camera
D3DXVECTOR4 gWorldCameraPosition(0.0f, 0.0f, -200.0f, 1.0f);

Now go to CleanUp() function. Since gpEarthDM texture is not used anymore, delete the code which was releasing the texture.

Next up is the LoadAssets() function. Again, delete the code that loaded the gpEarthDM texture. Then change the shader file name to Lighting.fx. Don't forget to change the variable name from gpTextureMappingShader to gpLightingShader.

  // loading textures

  // loading shaders
  gpLightingShader = LoadShader("Lighting.fx");
  if (!gpLightingShader)
  {
    return false;
  }

Lastly, we will look at the RenderScene() function. First, find all the instances of gpTextureMappingShader and replace them with gpLightingShader. Now let's look at the code that constructs the view matrix. There was a variable named vEyePt that we used to make the view matrix, right? This variable's value is the same as gWorldCameraPosition, so we will reuse it.

Change the code below

  D3DXVECTOR3 vEyePt( 0.0f, 0.0f, -200.0f );

to this:

  D3DXVECTOR3 vEyePt( gWorldCameraPosition.x, gWorldCameraPosition.y,
    gWorldCameraPosition.z );

Now delete the gpLightingShader->SetTexture() call. The shader in this chapter does not use a texture, so we don't need this code. Then, pass the light and camera positions to the shader. Since the data type is D3DXVECTOR4, we will call the SetVector() function.

  gpLightingShader->SetVector("gWorldLightPosition", &gWorldLightPosition);
  gpLightingShader->SetVector("gWorldCameraPosition", &gWorldCameraPosition);

Now compile and run the program. You can see the same visual that you saw in RenderMonkey, right?

Other Lighting Techniques
The most common lighting technique in computer games is still Lambert + Phong, but more and more games are using more advanced lighting techniques. For readers who want to learn more, here are some of them:

  • Blinn-Phong: a technique very similar to Phong, but it uses a half vector instead of the reflect vector (see the sketch after this list).
  • Oren-Nayar: a diffuse lighting technique that takes the roughness of a surface into account.
  • Cook-Torrance: a specular lighting technique that takes the roughness of a surface into account.
  • Spherical Harmonics Lighting: once indirect light is preprocessed offline, it can be applied in real-time.
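
To give you a quick taste, here is a rough Blinn-Phong sketch (illustrative only, not one of this book's shaders; toLight and toCamera are unit vectors pointing from the surface toward the light and the camera, and worldNormal is the unit surface normal):

  float3 halfVec  = normalize(toLight + toCamera);                   // the "half vector" between the two directions
  float  specular = pow(saturate(dot(worldNormal, halfVec)), 20.0f); // compare with Phong's dot(R, V)

As you can see, it only swaps which pair of vectors feeds the dot product, which is why it is described as very similar to Phong.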

Summary
A quick summary of what we learned in this chapter:

  • Both the Lambert and Phong models use the cosine function.
  • The Phong specular lighting model uses the pow() function.
  • Once you make a vector's length 1, a dot product can replace the cosine.
  • If the same calculation can be done in either the vertex or the pixel shader, doing it in the vertex shader is the better choice.
  • There are more realistic but more complicated techniques, and some of them are already used in recent computer games.

With the lighting shader complete, we have now learned all the basic shaders. From here, we will mix and match what we have learned so far to implement more practical techniques. So if there is anything you are unsure about from Chapters 1 through 4, please review it before moving on to Chapter 5.

------
Footnotes:

  1. This was invented as a hack with no physical basis, but it is still used a lot in games.
  2. Higher exponents produce tighter specular highlights. Experiment with different numbers.
  3. This is because our back buffer format is 8 bits per channel. If a floating-point texture is used, values bigger than 1 can be stored, as well.


Next Chapter


Feb 5, 2014

[Intro to Shader] 04.Basic Lighting Shader - Part 1


Where to buy:
Amazon
iBooks

Source Code: GitHub / Zip

Chapter 4: Basic Lighting Shaders - Part 1

New HLSL in this chapter

  • NORMAL: a shader semantic used to retrieve normal data from a vertex buffer
  • normalize(): normalizes a vector
  • dot(): dot product function
  • saturate(): clamps a value to [0, 1] range
  • reflect(): vector reflection function
  • pow(): raises base to the power of exponent

New math in this chapter

  • dot product: can be used to find the cosine value of an angle quickly
  • normalization: converts a vector to a unit vector, which has a length of 1

If there is no light, we can see nothing. It sounds very obvious, but we often forget about it. For example, if you go into a room with no windows and close the door, you cannot see anything. No matter how long you stay in the dark, you can't see a thing… well, unless there is a gap under the door or something. The reason why we keep forgetting this obvious fact is that it is really hard to find a completely dark place in the real world. Why is that? It's because light reflects off objects endlessly, and the reflected light eventually reaches our eyes. This type of light is called indirect light. On the other hand, direct light is light coming directly from a light source. An example is shown in Figure 4.1.

Figure 4.1 An example of direct and indirect light
Between direct and indirect light, which one would be easier to calculate? As the above picture hints, the answer is direct light. Indirect light goes through multiple reflections, so it is inherently harder to calculate. One method of calculating indirect light is a technique called ray tracing. Readers who are interested in 3D graphics have probably heard of it, but this technique is still not widely used in computer games due to hardware limitations.[1] Therefore, most real-time 3D applications, including computer games, still calculate only direct light "properly" and try to mimic indirect light. That is why this book covers only direct light.[2] By the way, the lighting techniques covered in this chapter are still widely used in most games, so make sure you understand them very well.

In computer graphics, we like to think light is made of two major components: diffuse and specular light, so we will look at these two separately in this chapter.

Diffuse Light
Background
Although most objects don't emit light by themselves, we can still see them because light emitted from other objects, such as the sun, reflects off them. When this happens, some of the light is reflected evenly in all directions. We call this diffuse light. Have you ever wondered why an object's color and brightness do not change regardless of where you look at it from? It is because of diffuse light, which is reflected evenly in all directions. If the light were reflected in only one direction[3], you would be able to recognize the object only from a certain direction.

By the way, a rougher surface usually reflects more diffuse light.[4]

Maybe a drawing will help.

Figure 4.2 Diffuse lighting

One thing that I didn't show in Figure 4.2 is specular light, which we will learn about shortly. Don't worry about it for now; just remember that part of the incoming light becomes diffuse light, and another part becomes specular light.

Well, then how do we calculate diffuse lighting? As one can guess, there are various diffuse lighting models created by many great mathematicians. Out of these, we will learn only one simple model, called the Lambert diffuse lighting model, which happens to be very popular in computer games. The Lambert model, created by a mathematician named Johann Heinrich Lambert, says the amount of diffuse light at a point on a surface is the same as the cosine of the angle between the surface normal[5] and the incoming light ray. With that in mind, let's observe the cosine graph shown in Figure 4.3.

Figure 4.3 A graph showing y = cos(x) function
From the above graph, you can see that the result, or the value on the y-axis, is 1 when the angle is 0. As the angle grows bigger, the result gets smaller until it becomes 0 at 90 degrees. If we go further, the result even becomes negative. With this observation in mind, let's look at Figure 4.4, which shows what happens in the real world depending on the angle of the incoming light rays.

Figure 4.4 Various angles between different light and normal vectors
Can you guess when the surface would be lit brightest? Yes, it is when the sun reaches its highest position in the sky. (Case a) As the sun gets lower, the surface gradually gets darker. (Case b) And when the sun finally goes down below the horizon, the surface becomes completely dark. (Case c) Then, what happens after sunset? The surface remains dark because it is not getting any light. (Case d) Now let's turn this observation into a graph. The angle between the surface normal and the sun goes on the x-axis, and the brightness of the surface goes on the y-axis. The range of the y-axis is [0, 1]: 0 is when the surface is darkest, and 1 is when the surface is brightest.

Figure 4.5 A graph showing our observation

The reason why I put question marks between -90 and 90 degrees in the above figure is that it is still unknown how fast the surface darkens as the angle moves away from 0. Now let's compare this figure to Figure 4.3. If you clamp any negative values to 0 in Figure 4.3, it looks almost identical to our current figure, right? The only difference is the exact falloff speed between -90 and 90. Then, can we just believe that uncle Lambert derived this cosine formula after thorough observation? Yes, at least I'd love to! :P

Then we should be able to calculate diffuse light with a cosine function if we use the Lambert model! However, cosine is not a cheap function, so calling it all the time in a pixel shader makes me feel icky. Is there any alternative? Yes. If you flip through your math book, you will find a section where it says a dot product can replace cosine… well, only under certain conditions.

θ = angle between A and B
| A | = length of vector A
| B | = length of vector B
A ∙ B = cosθ | A || B |

In other words,

cosθ = (A ∙ B) ÷ (| A | × | B |)

According to the above dot product formula, the cosine of the angle between two vectors is the same as the two vectors' dot product divided by the product of their lengths. We can simplify this formula even further by making the lengths of both vectors 1:

cosθ = (A' ∙ B')

The simplified formula says that the cosine of the angle between two unit-length vectors is the same as their dot product. But here's a question: is it okay to change the lengths of the vectors like this? In other words, do the lengths of the normal and light vectors matter while diffuse light is calculated? No, not at all. Only the angle between the two vectors matters; the lengths don't affect the result at all. Therefore, it makes much more sense to set the lengths to 1 and simplify the formula.[6]

Do you want to know why a dot product is better than cosine? Let (a, b, c) be vector A, and (d, e, f) be vector B. Then you can find the dot product very easily like this:

A ∙ B = (a × d) + (b × e) + (c × f)

This looks much simpler than the cosine function, right? If I asked you to calculate a cosine by hand, I'm pretty sure you would be slower than doing three multiplications followed by two additions. :P
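
Just to make that concrete, here is a throwaway HLSL sketch (the numbers and variable names are mine):

  float3 A = float3(1, 2, 3);                    // (a, b, c)
  float3 B = float3(4, 5, 6);                    // (d, e, f)
  float manual  = A.x*B.x + A.y*B.y + A.z*B.z;   // 4 + 10 + 18 = 32
  float builtIn = dot(A, B);                     // same result, one intrinsic call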

Okay, I think we learned enough to write diffuse lighting shader code.

Initial Step-by-Step Setup

  1. As we did in other chapters, create a new DirectX effect inside RenderMonkey, and delete all the code inside vertex and pixel shaders.
  2. Now change the shader name to Lighting.
  3. Don’t forget to add gWorldMatrix, gViewMatrix and gProjectionMatrix, and assign proper semantics for them. They are needed to transform vertex positions.

What information do we need to calculate diffuse lighting with the Lambert model? The light and normal vectors, right? Normal information is usually stored in each vertex,[7] so we should get it from the vertex buffer. Do you remember the extra step we had to perform to get the UV coordinates from the vertex buffer in Chapter 3? From the Workspace panel, double-click on Stream Mapping and add a new field named NORMAL. A normal is a direction vector that exists in 3D space, so it will be declared as FLOAT3. Again, you don't need to worry about Attribute Name, but make sure that Index is 0.

Then, how do we find the light vector? This is not that hard. If you just draw a line from the position of the light source to the current pixel position, that is what we are looking for. So once the light position is known, we can find the light vector very easily. Then how is the light position defined? Something like "the light is at (500, 500, -500) in the world" should be enough. This means the light position is a global variable. From the Workspace panel, right-click on Lighting and select Add Variable > Float > Float4. Then change the name of the newly created variable to gWorldLightPosition. Finally, double-click the variable and set the values to (500, 500, -500, 1).

Once you are done, RenderMonkey Workspace should look like Figure 4.6.

Figure 4.6 RenderMonkey project after the initial setup

Vertex Shader
I will show you the full source code first, and provide line-by-line explanation after.

struct VS_INPUT
{
  float4 mPosition : POSITION;
  float3 mNormal : NORMAL;
};

struct VS_OUTPUT
{
  float4 mPosition : POSITION;
  float3 mDiffuse : TEXCOORD1;
};

float4x4 gWorldMatrix;
float4x4 gViewMatrix;
float4x4 gProjectionMatrix;

float4 gWorldLightPosition;

VS_OUTPUT vs_main( VS_INPUT Input )
{
  VS_OUTPUT Output;

  Output.mPosition = mul( Input.mPosition, gWorldMatrix );

  float3 lightDir = Output.mPosition.xyz - gWorldLightPosition.xyz;
  lightDir = normalize(lightDir);


  Output.mPosition = mul( Output.mPosition, gViewMatrix );
  Output.mPosition = mul( Output.mPosition, gProjectionMatrix );

 float3 worldNormal = mul( Input.mNormal, (float3x3)gWorldMatrix );
 worldNormal = normalize( worldNormal );

  Output.mDiffuse = dot(-lightDir, worldNormal);

  return Output;
}

Input Data to Vertex Shader
We will start from the input data structure used in Chapter 2.

struct VS_INPUT
{
  float4 mPosition : POSITION;
};

Now we need to add normal here. The semantic for normal is NORMAL. As mentioned earlier, normal is a direction vector in 3D space, so the data type will be float3.

struct VS_INPUT
{
  float4 mPosition : POSITION;
  float3 mNormal : NORMAL;
};

Vertex Shader Function
In this chapter, we will take a look at the vertex shader function before its input and output data. I believe this is an easier way to understand the lighting shaders.

First, we transform the vertex position, as usual.


VS_OUTPUT vs_main( VS_INPUT Input )
{
  VS_OUTPUT Output;

  Output.mPosition = mul( Input.mPosition, gWorldMatrix );


  Output.mPosition = mul( Output.mPosition, gViewMatrix );
  Output.mPosition = mul( Output.mPosition, gProjectionMatrix );


The above code does not need any more explanation. What else did we need to calculate diffuse light? We need to find the light and normal vectors, but do we find these in the vertex shader or in the pixel shader? Think about it for a second.

...

So, what do you think? There is no one absolute answer: you can do it in either shader. If we do this in the vertex shader, we calculate the dot product of these two vectors for each vertex and return the result as part of the VS_OUTPUT structure. The output values are then passed to the pixel shader after being interpolated by the interpolator, so we can just use the interpolated dot product values in the pixel shader.

On the other hand, if you do this in the pixel shader, you would return the normal information as part of VS_OUTPUT, and the pixel shader would read it to calculate the dot product.

Since there is no real difference whether the calculation is done in the vertex or the pixel shader[8], we should select the option that is better for performance. To see which option is better, let's count how many times each shader is executed.[9] When a triangle is drawn, how many times does the vertex shader run? A triangle consists of three vertices, so the vertex shader is executed three times. Then how about the pixel shader? It is executed as many times as the number of pixels the triangle covers on the screen. If the triangle is really tiny on the screen, covering only one pixel, the pixel shader is executed only once. However, a triangle usually covers more than three pixels on the screen. So, if the same calculation can be done in either the vertex or the pixel shader, it is better to do it in the vertex shader. Therefore, we will calculate diffuse lighting in the vertex shader.

Tip: If a calculation can be done in either the vertex or the pixel shader, it is usually better to do it in the vertex shader for performance reasons.

Then, let's construct the light vector first. As mentioned earlier, you can do this by drawing a line from the light position to the current position. "Drawing a line" between two points is the same as subtracting one position vector from another. So, if you subtract the light position from the current position, you get the light vector. However, there is one thing we should be careful about. To get a correct result in 3D math, all variables must be in the same space. We defined the light position in world space, right? But which space is the vertex position defined in? Input.mPosition is in local space, and Output.mPosition ends up in projection space, but what we really need is the position in world space. If you look at the vertex shader code listed above, you will see some empty lines after the world matrix is multiplied with the local position. Right after the world matrix multiplication, Output.mPosition is the position in world space, so we can just subtract the light position from it. Now replace the empty lines with the light vector construction code shown below:

  float3 lightDir = Output.mPosition.xyz - gWorldLightPosition.xyz;

K, now it is time to make the vector's length 1. I told you the reason for this is so we can use a dot product instead of a rather expensive cosine function. Did I also tell you that the operation of making a vector's length 1 is called normalization? To manually normalize a vector, you divide each component of the vector by the vector's length. However, we will just use an HLSL intrinsic function, normalize(). Yay for another magic function!

  lightDir = normalize(lightDir);
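
There is nothing magical going on under the hood; a hand-rolled version would look roughly like this (length() is another HLSL intrinsic):

  float len = length(lightDir);    // sqrt(x*x + y*y + z*z)
  lightDir  = lightDir / len;      // same result as normalize(lightDir)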

Now that we have the light vector, it is time to find the normal vector. Can we just use the normal information from the input data as-is? To find the answer, think about which space the vector lives in. Since this data comes directly from the vertex buffer, it must be in local space. So we know we need to transform it into world space to calculate diffuse lighting properly.

Caution: While performing any 3D operation, we have to make sure that all the variables are in the same space.

  float3 worldNormal = mul( Input.mNormal, (float3x3)gWorldMatrix );

Do you see that we are casting the world matrix to a 3-by-3 matrix? We prefixed it with (float3x3) to do so. In a 4-by-4 matrix, the fourth row (or column) contains the translation, which should not affect direction vectors at all.[10]

Don’t forget to change this vector to a unit vector, as well.

  worldNormal = normalize( worldNormal );

Now we have both vectors we need, so let's find their dot product. Do you still remember what the dot product formula was? Although it was not hard, you do not need to remember it at all, because we will just use another HLSL intrinsic function, dot(), for this.

  Output.mDiffuse = dot(-lightDir, worldNormal);

The above code assigns the dot product result to mDiffuse in the return structure. Oh, another thing! Do you see that we used -lightDir instead of lightDir here? The reason is that the tails of the two vectors must meet to calculate the dot product correctly. If lightDir were used, the light vector's head would meet the normal vector's tail, resulting in a wrong result.

Also, do you see that float3 is used for mDiffuse to store the dot product result, which is only a single float? If you assign a float value to a float3 variable, all three components of the variable get the same value. So, the above code is the same as assigning dot(-lightDir, worldNormal).xxx.
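
If that sounds abstract, here is a tiny sketch (the variable names are made up):

  float3 a = 0.7f;                        // a becomes (0.7, 0.7, 0.7)
  float3 b = float3(0.7f, 0.7f, 0.7f);    // exactly the same thing, spelled out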

Now simply return the result.

  return Output;
}

Global Variables
Have you figured out by now why I wanted to explain the vertex shader function first? It is because it would have made no sense to say "I'm going to declare the light position as a global variable" without any explanation.

Please add these global variables at the top of the source code.

float4x4 gWorldMatrix;
float4x4 gViewMatrix;
float4x4 gProjectionMatrix;

float4 gWorldLightPosition;

Output Data from Vertex Shader
As we saw while writing the vertex shader function, the output data consists of mPosition and mDiffuse. We already know float4 and the POSITION semantic are used for the position. Then what type and semantic should be used for mDiffuse? The dot product of two vectors is not a vector: it is just a single real number.[11] Therefore, using float would be completely fine, but float3 is used here since we are going to return this value as the RGB values of pixels. Then how about the semantic? Do you think there is a semantic like DIFFUSELIGHTING? Unfortunately, no.[12] While programming shaders, there are often cases where you cannot find a semantic made specifically for your use. In these cases, a TEXCOORD semantic is normally used. There are at least 8 TEXCOORDs[13], so they rarely run out! For this chapter, we will use TEXCOORD1.[14]

Please add the following output structure to the source code.

struct VS_OUTPUT
{
  float4 mPosition : POSITION;
  float3 mDiffuse : TEXCOORD1;
};

Pixel Shader
Alright, pixel shader time! But what do we need to do? We already calculated diffuse lighting in the vertex shader, so the only work left for the pixel shader is returning the interpolated diffuse result that is passed to it. If you recall, a dot product is used instead of cosine, so the range of the result is [-1, 1]. But the range of diffuse lighting is [0, 1], so let's just clamp any negative value to 0. Although we could use an if statement for this, we will instead use a faster HLSL function, saturate(). This function clamps a value to [0, 1]. And even better, it is almost free performance-wise!
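
For reference, here is what saturate() does, written out by hand (just a sketch; you would never actually write it this way):

  // equivalent to saturate(Input.mDiffuse)
  float3 diffuse = min(max(Input.mDiffuse, 0), 1);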

struct PS_INPUT
{
  float3 mDiffuse : TEXCOORD1;
};

float4 ps_main(PS_INPUT Input) : COLOR
{
  float3 diffuse = saturate(Input.mDiffuse);
  return float4(diffuse, 1);
}

In the above code, the return value is constructed with float4(diffuse, 1). Just remember that this is one way to construct a float4 variable (here, from a float3 and a scalar).

Now press F5 to compile the vertex and pixel shaders (each one separately), and check the preview window. You will see a sphere lit with very smooth diffuse light, as shown in Figure 4.7.

Figure 4.7 Our diffuse lighting effect!

----------------
Footnotes:

  1. Video game console hardware is an especially limiting factor.
  2. Lighting models that only consider direct light are called local illumination models, while ones that also consider indirect light are called global illumination models.
  3. This is specular lighting, which we will cover later in this chapter.
  4. There are not many objects that don't reflect diffuse light at all. Even very smooth surfaces reflect diffuse light, because light can penetrate the surface and scatter below it until it finally comes back out.
  5. A normal is a direction vector that represents a surface's orientation. For example, the normal vector of a horizontal, flat surface points straight up, perpendicular to the surface.
  6. A vector whose length is 1 is called a unit vector, and the process of making a vector have a length of 1 is called normalization.
  7. This is not always true. We will see another way of finding normal while implementing Normal Mapping shader later in this book.
  8. In fact, there is a subtle difference between these two approaches.
  9. There are also other factors that might degrade the performance, so this is a guideline only.
  10. Think of it this way: the direction an arrow points does not change when the arrow is moved around, as long as it is not rotated. So a translation has no meaning for a direction vector.
  11. This is called scalar.
  12. Some people use COLOR0 semantic for this, but this book does not. In vertex shader 2.0 spec, a value with COLOR semantic is clamped to [0, 1], and the interpolated values passed to pixel shader seem to have small errors because of this.
  13. TEXCOORD0 ~ TEXCOORD7
  14. The reason why TEXCOORD1 is used instead of TEXCOORD0 is that we will use TEXCOORD0 for the UV coordinates of a texture in the next chapter.



Next Chapter

Jan 15, 2014

[Intro to Shader] 03. Texture Mapping


Where to buy:
Amazon
iBooks

Source Code: GitHub / Zip

Chapter 3: Texture Mapping

New HLSL in this chapter

  • sampler2D: a texture sampler data type which is used to get a texel from a texture
  • tex2D(): a HLSL function to sample a texel from a texture
  • swizzling: a way to access the components of a vector in an arbitrary order

What did you think about what we covered in the last chapter? Too easy? It didn't seem that useful for the game you are trying to make? Yeah, you are right. The main goal of the last chapter was learning the basic syntax of HLSL through a simple exercise. Just consider it the hello-world program of shader programming. Now you are going to learn something more useful in this chapter. How about wrapping the red sphere with an image? You know this is called texture mapping, right?

Texture Mapping and UV Coordinates
As mentioned earlier in this book, the building blocks of a 3D object are triangles. Then what is involved in putting an image, or a texture, on a triangle? We have to instruct the GPU like this: "Show the pixel at the bottom-right corner of that image on the left vertex of this triangle."[1] We all know that a triangle is made of three vertices, so all we need to do is map each of the three vertices to a pixel in the texture. Then how do we specify one pixel in a texture? A texture is an image file after all, so can we just say something like "the pixel at x = 30, y = 101"? But what happens if we double the width and height of the image? We would have to change it to "x = 60, y = 202". This is not good at all!

Let's take a moment and recall something we learned in the last chapter. We did something very similar with color representation. To represent a color in a uniform way, regardless of the number of bits per channel, we used the percentage notation [0~1]. So why don't we just use the same method here? Let's say x = 0 points to the leftmost column of a texture and x = 1 points to the rightmost column. Similarly, y = 0 is the top row and y = 1 is the bottom row. By the way, the UV notation is normally used instead of XY for texture mapping; there is no special reason, it's just to avoid confusion since XY is normally associated with positions. Figure 3.1 shows what we just discussed:

Figure 3.1 UV layout on a texture

Now let’s see some examples of how different UV coordinates change the visuals. Please look at Figure 3.2.
Figure 3.2 Various examples of texture mapping

(a) 2 triangles with no texture. Vertices v0, v1, v2 and v0, v2, v3 are making up one triangle each.
(b) The range of UV coordinates is [0, 0] ~ [1, 1]. It shows a full texture.
(c) The range of UV coordinates is [0, 0] ~ [0.5, 1]. It shows only the left half of the texture. 0.5 means 50%, so it’s halfway, right?
(d) The range of UV coordinates is [0, 0] ~ [0.5, 0.5]. So it only shows the top left quarter of the image.
(e) The range of UV coordinates is [0, 0] ~ [1, 2]. It repeats the texture twice vertically. [2]
(f) The range of UV coordinates is [0, 0] ~ [2, 2]. It repeats the texture twice vertically and twice horizontally. [3]

Additionally, you can flip the texture horizontally if the range of UV coordinates is set to [1, 0] ~ [0, 1]. I believe this is enough for you to understand how UV coordinates work. Then it is about time to write the texture mapping shader, finally!

Initial Step-by-Step Setup

  1. As we did in Chapter 2, open RenderMonkey to make a new DirectX effect. Then, delete all the code inside the vertex and pixel shaders.
  2. Now, change the name of shader to TextureMapping.
  3. Don’t forget to add gWorldMatrix, gViewMatrix and gProjectionMatrix variables that are needed to transform vertex positions. You still remember how to use variable semantics to pass the data, right?
  4. Next, we will add an image that is going to be used as the texture. Right-click on TextureMapping shader and select Add Texture > Add 2D Texture > [RenderMonkey installation folder]\examples\media\textures\earth.jpg. Now you will see a texture, named Earth, is added.
  5. Change the name of texture to DiffuseMap.
  6. Now, right-click on Pass 0 and select Add Texture Object > DiffuseMap. You should be able to see a newly added texture object, named Texture0.
  7. Change the name from Texture0 to DiffuseSampler.

Once you finish these steps, the Workspace panel should look like Figure 3.3.



Figure 3.3 RenderMonkey project after the initial setup


Vertex Shader
The full source code is listed below, followed by line-by-line explanation.

struct VS_INPUT
{
   float4 mPosition : POSITION;
   float2 mTexCoord : TEXCOORD0;
};

struct VS_OUTPUT
{
   float4 mPosition : POSITION;
   float2 mTexCoord : TEXCOORD0;
};

float4x4 gWorldMatrix;
float4x4 gViewMatrix;
float4x4 gProjectionMatrix;


VS_OUTPUT vs_main(VS_INPUT Input)
{
   VS_OUTPUT Output;
   
   Output.mPosition = mul(Input.mPosition, gWorldMatrix);
   Output.mPosition = mul(Output.mPosition, gViewMatrix);
   Output.mPosition = mul(Output.mPosition, gProjectionMatrix);
   
   Output.mTexCoord = Input.mTexCoord;
   
   return Output;
}

Before walking through the vertex shader code, let's take a moment and think about what new data is needed to perform texture mapping. Obviously, we need an image, which is going to be used as the texture. Then where should we perform the actual texture mapping: in the vertex shader or the pixel shader? If you think about where vertex and pixel shaders are executed, you can find the answer easily. A vertex shader is executed for each vertex, but where will the texture be shown? Is it on the vertices? No, it's not. We want to see the texture on all the pixels inside a triangle, so texture mapping has to be performed inside the pixel shader, which is executed for each pixel. So now we know it's unnecessary to declare a texture variable inside the vertex shader.

Then, is there any other information required for texture mapping? It was mentioned earlier in this chapter. Yes, you need the UV coordinates. Do you remember where the UV coordinates are stored? They are stored in vertex data since they can differ across vertices. Therefore, the UV coordinates are passed via vertex data instead of global variables. Now, with this knowledge, let’s take a look at the input and output data of the vertex shader.

Input Data to Vertex Shader
We start from the input data structure used in Chapter 2.

struct VS_INPUT
{
    float4 mPosition : POSITION;
};

We will add the UV coordinates to this structure. The UV coordinates have two components, U and V, so the data type should be float2. Then which semantic must be used to retrieve the UV information from the vertex buffer? Just like how the position information was retrieved via the POSITION semantic, UV coordinates have their own semantic: TEXCOORD.[4] After adding the UV data field, the structure looks like this:

struct VS_INPUT
{
    float4 mPosition : POSITION;
    float2 mTexCoord : TEXCOORD0;
};

The reason why the number 0 follows TEXCOORD is that HLSL supports multiple TEXCOORDs. There are cases where multiple textures are used in a shader; in those cases, you would use different semantics, such as TEXCOORD0, TEXCOORD1, and so on.
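
For example, the input structure of a hypothetical shader that samples two textures with two different UV sets might look like this (purely illustrative, not something we build in this chapter):

struct VS_INPUT
{
    float4 mPosition : POSITION;
    float2 mBaseUV   : TEXCOORD0;   // UV set for the first texture
    float2 mDetailUV : TEXCOORD1;   // UV set for a second texture
};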

Output Data from Vertex Shader
Again, we start from the output structure used in Chapter 2.

struct VS_OUTPUT 
{
    float4 mPosition : POSITION;
};

Can you guess whether we need to add any other information here? One thing that was not explained in Chapter 2 is that a vertex shader can return more than just the vertex position. The reason a vertex shader must return a vertex position is to allow the rasterizer to find pixels. That is not why a vertex shader returns other information, though; it does so solely for the pixel shader, and a good example is the UV coordinates.

Pixel shaders cannot directly access the vertex buffer data. Therefore, any data that needs to be accessed by pixel shaders (e.g., UV coordinates) must be passed through vertex shaders. Does it feel like an unnecessary restriction? Once you look at Figure 3.4, you will understand why this restriction exists.


Figure 3.4 What would be the UV coordinates of this pixel?

UV coordinates are defined on each vertex, but as you can see in Figure 3.4, most pixels' UV coordinates differ from any vertex's UV coordinates.[5] Therefore, the right way to find the correct UV coordinates of a pixel is to smoothly blend the UV coordinates defined on the three vertices, based on the distance from the pixel to each vertex. Luckily, you do not have to do this calculation manually. Just like vertex positions, any other data is automatically handled by a device called the interpolator. Let's add the interpolator to the figure of the GPU pipeline presented in Chapter 1.

Figure 3.5 Still pretty simple 3D pipeline after adding the interpolator

By the way, this device doesn't stop at interpolating[6] the UV coordinates. It interpolates any data returned from the vertex shader and passes the result to the pixel shader.
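
If you want a mental model of what the interpolator computes, it is essentially a weighted blend like the sketch below (not code you have to write; w0, w1 and w2 are the pixel's blend weights toward the three vertices and always add up to 1, and real GPUs also apply a perspective correction on top of this):

  // uv0, uv1, uv2: the UV coordinates stored on the triangle's three vertices
  float2 pixelUV = w0 * uv0 + w1 * uv1 + w2 * uv2;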

By now, you should know that the UV coordinates need to be returned from this vertex shader. Let’s add the data field.

struct VS_OUTPUT 
{
    float4 mPosition : POSITION;
    float2 mTexCoord : TEXCOORD0;
};

Global Variables
We don’t need any extra global variables other than what we already used in Chapter 2. So, I’ll just show the code again and skip the explanation.

float4x4 gWorldMatrix;
float4x4 gViewMatrix;
float4x4 gProjectionMatrix;

Vertex Shader Function
You have heard it enough by now: the most important responsibility of a vertex shader is transforming vertex positions into projection space. The code below is identical to the one used in Chapter 2.

VS_OUTPUT vs_main( VS_INPUT Input )
{
   VS_OUTPUT Output;

   Output.mPosition = mul( Input.mPosition, gWorldMatrix );
   Output.mPosition = mul( Output.mPosition, gViewMatrix );
   Output.mPosition = mul( Output.mPosition, gProjectionMatrix );

Now it's time to pass through the UV coordinates, but do we need to apply any transformation before assigning them to the Output structure? The answer is no. UV coordinates do not live in any of the 3D spaces discussed in this book, so we simply pass them along without any transformation.

   Output.mTexCoord = Input.mTexCoord;

I cannot think of any other data that needs to be handled here, so I’ll finish this function by returning Output.

   return Output;
}

Pixel Shader
As done in Vertex Shader section, the full source code is listed first below:

sampler2D DiffuseSampler;

struct PS_INPUT
{
   float2 mTexCoord : TEXCOORD0;
};

float4 ps_main( PS_INPUT Input ) : COLOR
{
   float4 albedo = tex2D(DiffuseSampler, Input.mTexCoord);
   return albedo.rgba;
}

Input Data to Pixel Shader and Global Variables
It is time to look at the pixel shader. What we need to do here is retrieve a texel[7] from a texture image and output its color on the screen. So we need a texture and the current pixel's UV coordinates, right? A texture image is the same for all pixels, so it will be a global variable. The UV coordinates, on the other hand, are part of the input data sent from the vertex shader and passed through the interpolator. First, let's declare the input structure of the pixel shader.

struct PS_INPUT
{
   float2 mTexCoord : TEXCOORD0;
};

Wait. We have seen something like this before. It is almost identical to the VS_OUTPUT structure, except it is missing mPosition. In fact, the input structure of a pixel shader should match the output structure of its counterpart vertex shader. After all, the pixel shader gets what is returned from the vertex shader, right?

The next step is texture declaration. Do you remember that we made a texture object named DiffuseSampler while setting up the RenderMonkey project earlier in this chapter? This object is the texture sampler and will be used to retrieve a texel. Therefore, the name of the texture sampler in HLSL must be DiffuseSampler, as well.

sampler2D DiffuseSampler;

sampler2D is another data type that is supported in HLSL, and is used to sample a texel from a 2D texture. There are also other samplers, such as sampler1D, sampler3D and samplerCUBE.

Now, we are ready to write the pixel shader function.

Pixel Shader Function
Let’s take a look at the function header first

float4 ps_main( PS_INPUT Input ) : COLOR
{

The only difference from the previous pixel shader function headers is that it takes a parameter of type PS_INPUT. This is to receive the UV coordinates the interpolator calculated for us. Equipped with the texture sampler and the UV coordinates, we can get the value of the texel. An HLSL built-in function, tex2D(), does the magic. tex2D() takes two parameters, in order: a texture sampler and the UV coordinates.

    float4 albedo = tex2D(DiffuseSampler, Input.mTexCoord);

The above code reads, from DiffuseSampler, the texel located at the coordinates specified by Input.mTexCoord, and stores the value in a variable named albedo. Now what do we do with this value? Well, we wanted to show the texture as-is, so let's just return it.

   return albedo.rgba;
}

If we press F5 to compile the vertex and pixel shaders and see the preview window…… uh… it’s messed up!
Figure 3.6 Something is messed up here!

Why? It is because we forgot to map the UV coordinate element in the vertex buffer to the TEXCOORD semantic. To map it properly, go to the Workspace panel and left-click on Stream Mapping. There is currently only one entry: POSITION. Now click the Add button to add a new entry, and then change Usage to TEXCOORD. Make sure Index is 0 and Data Type is FLOAT2. You do not need to change Attribute Name. Once you click the OK button, you will see a proper globe, as shown in Figure 3.7.


Figure 3.7 A nice looking globe


By the way, did you notice that I used return albedo.rgba; instead of return albedo; while returning the final color? Although it is completely valid to use return albedo;, I intentionally did so to show you something new.

In HLSL, you can attach a suffix, such as .xyzw or .rgba, to a vector variable to access the vector's components with ease. For example, if we are dealing with a float4 variable, which has four components, you can think of it as an array of four floats. So if you add .x (or .r), it accesses the first component. Likewise, .y (or .g), .z (or .b) and .w (or .a) point to the second, third and fourth components, respectively. So, if you want to get only the RGB values from albedo, you would do something like this:

float3 rgb = albedo.rgb;

Neat, right? But it does not stop there. You can even change the order of the suffix to access vector components in an arbitrary order. The example below shows how to create a new vector with the same components, but in reverse order.

float4 newAlbedo = albedo.bgra;

Or you can even repeat only one channel three times like this:

float4 newAlbedo = albedo.rrra;

Pretty rad. We call this technique, which allows us to access a vector's components in any arbitrary order, swizzling.

Maybe you can do some practice here. How about switching the red and blue channels of the globe? Go ahead and try it. It should be a piece of cake for you. :-)

(Optional): DirectX Framework
This is an optional section for readers who want to use shaders in a C++ DirectX framework.

First, make a copy of the framework that we used in Chapter 2 into a new directory. Then save the shader and 3D model into TextureMapping.fx and Sphere.x, respectively, so that they can be used in the DirectX framework. Also make a copy of the earth.jpg texture file that we used in RenderMonkey. You can find this file in the \Examples\Media\Textures folder of the RenderMonkey installation folder.

First, let’s look at the global variables. In Chapter 2, we used gpColorShader variable for the shader. Change the name to gpTextureMappingShader:

// Shaders
LPD3DXEFFECT gpTextureMappingShader = NULL;

Also, we need to declare a texture pointer, which will be used to store the globe texture.

// Textures
LPDIRECT3DTEXTURE9 gpEarthDM = NULL;

Don't forget to release the D3D resources that we just declared. Go to the CleanUp() function to do so. Doing so makes you a good programmer. You know that, right? ;) Also, don't forget to change the name of gpColorShader here.

  // release shaders
  if (gpTextureMappingShader)
  {
    gpTextureMappingShader->Release();
    gpTextureMappingShader = NULL;
  }

  // release textures
  if (gpEarthDM)
  {
    gpEarthDM->Release();
    gpEarthDM = NULL;
  }

Now we will load the texture and shader. Of course, we do this in LoadAssets() function.

First, change the name of shader variable and file to gpTextureMappingShader and TextureMapping.fx, respectively.

  // loading shaders
  gpTextureMappingShader = LoadShader("TextureMapping.fx");
  if (!gpTextureMappingShader)
  {
    return false;
  }

Then, load earth.jpg file by using LoadTexture() function that we implemented earlier in this book.

  // loading textures
  gpEarthDM = LoadTexture("Earth.jpg");
  if (!gpEarthDM)
  {
    return false;
  }

Now go to the RenderScene() function, which takes care of all the drawing. There are multiple places where the gpColorShader variable is used. Find and replace them all with gpTextureMappingShader.

There was a newly added global variable in the texture mapping shader, right? Yes, the texture sampler. But we can't just assign the texture to the sampler directly in the D3D framework; instead, we have to assign it to a texture variable. Do you remember there was something called DiffuseMap? That was the texture variable. So you would think we should be able to assign the texture to a shader variable named DiffuseMap, right? Well, that's the most sensible thing to do, but guess what? RenderMonkey changed the texture variable's name to something else. If you open the TextureMapping.fx file in Notepad, you will see there is only one variable whose data type is texture, and apparently RenderMonkey added a _Tex postfix to it. Bad, bad Monkey!

texture DiffuseMap_Tex;

Well, complaining does not solve anything, so we will just use this variable name. To pass a texture to a shader, we use the SetTexture() function. Like the SetMatrix() function, it takes the shader variable's name as the first parameter.

     gpTextureMappingShader->SetTexture("DiffuseMap_Tex", gpEarthDM);

Now, compile and run the program. You should be able to see the same visual as RenderMonkey showed us. Hey! I have an idea. Why don’t we do something cooler? Let’s make it rotate! After all, it is the earth!

First, add a global variable which will remember the current rotation angle.

// Rotation around UP vector
float gRotationY = 0.0f;

The rotation and position of a 3D object are part of the world matrix. So, let’s go back to RenderScene() function and change the world matrix construction code like this:

  // for each frame, we rotate 0.4 degree
  gRotationY += 0.4f * PI / 180.0f;
  if (gRotationY > 2 * PI)
  {
    gRotationY -= 2 * PI;
  }

  // world matrix
  D3DXMATRIXA16 matWorld;
  D3DXMatrixRotationY(&matWorld, gRotationY);

The above code keeps adding 0.4 degrees to the rotation each frame. Depending on the computer you are using, this might make the globe rotate too fast or too slow. Change the value appropriately.[8]

Run the code again. You can see the rotating earth, right?

Summary
A quick summary of what we learned in this chapter:

  • UV coordinates are required for texture mapping.
  • UV coordinates vary across vertices, so they are defined on each vertex.
  • A pixel shader requires a vertex shader’s help to access vertex data.
  • Any data returned by a vertex shader goes through the interpolator.
  • tex2D() function is a magic function for texture sampling.

I cannot think of an advanced shading technique that doesn't rely on texture mapping, so texture mapping is crucial in shader programming. Fortunately, performing a texture lookup is very easy in HLSL, so practice it enough that you can use it anytime!

Congratulations! You just finished texture mapping. Now take some break, and see you in Chapter 4. :D

----------------
Footnotes:

  1. You are basically mapping a pixel to a vertex.
  2. There are different ways of handling the UV coordinates outside 0~1 range. The current explanation is only valid when texture wrapping mode is used. Other modes, such as mirror and clamp, are also available.
  3. Again, this explanation is only correct with wrap mode.
  4. An abbreviation for texture coordinates.
  5. A pixel's UV coordinates are the same as a vertex's only when the pixel's position coincides with that vertex.
  6. If you are having a hard time understanding this term, just think of it this way: it blends the values defined on the three vertices. By how much? Based on the distances to the vertices.
  7. As a pixel is the smallest element in a picture, a texel is the smallest element in a texture.
  8. For a real game, you would measure the elapsed time since the last frame and use it to calculate the proper rotation delta. This book’s code is definitely not ready for real-world applications. :P