Pixel Shader Basics

Introduction

This article assumes you've read through the wonderful Preset Authoring Guide and will attempt to build on the information there. By the end you should have a solid understanding of how pixel shaders work in Milkdrop.

Overview of Pixel Shaders

Simply put, a pixel shader is a program that applies a set of instructions to every pixel in a display window. In almost every modern video game, pixel shaders are used to create realistic lighting and shading effects in 3D scenes. A pixel shader takes in information about models, textures, light sources and materials, applies a set of instructions to this data and returns a single float4 value, which holds the current pixel's red, green, blue and alpha channel values.

Pixel shaders in Milkdrop are a bit simpler, because we're limited to working in 2 dimensions without things like models or light sources. In addition, the final return value of our pixel shaders is a float3, leaving out the alpha channel. But the principle is still the same: when we write the code for a pixel shader, we are giving it a set of instructions to perform on every pixel in the display window. The shader uses those instructions to compute the final color value for each pixel, which is assigned to ret.
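
To make this concrete, here's a minimal sketch of what a complete shader body looks like (the brightening step is just an arbitrary example); Milkdrop supplies everything outside the shader_body block, including uv, ret and the samplers:

  shader_body
  {
      // uv  : float2 coordinates of the pixel being processed, (0,0)..(1,1)
      // ret : the float3 (red, green, blue) output for this pixel
      ret = tex2D(sampler_main, uv);  // start from the scene's current color
      ret = ret * 1.1;                // then brighten every pixel slightly
  }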

Textures

In the Milkdrop pixel shaders a texture is simply a sampling source. Textures can be sampled using the tex2D() function, or the tex3D() function if you're sampling from a 3D noise volume. There are three types of textures we can access in Milkdrop: sampler_main, noise textures and custom textures. The latter two are covered very well in the preset authoring guide, so we'll just take a moment to say a few things about sampler_main.

sampler_main

We can think of this texture as a screenshot of the display window. This snapshot is taken by Milkdrop just before it executes the Warp Shader, and is passed to the Warp Shader as sampler_main. Using sampler_main we can get the color value of the current pixel either by calling tex2D(sampler_main, uv), or by calling GetPixel(uv).
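
For example, both of the following lines fetch the same color (GetPixel is just Milkdrop's shorthand for the tex2D call, returning only the rgb part):

  ret = tex2D(sampler_main, uv);  // explicit texture sample
  ret = GetPixel(uv);             // shorthand for the line above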

There is one important difference between the Warp Shader and the Composite Shader when it comes to sampler_main: Milkdrop takes a new snapshot of the scene after the Warp Shader has finished executing, and passes this new snapshot to the Composite Shader. The consequence is that the Warp Shader actively alters the scene itself, so anything it does affects the rest of Milkdrop, while the Composite Shader only alters a copy of the scene for display, so nothing it does affects the rest of Milkdrop.

UV Coordinates

A pixel shader needs some way of knowing which pixel it's currently working on, and which pixel you want to sample in a texture. In Milkdrop these concepts are represented by uv, a float2 with x and y values between 0 and 1. The top left corner of the scene has the coordinates (0.0,0.0), while the bottom right corner is (1.0,1.0). How Milkdrop generates these values is not so important; what matters is that it stores the dimensions of the display window (in texsize.xy) and the size of a single pixel (in texsize.zw) in the float4 variable texsize. Let's suppose we have a display window of size 600x400; in this case texsize will hold the following values:

  • texsize.x = 600
  • texsize.y = 400
  • texsize.z = 1/600 ≈ 0.0017
  • texsize.w = 1/400 = 0.0025
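
As a sketch of how these values get used, texsize lets us convert between uv coordinates and absolute pixel positions (the variable names are just for illustration):

  float2 windowSize = texsize.xy;      // (600, 400)
  float2 pixelSize  = texsize.zw;      // (0.0017, 0.0025)
  float2 pixelPos   = uv * texsize.xy; // e.g. uv (0.5,0.5) -> pixel (300,200)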

A pixel shader always moves from left to right, starting in the upper left corner with pixel (0,0). Next it increments to (moves to) the next pixel, and because the size of a single pixel along the x axis of our supposed window is 0.0017, the current uv value changes from (0,0) to (0.0017,0). For our window, the pixel shader does this 600 times, each time adding 0.0017 to the uv.x value and keeping uv.y the same. After 600 increments it has reached the right side of the screen, (1,0), so it moves back to the left side and starts on the next line of pixels; the uv value is now (0,0.0025). This process is repeated for the entire display window, in our hypothetical case a total of 240,000 times.

This also means the shader code you write is executed 240,000 times for that frame; for a window size of 1680x1050 that number increases to nearly 1.8 million times, per frame! At 60 fps this means your graphics card is executing your instructions about 106 million times per second, per shader! Something to think about the next time one of the more intense presets brings your machine to its knees.

Getting back to our example, remember that sampler_main contains a copy of the scene in the form of a texture. We know enough now to read the following instruction:

  ret = tex2D(sampler_main, uv);

as "Set ret to the rgb values from the sampler_main texture at the coordinates of the current pixel".

Modifying the uv coordinates

Some very interesting effects can be achieved by modifying the uv coordinates before you sample a texture. The theory behind this is very straightforward, because there are really only two things we can do: translate and scale.

Translation

Let's say we want to create a movement effect similar to dx (aka Translation (X) in the Motion menu) that moves the entire scene one pixel to the left each frame. In mathematical terms this is called a translation, and you achieve it by modifying the uv values like so:

  ret = tex2D(sampler_main, float2(uv.x + texsize.z, uv.y));

which reads as "Set ret to the rgb values from the sampler_main texture one pixel to the right of the current pixel". When we do this for every pixel, we have in effect moved the entire scene one pixel to the left. Conversely, subtracting texsize.z from uv.x moves the scene one pixel to the right. To move the scene two pixels to the left we just multiply texsize.z by 2, as in the sketch below.
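
Here's a sketch of a few variations on the same idea (only one such line would be used at a time):

  // Each line is an alternative; a shader would use only one of them.
  ret = tex2D(sampler_main, float2(uv.x - texsize.z, uv.y));   // scene moves one pixel to the right
  ret = tex2D(sampler_main, float2(uv.x + 2*texsize.z, uv.y)); // scene moves two pixels to the left
  ret = tex2D(sampler_main, float2(uv.x, uv.y + texsize.w));   // scene moves one pixel up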

Scaling

In the same way that translation is achieved by addition and subtraction, scaling is achieved by multiplication and division. One important note though: by dividing we really mean multiplying by a decimal fraction, uv*0.5 instead of uv/2 for example. This is because a computer can perform multiplication much faster than division, so we multiply whenever we can.

When we scale uv what we're really doing is increasing or decreasing the sampling area. For example, the instruction uv*0.5 halves the sampling area along each axis, so the sampling box now goes from (0,0) to (0.5,0.5). Notice this also means that the pixel ratio between the display window and the sampling area is no longer 1:1; one pixel from the sampling area gets scaled up to cover 4 pixels in the display window. The effect is that the sampling area is blown up to fit the display window, and you lose some resolution in the process. The whole process is easier to follow when we see it in action in the Composite Shader:

[Image series from the original wiki page: the full sampling area; the sampling area after uv*0.5; the scaled sampling area moved to the center of the texture]

Notice in the third image we've moved the sampling area to the center of the texture. It's easy to see why this works: all we've done is add 0.25 to the coordinates, so the box now extends from (0.25,0.25) to (0.75,0.75).
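
That scale-and-recenter step can be written as a single sample. Both lines below are equivalent; the second just rewrites the same offset as an explicit scale around the texture's center:

  ret = tex2D(sampler_main, uv*0.5 + 0.25);        // half-size box, recentered
  ret = tex2D(sampler_main, (uv - 0.5)*0.5 + 0.5); // the same, as a scale around the center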

Composite Shader vs Warp Shader

It's important to note that the effects of these transformations are very different in the two Milkdrop shaders. Remember, the Composite Shader treats sampler_main as a static texture and returns a modified copy of it that Milkdrop displays. Think of it as a camera: when we apply a translation to the uv coordinates we are moving the camera, and when we scale the coordinates we are zooming the camera in or out. At the end the camera takes a picture of what it sees and returns that picture.

In the Warp Shader, on the other hand, we're really changing the scene when we transform the uv coordinates, and this transformation gets compounded in each successive frame. A lot of cool effects like fractals and error diffusion dither take advantage of this.
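
As a sketch of how that compounding plays out (a common idiom, not from the original article), a tiny zoom applied in the Warp Shader turns into a continuous zooming motion over time:

  // Warp Shader: zoom in slightly, centered on the middle of the scene.
  // Next frame's sampler_main already contains this zoomed image,
  // so the effect compounds into a continuous zoom.
  ret = tex2D(sampler_main, (uv - 0.5)*0.99 + 0.5);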