Kyle Hayward's


Real-Time Volume Ray-Casting


Volume ray-casting is a direct volume rendering method that numerically evaluates the volume rendering integral. For each pixel on the image plane, a ray is cast through the volume and sampled at equally spaced intervals. Each scalar value is mapped to optical properties through a transfer function, yielding an RGBA color that encodes the emission and absorption coefficients at that location. The solution of the volume rendering integral is then approximated by alpha blending the samples in either front-to-back or back-to-front order.
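The front-to-back blending step can be sketched as follows. This is an illustrative Python version, not the project's C#/XNA code; `samples` is assumed to be a sequence of non-premultiplied RGBA values already classified through the transfer function:

```python
def composite_front_to_back(samples):
    """Accumulate RGBA samples along a ray, front to back.

    Each sample is (r, g, b, a); accumulation stops early once
    the ray is nearly opaque (early ray termination).
    """
    acc_rgb = [0.0, 0.0, 0.0]
    acc_a = 0.0
    for r, g, b, a in samples:
        weight = 1.0 - acc_a          # remaining transparency
        acc_rgb[0] += weight * a * r
        acc_rgb[1] += weight * a * g
        acc_rgb[2] += weight * a * b
        acc_a += weight * a
        if acc_a >= 0.99:             # ray is effectively opaque
            break
    return acc_rgb, acc_a
```

Because samples in front attenuate everything behind them, the loop can stop as soon as the accumulated opacity approaches one, which is what makes early ray termination possible.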

The 3D scalar data, along with gradient vectors, is loaded into a 3D floating point texture. Gradients are calculated with a central differences scheme and then smoothed with a simple NxNxN cube filter. A 1D transfer function takes control knots for both opacity and color; a cubic spline is fit to the knots to provide a smooth transition from one color/opacity to the next. The result is written to a 256-entry 1D texture and uploaded to the GPU.
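The gradient pre-computation can be illustrated with a small NumPy sketch (not the project's C# code; `vol` is assumed to be a 3D scalar array):

```python
import numpy as np

def compute_gradients(vol):
    """Central-differences gradient of a 3D scalar volume.

    Interior voxels use (f[i+1] - f[i-1]) / 2 on each axis;
    np.gradient falls back to one-sided differences at the borders.
    """
    gx, gy, gz = np.gradient(vol.astype(np.float32))
    return np.stack([gx, gy, gz], axis=-1)

def box_filter_gradients(grad, n=3):
    """Average the gradient field with an n x n x n cube filter."""
    r = n // 2
    padded = np.pad(grad, ((r, r), (r, r), (r, r), (0, 0)), mode='edge')
    out = np.zeros_like(grad)
    for dx in range(n):
        for dy in range(n):
            for dz in range(n):
                out += padded[dx:dx + grad.shape[0],
                              dy:dy + grad.shape[1],
                              dz:dz + grad.shape[2]]
    return out / float(n ** 3)
```

The averaged gradients give smoother normals for shading, at the cost of slightly blurring sharp material boundaries.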

A 3D cube is first tightly fit to the volume. To intersect the volume, the positions of the cube's front- and back-facing triangles are rendered to textures; these positions give the starting and ending points of the sampling ray. This speeds up the main volume ray-casting shader by moving the intersection code into a much simpler pre-pass shader.
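The ray setup that the pre-pass enables can be sketched like this (illustrative Python; `front` and `back` stand in for the front- and back-face positions fetched from the textures at the current pixel):

```python
import math

def ray_from_positions(front, back, step_size):
    """Build the per-pixel sampling ray from the rasterized
    front- and back-face positions of the bounding cube."""
    d = [b - f for f, b in zip(front, back)]
    length = math.sqrt(sum(c * c for c in d))
    if length == 0.0:
        # degenerate ray: the pixel grazes the cube's silhouette
        return front, (0.0, 0.0, 0.0), 0
    direction = tuple(c / length for c in d)
    num_steps = int(length / step_size)
    return front, direction, num_steps
```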

To perform the ray-casting pass, the front faces of the cube are rendered. At each pixel, the ray's starting position and direction are obtained from the position textures. The volume is then sampled iteratively by advancing the current sampling position along the ray in equidistant steps. At each step, the 3D texture containing the gradients and scalar values is sampled. The scalar value indexes the transfer function texture, and the gradient is used to evaluate a bidirectional reflectance distribution function, such as the Phong or Oren-Nayar model. Front-to-back compositing then approximates the volume rendering integral. The project was developed in C# and XNA.
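Putting the pieces together, the per-pixel sampling loop looks roughly like this. This is a Python sketch of the shader logic, not the actual HLSL; `sample_volume` and `transfer_function` are hypothetical stand-ins for the 3D texture fetch and the 1D transfer function lookup:

```python
def raycast_pixel(start, direction, num_steps, step_size,
                  sample_volume, transfer_function):
    """March along the ray, classify each sample through the
    transfer function, and composite front to back."""
    acc = [0.0, 0.0, 0.0, 0.0]
    pos = list(start)
    for _ in range(num_steps):
        scalar, gradient = sample_volume(pos)      # 3D texture fetch
        r, g, b, a = transfer_function(scalar)     # 1D TF lookup
        # (lighting, using the gradient as the normal, would go here)
        w = (1.0 - acc[3]) * a
        acc[0] += w * r
        acc[1] += w * g
        acc[2] += w * b
        acc[3] += w
        if acc[3] >= 0.99:                         # early ray termination
            break
        for i in range(3):
            pos[i] += direction[i] * step_size
    return acc
```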


  • Real-time, 30-100 fps

  • Empty Space Leaping

  • Early ray termination

  • Variance shadow mapping

  • Phong shading

  • Approximated sub-surface scattering

  • Translucent rendering

  • Reflections

Screenshots

Source Snippet

/// <summary>
/// Fits a cubic spline to the color and alpha control points
/// </summary>
private void computeTransferFunction()
{
    //initialize the cubic spline for the transfer function
    Vector4[] transferFunc = new Vector4[256];

    //temporary copies of the control knots
    List<Vector4> tempColorKnots = new List<Vector4>(mColorKnots);
    List<Vector4> tempAlphaKnots = new List<Vector4>(mAlphaKnots);

    //calculate cubic splines from the control knots
    Cubic[] colorCubic = Cubic.CalculateCubicSpline(mColorKnots.Count - 1, tempColorKnots);
    Cubic[] alphaCubic = Cubic.CalculateCubicSpline(mAlphaKnots.Count - 1, tempAlphaKnots);

    //calculate the final interpolated transfer function
    int numTF = 0;
    for (int i = 0; i < mColorKnots.Count - 1; i++)
    {
        int steps = mColorDistances[i];
        for (int j = 0; j < steps; j++)
        {
            float k = (float)j / (float)(steps - 1);
            transferFunc[numTF++] = colorCubic[i].GetPointOnSpline(k);
        }
    }

    numTF = 0;
    for (int i = 0; i < mAlphaKnots.Count - 1; i++)
    {
        int steps = mAlphaDistances[i];
        for (int j = 0; j < steps; j++)
        {
            float k = (float)j / (float)(steps - 1);
            transferFunc[numTF++].W = alphaCubic[i].GetPointOnSpline(k).W;
        }
    }

    //write the transfer function to a 256x1 texture
    mTransferTex = new Texture2D(Game.GraphicsDevice, 256, 1, 1, TextureUsage.Linear, SurfaceFormat.Color);

    Byte4[] transfer = new Byte4[256];
    for (int i = 0; i < 256; i++)
    {
        Vector4 color = transferFunc[i] * 255.0f;
        //store bgra
        transfer[i] = new Byte4(color.Z, color.Y, color.X, color.W);
    }

    //upload to the GPU
    mTransferTex.SetData(transfer);
}