Skeletal Animation

Loading and rendering 3D skeletal animations from scratch in C# and GLSL


  1. This Project
  2. 3D Graphics
  3. Objects
  4. Assets
  5. Textures
  6. Materials
  7. Shaders
  8. Models
  9. Skeletal Models
  10. View
  11. Viewer
  12. Program
  13. Further Research

This Project

Code

You can get the full code for this project on my GitHub.

Scope

The aim of this project was to gain a better understanding of how computers store and display 3D models and animations – particularly skeletal animations, where only a skeleton is animated and the model moves with it. I used C# because it's the language I'm most familiar with.

References

Tools

I used the following:

  • Xamarin Studio as my C# IDE
  • OpenTK framework to handle cross-platform windowing, graphics contexts and input. I had to download and compile it myself because the NuGet package seems to be out of date.
  • Blender as a 3D editor. I also needed to install a plugin (io_scene_md5.zip) for importing/exporting id Tech 4 models (.md5mesh and .md5anim files).

3D Graphics

There is a bit of math involved in 3D graphics but getting up and running doesn't require anything too complicated. It turns out that you can do some pretty cool stuff with vectors and matrices – stuff I don't remember learning in school.

GPUs are designed to perform some of these operations extremely fast, so we will hand some of the heavy lifting over to the GLSL shader programs that run on them. For now, I can illustrate some of the basics using the built-in OpenTK components.

Vectors

We're going to use the vector structs available in the OpenTK library.

Console.WriteLine(new Vector2(1.0f, 3.0f));
Console.WriteLine(new Vector3(1.0f, 3.0f, 2.0f));
Console.WriteLine(Vector3.UnitY);
Console.WriteLine(Vector3.Zero);
Console.WriteLine(new Vector3(1.0f, 3.0f, 2.0f) + new Vector3(2.0f, -2.0f, -1.0f));
(1, 3)
(1, 3, 2)
(0, 1, 0)
(0, 0, 0)
(3, 1, 1)

Vertices

The term vertex generally refers to a 3D coordinate. However, in computer graphics we often bundle other information alongside the position in each vertex.

  • Position: The coordinates of the point in 3D space.
  • Texture coordinates: The point on a 2D texture that this vertex is associated with.
  • Normals: This is a 3D vector used for lighting and other effects. We won't be making use of it at the moment.

Positions in 3D space require three elements: x, y and z. We will use Vector3s most of the time when dealing with positions. However, it's also common to use a Vector4 with a fourth element (w) to indicate whether it's a point or a vector. It's a little confusing, but I mean a vector in the mathematical "distance with a direction" sense here. If w = 1 then we're describing a point and if w = 0 then we're describing a distance in a certain direction. You can add two distances together, or a point and a distance, but not two points – that doesn't make sense. Most of the time we'll know whether something is a point or a vector, so we won't store all these extra 0s and 1s; we'll just add them in when we need to do calculations.
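For example, using OpenTK's Vector4:

var point = new Vector4(5.0f, 2.0f, 0.0f, 1.0f);    // a point (w = 1)
var offset = new Vector4(1.0f, -1.0f, 3.0f, 0.0f);  // a distance with a direction (w = 0)
Console.WriteLine(point + offset);
Console.WriteLine(offset + offset);
(6, 1, 3, 1)
(2, -2, 6, 0)

Adding two points would leave w = 2, which is exactly the kind of meaningless result this convention warns us about.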

We will create our own vertex class containing all the information we want to include in it. We will then pass this information onto OpenGL which will store it on the graphics card memory for quick access. The vertex data is then passed into a shader program that OpenGL runs to figure out what and how to render those vertices.

Matrices

OpenTK also provides matrix structs and some creation methods. Note that Matrix2 means a 2x2 matrix.

Console.WriteLine("Result 1\n" + new Matrix2(4.0f, 1.0f, 2.0f, -1.0f));
Console.WriteLine("Result 2\n" + Matrix3.Identity);
Console.WriteLine("Result 3\n" + Matrix4.Zero);
Console.WriteLine("Result 4\n" + new Matrix2(4.0f, 1.0f, 2.0f, -1.0f) * new Matrix2(1.0f, -2.0f, 0.0f, 1.0f));
Console.WriteLine("Result 5\n" + new Vector3(1.0f, 3.0f, 1.0f) * new Matrix3(1.0f, -2.0f, 0.0f, 1.0f,2.0f,-2.0f,0.0f,1.0f,2.0f));
Result 1:
(4, 1)
(2, -1)
Result 2:
(1, 0, 0)
(0, 1, 0)
(0, 0, 1)
Result 3:
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)
(0, 0, 0, 0)
Result 4:
(4, -7)
(2, -5)
Result 5:
(4, 5, -4)

Transformations

We'll mainly be using matrices for transforming vertices. Some of the transformations we'll be using are outlined below. A 4x4 matrix is used to transform a point in 3D space.

No Transformation

We can multiply a point by the identity matrix if we don't want to make any changes to it.

var vec = new Vector4(5.0f, 2.0f, 0.0f, 1.0f);
var mat = Matrix4.Identity;
Console.WriteLine(vec * mat);
(5, 2, 0, 1)

Translate

Translating a point is just another name for moving the point. The first three elements of the bottom row of the transformation matrix refer to the change in each axis.

var vec = new Vector4(5.0f, 2.0f, 0.0f, 1.0f);
var mat = Matrix4.CreateTranslation(-3.0f, -5.0f, 0.0f);
Console.WriteLine(vec * mat);
(2, -3, 0, 1)

Note that in these charts, the z-axis is running through the origin perpendicular to the screen, so +z is sticking out towards you.

Scale

Scaling allows us to shrink or grow a mesh. It's controlled by the first three elements in the diagonal of the transformation matrix.

var vec = new Vector4(4.0f, 5.0f, 0.0f, 1.0f);
var mat = Matrix4.CreateScale(0.5f, 0.5f, 0.0f);
Console.WriteLine(vec * mat);
(2, 2.5, 0, 1)

Rotate

Rotation is a little trickier. The example below makes it look straightforward but that is only because it's a 90° rotation. All elements in the top left 3x3 section are needed to allow for rotation around any axis.

We can define a rotation in a number of different ways: Euler angles, axis and angle, or a quaternion (which we'll get to later). We use radians for angles (π is the equivalent of 180°).

var vec = new Vector4(5.0f, 2.0f, 0.0f, 1.0f);
var mat = Matrix4.CreateRotationZ(Convert.ToSingle(Math.PI / 2));
Console.WriteLine(vec * mat);
(-2, 5, 0, 1)

Reflect

Reflection simply involves flipping the sign of a specific axis. This is useful when you need to render a mirror or the surface of water. OpenTK doesn't have a helper method for reflection.

var vec = new Vector4(2.0f, 3.0f, 0.0f, 1.0f);
var mat = Matrix4.Identity;
mat[1, 1] = -1.0f;
Console.WriteLine(vec * mat);
(2, -3, 0, 1)

Orthographic Projection

Orthographic projection is a projection with no perspective, commonly seen in construction plans. We give the creation method a cuboid containing everything we want to render (defined by width, height and starting/ending depth) and it gives us a matrix that scales that cuboid down to a 2x2x2 cube centered on the origin.

var vec = new Vector4(5.0f, 2.0f, 0.0f, 1.0f);
var mat = Matrix4.CreateOrthographic(10, 10, -10, 10);
Console.WriteLine(vec * mat);
(1, 0.4, 0, 1)

Perspective Projection

The perspective projection is a little different. Instead of a cuboid, we define a frustum, which is like a cuboid except that in our case the back face is larger than the front face (these are the two faces that the z-axis runs through). When we scale the world down to a 2x2x2 cube, anything that is further away gets pulled closer to the origin.

So in the example below, even though the red and purple points have the same x and y coordinates, the red dot is further back so it gets pulled closer to the origin.

Once we pass the point through this transformation matrix we need to divide it by the w value (the perspective divide) to get a usable result. I'm not really sure why this is needed at this time.

var mat = Matrix4.CreatePerspectiveFieldOfView(Convert.ToSingle(Math.PI / 4), 1.0f, 1.0f, 50.0f);

var vec1 = new Vector4(4.0f, 3.0f, -10.0f, 1.0f);
var res1 = (vec1 * mat);
res1 /= res1.W;
Console.WriteLine(res1);

var vec2 = new Vector4(4.0f, 3.0f, -50.0f, 1.0f);
var res2 = (vec2 * mat);
res2 /= res2.W;
Console.WriteLine(res2);     
(0.9656854, 0.7242641, 0.8367347, 1)
(0.1931371, 0.1448528, 1, 1)


Combining transformations

You might think that so far you could probably do most of these transformations without any matrices. One of the advantages of matrices is that you can combine them by multiplying them together.

Note that the order we do this is very important. If we are combining a rotation transformation and a translate transformation, there are two different results depending on the order we apply them.

var vec = new Vector4(5.0f, 1.0f, 0.0f, 1.0f);
var mat1 = Matrix4.CreateTranslation(0.0f, 1.0f, 0.0f);
var mat2 = Matrix4.CreateRotationZ(Convert.ToSingle(Math.PI / 2));
Console.WriteLine(vec * mat1 * mat2);
Console.WriteLine(vec * mat2 * mat1);
(-2, 5, 0, 1)
(-1, 6, 0, 1)


Quaternions

Quaternions are a fancy way to store rotations. There are some shortcomings to using just three values to store a rotation, namely gimbal lock. A quaternion essentially defines a 3D vector telling us which way to face and a fourth element describing our rotation once we're pointing in that direction. They can be hard to think about, and people generally seem to think you should avoid messing with them directly if possible.

var vec = new Vector4(5.0f, 2.0f, 0.0f, 1.0f);

var mat = Matrix4.CreateRotationZ(Convert.ToSingle(Math.PI / 4));
Console.WriteLine(vec * mat);

var qua = Quaternion.FromAxisAngle(Vector3.UnitZ, Convert.ToSingle(Math.PI / 4));
Console.WriteLine(qua * vec);
(2.12132, 4.949748, 0, 1)
(2.12132, 4.949748, 0, 1)

3D to 2D

We pass each vertex through a number of transformation matrices to convert it from a 3D point in our virtual world into a 2D point on our computer screen.

Vector4_2D = Matrix4_Projection · Matrix4_View · Matrix4_Model · Vector4_3D
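Note that OpenTK treats vectors as row vectors multiplied on the left, so in code the same chain is written in the opposite order. A minimal sketch using the matrices from the earlier examples:

var model = Matrix4.CreateTranslation(0.0f, 0.0f, -10.0f);
var view = Matrix4.Identity;
var projection = Matrix4.CreatePerspectiveFieldOfView(Convert.ToSingle(Math.PI / 4), 1.0f, 1.0f, 50.0f);

var result = new Vector4(4.0f, 3.0f, 0.0f, 1.0f) * model * view * projection;
result /= result.W; // perspective divide
Console.WriteLine(result);
(0.9656854, 0.7242641, 0.8367347, 1)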

Model

There is a unique transformation matrix for each instance of each model we render. It is generated by combining a translation, scale and rotation matrix.

Matrix4_Model = Matrix4_Translate · Matrix4_Scale · Matrix4_Rotate
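With OpenTK's row-vector convention this is again reversed in code; it is exactly how the WorldObject class later builds its cached transformation:

transformation = Matrix4.CreateFromQuaternion(rotation) * Matrix4.CreateScale(scale) * Matrix4.CreateTranslation(position);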

View

It is easiest to think of this as the camera. By default the camera lens sits at the origin pointing directly down the -z axis. The view matrix moves and rotates the camera to somewhere else in the scene. Note that we're actually moving and rotating the world around the camera, not the camera around the world. It's also possible to scale everything in the world, but I'm not sure why you'd do that – perhaps in a game where the player can grow and shrink.

Matrix4_View = (Matrix4_Translate · Matrix4_Scale · Matrix4_Rotate)⁻¹

Also note that the Matrix4.LookAt() method is useful for generating the rotation part of the view matrix.
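For example, to build a complete view matrix for a camera sitting at (0, 2, 5) and looking at the origin:

var view = Matrix4.LookAt(new Vector3(0.0f, 2.0f, 5.0f), Vector3.Zero, Vector3.UnitY);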

Projection

This matrix specifies how everything is projected into 2D, usually with either an orthographic or a perspective projection. OpenGL wants everything to be inside (-1, 1) on every axis, so this matrix scales everything we want to be visible down to within those bounds.

Geometry

Once we've converted our 3D coordinates into 2D coordinates on our screen, we still need to tell OpenGL how to connect them together to form shapes or elements. You can leave them to be rendered as individual points or let it know which points to join together to form lines, triangles or quads (known as "elements"). We're going to stick with triangles for the most part.
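For example, a square only needs four vertices; the element list then describes two triangles by indexing into them (a hypothetical sketch):

// four corners of a unit square
var positions = new Vector3[] {
    new Vector3(0.0f, 0.0f, 0.0f),  // 0: bottom left
    new Vector3(1.0f, 0.0f, 0.0f),  // 1: bottom right
    new Vector3(1.0f, 1.0f, 0.0f),  // 2: top right
    new Vector3(0.0f, 1.0f, 0.0f)   // 3: top left
};

// two triangles sharing an edge, each referencing vertices by index
var elements = new uint[] { 0, 1, 2, 0, 2, 3 };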

Color

Colors are stored as four-element vectors (Vector4) representing the red, green, blue and alpha channels. We can just fill each triangle in with a fixed color.
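For example:

var opaqueRed = new Vector4(1.0f, 0.0f, 0.0f, 1.0f);        // full red, fully opaque
var translucentGreen = new Vector4(0.0f, 1.0f, 0.0f, 0.5f); // green at half alpha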

Textures

We can map 2D images onto our 3D object. This is done using the texture coordinates we have saved for each vertex. They represent a point on our 2D texture. Once we have three vertices in our triangle, we form a triangle on the texture using their respective texture coordinates. The resulting texture triangle is rotated, scaled and skewed so that it fits into our on screen triangle.

Note that we use u and v to represent the x and y axes of our texture to reduce confusion with our position axes (x, y and z). Texture coordinates always run from 0 to 1, regardless of the texture's size.


Objects

At the moment, the WorldObject class mostly contains code for managing the object's transformation – that is, its position, scale and rotation. These three properties are stored separately, but we also cache a completed transformation matrix that is updated whenever a value changes, using an overridable RecalculateTransformation() method. Without this, we would need to calculate a transformation matrix for each object on every frame, which would be a bit wasteful.

I've added in some static directional vectors that will be useful. There is also a Space enum for signalling whether a transformation should be applied in local or world space. Moving or rotating in world space ignores the current rotation of the object. So moving forward (-z) in local space means moving forward in the direction the object is facing, while moving forward in world space just means decreasing the Position.Z value.

using System;
using OpenTK;

namespace Graphics
{
    public static class Direction
    {
        public static Vector3 Up = Vector3.UnitY;
        public static Vector3 Down = -Vector3.UnitY;

        public static Vector3 Right = Vector3.UnitX;
        public static Vector3 Left = -Vector3.UnitX;

        public static Vector3 Backward = Vector3.UnitZ;
        public static Vector3 Forward = -Vector3.UnitZ;
    }

    public enum Space
    {
        Local,
        World
    }

    public class WorldObject
    {
        protected Vector3 position = Vector3.Zero;
        protected Vector3 scale = Vector3.One;
        protected Quaternion rotation = Quaternion.Identity;
        protected Matrix4 transformation = Matrix4.Identity;

        public Vector3 Position { get { return position; } set { position = value; RecalculateTransformation(); } }
        public Vector3 Scale { get { return scale; } set { scale = value; RecalculateTransformation(); } }
        public Quaternion Rotation { get { return rotation; } set { rotation = value; RecalculateTransformation(); } }
        public Matrix4 Transformation { get { return transformation; } }

        protected virtual void RecalculateTransformation()
        {
            transformation = Matrix4.CreateFromQuaternion(rotation) * Matrix4.CreateScale(scale) * Matrix4.CreateTranslation(position);
        }

        public void Move(Vector3 delta, Space space = Space.Local)
        {
            if(space == Space.World)
                position += delta;
            else
                position += rotation.Inverted() * delta;
            RecalculateTransformation();
        }

        public void Rotate(Quaternion rot, Space space = Space.Local)
        {
            if(space == Space.Local)
                rotation = rotation * rot;
            else
                rotation = rot * rotation;
            RecalculateTransformation();
        }

        public void Rotate(Vector3 angles, Space space = Space.Local)
        {
            Rotate(Quaternion.FromEulerAngles(angles), space);
        }

        public void Rotate(Vector3 direction, float angle, Space space = Space.Local)
        {
            Rotate(Quaternion.FromAxisAngle(direction, angle), space);
        }

        public void LookAt(Vector3 position, Vector3 up)
        {
            rotation = Matrix4.LookAt(this.position, position, up).ExtractRotation();
            RecalculateTransformation();
        }

        public void LookAt(Vector3 position)
        {
            LookAt(position, Direction.Up);
        }
    }
}
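As a rough usage sketch (the "Crate" model name is hypothetical; the model-loading constructor appears a little further down):

var crate = new WorldObject("Crate");
crate.Rotate(Direction.Up, Convert.ToSingle(Math.PI / 2)); // quarter turn about the up axis
crate.Move(Direction.Forward);                             // local: forward relative to the object's facing
crate.Move(Direction.Forward, Space.World);                // world: just decreases Position.Z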

There is also some pose and animation code to select which of these to use when rendering the object's model.

private DateTime animationStartTime;
private Animation currentAnimation;
private string currentPose;

public void Pose(string pose)
{
    currentAnimation = null;
    currentPose = pose;
}

public void Animate(string animation, TimeSpan offset = default(TimeSpan))
{
    currentPose = null;
    currentAnimation = Assets.Retrieve<Animation>(animation);
    animationStartTime = DateTime.UtcNow - offset;
}

And finally, the Model itself and the Render() method. We set the model to the correct pose or animation frame, then tell the model to render based on the input projection and view matrices as well as this object's transformation matrix.

public Model Model { get; set; }

public WorldObject(string modelName)
{
    Model = Assets.Retrieve<Model>(modelName);
}

public void Render(Matrix4 projection, Matrix4 view)
{
    if(Model == null)
        return;

    if(currentAnimation != null)
    {
        float seconds = Convert.ToSingle((DateTime.UtcNow - animationStartTime).TotalSeconds);
        float currentAnimationFrame = (seconds * currentAnimation.FrameRate) % currentAnimation.FrameCount;
        Model.SetAnimationFrame(currentAnimation.Name, currentAnimationFrame);
    }
    else if(currentPose != null)
        Model.SetPose(currentPose);
    else
        Model.SetPose("Default");
    
    Model.Render(projection, view, transformation);
}

Assets

A lot of different things count as assets. This code is still a little rough because there are a lot of possibilities:

  • Assets may or may not be stored in files.
  • Multiple assets may be stored in each file.
  • Multiple asset types may be stored in each file.
  • Assets may or may not have names.
  • Assets may or may not need to be loaded when requested.
  • Assets may or may not have dependencies on other assets.

We start by defining a simple Asset interface. Later on we might want to add a Reload() method to handle live reloading of the asset as well as a Reloaded event so any assets depending on it can be informed.

namespace Graphics
{
    public interface Asset
    {
        string Name { get; }
        string File { get; }
    }
}

Next up is the AssetImporter interface. Each importer can only handle one asset type but multiple file extensions. We're just going to use the file extension to decide which importer to use.

using System;

namespace Graphics
{
    public interface AssetImporter
    {
        Type AssetType { get; }
        string[] FileExtensions { get; }
        void Import(string file);
    }
}

Our Assets static class handles the importers and caches any imported assets. First, it scans the binary to find any implementations of the AssetImporter interface and registers them automatically. You can also manually add others using the RegisterImporter() method. The importers are stored in a Dictionary<string, AssetImporter> with an entry for each extension they support. Second, it builds a list of all of the files in the Assets directory. This list may be needed more than once so it's useful to cache it.

using System;
using System.Reflection;
using System.Linq;
using System.IO;
using System.Collections.Generic;

namespace Graphics
{
    public static class Assets
    {
        private readonly static List<string> ignorableFiles = new List<string>() { ".DS_Store" };
        private readonly static List<string> availableFiles = new List<string>();
        
        private readonly static Dictionary<string, AssetImporter> importers = new Dictionary<string, AssetImporter>();

        static Assets()
        {
            // search for any asset importers
            var importerTypes = Assembly.GetExecutingAssembly()
                                        .GetTypes()
                                        .Where(t => t.IsClass && !t.IsAbstract && typeof(AssetImporter).IsAssignableFrom(t));

            foreach(var type in importerTypes)
                RegisterImporter((AssetImporter)Activator.CreateInstance(type));

            // scan all files in assets directory
            foreach(var f in Directory.EnumerateFiles("Assets", "*", SearchOption.AllDirectories))
            {
                string file = Path.GetFileName(f);
                if(!ignorableFiles.Contains(file))
                    availableFiles.Add(file);
            }
        }

        public static void RegisterImporter(AssetImporter importer)
        {
            foreach(var e in importer.FileExtensions)
                importers.Add(e, importer);
        }
    }
}

We can import files either by file name or by asset name. For files, the asset name is often just the file name without the extension. Importing by asset name involves searching until we find a file with the same name whose extension matches the requested asset type. The ImportFile() method checks whether we've already imported the asset and calls the relevant AssetImporter.Import() method.

private readonly static List<string> importedFiles = new List<string>();

public static void ImportFile<T>(string file)
{            
    if(importedFiles.Contains(file))
        return;
    importedFiles.Add(file);

    string extension = Path.GetExtension(file);
    if(!importers.ContainsKey(extension))
    {
        Console.WriteLine($"Asset unsupported format ({extension}): {file}");
        return;
    }

    string path = Path.Combine("Assets", importers[extension].AssetType.Name, file);

    importers[extension].Import(path);
    Console.WriteLine($"{typeof(T).Name} imported: {file}");
}

public static void ImportByName<T>(string name)
{
    string extension;

    if(!TryGetExtension<T>(name, out extension))
        return;

    ImportFile<T>(name + extension);
}

private static bool TryGetExtension<T>(string name, out string extension)
{
    foreach(var i in importers.Where(x => x.Value.AssetType == typeof(T)))
    {
        if(availableFiles.Exists(x => name + i.Key == x))
        {
            extension = i.Key;
            return true;
        }
    }

    extension = "";
    return false;
}

We can either choose to import all available assets or have the system load them as they are needed.

public static void ImportAll()
{
    ImportAll<VertexShader>();
    ImportAll<FragmentShader>();
    ImportAll<Texture>();
    ImportAll<Material>();
    ImportAll<Model>();
    ImportAll<Animation>();
}

public static void ImportAll<T>() where T : Asset
{
    foreach(var r in importers.Where(x => x.Value.AssetType == typeof(T)).Select(x => x.Key))
    {
        foreach(var f in availableFiles)
        {
            if(f.EndsWith(r, StringComparison.InvariantCulture))
                ImportFile<T>(f);
        }
    }
}

The importers or the user can register any instance of an Asset. This isn't done automatically.

private readonly static List<Asset> assets = new List<Asset>();
  
public static void Register(Asset asset)
{
    if(!assets.Contains(asset))
        assets.Add(asset);
}

We can retrieve an asset by name or by file. If the asset cannot be found, it will try to import it. You can also retrieve all assets of a certain type.

public static T Retrieve<T>(string name) where T : Asset
{
    var asset = (T)assets.Find(x => x is T && x.Name == name);

    if(asset == null)
    {
        ImportByName<T>(name);
        asset = (T)assets.Find(x => x is T && x.Name == name);
    }

    return asset;
}

public static T RetrieveFile<T>(string file) where T : Asset
{
    var asset = (T)assets.Find(x => x is T && x.File == file);

    if(asset == null)
    {
        ImportFile<T>(file);
        asset = (T)assets.Find(x => x is T && x.File == file);
    }

    return asset;
}

public static IEnumerable<T> RetrieveAll<T>() where T : Asset
{
    return assets.Where(x => x is T).Select(x => (T)x);
}
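Tying it all together, typical usage might look like this (the asset names are hypothetical):

// import everything up front...
Assets.ImportAll();

// ...or let assets be imported lazily on first retrieval
var texture = Assets.Retrieve<Texture>("crate");
var model = Assets.RetrieveFile<Model>("Crate.obj");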

Textures

At this point, it's best to think of textures as just regular 2D images. There are other types, such as 3D textures, but we're not going to deal with them here. The most obvious use is to simply wrap one around our 3D model so each triangle isn't just one solid colour. Textures also have various other uses, mostly related to lighting and adding extra detail to our models.

We're going to use the Bitmap class to load images and access the raw image data. One thing to note is that this lives in the System.Drawing assembly which, as far as I'm aware, is not implemented in .NET Core. To retrieve the raw image data, we need to lock it with Bitmap.LockBits() and then call Bitmap.UnlockBits() when we're done.

This is also our first bit of OpenGL code. The OpenGL code in OpenTK is basically a wrapper around the standard C functions (I believe most of it is actually generated automatically). We need to take our image and give it to OpenGL to store in the video card memory. OpenGL gives us a handle, which is just an identification number for the texture. Whenever we need to reference the texture, we use that number.

The process has a number of steps:

  1. GL.GenTexture(): Request a new texture handle.
  2. GL.BindTexture(): Make it the globally active texture handle.
  3. GL.TexParameter(): Tell OpenGL how to scale the texture when it needs a smaller or larger version. We'll just set it to scale linearly.
  4. GL.TexImage2D(): This is the handover of the actual raw image data to OpenGL to store in the graphics card memory.
  5. GL.GenerateMipmap(): Tell OpenGL to generate its own mipmaps. These are scaled-down versions of the texture to use when it only needs to render a small version. Having them pre-generated increases rendering speed (and helps prevent some artefacts). This has to happen after the image data has been uploaded.

Finally, there is a Bind() method that activates this texture. We want to keep the handle to the OpenGL texture private (the id field) so that all texture operations have to go through public methods of the class. If we ever decide to swap out OpenGL for something else, it's good to have all of our texture-related code in the Texture class.

using System.Drawing;
using System.Drawing.Imaging;
using OpenTK;
using OpenTK.Graphics.OpenGL4;

namespace Graphics
{
    public class Texture : Asset
    {
        public string Name { get; private set; }
        public string File { get; private set; }

        private int id = -1;

        public Texture(string name, string file, Bitmap bitmap)
        {
            Name = name;
            File = file;
            id = GL.GenTexture();

            GL.BindTexture(TextureTarget.Texture2D, id);

            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

            BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, 
                                              System.Drawing.Imaging.PixelFormat.Format32bppArgb);

            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
                          OpenTK.Graphics.OpenGL4.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);

            bitmap.UnlockBits(data);

            // mipmaps can only be generated after the image data has been uploaded
            GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
        }

        public void Bind()
        {
            GL.BindTexture(TextureTarget.Texture2D, this.id);
        }
    }
}

I've also put in a static Blank texture, which is a 1x1 perfectly white texture. This is mainly used due to the way I've set up the shader, which I'll explain in more detail later. We're basically going to use this when we want to draw using a fixed color instead of a texture – we need a white canvas to put our color on top of.

public static Texture Blank = new Texture("", "", Util.Bitmap.CreateBlank(1, 1, Vector4.One));

There is a small utility method that creates single color bitmaps.

using System;
using System.Drawing;
using OpenTK;

namespace Graphics.Util
{
    public static class Bitmap
    {
        public static System.Drawing.Bitmap CreateBlank(int width, int height, Vector4 color)
        {
            System.Drawing.Bitmap bmp = new System.Drawing.Bitmap(width, height);
            Color c = Color.FromArgb(Convert.ToInt32(color.W * 255), Convert.ToInt32(color.X * 255), Convert.ToInt32(color.Y * 255), Convert.ToInt32(color.Z * 255));

            // fill every pixel so the method honours the requested dimensions
            for(int x = 0; x < width; x++)
                for(int y = 0; y < height; y++)
                    bmp.SetPixel(x, y, c);

            return bmp;
        }
    }
}

Our importer for textures is short and sweet. It simply takes an image file, creates a Texture and registers it as an asset. We're banking on the Bitmap class to import the image. I've noticed it doesn't work on some formats – 16 bit PNGs with alpha for example.

using System;
using System.IO;
using System.Drawing;

namespace Graphics.Importers
{
    public class DefaultTextureImporter : AssetImporter
    {
        public Type AssetType { get; } = typeof(Texture);
        public string[] FileExtensions { get; } = new string[] { ".bmp", ".exif", ".tiff", ".png", ".gif", ".jpg", ".jpeg" };

        public void Import(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            string file = Path.GetFileName(path);

            Assets.Register(new Texture(name, file, new Bitmap(path)));
        }
    }
}

Materials

Materials describe the surface of a model or mesh. This is a huge area: there are a lot of options to tweak for simulating how light interacts with the model, and ways to add extra detail without adding any extra polygons. Initially, however, we're just going to focus on the "diffuse" texture, which is the standard one that wraps around our mesh.

Our Material class is just a data structure for storing all of these light and color related attributes. We'll also define a Blank material that only has our Texture.Blank assigned as the DiffuseTexture.

using OpenTK;

namespace Graphics
{    
    public class Material : Asset
    {
        public string Name { get; private set; }
        public string File { get; private set; }

        public Vector4 AmbientColor;
        public Vector4 DiffuseColor;
        public Vector4 SpecularColor;
        public Vector4 EmissionColor;

        public float Alpha;
        public float Shininess;

        public int IlluminationMode;

        public Texture AmbientTexture;
        public Texture DiffuseTexture;
        public Texture SpecularTexture;
        public Texture AlphaTexture;
        public Texture BumpTexture;
        public Texture NormalTexture;
        public Texture HeightTexture;

        public Material(string name, string file)
        {
            Name = name;
            File = file;
        }

        public static Material Blank = new Material("", "") { DiffuseTexture = Texture.Blank };
    }
}

Wavefront Importer (.mtl)

The only format we need to support for importing materials is the Wavefront Material Library (.mtl) file format. It can actually store several materials in each file. It's an open text format so we can just open it up in a text editor to see what's in there.

# Blender MTL File: 'Crate.blend'
# Material Count: 1

newmtl WoodenCrate
Ns 96.078431
Ka 1.000000 1.000000 1.000000
Kd 0.640000 0.640000 0.640000
Ks 0.500000 0.500000 0.500000
Ke 0.000000 0.000000 0.000000
Ni 1.000000
d 1.000000
illum 2
map_Kd crate.png

It's quite simple – it defines each new material followed by each property of that material. Each property can have various arguments – a value, a color or a texture file. This isn't a full implementation by any means and any unknown property flags are simply ignored.

using System;
using System.IO;
using System.Linq;
using OpenTK;

namespace Graphics.Importers.Wavefront
{    
    public class WavefrontMaterialImporter : AssetImporter
    {
        private const string CommentFlag = "#";
        private const string AmbientColorFlag = "Ka";
        private const string DiffuseColorFlag = "Kd";
        private const string SpecularColorFlag = "Ks";
        private const string AlphaFlag = "d";
        private const string InverseAlphaFlag = "Tr";
        private readonly string[] ShininessFlags = { "Ns", "Ni" };
        private const string IlluminationModeFlag = "illum";
        private const string AmbientTexture = "map_Ka";
        private const string DiffuseTexture = "map_Kd";
        private const string SpecularTexture = "map_Ks";
        private const string AlphaTexture = "map_d";
        private const string NewMaterialFlag = "newmtl";
        private readonly string[] BumpTextureFlags = { "map_bump", "bump" };

        public Type AssetType { get; } = typeof(Material);
        public string[] FileExtensions { get; } = new string[] { ".mtl" };

        public void Import(string path)
        {
            string file = Path.GetFileNameWithoutExtension(path);

            Material currentMaterial = null;

            using(StreamReader reader = new StreamReader(path))
            {
                while(reader.Peek() >= 0)
                {
                    string line = reader.ReadLine().TrimStart();

                    if(line.StartsWith(CommentFlag, StringComparison.InvariantCulture))
                        continue;

                    // split on whitespace
                    string[] parts = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);

                    if(parts.Length < 1)
                        continue;

                    if(parts[0] == NewMaterialFlag)
                    {
                        currentMaterial = new Material(parts[1], file);
                        Assets.Register(currentMaterial);
                    }
                    else if(parts[0] == AmbientColorFlag)
                        currentMaterial.AmbientColor = new Vector4(Convert.ToSingle(parts[1]), Convert.ToSingle(parts[2]), Convert.ToSingle(parts[3]), 1.0F);
                    else if(parts[0] == DiffuseColorFlag)
                        currentMaterial.DiffuseColor = new Vector4(Convert.ToSingle(parts[1]), Convert.ToSingle(parts[2]), Convert.ToSingle(parts[3]), 1.0F);
                    else if(parts[0] == SpecularColorFlag)
                        currentMaterial.SpecularColor = new Vector4(Convert.ToSingle(parts[1]), Convert.ToSingle(parts[2]), Convert.ToSingle(parts[3]), 1.0F);
                    else if(parts[0] == AlphaFlag)
                        currentMaterial.Alpha = Convert.ToSingle(parts[1]);
                    else if(parts[0] == InverseAlphaFlag)
                        currentMaterial.Alpha = 1.0f - Convert.ToSingle(parts[1]);
                    else if(ShininessFlags.Contains(parts[0]))
                        currentMaterial.Shininess = Convert.ToSingle(parts[1]);
                    else if(parts[0] == IlluminationModeFlag)
                        currentMaterial.IlluminationMode = Convert.ToInt32(parts[1]);
                    else if(parts[0] == AmbientTexture)
                        currentMaterial.AmbientTexture = Assets.RetrieveFile<Texture>(parts[1]);
                    else if(parts[0] == DiffuseTexture)
                        currentMaterial.DiffuseTexture = Assets.RetrieveFile<Texture>(parts[1]);
                    else if(parts[0] == SpecularTexture)
                        currentMaterial.SpecularTexture = Assets.RetrieveFile<Texture>(parts[1]);
                    else if(parts[0] == AlphaTexture)
                        currentMaterial.AlphaTexture = Assets.RetrieveFile<Texture>(parts[1]);
                    else if(BumpTextureFlags.Contains(parts[0]))
                        currentMaterial.BumpTexture = Assets.RetrieveFile<Texture>(parts[1]);
                }
            }
        }
    }
}

Shaders

Shaders are special programs that run on the GPU. They do most of the heavy lifting. There are various languages you can write shaders in, but OpenGL has its own called the OpenGL Shading Language, or GLSL.

GLSL lets the user customise various parts of the graphics pipeline by writing a shader for each. The two main ones, and the two that must be defined by the user, are:

  • Vertex shader: converts our 3D points into 2D points on the screen.
  • Fragment shader: decides which color to display for each pixel on screen.

Like many compiled languages, we need to compile our two GLSL shader files and then link them together to create a GLSL shader program.

The language itself is C-like and it's not that difficult to get a basic version working. Since it's a pipeline, the output of one shader is usually the input of the next. Each shader has a main() function as its entry point. There are different types of variables:

  • uniform: a variable that is the same for the entire rendering operation.
  • in: we divide our incoming vertex into its components (i.e. position, normal, texture coordinates) using these variables. They are also known as attributes.
  • out: we need to specify which data, if any, needs to be passed on to the next shader.

Vertex Shader

So, as mentioned before, the vertex shader takes the 3D position of each vertex and transforms it using our projection, view and model matrices to get its 2D position on the screen. Our transformation matrices are defined as uniform because they are the same for every vertex in the rendering operation. We currently only have position and texture coordinate information stored in each vertex, so we define these as in inputs. The texture coordinates need to be passed on to the fragment shader, so we copy them to our out variable.

You can see our position comes in as a vec3 but we have to add a fourth element (1) before multiplying it by the matrices.

#version 150 core

uniform mat4 Projection;
uniform mat4 View;
uniform mat4 Model;

in vec3 Position;
in vec2 TextureCoordinates;

out vec2 TextureCoordinates0;

void main()
{		
    gl_Position = Projection * View * Model * vec4(Position, 1.0);   
    TextureCoordinates0 = TextureCoordinates;
}

Fragment Shader

The fragment shader decides what color to render at each pixel. For this shader, we have two different uniform variables: one defines a color to use and the other holds the current texture handle. We use the texture coordinates coming in from the vertex shader and the special texture function to retrieve the texture's color at the fragment position. We multiply the color taken from the texture by our uniform color. This lets us tint the texture; if the texture is blank white, the result is simply the input color.

#version 150 core

uniform vec4 Color;
uniform sampler2D TextureId;

in vec2 TextureCoordinates0;

out vec4 Color0;

void main(void)
{
	Color0 = texture(TextureId, TextureCoordinates0) * Color;
}

Shader

The Shader class is quite straightforward. It is a bit weird to compile another language from within our C# application.

  1. GL.CreateShader(): create a new shader.
  2. GL.ShaderSource(): set the source code of the shader.
  3. GL.CompileShader(): compile the shader.
  4. GL.GetShader(): can be used to check whether the shader compilation was successful.

We also define an AttachToProgram() method that links the shader to a shader program.

using System;
using OpenTK.Graphics.OpenGL4;

namespace Graphics
{
    public class Shader : Asset
    {
        public string Name { get; private set; }
        public string File { get; private set; }

        private int id = -1;

        public Shader(string name, string file, ShaderType type, string code)
        {
            Name = name;
            File = file;
            id = GL.CreateShader(type);

            GL.ShaderSource(id, code);
            GL.CompileShader(id);

            int status;
            GL.GetShader(id, ShaderParameter.CompileStatus, out status);
            if(status != 1)
                throw new Exception(type + " compilation failed:\n" + GL.GetShaderInfoLog(id));
        }

        public void AttachToProgram(int programId)
        {
            GL.AttachShader(programId, id);
        }    
    }
}

We can define classes for VertexShader and FragmentShader that inherit from this base class.

public class VertexShader : Shader
{
    public VertexShader(string name, string file, string code)
        : base(name, file, ShaderType.VertexShader, code)
    { }
}

public class FragmentShader : Shader
{
    public FragmentShader(string name, string file, string code)
        : base(name, file, ShaderType.FragmentShader, code)
    { }
}

Importers (.vs, .fs)

For the time being, we have to define two separate, quite similar importers. This is a limitation of only being able to define one AssetType per importer. They simply create a new instance of the relevant shader type and register it. We have the DefaultVertexShaderImporter:

using System;
using System.IO;

namespace Graphics.Importers
{
    public class DefaultVertexShaderImporter : AssetImporter
    {
        public Type AssetType { get; } = typeof(VertexShader);
        public string[] FileExtensions { get; } = new string[] { ".vs" };

        public void Import(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            string file = Path.GetFileName(path);

            Assets.Register(new VertexShader(name, file, File.ReadAllText(path)));
        }
    }
}

And the DefaultFragmentShaderImporter:

using System;
using System.IO;

namespace Graphics.Importers
{    
    public class DefaultFragmentShaderImporter : AssetImporter
    {
        public Type AssetType { get; } = typeof(FragmentShader);
        public string[] FileExtensions { get; } = new string[] { ".fs" };

        public void Import(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            string file = Path.GetFileName(path);

            Assets.Register(new FragmentShader(name, file, File.ReadAllText(path)));
        }
    }
}

Shader Program

The shader program needs to take two shaders and link them together to form a working program. Note that each shader is a separate unit and different programs can be built using different combinations of the same shaders.

  1. GL.CreateProgram(): create a new shader program.
  2. GL.AttachShader(): adds the shader to the shader program. This is wrapped in the Shader.AttachToProgram() method.
  3. GL.BindFragDataLocation(): Tell OpenGL the name of the variable that holds the final color in the fragment shader.
  4. GL.LinkProgram(): Link the shaders to form the shader program.
  5. GL.GetProgram(): Can be used to check whether the linking was successful.

We also define an Activate() method that makes the current shader program the globally active one.

using System;
using System.Collections.Generic;
using OpenTK;
using OpenTK.Graphics.OpenGL4;

namespace Graphics
{
    public class ShaderProgram : Asset
    {
        public string Name { get; private set; }
        public string File { get; } = String.Empty;

        private int id = -1;

        private Shader vertexShader;
        private Shader fragmentShader;

        private Dictionary<string, int> attributes = new Dictionary<string, int>();
        private Dictionary<string, int> uniforms = new Dictionary<string, int>();

        public ShaderProgram(string name, string vertex, string fragment)
        {
            Name = name;

            id = GL.CreateProgram();

            vertexShader = Assets.Retrieve<VertexShader>(vertex);
            fragmentShader = Assets.Retrieve<FragmentShader>(fragment);

            vertexShader.AttachToProgram(id);
            fragmentShader.AttachToProgram(id);
            GL.BindFragDataLocation(id, 0, "Color0");
            GL.LinkProgram(id);

            int status;
            GL.GetProgram(id, GetProgramParameterName.LinkStatus, out status);
            if(status != 1)
                throw new Exception("shader program link failed:\n" + GL.GetProgramInfoLog(id));
            
            AddUniform("Projection");
            AddUniform("View");
            AddUniform("Model");
            AddUniform("Color");

            SetUniform("Color", Vector4.One);
        }

        public void Activate()
        {
            GL.UseProgram(id);
        }
    }
}

When we're handing over the vertex data to OpenGL, we hand it over as one big float[]. We then have to tell OpenGL how the data is organised using attributes, which split it up and feed it into the in variables in the GLSL code.

  1. GL.GetAttribLocation(): Get a handle for the attribute based on the in variable name in the shader.
  2. GL.EnableVertexAttribArray(): Enable it.
  3. GL.VertexAttribPointer(): Explain where it fits into the float[] using a stride and offset. Each of our attributes is a float vector with size components. Each vertex is stride bytes long, and this particular attribute starts offset bytes from the start of the vertex.

We store any attributes we define and their handles in a Dictionary. We don't need them for anything currently but may need them in the future.

public void AddAttribute(string name, int size, int stride, int offset)
{
    int attributeId = GL.GetAttribLocation(id, name);
    GL.EnableVertexAttribArray(attributeId);
    GL.VertexAttribPointer(attributeId, size, VertexAttribPointerType.Float, false, stride, offset);

    attributes.Add(name, attributeId);
}

Next up are our uniform input variables.

  1. GL.GetUniformLocation(): Retrieve a handle for the named uniform variable in our GLSL code.
  2. GL.Uniform4(): Set the value of a vec4 variable from a Vector4. Note that this function takes a reference as its last argument and expects an array of floats to be there. Vector4 is a struct and has the [StructLayout(LayoutKind.Sequential)] attribute set, which tells C# to make sure the struct data is stored as it is defined – four floats in a row, exactly as an array would be stored.
  3. GL.UniformMatrix4(): Set the value of a mat4 or mat4[] variable from a Matrix4 or Matrix4[]. Similarly, this function takes a reference as its last argument and expects an array of floats to be there – a multiple of 16, set by the second argument. Matrix4 is also a struct with the [StructLayout(LayoutKind.Sequential)] attribute set. It is defined as four Vector4s in a row, which is the same as a 16-float array. An array of these structs ends up being identical to one large array of floats.

public void AddUniform(string name)
{
    uniforms.Add(name, GL.GetUniformLocation(id, name));
}

public void SetUniform(string name, Vector4 vector)
{
    Activate();
    GL.Uniform4(uniforms[name], ref vector);
}

public void SetUniform(string name, Matrix4 matrix)
{
    Activate();
    GL.UniformMatrix4(uniforms[name], false, ref matrix);
}

public void SetUniform(string name, Matrix4[] matrices)
{
    Activate();
    GL.UniformMatrix4(uniforms[name], matrices.Length, false, ref matrices[0].Row0.X);
}

Models

Our basic model is made up of a number of meshes, each of which is made up of a number of vertices as well as information on how those vertices are connected to form elements (triangles).

Vertex

The core Vertex class contains just three bits of information: position, texture coordinates and normal. We don't actually use the normal anywhere currently but it will definitely be needed if we add in any sort of lighting.

The FloatArray() method returns a float array of the position and texture coordinate values. I feel that this class should really be a struct with [StructLayout(LayoutKind.Sequential)] set so that this is unnecessary (and all the values are arranged as a float[] in memory anyway). However, different model types will need additional information in their vertices and I want them to be able to define new vertex classes that inherit from this one. For now it will do, but it isn't as clean as I would like.

using OpenTK;

namespace Graphics
{
    public class Vertex
    {
        public Vector3 Position = Vector3.Zero;
        public Vector3 Normal = Vector3.Zero;
        public Vector2 TextureCoordinates = Vector2.Zero;

        public virtual float[] FloatArray()
        {
            return new float[] {
                    Position.X, Position.Y, Position.Z,
                    TextureCoordinates.X, TextureCoordinates.Y
            };
        }
    }
}

Mesh

Each Mesh holds the actual vertex and element information as well as the name, Material and RenderMode. The RenderMode defines how the mesh is rendered. By default we use the diffuse texture but for debugging and other purposes it can be useful to see the mesh as a wireframe (RenderMode.Edge) or as a solid color (RenderMode.Face).

The vertices and elements in each mesh in the model get aggregated into two large arrays that are passed on to OpenGL to store in the graphics card memory. We store the offset and element count for each mesh so we can still select which meshes to render even after they have been combined into a single array of elements.

Once we've copied the vertex and element data to graphics memory, and if we know the mesh is not going to change, we could actually clear this data out of the program's memory if we wanted.

using System.Collections.Generic;

namespace Graphics
{
    public enum RenderMode
    {
        None,
        Point,
        Edge,
        Face,
        Texture
    }

    public class Mesh
    {
        public string Name { get; set; }
        public Material Material { get; set; }

        public List<Vertex> Vertices { get; set; }
        public List<int> Elements { get; set; }

        public int ElementOffset { get; set; }
        public int ElementCount { get; set; }

        public RenderMode RenderMode { get; set; } = RenderMode.Texture;
    }
}

Model

The Model class essentially serves two purposes: storing the Meshes and rendering them. Note that no state is stored in this class, as multiple objects in the world may be using the same Model.

using System;
using System.Linq;
using System.Collections.Generic;
using OpenTK;
using OpenTK.Graphics.OpenGL4;

namespace Graphics
{
    public class Model : Asset
    {
        public string Name { get; private set; }
        public string File { get; private set; }

        public List<Mesh> Meshes { get; private set; } = new List<Mesh>();

        protected ShaderProgram shaderProgram;

        static Model()
        {
            Assets.Register(new ShaderProgram("Default", "Default", "Default"));
        }

        public Model(string name, string file, List<Mesh> meshes, string shaderProgramName = "Default")
        {
            Name = name;
            File = file;
            Meshes = meshes;
            shaderProgram = Assets.Retrieve<ShaderProgram>(shaderProgramName);
        }
    }
}

Next, we need to copy our vertex and element information over to graphics memory. We're going to use vertex array objects (VAOs) – these save certain aspects of the OpenGL state, so when we want to render the model we just need to bind the vertex array object rather than re-binding various other buffers and attributes.

  • GL.GenVertexArray(): Create a new vertex array object.
  • GL.BindVertexArray(): Set the active vertex array object.

We also need to create one buffer for the vertices and one buffer for the elements. Before we do that, there is some code that aggregates all of the vertices and elements into a float[] and uint[] respectively. Because the element array in each mesh stores vertex indices starting from zero, we need to adjust them.

You'll notice a similar pattern now when we're creating the buffers: create, bind, set data.

  • GL.GenBuffer(): Create a new buffer.
  • GL.BindBuffer(): Set the active buffer. We use BufferTarget.ArrayBuffer for our vertex data and BufferTarget.ElementArrayBuffer for our element data.
  • GL.BufferData(): Set the data in the active buffer. Note we're using BufferUsageHint.StaticDraw to tell OpenGL that we're not going to be changing the data in this buffer. There are various options indicating how often you're going to be writing to or using the buffer.

private int vertexArrayObject = -1;
private int vertexBufferId = -1;
private int elementBufferId = -1;
  
public void Setup()
{
    vertexArrayObject = GL.GenVertexArray();
    GL.BindVertexArray(vertexArrayObject);

    SetBuffers();
    SetShaderAttributes();

    GL.BindVertexArray(0);
}

private void SetBuffers()
{
    float[] vertices = Meshes.SelectMany(x => x.Vertices).SelectMany(x => x.FloatArray()).ToArray();
    uint[] elements = Meshes.SelectMany(x => x.Elements).Select(x => Convert.ToUInt32(x)).ToArray();

    int vertexOffset = 0;
    int elementOffset = 0;
    foreach(var m in Meshes)
    {
        m.ElementOffset = elementOffset;
        m.ElementCount = m.Elements.Count;
        elementOffset += m.Elements.Count;

        for(int i = 0; i < m.Elements.Count; i++)
            elements[m.ElementOffset + i] += Convert.ToUInt32(vertexOffset);

        vertexOffset += m.Vertices.Count;
    }

    GL.BindVertexArray(vertexArrayObject);

    vertexBufferId = GL.GenBuffer();
    GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBufferId);
    GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(vertices.Length * sizeof(float)), vertices, BufferUsageHint.StaticDraw);

    elementBufferId = GL.GenBuffer();
    GL.BindBuffer(BufferTarget.ElementArrayBuffer, elementBufferId);
    GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(elements.Length * sizeof(uint)), elements, BufferUsageHint.StaticDraw);
}

We're using a virtual method to set the shader attributes because a different model might use a different shader with different attributes. We just set the position and texture coordinate attributes that match the in variables in our GLSL shader code.

protected virtual void SetShaderAttributes()
{
    int stride = 5 * sizeof(float);
    shaderProgram.AddAttribute("Position", 3, stride, 0);
    shaderProgram.AddAttribute("TextureCoordinates", 2, stride, 3 * sizeof(float));
}

The Render method isn't too bad. We set the projection, view and model matrices. Then we loop through each mesh in our model, binding the correct texture and rendering each. If the RenderMode is something other than RenderMode.Texture we need to handle that in here.

  • GL.PolygonMode(): Specify how to draw the given mesh – as points, wireframe or complete.
  • GL.DrawElements(): Render the elements.

private static Vector4 TextureColor = Vector4.One;
private static Vector4 PointColor = new Vector4(0.1f, 0.0f, 0.0f, 1.0f);
private static Vector4 EdgeColor = new Vector4(0.0f, 1.0f, 0.0f, 1.0f);
private static Vector4 FaceColor = new Vector4(0.6f, 0.6f, 1.0f, 1.0f);

public void Render(Matrix4 projection, Matrix4 view, Matrix4 model)
{            
    GL.BindVertexArray(vertexArrayObject);

    shaderProgram.SetUniform("Projection", projection);
    shaderProgram.SetUniform("View", view);
    shaderProgram.SetUniform("Model", model);

    foreach(var m in Meshes)
    {
        if(m.RenderMode == RenderMode.None)
            continue;
        
        if(m.RenderMode == RenderMode.Texture)
        {
            shaderProgram.SetUniform("Color", TextureColor);
            m.Material.DiffuseTexture.Bind();
            GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
        }
        else if(m.RenderMode == RenderMode.Face)
        {
            shaderProgram.SetUniform("Color", FaceColor);
            Texture.Blank.Bind();
            GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
        }
        else if(m.RenderMode == RenderMode.Edge)
        {
            shaderProgram.SetUniform("Color", EdgeColor);
            Texture.Blank.Bind();
            GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Line);
        }
        else if(m.RenderMode == RenderMode.Point)
        {                    
            shaderProgram.SetUniform("Color", PointColor);
            Texture.Blank.Bind();
            GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Point);
        }

        GL.DrawElements(BeginMode.Triangles, m.ElementCount, DrawElementsType.UnsignedInt, m.ElementOffset * sizeof(uint));
    }

    GL.BindVertexArray(0);
}

Wavefront Importer (.obj)

The Wavefront model file format is open and can be read in a text editor, like its material file cousin. Let's have a look. It's quite painless – it names which material library file to use and then defines an object (equivalent to a Mesh). It then lists the distinct positions, texture coordinates and normals. Finally, it forms faces, with each face vertex specifying a position, texture coordinate and normal by index.

There are two minor quirks. First, the indices start at 1 rather than the traditional 0. Second, a later object can reference the positions and normals defined in earlier objects; the index doesn't reset for each new object.

# Blender v2.77 (sub 0) OBJ File: 'Crate.blend'
# www.blender.org
mtllib Crate2.mtl
o Cube_Cube.001
v 25.000000 -25.000000 25.000000
v 25.000000 25.000000 25.000000
v -25.000000 -25.000000 25.000000
[...]
vt 1.0000 0.0000
vt 0.0000 1.0000
vt 0.0000 0.0000
[...]
vn 0.0000 0.0000 1.0000
vn -1.0000 0.0000 0.0000
vn -0.5605 0.0000 -0.8282
[...]
usemtl Material.001
s off
f 4/1/1 1/2/1 2/3/1
f 8/4/2 3/5/2 4/6/2
f 10/7/3 7/8/3 8/9/3
[...]

The WavefrontModelImporter.Import() method streams in each line from the file and parses it. It pushes positions, normals and texture coordinates into separate lists that can then be accessed by index when we're creating an element. We also import and retrieve the relevant Materials.

using System;
using System.IO;
using System.Linq;
using System.Collections.Generic;
using OpenTK;

namespace Graphics.Importers.Wavefront
{
    public class WavefrontModelImporter : AssetImporter
    {
        private const string CommentFlag = "#";
        private const string ObjectFlag = "o";
        private const string PositionFlag = "v";
        private const string TextureCoordinatesFlag = "vt";
        private const string NormalFlag = "vn";
        private const string FaceFlag = "f";
        private const char FacePartsFlag = '/';
        private const string ShadingModeFlag = "s";
        private const string MaterialLibraryFlag = "mtllib";
        private const string MaterialFlag = "usemtl";

        public Type AssetType { get; } = typeof(Model);
        public string[] FileExtensions { get; } = new string[] { ".obj" };

        public void Import(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            string file = Path.GetFileName(path);

            List<Mesh> meshes = new List<Mesh>();
            Mesh currentMesh = null;

            List<Vector3> positions = new List<Vector3>();
            List<Vector3> normals = new List<Vector3>();
            List<Vector2> textureCoordinates = new List<Vector2>();

            using(StreamReader reader = new StreamReader(path))
            {
                while(reader.Peek() >= 0)
                {
                    string line = reader.ReadLine().TrimStart();

                    if(line.StartsWith(CommentFlag, StringComparison.InvariantCulture))
                        continue;

                    // split on whitespace
                    string[] parts = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);

                    if(parts.Length < 1)
                        continue;

                    if(parts[0] == ObjectFlag)
                    {
                        currentMesh = new Mesh()
                        {
                            Name = parts[1],
                            Vertices = new List<Vertex>(),
                            Elements = new List<int>()
                        };
                        meshes.Add(currentMesh);
                    }
                    else if(parts[0] == PositionFlag)
                    {
                        positions.Add(new Vector3(Convert.ToSingle(parts[1]), Convert.ToSingle(parts[2]), Convert.ToSingle(parts[3])));
                    }
                    else if(parts[0] == TextureCoordinatesFlag)
                    {
                        textureCoordinates.Add(new Vector2(Convert.ToSingle(parts[1]), 1.0f - Convert.ToSingle(parts[2])));
                    }
                    else if(parts[0] == NormalFlag)
                    {
                        normals.Add(new Vector3(Convert.ToSingle(parts[1]), Convert.ToSingle(parts[2]), Convert.ToSingle(parts[3])));
                    }
                    else if(parts[0] == FaceFlag)
                    {
                        for(int i = 1; i < parts.Length; i++)
                        {
                            string[] subParts = parts[i].Split(FacePartsFlag);

                            currentMesh.Vertices.Add(new Vertex()
                            {
                                Position = positions[Convert.ToInt32(subParts[0]) - 1],
                                Normal = normals[Convert.ToInt32(subParts[2]) - 1],
                                TextureCoordinates = textureCoordinates[Convert.ToInt32(subParts[1]) - 1]
                            });
                        }
                    }
                    else if(parts[0] == ShadingModeFlag)
                    { }
                    else if(parts[0] == MaterialLibraryFlag)
                    {
                        Assets.ImportFile<Material>(parts[1]);
                    }
                    else if(parts[0] == MaterialFlag)
                    {
                        currentMesh.Material = Assets.Retrieve<Material>(parts[1]);
                    }
                }
            }

This format removes duplicates of the individual positions, normals and texture coordinates; however, we undo all that work because we store these three items of information together in each Vertex. To be fully efficient, we need to check for duplicate Vertex entries and remove them.

            foreach(var mesh in meshes)
            {
                mesh.Elements = Enumerable.Range(0, mesh.Vertices.Count).ToList();

                // remove duplicates
                for(int i = mesh.Vertices.Count - 1; i >= 0; i--)
                {
                    // check if this is the first instance of this vertex
                    var current = mesh.Vertices[i];
                    int firstIndex = mesh.Vertices.FindIndex(x => x.Position == current.Position && 
                                                                  x.Normal == current.Normal && 
                                                                  x.TextureCoordinates == current.TextureCoordinates);
                    if(i == firstIndex)
                        continue;

                    // remove duplicate vertex
                    mesh.Vertices.RemoveAt(i);
                    // remove references to duplicate
                    mesh.Elements = mesh.Elements.Select(x => x == i ? firstIndex : x).ToList();
                    // adjust references to indexes greater than the removed item
                    mesh.Elements = mesh.Elements.Select(x => (x > i ? x - 1 : x)).ToList();
                }
            }

Finally, we just create the model, call Setup() and register it.

            Model model = new Model(name, file, meshes);
            model.Setup();
            Assets.Register(model);
        }
    }
}
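
The FindIndex scan above makes the de-duplication O(n²) in the number of vertices, which can get slow for dense meshes. Here's a minimal sketch of a single-pass alternative keyed on the vertex data. It's not part of the importer above; it relies on OpenTK's vector structs providing value equality (which they do) and on the Vertex fields defined earlier.

// Sketch: O(n) de-duplication using a dictionary keyed on the vertex data.
var unique = new Dictionary<Tuple<Vector3, Vector3, Vector2>, int>();
var vertices = new List<Vertex>();
var elements = new List<int>();

foreach(var v in mesh.Vertices)
{
    var key = Tuple.Create(v.Position, v.Normal, v.TextureCoordinates);
    int index;
    if(!unique.TryGetValue(key, out index))
    {
        // first time we've seen this combination – keep it
        index = vertices.Count;
        unique.Add(key, index);
        vertices.Add(v);
    }
    elements.Add(index);
}

mesh.Vertices = vertices;
mesh.Elements = elements;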

Pose

The Pose class is just a base class for the moment since our current Model doesn't have multiple poses and isn't animated.

namespace Graphics
{
    public class Pose : Asset
    {
        public string Name { get; set; }
        public string File { get; set; }

        public virtual Pose Clone(string name)
        {
            return new Pose() { Name = name };
        }
    }
}

Animation

The base Animation class is quite bare bones. It includes properties for frame rate and frame count. The frames themselves are simply defined as a List<Pose>.

You can also extract an individual frame as a Pose, or a slice of the animation as a new Animation. This is useful if you want to place a series of poses or multiple animations into a single animation file and then extract them later.

using System;
using System.Linq;
using System.Collections.Generic;

namespace Graphics
{
    public class Animation : Asset
    {
        public string Name { get; private set; }
        public string File { get; private set; }

        public int FrameRate { get; set; }
        public int FrameCount { get; set; }

        public List<Pose> Frames { get; set; } = new List<Pose>();

        public Animation(string name, string file)
        {
            Name = name;
            File = file;
        }

        public Pose ExtractPose(string name, int frame)
        {
            Pose pose = Frames[frame].Clone(name);

            Assets.Register(pose);

            return pose;
        }

        public Animation ExtractAnimation(string name, int startFrame, int count)
        {
            Animation animation = new Animation(name, "")
            {
                FrameRate = FrameRate,
                FrameCount = count,
                Frames = Frames.Skip(startFrame).Take(count).Select((x, i) => x.Clone(name + "." + i)).ToList()
            };

            Assets.Register(animation);

            return animation;
        }
    }       
}
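
As a quick usage sketch, suppose a single imported animation asset holds several motions back to back (the asset name and frame numbers here are made up for illustration):

Animation all = Assets.Retrieve<Animation>("BobAnims");

// pull out a named slice and a single still pose
Animation looking = all.ExtractAnimation("Looking", startFrame: 0, count: 40);
Pose rest = all.ExtractPose("Rest", frame: 100);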

Finally, we can add some pose and animation handling code to the Model class. These methods mainly add poses or animations to the relevant lists or set the current pose or animation frame. They will need to be overridden in subclasses to do anything useful.

public List<Pose> Poses { get; private set; } = new List<Pose>();
public List<Animation> Animations { get; private set; } = new List<Animation>();

public virtual void AddPose(Pose pose)
{
    Poses.Add(pose);
}

public virtual void AddPose(string pose)
{
    AddPose(Assets.Retrieve<Pose>(pose));
}

public virtual void SetPose(string pose)
{
    if(pose != "Default")
        throw new NotImplementedException();
}

public virtual void AddAnimation(string animation)
{
    AddAnimation(Assets.Retrieve<Animation>(animation));
}

public virtual void AddAnimation(Animation animation)
{
    Animations.Add(animation);
}

public virtual void SetAnimationFrame(string animation, float frame)
{
    throw new NotImplementedException();
}

Skeletal Models

Skeletal models differ from regular models because they hold additional information. There are three new kinds of information to store.

  • Bone hierarchy: how the bones are connected to each other
  • Vertex weights: how each vertex is connected to one or more bones
  • Bone transformations: the position and rotation of each bone. These will be stored as Poses.

Bone

The SkeletalBone class represents a point on the skeleton. Bones are hierarchical so we need to store the parent bone too. We're going to store them, as well as the bone transformations, in a flat array rather than a tree, so each bone stores its own index as well. The origin bone has no parent.

namespace Graphics
{
    public class SkeletalBone
    {
        public int Index;
        public string Name;
        public SkeletalBone Parent;
    }
}

Weight

The SkeletalWeight class stores a position offset and a bias describing where the vertex sits relative to the bone. If a vertex is connected to only one bone, it is locked to it and moves in sync as the bone moves and rotates. If a vertex is connected to more than one bone, its position is a weighted average, with Bias as the weight for each bone.

using OpenTK;

namespace Graphics
{
    public class SkeletalWeight
    {
        public int BoneIndex;
        public float Bias;
        public Vector3 Position;
    }
}
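
To make the weighted average concrete, here's a minimal sketch of how a vertex position is reconstructed from its weights. The bonePositions and boneRotations arrays are stand-ins for a skeleton pose; the .md5mesh importer later performs this same calculation while loading.

// Sketch: a vertex position is the bias-weighted sum, over its weights,
// of (bone position + bone rotation applied to the weight's offset).
Vector3 SkinVertex(List<SkeletalWeight> weights, Vector3[] bonePositions, Quaternion[] boneRotations)
{
    Vector3 position = Vector3.Zero;
    foreach(var w in weights)
    {
        Vector3 offset = Vector3.Transform(w.Position, boneRotations[w.BoneIndex]);
        position += (bonePositions[w.BoneIndex] + offset) * w.Bias;
    }
    return position; // the biases are expected to sum to 1
}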

Vertex

Our SkeletalVertex class inherits from the standard Vertex class and expands it to include the list of bones this vertex is connected to. We also need to override the FloatArray() method to include the bone indices and biases.

SkeletalModel.MaximumWeightsPerVertex is a static variable that defines the maximum number of bones a vertex can be weighted against. In this application, it will be four.

using System;
using System.Collections.Generic;

namespace Graphics
{
    public class SkeletalVertex : Vertex
    {
        public List<SkeletalWeight> Weights;

        public override float[] FloatArray()
        {
            float[] ret = new float[5 + SkeletalModel.MaximumWeightsPerVertex * 2];
            ret[0] = Position.X;
            ret[1] = Position.Y;
            ret[2] = Position.Z;
            ret[3] = TextureCoordinates.X;
            ret[4] = TextureCoordinates.Y;

            for(int i = 0; i < Weights.Count; i++)
            {
                ret[5 + i] = Weights[i].BoneIndex;
                ret[5 + SkeletalModel.MaximumWeightsPerVertex + i] = Weights[i].Bias;
            }

            return ret;
        }
    }
}
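
With MaximumWeightsPerVertex set to four, each skeletal vertex therefore occupies 5 + 4 + 4 = 13 floats: three for position, two for texture coordinates, four bone indices and four biases. This layout has to line up with the stride and offsets passed to the shader in SetShaderAttributes() further down.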

Pose

Our SkeletalPose class simply stores a Matrix4[], with one transformation matrix for each bone in the model. Because the relationship between the vertices and the bones is already defined, we only need to transform the bones in order to animate the entire model. The class also includes some getter and setter methods and operators to make life a bit easier.

using System;
using OpenTK;

namespace Graphics
{
    public class SkeletalPose : Pose
    {
        private Matrix4[] boneTransformations;

        public Matrix4 this[int i]
        {
            get { return boneTransformations[i]; }
            set { boneTransformations[i] = value; }
        }

        public Matrix4[] MatrixArray { get { return boneTransformations; } }

        public SkeletalPose(int count)
        {
            boneTransformations = new Matrix4[count];
        }

        public SkeletalPose(int count, Matrix4 template)
        {
            boneTransformations = new Matrix4[count];
            for(int i = 0; i < count; i++)
                boneTransformations[i] = template;
        }

        public SkeletalPose(Matrix4[] matrices)
        {
            boneTransformations = matrices;
        }

        public Vector3 Position(int boneIndex)
        {
            return boneTransformations[boneIndex].ExtractTranslation();
        }

        public Quaternion Rotation(int boneIndex)
        {
            return boneTransformations[boneIndex].ExtractRotation();
        }

        public void Set(int boneIndex, Vector3 position, Quaternion rotation)
        {
            boneTransformations[boneIndex] = Matrix4.CreateFromQuaternion(rotation) * Matrix4.CreateTranslation(position);
        }

        public override Pose Clone(string name)
        {
            return new SkeletalPose((Matrix4[])this.boneTransformations.Clone()) { Name = name };
        }
    }
}

Skeleton

The skeleton holds the list of SkeletonBones. It also holds some important SkeletalPoses.

  • BindPose: This stores the position and rotation of each bone in the skeleton's default pose. It's really up to the modeller, but for human-like models this is usually standing upright with arms straight out. We will actually generate the vertices for the bind pose and store them in graphics card memory in this pose.
  • Identity: Because we've already set the vertices up in the BindPose, if we want to render the model in the bind pose, we just have to set each bone transformation as the identity matrix.
  • InverseBindPose: Made up of the inverse of each transformation matrix of the bind pose. Since we have sent the vertices to graphics card memory in the bind pose, whenever we want to change to a different pose we need to calculate position and rotation deltas, if you will. To get these, we multiply each pose matrix by the inverse of the corresponding bind pose matrix.

Handing over the vertices to OpenGL in the bind pose causes things to be a little messy, as you can see. I'd prefer to change it so that we just pass the bind pose transformations when we want the bind pose and different transformations when we want a different pose. Then we wouldn't need an Identity pose or InverseBindPose. I'm not 100% certain this is possible though; I need to do some more research.

using System.Collections.Generic;
using OpenTK;

namespace Graphics
{    
    public class Skeleton
    {
        public List<SkeletalBone> Bones { get; set; }
        public SkeletalPose Identity { get; set; }
        public SkeletalPose BindPose { get; set; }
        public SkeletalPose InverseBindPose { get; set; }

        public Skeleton(int boneCount)
        {
            Bones = new List<SkeletalBone>(boneCount);
            Identity = new SkeletalPose(boneCount, Matrix4.Identity);
            BindPose = new SkeletalPose(boneCount);
            InverseBindPose = new SkeletalPose(boneCount);
        }
    }
}
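
Concretely, the relationship between the poses is that the inverse bind pose undoes the bind pose, bone by bone. A small sketch (skeleton here is any populated Skeleton; the loop mirrors what the .md5mesh importer does later):

// For each bone: InverseBindPose[i] * BindPose[i] == Matrix4.Identity,
// so uploading the Identity pose renders the model in its bind pose.
for(int i = 0; i < skeleton.Bones.Count; i++)
    skeleton.InverseBindPose[i] = skeleton.BindPose[i].Inverted();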

Shader

We need a new VertexShader to handle our new model type. It is quite similar to the default one but now has a Bones matrix array and two new in attribute variables for bone index and weighting. The position of each vertex is calculated by taking the input position (which is the vertex's position in the bind pose) and then moving and rotating it by the difference between the bind pose and the new pose. If the vertex is connected to multiple bones, then this is done using a weighted average.

You can see we're limiting the number of bones in a model to 50 here as well.

#version 150 core

uniform mat4 Projection;
uniform mat4 View;
uniform mat4 Model;

uniform mat4 Bones[50];

in vec3 Position;
in vec2 TextureCoordinates;
in vec4 Index;
in vec4 Weight;

out vec2 TextureCoordinates0;

void main()
{
  vec4 newPosition = vec4(0.0);
  int index = 0;

  for(int i=0; i<4; i++)
  {		
    index = int(Index[i]);	
    newPosition += (Bones[index] * vec4(Position,1.0)) * Weight[i];		
  }

  gl_Position = Projection * View * Model * vec4(newPosition.xyz, 1.0);
  TextureCoordinates0 = TextureCoordinates;
}
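
One detail worth noting: the bone indices arrive in the shader as floats, because the whole vertex is uploaded as a single float array, hence the int(Index[i]) cast before indexing into Bones.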

Model

The SkeletalModel class has two constant values.

  • MaximumWeightsPerVertex: The maximum number of bones each vertex can be connected to.
  • MaximumBonesPerModel: The maximum number of bones each model can have.

We also need to store a Skeleton for the model, and we're going to use the Skeletal vertex shader we just created above.

using System;
using System.Collections.Generic;
using OpenTK;

namespace Graphics
{
    public class SkeletalModel : Model
    {
        public const int MaximumWeightsPerVertex = 4;
        public const int MaximumBonesPerModel = 50;

        public Skeleton Skeleton { get; private set; }

        static SkeletalModel()
        {
            Assets.Register(new ShaderProgram("Skeletal", "Skeletal", "Default"));
        }

        public SkeletalModel(string name, string file, Skeleton skeleton, List<Mesh> meshes) 
            : base(name, file, meshes, "Skeletal")
        {
            Skeleton = skeleton;
            AddPose(skeleton.BindPose.Clone("Default"));

            shaderProgram.AddUniform("Bones");

            Meshes.Add(Util.Mesh.GenerateSkeletonMesh(skeleton));
        }
    }
}

Now for a slight detour. We need a static method that generates a mesh so that we can visualise the model's skeleton. We're just going to create one triangle for the connection between each bone and its parent bone. The third vertex has a small offset so the triangle is visible.

using System;
using System.Collections.Generic;
using OpenTK;

namespace Graphics.Util
{
    public static class Mesh
    {
        public static Graphics.Mesh GenerateSkeletonMesh(Skeleton skeleton)
        {
            List<Vertex> verts = new List<Vertex>();
            List<int> els = new List<int>();

            for(int i = 0; i < skeleton.Bones.Count; i++)
            {
                int pi = skeleton.Bones[i].Parent?.Index ?? 0;

                verts.AddRange(new[] { CreateBoneVertex(skeleton, i), CreateBoneVertex(skeleton, i, 0.5f), CreateBoneVertex(skeleton, pi) });
                els.AddRange(new[] { i * 3, i * 3 + 1, i * 3 + 2 });
            }

            return new Graphics.Mesh()
            {
                Name = "Skeleton",
                Vertices = verts,
                Elements = els,
                RenderMode = RenderMode.None,
                Material = Material.Blank
            };
        }

        private static SkeletalVertex CreateBoneVertex(Skeleton skeleton, int boneIndex, float offset = 0.0f)
        {
            return new SkeletalVertex()
            {
                Position = skeleton.BindPose.Position(boneIndex) + (Vector3.One * offset),
                Weights = new List<SkeletalWeight>() { new SkeletalWeight() { Bias = 1.0f, BoneIndex = boneIndex, Position = Vector3.Zero } }
            };
        }
    }
}

We need to tell the shader there are going to be two extra in variables for each vertex.

protected override void SetShaderAttributes()
{
    int stride = (5 + (2 * MaximumWeightsPerVertex)) * sizeof(float);
    shaderProgram.AddAttribute("Position", 3, stride, 0);
    shaderProgram.AddAttribute("TextureCoordinates", 2, stride, 3 * sizeof(float));
    shaderProgram.AddAttribute("Index", MaximumWeightsPerVertex, stride, 5 * sizeof(float));
    shaderProgram.AddAttribute("Weight", MaximumWeightsPerVertex, stride, (5 + MaximumWeightsPerVertex) * sizeof(float));
}

Now for the pose and animation code. Whenever we add a new pose or animation, we're going to generate all of the transformation matrices and have them ready to pass on to the shader. Remember we need to multiply each by the inverse bind pose.

private Dictionary<string, List<SkeletalPose>> animationPoses = new Dictionary<string, List<SkeletalPose>>();
private Dictionary<string, SkeletalPose> posePoses = new Dictionary<string, SkeletalPose>();

public override void AddPose(Pose pose)
{
    Poses.Add(pose);

    // precompute the matrix for each joint for each animation frame 
    SkeletalPose p = pose as SkeletalPose;
    SkeletalPose calculatedPose = new SkeletalPose(p.MatrixArray.Length);

    // multiply each animation joint matrix by its relevant inverse bind pose joint matrix
    for(int i = 0; i < calculatedPose.MatrixArray.Length; i++)
        calculatedPose[i] = Skeleton.InverseBindPose[i] * p[i];

    posePoses.Add(pose.Name, calculatedPose);
}

public override void AddAnimation(Animation animation)
{            
    Animations.Add(animation);

    // precompute the matrix for each joint for each animation frame 
    List<SkeletalPose> poses = new List<SkeletalPose>();
    foreach(var a in animation.Frames)
    {
        SkeletalPose f = a as SkeletalPose;
        SkeletalPose pose = new SkeletalPose(f.MatrixArray.Length);

        // multiply each animation joint matrix by its relevant inverse bind pose joint matrix
        for(int i = 0; i < f.MatrixArray.Length; i++)
            pose[i] = Skeleton.InverseBindPose[i] * f[i];

        poses.Add(pose);
    }

    animationPoses.Add(animation.Name, poses);
}

Finally, we have the methods that set the Bones uniform variable (a matrix array) in the shader program. We have to do this just before we render each instance of the model (as each instance might be in a different pose or on a different animation frame). The current animation frame comes in as a float, so we have to interpolate between two frames if it's not an integer.

public override void SetPose(string pose)
{
    shaderProgram.SetUniform("Bones", posePoses[pose].MatrixArray);
}

public override void SetAnimationFrame(string animation, float frame)
{
    int prevFrame = Convert.ToInt32(Math.Floor(frame));
    int nextFrame = Convert.ToInt32(Math.Ceiling(frame));

    // we're sitting on an exact frame
    if(prevFrame == nextFrame)
    {
        shaderProgram.SetUniform("Bones", animationPoses[animation][nextFrame].MatrixArray);
        return;
    }

    if(nextFrame >= animationPoses[animation].Count)
        nextFrame = 0;

    // we need to interpolate between frames
    float blend = Convert.ToSingle(frame % 1.0);

    Matrix4[] prev = animationPoses[animation][prevFrame].MatrixArray;
    Matrix4[] next = animationPoses[animation][nextFrame].MatrixArray;

    Matrix4[] inter = Util.Math.InterpolateMatrix(prev, next, blend);

    shaderProgram.SetUniform("Bones", inter);
}
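
In practice, the fractional frame number is driven by a clock. A usage sketch (UpdateAnimation and the totalSeconds game clock are placeholders, not part of the classes above):

// Sketch: convert elapsed time into a looping, fractional frame number.
void UpdateAnimation(SkeletalModel model, Animation anim, float totalSeconds)
{
    float frame = (totalSeconds * anim.FrameRate) % anim.FrameCount;
    model.SetAnimationFrame(anim.Name, frame);
}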

Here is the code for interpolating two transformation matrices. We have to extract the position, scale and rotation and interpolate them separately. Simply interpolating the values in the matrix doesn't work.

using System;
using OpenTK;

namespace Graphics.Util
{
    public static class Math
    {
        public static Matrix4[] InterpolateMatrix(Matrix4[] prev, Matrix4[] next, float blend)
        {
            Matrix4[] result = new Matrix4[prev.Length];

            for(int i = 0; i < prev.Length; i++)
            {
                Vector3 positionInter = Vector3.Lerp(prev[i].ExtractTranslation(), next[i].ExtractTranslation(), blend);
                Vector3 scaleInter = Vector3.Lerp(prev[i].ExtractScale(), next[i].ExtractScale(), blend);
                Quaternion rotationInter = Quaternion.Slerp(prev[i].ExtractRotation(), next[i].ExtractRotation(), blend);

                // OpenTK uses row vectors, so transforms compose left to right: scale, then rotate, then translate
                result[i] = Matrix4.CreateScale(scaleInter) * Matrix4.CreateFromQuaternion(rotationInter) * Matrix4.CreateTranslation(positionInter);
            }
            return result;
        }
    }
}

Id Tech 4 Model Importer (.md5mesh)

This is the model format used by the id Tech 4 engine, which powered Doom 3 and Quake 4 among others.

It's a text file so we can open it up. It's quite large so I've removed chunks that are similar.

  • numJoints: number of bones in the model (called joints in this format)
  • numMeshes: number of meshes
  • joints: a list of bones, one per line, with the bone name, parent index, a position vector and rotation vector. The position and rotation here are the bind pose of the model.
  • mesh: denotes the start of a mesh
  • shader: which diffuse texture to use for this mesh.
  • numverts: number of vertices in this mesh.
  • vert: vertex index, texture coordinates, weight start index, weight count
  • numtris: number of triangles in this mesh.
  • tri: triangle index, 3 vertex indices
  • numweights: number of weights in this mesh.
  • weight: weight index, bone index, bias, position offset vector
MD5Version 10 // Parameters used during export: Reorient: False; Scale: 1.0
commandline ""

numJoints 33
numMeshes 6

joints {
  "origin" -1 ( 0.0000000000 -0.0060440009 -0.0164299998 ) ( -0.5000001192 -0.4999998510 -0.4999998510 )
  "sheath" 0 ( 11.0048131943 31.7024726868 3.1771361828 ) ( 0.6890682578 -0.1586981714 0.6595855951 )
  [...]
}

mesh {
  shader "BobBody.png"
  numverts 140
  vert 0 ( 0.1621090025 0.5507810116 ) 0 1
  vert 1 ( 0.1777340025 0.5683589876 ) 1 1
  vert 2 ( 0.1621090025 0.5683589876 ) 2 1
  vert 3 ( 0.1777340025 0.5507810116 ) 3 1
  [...]
  numtris 106
  tri 0 1 2 0
  tri 1 3 1 0
  tri 2 5 6 4
  [...]
  numweights 140
  weight 0 2 1.0000000000 ( -3.3587818146 11.0518865585 0.5483892560 )
  weight 1 2 1.0000000000 ( -3.7815830708 10.4317817688 0.2539222538 )
  weight 2 2 1.0000000000 ( -3.3194971085 10.4438304901 0.7197070718 )
  [...]
}

mesh {
  [...]
}

[...]

This isn't too difficult to load into our C# data structures. Loading in the skeleton is straightforward, and so are the mesh elements. We build a lookup from weight index to vertex, using each vertex's weight start index and count, so we can easily figure out which weight belongs to which vertex. We calculate the position of each vertex in the bind pose as the weights are processed.

using System;
using System.IO;
using System.Linq;
using System.Collections.Generic;
using OpenTK;

namespace Graphics.Importers.IdTech4
{
    public class IdTech4ModelImporter : AssetImporter
    {
        private const string VersionFlag = "MD5Version";
        private const string MeshCountFlag = "numMeshes";
        private const string BoneCountFlag = "numJoints";
        private const string BonesStartFlag = "joints";
        private const string BonesEndFlag = "}";
        private const string MeshStartFlag = "mesh";
        private const string MeshEndFlag = "}";
        private const string VertexCountFlag = "numverts";
        private const string TriangleCountFlag = "numtris";
        private const string WeightCountFlag = "numweights";
        private const string TriangleFlag = "tri";
        private const string WeightFlag = "weight";
        private const string ShaderFlag = "shader";
        private const string VertexFlag = "vert";
        private const string DiffuseTextureSuffix = "_d";
        private const string SpecularTextureSuffix = "_s";
        private const string NormalTextureSuffix = "_local";
        private const string HeightTextureSuffix = "_h";

        public Type AssetType { get; } = typeof(Model);
        public string[] FileExtensions { get; } = new string[] { ".md5mesh" };

        public void Import(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            string file = Path.GetFileName(path);

            // skeleton
            int expectedBoneCount = 0;
            bool collectingBones = false;
            Skeleton skeleton = null;
            int boneIndex = 0;

            // meshes
            int expectedMeshCount = 0;
            bool collectingMesh = false;
            List<Mesh> meshes = null;

            // mesh
            Mesh currentMesh = null;
            List<SkeletalVertex> weightToVertex = new List<SkeletalVertex>();
            int expectedVertexCount = 0;
            int expectedFaceCount = 0;
            int expectedWeightCount = 0;

            using(StreamReader reader = new StreamReader(path))
            {
                while(reader.Peek() >= 0)
                {
                    string line = reader.ReadLine().TrimStart();

                    // split on whitespace
                    string[] parts = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);

                    if(parts.Length < 1)
                        continue;

                    if(parts[0] == VersionFlag && parts[1] != "10")
                    {
                        throw new NotSupportedException("only id Tech 4 (.md5mesh) version 10 supported");
                    }

                    if(parts[0] == BoneCountFlag)
                    {
                        expectedBoneCount = Convert.ToInt32(parts[1]);
                        skeleton = new Skeleton(expectedBoneCount);

                        if(expectedBoneCount > SkeletalModel.MaximumBonesPerModel)
                            throw new NotSupportedException($"only id Tech 4 (.md5mesh/.md5anim) models with less than {SkeletalModel.MaximumBonesPerModel + 1} are supported");
                    }
                    else if(parts[0] == MeshCountFlag)
                    {
                        expectedMeshCount = Convert.ToInt32(parts[1]);
                        meshes = new List<Mesh>(expectedMeshCount);
                    }
                    // bones
                    else if(parts[0] == BonesStartFlag)
                    {
                        collectingBones = true;
                    }
                    else if(collectingBones && parts[0] == BonesEndFlag)
                    {
                        collectingBones = false;

                        if(skeleton.Bones.Count != expectedBoneCount)
                            throw new FormatException("incorrect number of bones/joints");                        
                    }
                    else if(collectingBones)
                    {
                        // "name" parent ( px py pz ) ( rx ry rz )
                        int parentBoneIndex = Convert.ToInt32(parts[1]);
                        skeleton.Bones.Add(new SkeletalBone()
                        {
                            Index = boneIndex,
                            Name = parts[0].Replace("\"",""),
                            Parent = parentBoneIndex < 0 ? null : skeleton.Bones[parentBoneIndex]
                        });

                        var position = new Vector3(Convert.ToSingle(parts[3]), Convert.ToSingle(parts[4]), Convert.ToSingle(parts[5]));
                        var rotation = Util.Math.ComputeW(new Quaternion(Convert.ToSingle(parts[8]), Convert.ToSingle(parts[9]), Convert.ToSingle(parts[10]), 0.0f));

                        skeleton.BindPose.Set(boneIndex, position, rotation);
                        skeleton.InverseBindPose[boneIndex] = skeleton.BindPose[boneIndex].Inverted();
                        boneIndex++;
                    }
                    // mesh
                    else if(parts[0] == MeshStartFlag)
                    {
                        collectingMesh = true;

                        currentMesh = new Mesh();
                        meshes.Add(currentMesh);

                        weightToVertex.Clear();
                    }

                    else if(collectingMesh && parts[0] == MeshEndFlag)
                    {
                        collectingMesh = false;

                        if(expectedVertexCount != currentMesh.Vertices.Count)
                            throw new FormatException("incorrect number of vertices for mesh '" + currentMesh.Name + "', expected=" + expectedVertexCount + ", actual=" + currentMesh.Vertices.Count);
                        if(expectedFaceCount * 3 != currentMesh.Elements.Count)
                            throw new FormatException("incorrect number of faces for mesh '" + currentMesh.Name + "', expected=" + expectedFaceCount + ", actual=" + currentMesh.Elements.Count);
                        if(expectedWeightCount != currentMesh.Vertices.Sum(x => ((SkeletalVertex)x).Weights.Count))
                            throw new FormatException("incorrect number of weights for mesh '" + currentMesh.Name + "', expected=" + expectedWeightCount + ", actual=" + currentMesh.Vertices.Sum(x => ((SkeletalVertex)x).Weights.Count));
                    }
                    else if(parts[0] == VertexCountFlag)
                    {
                        expectedVertexCount = Convert.ToInt32(parts[1]);
                        currentMesh.Vertices = new List<Vertex>(expectedVertexCount);
                    }
                    else if(parts[0] == TriangleCountFlag)
                    {
                        expectedFaceCount = Convert.ToInt32(parts[1]);
                        currentMesh.Elements = new List<int>(expectedFaceCount * 3);
                    }
                    else if(parts[0] == WeightCountFlag)
                    {
                        expectedWeightCount = Convert.ToInt32(parts[1]);
                    }
                    else if(parts[0] == VertexFlag)
                    {
                        // vert index ( u v ) startWeight weightCount
                        SkeletalVertex v = new SkeletalVertex()
                        {
                            TextureCoordinates = new Vector2(Convert.ToSingle(parts[3]), Convert.ToSingle(parts[4])),
                            Weights = new List<SkeletalWeight>()
                        };
                        currentMesh.Vertices.Add(v);

                        int weightCount = Convert.ToInt32(parts[7]);
                        for(int i = 0; i < weightCount; i++)
                            weightToVertex.Add(v);
                    }
                    else if(parts[0] == TriangleFlag)
                    {
                        // tri index v0 v1 v2
                        currentMesh.Elements.AddRange(new[] { Convert.ToInt32(parts[2]), Convert.ToInt32(parts[3]), Convert.ToInt32(parts[4]) });
                    }
                    else if(parts[0] == WeightFlag)
                    {
                        // weight index joint bias ( x y z )
                        SkeletalWeight w = new SkeletalWeight()
                        {
                            BoneIndex = Convert.ToInt32(parts[2]),
                            Bias = Convert.ToSingle(parts[3]),
                            Position = new Vector3(Convert.ToSingle(parts[5]), Convert.ToSingle(parts[6]), Convert.ToSingle(parts[7]))
                        };

                        int id = Convert.ToInt32(parts[1]);

                        SkeletalVertex v = weightToVertex[id];
                        v.Weights.Add(w);

                        var rotpos = Vector3.Transform(w.Position, skeleton.BindPose.Rotation(w.BoneIndex));
                        v.Position += (skeleton.BindPose.Position(w.BoneIndex) + rotpos) * w.Bias;
                    }
                    else if(parts[0] == ShaderFlag)
                    {
                        // shader "file"
                        string materialFile = parts[1].Replace("\"", "");
                        bool isDiffuseOnly = Path.GetExtension(materialFile) != String.Empty;
                        currentMesh.Name = Path.GetFileNameWithoutExtension(materialFile);
                        currentMesh.Material = new Material(currentMesh.Name, "");

                        if(isDiffuseOnly)
                        {
                            currentMesh.Material.DiffuseTexture = Assets.RetrieveFile<Texture>(materialFile);
                        }
                        else
                        {   
                            currentMesh.Material.DiffuseTexture = Assets.RetrieveFile<Texture>(materialFile + DiffuseTextureSuffix + ".png");
                            currentMesh.Material.SpecularTexture = Assets.RetrieveFile<Texture>(materialFile + SpecularTextureSuffix + ".png");
                            currentMesh.Material.NormalTexture = Assets.RetrieveFile<Texture>(materialFile + NormalTextureSuffix + ".png");
                            currentMesh.Material.HeightTexture = Assets.RetrieveFile<Texture>(materialFile + HeightTextureSuffix + ".png");
                        }
                    }
                }
            }

            if(meshes.Count != expectedMeshCount)
                throw new FormatException("incorrect number of meshes");

            Model model = new SkeletalModel(name, file, skeleton, meshes);
            model.Setup();
            Assets.Register(model);
        }
    }
}

One final thing is that rotations are stored as the x, y and z components of a quaternion. We need to calculate the w component ourselves. We can add a helper function to our Util.Math static class.

public static Quaternion ComputeW(Quaternion q)
{
    float t = 1.0f - (q.X * q.X) - (q.Y * q.Y) - (q.Z * q.Z);
    float w = 0.0f;
    if(t >= 0.0f)
        w = -Convert.ToSingle(System.Math.Sqrt(t));

    return new Quaternion(q.Xyz, w);
}
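
Why the negative root? A unit quaternion and its negation represent the same rotation, so the exporter is free to flip the sign of the whole quaternion before writing it out; the id Tech 4 convention appears to be that w is non-positive, which is why we take -sqrt(t) here.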

Id Tech 4 Animation Importer (.md5anim)

I believe this format supports partial animations so, for example, we can blend one animation for the top half of a person (say, aiming a gun) with another animation for the bottom half (say, running).

Again, it's a text file so we can open it up.

  • numFrames: number of frames in the animation
  • numAnimatedComponents: number of values that change for each frame; here we have 33 bones with 6 values each (3 for position and 3 for rotation), so 33x6=198
  • frameRate: frames per second
  • numJoints: number of bones in the model (called joints in this format)
  • hierarchy: a list of bones, one per line, with the bone name, parent index, a flag indicating which values change in each frame (63 means all position and rotation values can change), and the offset into the frame float array for this bone
  • baseframe: the position and rotation values to use if any are the same for multiple frames or not defined for a bone.
  • frame: a float array of position and rotation values for each bone. In this case, every bone is fully defined so there are six values per bone – three for position and three for rotation. The offset defined in the hierarchy section tells you the index into this array to start pulling values for a bone. The flag defines which component of the position or rotation they are for.
MD5Version 10 // Parameters used during export: Reorient: False; Scale: 1.0
commandline ""

numFrames 140
numJoints 33
frameRate 24
numAnimatedComponents 198

hierarchy {
  "origin" -1 63 0
  "sheath" 0 63 6
  [...]
}

bounds {
  ( -16.3410358429 -0.2868178487 -10.3359422684 ) ( 16.3195762634 66.4729003906 12.9775495529 )
  ( -16.3441123962 -0.2880810201 -10.3344449997 ) ( 16.3159980774 66.4781188965 12.9743013382 )
  [...]
}

baseframe {
  ( 0.0000000000 -0.0060440009 -0.0164299998 ) ( -0.5000001192 -0.4999998510 -0.4999998510 )
  ( 31.7085227966 3.1935679913 11.0047931671 ) ( -0.0628157184 -0.0333328731 -0.8810992837 )
  [...]
}

frame 0 {
  0.0000000000 -0.0060440009 -0.0164299998 -0.5000000000 -0.5001711249 -0.4998288453
  31.2289047241 6.2519450188 9.2366247177 0.0223979745 -0.1336329877 -0.8522329926
  [...]
}

frame 1 {
  [...]
}

[...]

This isn't a full importer. I'm assuming that every component of the position and rotation of every bone is defined for every frame, and that the animation covers the entire model. We also ignore the bounds defined in the file. We're mainly reading the frames and pushing the position and rotation information into a matrix array that we can store as a Pose – the animation is just a List<Pose> after all.

using System;
using System.IO;
using System.Collections.Generic;
using OpenTK;

namespace Graphics.Importers.IdTech4
{
    public class IdTech4AnimationImporter : AssetImporter
    {
        private const string VersionFlag = "MD5Version";
        private const string FrameCountFlag = "numFrames";
        private const string BoneCountFlag = "numJoints";
        private const string FrameRateFlag = "frameRate";
        private const string AnimatedComponentsFlag = "numAnimatedComponents";
        private const string BonesStartFlag = "hierarchy";
        private const string BonesEndFlag = "}";
        private const string BoundsStartFlag = "bounds";
        private const string BoundsEndFlag = "}";
        private const string BasePoseStart = "baseframe";
        private const string BasePoseEnd = "}";
        private const string PoseStart = "frame";
        private const string PoseEnd = "}";

        public Type AssetType { get; } = typeof(Animation);
        public string[] FileExtensions { get; } = new string[] { ".md5anim" };

        public void Import(string path)
        {
            string name = Path.GetFileNameWithoutExtension(path);
            string file = Path.GetFileName(path);

            bool collectingBoneInfo = false;
            bool collectingBounds = false;
            bool collectingBasePose = false;
            bool collectingPose = false;

            int expectedFrameCount = 0;
            int expectedBoneCount = 0;

            int frameRate = 0;

            List<SkeletalBone> bones = new List<SkeletalBone>();
            int boneIndex = 0;

            Skeleton basePose = null;
            int basePoseBoneIndex = 0;

            List<Pose> frames = new List<Pose>();
            int poseIndex = 0;

            SkeletalPose pose = null;
            int poseBoneIndex = 0;

            using(StreamReader reader = new StreamReader(path))
            {
                while(reader.Peek() >= 0)
                {
                    string line = reader.ReadLine().TrimStart();

                    // split on whitespace
                    string[] parts = line.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);

                    if(parts.Length < 1)
                        continue;

                    if(parts[0] == VersionFlag && parts[1] != "10")
                    {
                        throw new NotSupportedException("only id Tech 4 (.md5mesh) version 10 supported");
                    }

                    if(parts[0] == FrameCountFlag)
                    {
                        expectedFrameCount = Convert.ToInt32(parts[1]);
                    }
                    else if(parts[0] == BoneCountFlag)
                    {
                        expectedBoneCount = Convert.ToInt32(parts[1]);
                        basePose = new Skeleton(expectedBoneCount);

                        if(expectedBoneCount > SkeletalModel.MaximumBonesPerModel)
                            throw new NotSupportedException($"only id Tech 4 (.md5mesh/.md5anim) models with less than {SkeletalModel.MaximumBonesPerModel + 1} are supported");
                    }
                    else if(parts[0] == FrameRateFlag)
                    {
                        frameRate = Convert.ToInt32(parts[1]);
                    }
                    else if(parts[0] == AnimatedComponentsFlag)
                    { }
                    // bones
                    else if(parts[0] == BonesStartFlag)
                    {
                        collectingBoneInfo = true;
                    }
                    else if(collectingBoneInfo && parts[0] == BonesEndFlag)
                    {
                        collectingBoneInfo = false;

                        if(bones.Count != expectedBoneCount)
                            throw new FormatException("incorrect number of bones/joints"); 
                    }
                    else if(collectingBoneInfo && parts[0] != BonesEndFlag)
                    {
                        // "name" parent flags startIndex
                        int parentBoneIndex = Convert.ToInt32(parts[1]);
                        bones.Add(new SkeletalBone()
                        {
                            Index = boneIndex,
                            Name = parts[0].Replace("\"", ""),
                            Parent = parentBoneIndex < 0 ? null : bones[parentBoneIndex]
                        });
                        boneIndex++;
                    }
                    // bounds
                    else if(parts[0] == BoundsStartFlag)
                    {
                        // start collecting bounds
                        collectingBounds = true;
                    }
                    else if(collectingBounds && parts[0] == BoundsEndFlag)
                    {
                        collectingBounds = false;
                    }
                    else if(collectingBounds)
                    { }
                    // poses
                    else if(parts[0] == BasePoseStart)
                    {
                        collectingBasePose = true;
                    }
                    else if(collectingBasePose && parts[0] == BasePoseEnd)
                    {
                        collectingBasePose = false;
                    }
                    else if(collectingBasePose)
                    {
                        // ( x y z ) ( rx ry rz )
                        basePose.Bones.Add(new SkeletalBone()
                        {
                            Parent = bones[basePoseBoneIndex].Parent,
                            Name = bones[basePoseBoneIndex].Name
                        });
                        basePoseBoneIndex++;
                    }
                    else if(parts[0] == PoseStart)
                    {
                        // start collecting another frame
                        collectingPose = true;
                        pose = new SkeletalPose(expectedBoneCount);
                        pose.Name = name + "." + poseIndex;
                        frames.Add(pose);
                        poseBoneIndex = 0;
                    }
                    else if(collectingPose && parts[0] == PoseEnd)
                    {                        
                        collectingPose = false;
                        poseIndex++;
                    }
                    else if(collectingPose)
                    {
                        // px py pz rx ry rz
                        var pos = new Vector3(Convert.ToSingle(parts[0]), Convert.ToSingle(parts[1]), Convert.ToSingle(parts[2]));
                        var rot = Util.Math.ComputeW(new Quaternion(Convert.ToSingle(parts[3]), Convert.ToSingle(parts[4]), Convert.ToSingle(parts[5]), 0.0f));
                        pose.Set(poseBoneIndex, pos, rot);

                        SkeletalBone parent = bones[poseBoneIndex].Parent;

                        // convert to model space
                        if(parent != null)
                            pose[poseBoneIndex] = pose[poseBoneIndex] * pose[parent.Index];

                        poseBoneIndex++;
                    }
                }
            }

            if(frames.Count != expectedFrameCount)
                throw new FormatException("incorrect number of frames");

            Assets.Register(new Animation(name, file)
            {
                FrameRate = frameRate,
                FrameCount = frames.Count,
                Frames = frames
            });
        }
    }
}

View

Before we get into the View, we need a Camera class. Ours doesn't do much at the moment. The only change is to override RecalculateTransformation() to reverse the position. Remember we are moving the world around the camera, not the camera around in the world.

using OpenTK;

namespace Graphics
{
    public class Camera : WorldObject
    {
        protected override void RecalculateTransformation()
        {
            transformation = Matrix4.CreateTranslation(position * -1.0f) * Matrix4.CreateScale(scale) * Matrix4.CreateFromQuaternion(rotation);
        }
    }
}
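
In other words, the view transformation is (for a simple camera) the inverse of where the camera sits in the world. A tiny sketch of the equivalence, using translation only:

// Moving the camera +10 along X is the same as shifting the whole
// world -10 along X before projecting it.
Matrix4 cameraWorld = Matrix4.CreateTranslation(10.0f, 0.0f, 0.0f);
Matrix4 view = Matrix4.CreateTranslation(-10.0f, 0.0f, 0.0f);
// For a pure translation like this, view == cameraWorld.Inverted().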

You could extend this class to create a first- or third-person camera that follows another WorldObject around (ideally the player's character). Another possible change would be to give this class a model so that you could see the position of each camera when there are multiple cameras, mainly for debugging purposes.

Now, back to the View. It represents three things:

  • It represents the rectangle on the screen that we're going to draw on.
  • It stores the camera we're using to render the scene.
  • It stores the projection too – that is, whether it's an orthographic or perspective view.

Most games only have one fullscreen view, but you might want another for a rear-view mirror or a map or something.

using System;
using System.Collections.Generic;
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Graphics.OpenGL4;

namespace Graphics
{
    public enum ProjectionMode
    {
        Orthographic,
        Perspective
    }

    public class View
    {
        public Camera Camera { get; private set; }

        public int X { get; set; }
        public int Y { get; set; }
        public int Width { get; set; }
        public int Height { get; set; }
        public float AspectRatio { get { return (float)Width / (float)Height; } }

        public Matrix4 projectionMatrix;
        public Matrix4 ProjectionMatrix { get { return projectionMatrix; } }
        private ProjectionMode projectionMode = ProjectionMode.Perspective;
        public ProjectionMode ProjectionMode
        {
            get { return projectionMode; }
            set
            {
                projectionMode = value;
                if(ProjectionMode == ProjectionMode.Perspective)
                    Matrix4.CreatePerspectiveFieldOfView(0.78f, AspectRatio, 0.1f, 1000.0f, out projectionMatrix);
                else
                    Matrix4.CreateOrthographic(40.0F * AspectRatio, 40.0F, -1.0f, 1000.0f, out projectionMatrix);
            }
        }

        public View(int x, int y, int width, int height, ProjectionMode mode = ProjectionMode.Perspective, Camera camera = null)
        {
            X = x;
            Y = y;
            Width = width;
            Height = height;
            ProjectionMode = mode;
            Camera = camera ?? new Camera();
        }

        public View(double x, double y, double width, double height, ProjectionMode mode = ProjectionMode.Perspective, Camera camera = null)
            : this(Convert.ToInt32(x),Convert.ToInt32(y), Convert.ToInt32(width), Convert.ToInt32(height), mode, camera)
        {
        }

        public void Resize(int x, int y, int width, int height)
        {
            X = x;
            Y = y;
            Width = width;
            Height = height;
        }

        public void Resize(double x, double y, double width, double height)
        {
            Resize(Convert.ToInt32(x), Convert.ToInt32(y), Convert.ToInt32(width), Convert.ToInt32(height));
        }
    }
}

Now for the OpenGL rendering stuff. First, we use a static constructor to set some flags:

  • GL.Enable(EnableCap.ScissorTest): allow us to only render to part of the screen.
  • GL.Enable(EnableCap.DepthTest): don't render objects if they are behind other objects.
  • GL.ClearColor(Color4.White): set the background color.

The Render() method then does the following:

  • GL.Viewport(): set the rectangle on the screen to render to.
  • GL.Scissor(): set the rectangle outside which nothing will be rendered.
  • GL.Clear(): wipe the color and depth buffers.
static View()
{
    GL.Enable(EnableCap.ScissorTest);
    GL.Enable(EnableCap.DepthTest);

    GL.ClearColor(Color4.White);
}

public void Render(IEnumerable<WorldObject> objects)
{
    GL.Viewport(X, Y, Width, Height);
    GL.Scissor(X, Y, Width, Height);

    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);

    foreach(var obj in objects)
        obj.Render(this.projectionMatrix, Camera.Transformation);            
}

Viewer

Our Viewer class inherits from the OpenTK.GameWindow class, which uses SDL to set up a native window and create an OpenGL context for us. It also sets up the main game loop (two actually – one for updating and one for rendering).

It will be made up of four views: three orthographic views (front, side and top) and one perspective view. It will also handle the user input for controlling the camera and objects. So first, we set up the four views. We will be able to toggle between a single view and the four-view mode.

using System;
using System.IO;
using System.Collections.Generic;
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Input;

namespace Graphics
{
    public class Viewer : GameWindow
    {
        private View mainView;
        private View topOrtho;
        private View rightOrtho;
        private View frontOrtho;

        private bool isMultiMode = true;

        public Viewer()
            : base(640, 480, new GraphicsMode(new ColorFormat(8, 8, 8, 8), 16), "Viewer",
                   GameWindowFlags.Default, DisplayDevice.Default, 4, 1, GraphicsContextFlags.Debug)
        {
            frontOrtho = new View(0, 0, 0.5 * Width, 0.5 * Height, ProjectionMode.Orthographic);
            frontOrtho.Camera.Position = Direction.Forward * 100.0f;
            frontOrtho.Camera.LookAt(Vector3.Zero);

            topOrtho = new View(0, 0.5 * Height, 0.5 * Width, Height - 0.5 * Height, ProjectionMode.Orthographic);
            topOrtho.Camera.Position = Direction.Up * 100.0f;
            topOrtho.Camera.LookAt(Vector3.Zero, Direction.Forward);

            rightOrtho = new View(0.5 * Width, 0, Width - 0.5 * Width, 0.5 * Height, ProjectionMode.Orthographic);
            rightOrtho.Camera.Position = Direction.Right * 100.0f;
            rightOrtho.Camera.LookAt(Vector3.Zero);

            mainView = new View(0.5 * Width, 0.5 * Height, Width - 0.5 * Width, Height - 0.5 * Height);
            mainView.Camera.Position = new Vector3(50.0F, 100.0F, -100.0F);
            mainView.Camera.LookAt(Vector3.Zero);

            Keyboard.KeyDown += OnKeyDown;
            Mouse.Move += OnMouseMove;
        }

        protected override void OnLoad(EventArgs e)
        {
            VSync = VSyncMode.On;
        }

        protected override void OnResize(EventArgs e)
        {
            base.OnResize(e);
            UpdateViews();
        }

        private void UpdateViews()
        {
            if(isMultiMode)
            {
                frontOrtho.Resize(0, 0, 0.5 * Width, 0.5 * Height);
                topOrtho.Resize(0, 0.5 * Height, 0.5 * Width, Height - 0.5 * Height);
                rightOrtho.Resize(0.5 * Width, 0, Width - 0.5 * Width, 0.5 * Height);
                mainView.Resize(0.5 * Width, 0.5 * Height, Width - 0.5 * Width, Height - 0.5 * Height);
            }
            else
            {
                mainView.Resize(0, 0, Width, Height);
            }
        }
        
        private void ToggleMultiMode()
        {
            isMultiMode = !isMultiMode;
            UpdateViews();
        }
    }
}

The rendering method simply notifies the relevant Views that they need to render the scene and then swaps the buffers.

Currently, there is a very crude toggle to switch the RenderMode of every mesh in every model. In the debug mode, normal meshes are rendered as wireframes while the skeletons are rendered as solid colour faces.

public List<WorldObject> Objects = new List<WorldObject>();
  
protected override void OnRenderFrame(FrameEventArgs e)
{
    mainView.Render(Objects);

    if(isMultiMode)
    {
        frontOrtho.Render(Objects);
        rightOrtho.Render(Objects);
        topOrtho.Render(Objects);
    }

    SwapBuffers();
}

private bool isDebugMeshMode = false;

private void ToggleMeshMode()
{
    foreach(var m in Assets.RetrieveAll<Model>())
    {
        foreach(var n in m.Meshes)
        {
            if(isDebugMeshMode)
                n.RenderMode = (n.Name != "Skeleton" ? RenderMode.Texture : RenderMode.None);
            else
                n.RenderMode = (n.Name != "Skeleton" ? RenderMode.Edge : RenderMode.Face);
        }
    }
    isDebugMeshMode = !isDebugMeshMode;
}

Next we add keyboard and mouse handling. We use the Up/Down/Left/Right/O/L keys to move the camera and C to point it back at the focussed object. Hold down T or R and use the same keys to translate or rotate the focussed object; P will reset its position or rotation respectively.

The camera rotation based on mouse clicking and dragging is a little tricky. We rotate around the Y axis in local space (that's looking left and right) but around the X axis in world space (that is, looking up and down). This gives us the classic first-person camera controls. If we set this up any other way, the camera will start to roll or behave unusually.

public WorldObject FocusObject = null;

protected override void OnUpdateFrame(FrameEventArgs e)
{
    CheckKeyboard();
}

private void CheckKeyboard()
{
    if(Keyboard[Key.Escape])
        Exit();

    if(Keyboard[Key.T] && FocusObject != null)
    {
        if(Keyboard[Key.Up])
            FocusObject.Move(Direction.Forward * MovementSpeed);
        if(Keyboard[Key.Down])
            FocusObject.Move(Direction.Backward * MovementSpeed);
        if(Keyboard[Key.Left])
            FocusObject.Move(Direction.Left * MovementSpeed);
        if(Keyboard[Key.Right])
            FocusObject.Move(Direction.Right * MovementSpeed);
        if(Keyboard[Key.O])
            FocusObject.Move(Direction.Up * MovementSpeed);
        if(Keyboard[Key.L])
            FocusObject.Move(Direction.Down * MovementSpeed);
        if(Keyboard[Key.P])
            FocusObject.Position = Vector3.Zero;
    }
    else if(Keyboard[Key.R] && FocusObject != null)
    {
        if(Keyboard[Key.Up])
            FocusObject.Rotate(Direction.Forward, RotationSpeed);
        if(Keyboard[Key.Down])
            FocusObject.Rotate(Direction.Backward, RotationSpeed);
        if(Keyboard[Key.Left])
            FocusObject.Rotate(Direction.Left, RotationSpeed);
        if(Keyboard[Key.Right])
            FocusObject.Rotate(Direction.Right, RotationSpeed);
        if(Keyboard[Key.O])
            FocusObject.Rotate(Direction.Up, RotationSpeed);
        if(Keyboard[Key.L])
            FocusObject.Rotate(Direction.Down, RotationSpeed);
        if(Keyboard[Key.P])
            FocusObject.Rotation = Quaternion.Identity;
    }
    else
    {
        if(Keyboard[Key.Up])
            mainView.Camera.Move(Direction.Forward * MovementSpeed);
        if(Keyboard[Key.Down])
            mainView.Camera.Move(Direction.Backward * MovementSpeed);
        if(Keyboard[Key.Right])
            mainView.Camera.Move(Direction.Right * MovementSpeed);
        if(Keyboard[Key.Left])
            mainView.Camera.Move(Direction.Left * MovementSpeed);
        if(Keyboard[Key.O])
            mainView.Camera.Move(Direction.Up * MovementSpeed, Space.World);
        if(Keyboard[Key.L])
            mainView.Camera.Move(Direction.Down * MovementSpeed, Space.World);
    }

    if(Keyboard[Key.C] && FocusObject != null)
    {
        mainView.Camera.LookAt(FocusObject.Position);
    }
}

private void OnKeyDown(object sender, KeyboardKeyEventArgs e)
{
    if(e.Key == Key.Number1)
        ToggleMeshMode();
    else if(e.Key == Key.Number2)
        ToggleMultiMode();
}

private void OnMouseMove(object sender, MouseMoveEventArgs e)
{
    if(Mouse.GetState().LeftButton == ButtonState.Pressed)
    {
        mainView.Camera.Rotate(Vector3.UnitY, CameraRotationSpeed * e.XDelta, Space.Local);
        mainView.Camera.Rotate(Vector3.UnitX, CameraRotationSpeed * e.YDelta, Space.World);
    }
}
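One caveat with CheckKeyboard: it runs once per update frame, so MovementSpeed is effectively in units per tick rather than units per second. Below is a minimal sketch of a frame rate independent variant; the dt parameter is my addition for illustration and not part of the project code.

// Hypothetical variant: scale movement by the elapsed frame time so that
// MovementSpeed becomes units per second, independent of the update rate.
protected override void OnUpdateFrame(FrameEventArgs e)
{
    CheckKeyboard((float)e.Time);
}

private void CheckKeyboard(float dt)
{
    if(Keyboard[Key.Up])
        mainView.Camera.Move(Direction.Forward * MovementSpeed * dt);
    // ...and similarly for the other movement and rotation keys.
}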
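To make the local/world distinction concrete, here is a minimal sketch of how a Rotate method supporting both spaces can be built from quaternions. This is an illustration of the general technique rather than the project's actual Camera implementation; it assumes the camera stores its orientation in a Rotation quaternion. The multiplication order of the delta is what selects the space.

// Hypothetical sketch: multiplying the delta on the left applies it about
// a fixed world axis; multiplying on the right applies it about the
// object's own (local) axis.
public void Rotate(Vector3 axis, float angle, Space space)
{
    Quaternion delta = Quaternion.FromAxisAngle(axis, angle);

    if(space == Space.World)
        Rotation = delta * Rotation;
    else
        Rotation = Rotation * delta;

    // Renormalise so floating point error doesn't accumulate over many frames.
    Rotation = Quaternion.Normalize(Rotation);
}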

Program

Finally, all we have to do is write a quick Program class to import the model and load it into the Viewer. Everything has to go inside the using block because the OpenGL context doesn't exist before the Viewer is constructed. The Run(60.0) call starts both the update and render loops, requesting 60 update events per second.

using System;
using OpenTK;

namespace Graphics
{
    public class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
            using(Viewer viewer = new Viewer())
            {
                var obj = new WorldObject("Crate");
                viewer.Objects.Add(obj);
                viewer.FocusObject = obj;

                viewer.Run(60.0);
            }
        }
    }
}

You can modify this to display the animated man – or indeed an army of them.

using System;
using OpenTK;

namespace Graphics
{
    public class Program
    {
        [STAThread]
        public static void Main(string[] args)
        {
            using(Viewer viewer = new Viewer())
            {
                Model model = Assets.Retrieve<Model>("Bob");
                model.AddAnimation("Looking");
                              
                Random rand = new Random();
                int x = 5;
                int z = 5;
                for(int i = 0; i < x; i++)
                {
                    for(int j = 0; j < z; j++)
                    {
                        // Lay the models out on a grid, 50 units apart, and
                        // start each animation at a random offset so the army
                        // doesn't move in lockstep.
                        WorldObject obj = new WorldObject() { Model = model, Position = new Vector3(50.0f * i, 0.0f, 50.0f * j) };
                        viewer.Objects.Add(obj);
                        obj.Animate("Looking", TimeSpan.FromSeconds(rand.Next(0, 100)));
                    }
                }
                
                viewer.Run(60.0);
            }
        }
    }
}

Further Research

  • Automatic asset reloading
  • Possibly move frame interpolation to the shader
  • Possibly send only the position and rotation of each bone to the shader rather than a full 4x4 matrix (see the sketch below)
  • Materials and lighting (which use normals)
  • Add collision detection or a physics engine
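On the bone transform idea, the appeal is bandwidth: a rigid bone transform doesn't need all sixteen floats of a 4x4 matrix, so packing it as a position plus a rotation roughly halves the per-frame upload. The BoneTransform struct below is hypothetical, named here just for illustration.

// Hypothetical packed bone transform: 28 bytes per bone instead of the
// 64 bytes (16 floats) of a full Matrix4.
struct BoneTransform
{
    public Vector3 Position;     // 3 floats = 12 bytes
    public Quaternion Rotation;  // 4 floats = 16 bytes
}

// For a 50-bone skeleton that is 1,400 bytes per frame instead of 3,200,
// and the vertex shader can skin a vertex directly with the quaternion:
// v' = rotate(q, v) + position.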