Tuesday, May 21, 2013

Let’s Build a 3D Graphics Engine: Points, Vectors, and Basic Concepts

This entry is part 1 of 1 in the series Let's Build a 3D Graphics Engine
The 3D game engines behind today’s biggest games are staggering works of mathematics and programming, and many game developers find that understanding them in their entirety is a difficult task. If you are lacking in experience (or, like me, a college degree), this task becomes even more arduous. In this series, I aim to walk you through the basics of graphics systems in 3D engines. More specifically, in this tutorial we will be discussing points and vectors, and all of the fun that comes with them. If you have a basic grasp of algebra (variables and variable math) and computer science (the basics of any object-oriented programming language), you should be able to make it through most of these tutorials, but if you’re having trouble with any of the concepts, please ask questions! Some of these topics can be terribly difficult.

Basics of Coordinate Systems

Let’s start from the basics. Three-dimensional graphics require the concept of a three-dimensional space. The most widely used of these spaces is called Cartesian space, which gives us the benefit of Cartesian coordinates (the basic \((x,y)\) notation and 2D grid-spaced graphs that are taught in most high schools).

(Pictured: the bane of many high schoolers’ existence.)

Three-dimensional Cartesian space gives us an x, y, and z axis (describing position based on horizontal placement, vertical placement, and depth respectively). The coordinates for any point within this space are shown as a tuple (in this case a 3-tuple, since there are three axes). On a 2-dimensional plane, a tuple would be depicted as \((x,y)\), and in a 3-dimensional space it is depicted as \((x,y,z)\). The use of this 3-tuple is that it shows a point’s location relative to the space’s origin (which is itself typically shown as \((0,0,0)\)).

Tip: A tuple is an ordered list (or sequence) of elements in computer science or mathematics. So, \((K,y,l,e)\) would be a 4-tuple, showing a sequence of characters that make up my name.

Within this space, we are going to define a point as a representation of a 3-tuple. This can also be shown as: \[P = (x,y,z)\] In addition to this definition of a point, we must define its parts. Each of the elements within this 3-tuple is a scalar (number) that defines a position along a basis vector. Each basis vector must have unit length (that is, a length of exactly 1), so 3-tuples such as \((1,1,1)\) and \((2,2,2)\) could not be basis vectors, as they are too long. We define three basis vectors for our space: \[\begin{aligned} X & = (1,0,0)\\ Y & = (0,1,0)\\ Z & = (0,0,1) \end{aligned}\] (Image source: http://www.thefullwiki.org/Arithmetics/Cartesian_Coordinate.)

The Coordinate System

Now let’s talk about the mathematical definition of our coordinate system, how it affects our graphics system, and the calculations we can make.

Representing Points

The origin point of our coordinate system can be depicted as the point \(O\), which represents the 3-tuple (0,0,0). This means that the mathematical representation of our coordinate system can be depicted as: \[\{O;X,Y,Z\}\] By this statement, you can say that \((x,y,z)\) represents a point’s position in relation to the origin. This definition also means that any point \(P\), \((a, b, c)\), can be represented as: \[P = O + aX + bY + cZ\] From here on, I will reference scalars in lower-case and vectors in upper-case – so \(a\), \(b\), and \(c\) are scalars, and \(X\), \(Y\), and \(Z\) are vectors. (They are actually the basis vectors we defined earlier.) This means that a point whose tuple is (2,3,4) could be represented as: \[\begin{aligned} (2,3,4) & = (2,0,0) + (0,3,0) + (0,0,4)\\ & = (0,0,0) + (2,0,0) + (0,3,0) + (0,0,4)\\ & = (0,0,0) + 2(1,0,0) + 3(0,1,0) + 4(0,0,1)\\ & = O + 2X + 3Y + 4Z\\ \end{aligned}\] So, we’ve taken this abstract concept of “a point in 3D space” and defined it as four separate objects added together. This kind of definition is very important whenever we want to put any concept into code.

Mutually Perpendicular

The coordinate system that we will be using also has the valuable property of being mutually perpendicular: there is a 90 degree angle between each pair of axes where they meet at the origin. Our coordinate system will also be defined as “right-handed”. (Image source: http://viz.aset.psu.edu/gho/sem_notes/3d_fundamentals/html/3d_coordinates.html.) In mathematical terms, this means that: \[X = Y \times Z\] …where \(\times\) represents the cross product operator. In case you aren’t sure what a cross product is, it can be defined by the following equation (assuming you are given two 3-tuples): \[(a,b,c) \times (d,e,f) = (bf - ce, cd - af, ae - bd)\] These statements might seem tedious, but later on they will allow us to do a variety of calculations and transformations much more easily. Luckily, you don’t have to memorize all of these equations when building a game engine – you can simply build from these statements, and then build even simpler systems on top of those. Well, until you have to edit something fundamental within your engine and have to refresh yourself on all of this again!
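If it helps to see that in code, here is a minimal sketch of the cross product (C++-style; the Vec3 struct is purely illustrative, not something we have defined yet):

struct Vec3 { float x, y, z; };

// Cross product of two 3-tuples, matching the equation above:
// (a,b,c) x (d,e,f) = (bf - ce, cd - af, ae - bd)
Vec3 Cross(const Vec3& u, const Vec3& v)
{
    return { u.y * v.z - u.z * v.y,
             u.z * v.x - u.x * v.z,
             u.x * v.y - u.y * v.x };
}

// Sanity check for a right-handed system: Cross(Y, Z) with Y = (0,1,0)
// and Z = (0,0,1) gives (1,0,0), which is exactly X.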

Points and Vectors

With all of the basics of our coordinate system locked down, it’s time to talk about points and vectors and, more importantly, how they interact with one another. The first thing to note is that points and vectors are distinctly different things: a point is a physical location within your space; a vector is the displacement between two points. To make sure that the two don’t get confused, I’ll write points in capital italics, as \(P\), and vectors in capital boldface, as \(\mathbf{V}\). There are two main axioms that we are going to deal with when using points and vectors, and they are:
  • Axiom 1: The difference of two points is a vector, so \(\mathbf{V} = P – Q\)
  • Axiom 2: The sum of a point and a vector is a point, so \(Q = P + \mathbf{V}\)
Tip: An axiom is a point of reasoning, often seen as evident enough to be accepted without argument.

Building the Engine

With these axioms stated, we now have enough information to create the building block classes that are at the heart of any 3D game engine: the Point class and the Vector class. If we were going to build our own engine using this information, there would be some other important steps to take when creating these classes (mostly having to do with optimization or dealing with already existing APIs), but we are going to leave these out for the sake of simplicity. The class examples below are all going to be in pseudocode so that you can follow along with your programming language of choice. Here are outlines of our two classes:
Point Class
{
 Variables:
  num tuple[3]; //(x,y,z)

 Operators:
  Point AddVectorToPoint(Vector);
  Point SubtractVectorFromPoint(Vector);
  Vector SubtractPointFromPoint(Point);

 Function:
  //later this will call a function from a graphics API, but for now
  //this should just be printing the point's coordinates to the screen
  drawPoint; 
}
Vector Class
{
 Variables:
  num tuple[3]; //(x,y,z)

 Operators:
  Vector AddVectorToVector(Vector);
  Vector SubtractVectorFromVector(Vector);
}
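If you would like something slightly more concrete than pseudocode, here is one possible C++-style sketch of these two classes and the axioms above. The method names mirror the pseudocode and are purely illustrative; a real engine would almost certainly use operator overloading instead:

struct Vector
{
    float tuple[3]; // (x, y, z)

    Vector AddVectorToVector(const Vector& v) const
    {
        return { { tuple[0] + v.tuple[0], tuple[1] + v.tuple[1], tuple[2] + v.tuple[2] } };
    }
    Vector SubtractVectorFromVector(const Vector& v) const
    {
        return { { tuple[0] - v.tuple[0], tuple[1] - v.tuple[1], tuple[2] - v.tuple[2] } };
    }
};

struct Point
{
    float tuple[3]; // (x, y, z)

    // Axiom 2: the sum of a point and a vector is a point
    Point AddVectorToPoint(const Vector& v) const
    {
        return { { tuple[0] + v.tuple[0], tuple[1] + v.tuple[1], tuple[2] + v.tuple[2] } };
    }
    Point SubtractVectorFromPoint(const Vector& v) const
    {
        return { { tuple[0] - v.tuple[0], tuple[1] - v.tuple[1], tuple[2] - v.tuple[2] } };
    }
    // Axiom 1: the difference of two points is a vector
    Vector SubtractPointFromPoint(const Point& p) const
    {
        return { { tuple[0] - p.tuple[0], tuple[1] - p.tuple[1], tuple[2] - p.tuple[2] } };
    }
};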

[...]
Read more: Let’s Build a 3D Graphics Engine: Points, Vectors, and Basic Concepts

Wednesday, May 1, 2013

How to Create a Custom 2D Physics Engine: The Core Engine

This entry is part 2 of 2 in the series How to Create a Custom Physics Engine
In this part of my series on creating a custom 2D physics engine for your games, we’ll add more features to the impulse resolution we got working in the first part. In particular, we’ll look at integration, timestepping, using a modular design for our code, and broad phase collision detection.

Introduction

In the last post in this series I covered the topic of impulse resolution. Read that first, if you haven’t already!
Let’s dive straight into the topics covered in this article. These topics are all of the necessities of any half-decent physics engine, so now is an appropriate time to build more features on top of the core resolution from the last article.

Integration

Integration is simple to implement, and there are many resources on the internet that provide good information on iterative integration. This section will mostly show how to implement a proper integration function, and point to a few places for further reading, if desired.
First it should be known what acceleration actually is. Newton’s Second Law states:
\[Equation 1:\\
F = ma\]
This states that the sum of all forces acting upon some object is equal to that object’s mass m multiplied by its acceleration a. m is in kilograms, a is in meters per second squared, and F is in newtons.
Rearranging this equation a bit to solve for a yields:
\[Equation 2:\\
a = \frac{F}{m}\\
\therefore\\
a = F * \frac{1}{m}\]
The next step involves using acceleration to step an object from one location to another. Since a game is displayed as a series of discrete frames that create the illusion of animation, the position of each object at each of these discrete steps has to be calculated. For a more in-depth coverage of these equations, please see Erin Catto’s Integration Demo from GDC 2009 and Hannu’s addition to symplectic Euler for more stability in low-FPS environments.
Explicit Euler (pronounced “oiler”) integration is shown in the following snippet, where x is position and v is velocity. Please note that 1/m * F is acceleration, as explained above:
// Explicit Euler
x += v * dt
v += (1/m * F) * dt
dt here refers to delta time. Δ is the symbol for delta, and can be read literally as “change in”; so whenever you see dt (that is, Δt) it can be read as “change in time”, and dv would be “change in velocity”. This will work, and is commonly used as a starting point. However, it has numerical inaccuracies that we can get rid of without any extra effort. Here is what is known as Symplectic Euler:
// Symplectic Euler
v += (1/m * F) * dt
x += v * dt
Note that all I did was rearrange the order of the two lines of code – see the aforementioned article from Hannu.
This post explains the numerical inaccuracies of Explicit Euler, but be warned that the author goes on to cover RK4, which I don’t personally recommend: gafferongames.com: Euler Inaccuracy.
These simple equations are all that we need to move all objects around with linear velocity and acceleration.
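As a concrete illustration (this is a hypothetical sketch, not code from any particular engine), a minimal Symplectic Euler step for a body with a position, a velocity, an accumulated force, and an inverse mass might look like this:

struct Vec2 { float x, y; };

// Hypothetical body layout, used only for this sketch.
struct Body
{
    Vec2 position;
    Vec2 velocity;
    Vec2 force;     // sum of all forces applied since the last step
    float inv_mass; // 1 / m (use 0 for infinite mass)
};

void IntegrateSymplecticEuler(Body& b, float dt)
{
    // a = F * (1 / m)
    Vec2 a = { b.force.x * b.inv_mass, b.force.y * b.inv_mass };

    // Velocity first, then position - the Symplectic Euler ordering
    b.velocity.x += a.x * dt;
    b.velocity.y += a.y * dt;
    b.position.x += b.velocity.x * dt;
    b.position.y += b.velocity.y * dt;

    // Clear the force accumulator for the next step
    b.force = { 0.0f, 0.0f };
}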

Timestepping

Since games are displayed at discrete time intervals, there needs to be a way of manipulating the time between these steps in a controlled manner. Have you ever seen a game that will run at different speeds depending on what computer it is being played on? That’s an example of a game running at a speed dependent on the computer’s ability to run the game.
We need a way to ensure that our physics engine only runs when a specific amount of time has passed. This way, the dt that is used within calculations is always the exact same number. Using the exact same dt value in your code everywhere will actually make your physics engine deterministic, and is known as a fixed timestep. This is a good thing.
A deterministic physics engine is one that will always do the exact same thing every time it is run assuming the same inputs are given. This is essential for many types of games where gameplay needs to be very fine-tuned to the physics engine’s behavior. This is also essential for debugging your physics engine, as in order to pinpoint bugs the behavior of your engine needs to be consistent.
Let’s first cover a simple version of a fixed timestep. Here is an example:
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
  const float currentTime = GetCurrentTime( )

  // Store the time elapsed since the last frame began
  accumulator += currentTime - frameStart

  // Record the starting of this frame
  frameStart = currentTime

  while(accumulator > dt)
    UpdatePhysics( dt )
    accumulator -= dt

  RenderGame( )
This waits around, rendering the game, until enough time has elapsed to update the physics. The elapsed time is recorded, and discrete dt-sized chunks of time are taken from the accumulator and processed by the physics. This ensures that the exact same value is passed to the physics no matter what, and that the value passed to the physics is an accurate representation of actual time that passes by in real life. Chunks of dt are removed from the accumulator until the accumulator is smaller than a dt chunk.
There are a couple of problems that can be fixed here. The first involves how long it takes to actually perform the physics update: What if the physics update takes too long and the accumulator goes higher and higher each game loop? This is called the spiral of death. If this is not fixed, your engine will quickly grind to a complete halt if your physics cannot be performed fast enough.
To solve this, the engine really needs to just run fewer physics updates if the accumulator gets too high. A simple way to do this would be to clamp the accumulator below some arbitrary value.
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
  const float currentTime = GetCurrentTime( )

  // Store the time elapsed since the last frame began
  accumulator += currentTime - frameStart

  // Record the starting of this frame
  frameStart = currentTime

  // Avoid spiral of death and clamp dt, thus clamping
  // how many times the UpdatePhysics can be called in
  // a single game loop.
  if(accumulator > 0.2f)
    accumulator = 0.2f

  while(accumulator > dt)
    UpdatePhysics( dt )
    accumulator -= dt

  RenderGame( )
Now, if a game running this loop ever encounters some sort of stalling for whatever reason, the physics will not drown itself in a spiral of death. The game will simply run a little more slowly, as appropriate.
The next thing to fix is quite minor in comparison to the spiral of death. This loop takes dt chunks from the accumulator until the accumulator is smaller than dt. This is fine, but there’s still a little bit of time left over in the accumulator, and that poses a problem.
Assume the accumulator is left with 1/5th of a dt chunk every frame. On the sixth frame the accumulator will have enough remaining time to perform one more physics update than all the other frames. This will result in one frame every second or so performing a slightly larger discrete jump in time, and might be very noticeable in your game.
To solve this, the use of linear interpolation is required. If this sounds scary, don’t worry – the implementation will be shown. If you want to understand the implementation there are many resources online for linear interpolation.
// linear interpolation for a from 0 to 1
// from t1 to t2
t1 * (1.0f - a) + t2 * a
Using this we can interpolate (approximate) where we might be between two different time intervals. This can be used to render the state of a game in between two different physics updates.
With linear interpolation, the rendering of an engine can run at a different pace than the physics engine. This allows a graceful handling of the leftover accumulator from the physics updates.
Here is a full example:
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
  const float currentTime = GetCurrentTime( )

  // Store the time elapsed since the last frame began
  accumulator += currentTime - frameStart

  // Record the starting of this frame
  frameStart = currentTime

  // Avoid spiral of death and clamp dt, thus clamping
  // how many times the UpdatePhysics can be called in
  // a single game loop.
  if(accumulator > 0.2f)
    accumulator = 0.2f

  while(accumulator > dt)
    UpdatePhysics( dt )
    accumulator -= dt

  const float alpha = accumulator / dt;

  RenderGame( alpha )

void RenderGame( float alpha )
  for shape in game do
    // calculate an interpolated transform for rendering
    Transform i = shape.previous * (1.0f - alpha) + shape.current * alpha
    shape.previous = shape.current
    shape.Render( i )
Here, all objects within the game can be drawn at variable moments between discrete physics timesteps. This will gracefully handle all error and remainder time accumulation. This is actually rendering ever so slightly behind what the physics has currently solved for, but when watching the game run all motion is smoothed out perfectly by the interpolation.
The player will never know that the rendering is ever so slightly behind the physics, because the player will only know what they see, and what they’ll see is perfectly smooth transitions from one frame to another.
You may be wondering, “why don’t we interpolate from the current position to the next?”. I tried this and it requires the rendering to “guess” where objects are going to be in the future. Often, objects in a physics engine make sudden changes in movement, such as during collision, and when such a sudden movement change is made, objects will teleport around due to inaccurate interpolations into the future.

Modular Design

There are a few things every physics object is going to need. However, the specific things each physics object needs may change slightly from object to object. We need a clever way to organize all this data, ideally one that requires as little code as possible to achieve that organization. In this case some modular design would be of good use.
Modular design probably sounds a little pretentious or over-complicated, but it does make sense and is quite simple. In this context, “modular design” just means we want to break a physics object into separate pieces, so that we can connect or disconnect them however we see fit.

Bodies

A physics body is an object that contains all information about some given physics object. It will store the shape(s) that the object is represented by, mass data, transformation (position, rotation), velocity, torque, and so on. Here is what our body ought to look like:
struct body
{
  Shape *shape;
  Transform tx;
  Material material;
  MassData mass_data;
  Vec2 velocity;
  Vec2 force;
  real gravityScale;
};
This is a great starting point for the design of a physics body structure. There are some intelligent decisions made here that tend towards strong code organization.
The first thing to notice is that a shape is contained within the body by means of a pointer. This represents a loose relationship between the body and its shape. A body can contain any shape, and the shape of a body can be swapped around at will. In fact, a body can be represented by multiple shapes, and such a body would be known as a “composite”, as it would be composed of multiple shapes. (I’m not going to cover composites in this tutorial.)
Body and Shape interface: the shape itself is responsible for computing bounding shapes, calculating mass based on density, and rendering.
The mass_data is a small data structure to contain mass-related information:
struct MassData
{
  float mass;
  float inv_mass;

  // For rotations (not covered in this article)
  float inertia;
  float inverse_inertia;
};
It is nice to store all mass- and inertia-related values in a single structure. The mass should never be set by hand – mass should always be calculated by the shape itself. Mass is a rather unintuitive type of value, and setting it by hand will take a lot of tweaking time. It’s defined as:
\[ Equation 3:\\Mass = density * volume\]
Whenever a designer wants a shape to be more “massive” or “heavy”, they should modify the density of a shape. This density can be used to calculate the mass of a shape given its volume. This is the proper way to go about the situation, as density is not affected by volume and will never change during the runtime of the game (unless specifically supported with special code).
Some examples of shapes such as AABBs and Circles can be found in the previous tutorial in this series.
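As a rough sketch of what “the shape computes its own mass” might look like for a circle (the ComputeCircleMass helper is hypothetical, and note that in 2D the shape’s area stands in for volume in Equation 3):

const float PI = 3.141592741f;

// Hypothetical helper: fill in a circle's MassData from its radius and a density.
void ComputeCircleMass( float radius, float density, MassData& out )
{
  float area = PI * radius * radius;       // the "volume" of a 2D shape is its area
  out.mass = density * area;               // Equation 3: mass = density * volume
  out.inv_mass = (out.mass != 0.0f) ? 1.0f / out.mass : 0.0f;

  // Rotation is not covered in this article, so the inertia terms are left at zero.
  out.inertia = 0.0f;
  out.inverse_inertia = 0.0f;
}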

Materials

All this talk of mass and density leads to the question: where does the density value live? It resides within the Material structure:
struct Material
{
  float density;
  float restitution;
};
Once the values of the material are set, this material can be passed to the shape of a body so that the body can calculate the mass [...]

Read more: How to Create a Custom 2D Physics Engine: The Core Engine

Monday, April 8, 2013

Tutorial: Debugging Native Extensions

I've written before about how to create your own native extensions for Adobe AIR, but what if you need to debug your extension? If you're doing anything but the simplest of projects, you're going to want to be able to debug the code on the native side, in addition to your AS3 code. The ActionScript side is (hopefully!) already taken care of by your IDE, so this tutorial will focus on the native code. It's a pretty short tutorial, but also incredibly useful!

For this tutorial, I'm using Visual Studio 2012 Express, which you can get for free from Microsoft, and a slightly modified version of the NativeAdd extension from the tutorial linked above.

First, you'll want to have your AIR project running. Then switch over to Visual Studio and set a breakpoint (F9, or click the sidebar) on the line you want to investigate. Next, select Debug > Attach to Process from the Visual Studio menu. (If you're using Visual Studio 2010 and you don't see this option, first click Tools > Settings > Expert Settings.) This will bring up a window with a list of running processes on your PC. In this list, find the adl.exe process, select it and click the Attach button.

Once the debugger has attached itself to the process, go back to your AIR application and have it call into the native function where you added your breakpoint. In this case, I'm using the NativeAdd extension from a previous tutorial, modified a bit so that when the stage is clicked, it calls the doAdd function on the native side. With the debugger attached, when I click the stage, my breakpoint gets hit. You can now take a look at the values of your native code variables and monitor them as you step through the code using the Visual Studio debugger tools, such as the watch window.
[...] Read more: Tutorial: Debugging Native Extensions

Saturday, April 6, 2013

How to Create a Custom 2D Physics Engine: The Basics and Impulse Resolution

There are many reasons you might want to create a custom physics engine: first, learning and honing your skills in mathematics, physics and programming are great reasons to attempt such a project; second, a custom physics engine can tackle any sort of technical effect the creator has the skill to create. In this article I would like to provide a solid introduction on how to create a custom physics engine entirely from scratch. Physics provides a wonderful means for allowing a player to immerse themselves within a game. It makes sense that a mastery of a physics engine would be a powerful asset for any programmer to have at their disposal. Optimizations and specializations can be made at any time due to a deep understanding of the inner workings of the physics engine. By the end of this tutorial the following topics will have been covered, in two dimensions:
  • Simple collision detection
  • Simple manifold generation
  • Impulse resolution
Here’s a quick demo:

Prerequisites

This article involves a fair amount of mathematics and geometry, and to a much lesser extent actual coding. A couple prerequisites for this article are:
  • A basic understanding of simple vector math
  • The ability to perform algebraic math

Collision Detection

There are quite a few articles and tutorials throughout the internet, including here on Tuts+, that cover collision detection. Knowing this, I would like to run through the topic very quickly as this section is not the focus of this article.

Axis Aligned Bounding Boxes

An Axis Aligned Bounding Box (AABB) is a box that has its four axes aligned with the coordinate system in which it resides. This means it is a box that cannot rotate, and is always squared off at 90 degrees (usually aligned with the screen). In general it is referred to as a “bounding box” because AABBs are used to bound other, more complex shapes.

(An example AABB.)

The AABB of a complex shape can be used as a simple test to see whether the more complex shapes inside the AABBs can possibly be intersecting. However, in the case of most games the AABB is used as a fundamental shape, and does not actually bound anything else. The structure of your AABB is important. There are a few different ways to represent an AABB, however this is my favorite:
struct AABB
{
  Vec2 min;
  Vec2 max;
};
This form allows an AABB to be represented by two points. The min point represents the lower bounds of the x and y axis, and max represents the higher bounds – in other words, they represent the top left and bottom right corners. In order to tell whether two AABB shapes are intersecting you will need to have a basic understanding of the Separating Axis Theorem (SAT). Here’s a quick test taken from Real-Time Collision Detection by Christer Ericson, which makes use of the SAT:
bool AABBvsAABB( AABB a, AABB b )
{
  // Exit with no intersection if found separated along an axis
  if(a.max.x < b.min.x or a.min.x > b.max.x) return false
  if(a.max.y < b.min.y or a.min.y > b.max.y) return false

  // No separating axis found, therefore there is at least one overlapping axis
  return true
}

Circles

A circle is represented by a radius and point. Here is what your circle structure ought to look like:
struct Circle
{
  float radius;
  Vec2 position;
};
Testing whether or not two circles intersect is very simple: take the radii of the two circles and add them together, then check whether this sum is greater than the distance between the two circles’ centers. An important optimization to make here is to get rid of any need to use the square root operator:
float Distance( Vec2 a, Vec2 b )
{
  return sqrt( (a.x - b.x)^2 + (a.y - b.y)^2 )
}

bool CirclevsCircleUnoptimized( Circle a, Circle b )
{
  float r = a.radius + b.radius
  return r > Distance( a.position, b.position )
}

bool CirclevsCircleOptimized( Circle a, Circle b )
{
  float r = a.radius + b.radius
  r *= r
  return r > (a.position.x - b.position.x)^2 + (a.position.y - b.position.y)^2
}
In general multiplication is a much cheaper operation than taking the square root of a value.
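If you prefer to keep a helper function around instead of inlining the math, a squared-distance variant (a small sketch, not from the original article) avoids the square root entirely and is compared against r * r:

float DistanceSquared( Vec2 a, Vec2 b )
{
  // Same as Distance, but without the sqrt
  float dx = a.x - b.x;
  float dy = a.y - b.y;
  return dx * dx + dy * dy;
}

bool CirclevsCircle( Circle a, Circle b )
{
  float r = a.radius + b.radius;
  return r * r > DistanceSquared( a.position, b.position );
}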

Impulse Resolution

Impulse resolution is a particular type of collision resolution strategy. Collision resolution is the act of taking two objects that are found to be intersecting and modifying them in such a way that they do not remain intersecting.

In general, an object within a physics engine has two main types of motion (in two dimensions): movement in the xy plane, and rotation. In this article we implicitly restrict rotation and use just AABBs and Circles, so the only degree of freedom we really need to consider is movement along the xy plane. By resolving detected collisions we place a restriction upon movement, such that objects cannot remain intersecting one another.

The idea behind impulse resolution is to use an impulse (an instantaneous change in velocity) to separate objects found colliding. In order to do this, the mass, position, and velocity of each object must be taken into account somehow: we want large objects colliding with smaller ones to move a little bit during collision, and to send the small objects flying away. We also want objects with infinite mass to not move at all.

(Simple example of what impulse resolution can achieve.)

In order to achieve such effects and follow our natural intuition of how objects behave, we’ll use rigid bodies and a fair bit of math. A rigid body is just a shape defined by the user (that is, by you, the developer) that is implicitly defined to be non-deformable. Both AABBs and Circles in this article are non-deformable, and will always be either an AABB or a Circle. No squashing or stretching allowed. Working with rigid bodies allows a lot of math and derivations to be heavily simplified. This is why rigid bodies are commonly used in game simulations, and why we’ll be using them in this article.

Our Objects Collided – Now What?

Assuming we have two shapes found to be intersecting, how does one actually separate the two? Let’s assume our collision detection provided us with two important pieces of information:
  • Collision normal
  • Penetration depth
In order to apply an impulse to both objects and move them apart, we need to know what direction to push them in and by how much. The collision normal is the direction in which the impulse will be applied. The penetration depth (along with some other things) determines how large an impulse will be used. This means the only value that needs to be solved for is the magnitude of our impulse. Now let’s go on a long trek to discover how we can solve for this impulse magnitude. We’ll start with our two objects that have been found to be intersecting:
Equation 1
\[ V^{AB} = V^B - V^A \] Note that in order to create a vector from position A to position B, you must do: endpoint - startpoint. \(V^{AB}\) is the relative velocity from A to B. This equation ought to be expressed in terms of the collision normal \(n\) – that is, we would like to know the relative velocity from A to B along the collision normal’s direction:
Equation 2
\[ V^{AB} \cdot n = (V^B - V^A) \cdot n \] We are now making use of the dot product. The dot product is simple; it is the sum of component-wise products:
Equation 3
\[ V_1 = \begin{bmatrix}x_1 \\y_1\end{bmatrix}, V_2 = \begin{bmatrix}x_2 \\y_2\end{bmatrix} \\ V_1 \cdot V_2 = x_1 * x_2 + y_1 * y_2 \] The next step is to introduce what is called the coefficient of restitution. Restitution is a term that means elasticity, or bounciness. Each object in your physics engine will have a restitution represented as a decimal value. However, only one decimal value will be used during the impulse calculation. To decide what restitution to use (denoted by \(e\) for epsilon), you should always use the lowest restitution involved in the collision for intuitive results:
// Given two objects A and B
e = min( A.restitution, B.restitution )
Once \(e\) is acquired we can place it into our equation solving for the impulse magnitude. Newton’s Law of Restitution states the following:
Equation 4
\[V' = e * V \] All this is saying is that the velocity after a collision is equal to the velocity before it, multiplied by some constant. This constant represents a “bounce factor”. Knowing this, it becomes fairly simple to integrate restitution into our current derivation:
Equation 5
\[ V'^{AB} \cdot n = -e * (V^B - V^A) \cdot n \] Notice how we introduced a negative sign here, and a prime (\('\)) to denote the relative velocity after the collision. In Newton’s Law of Restitution, \(V'\), the resulting vector after the bounce, is actually going in the opposite direction of \(V\). So how do we represent opposite directions in our derivation? Introduce a negative sign. So far so good. Now we need to be able to express these velocities while under the influence of an impulse. Here’s a simple equation for modifying a vector by some impulse scalar \(j\) along a specific direction \(n\):
Equation 6
\[ V' = V + j * n \] Hopefully the above equation makes sense, as it is very important to understand. We have a unit vector \(n\) which represents a direction. We have a scalar \(j\) which represents how long our \(n\) vector will be. We then add our scaled \(n\) vector to \(V\) to result in \(V’\). This is just adding one vector onto another, and we can use this small equation to apply an impulse of one vector to another. There’s a little more work to be done here. Formally, an impulse is defined as a change in momentum. Momentum is mass * velocity. Knowing this, we can represent an impulse as it is formally defined like so:
Equation 7
\[ Impulse = mass * Velocity \\ Velocity = \frac{Impulse}{mass} \therefore V' = V + \frac{j * n}{mass}\] The three dots in a small triangle (\(\therefore\)) can be read as “therefore”. It is used to show that the thing beforehand can be used to conclude that whatever comes next is true. Good progress has been made so far! However we need to be able to express an impulse using \(j\) in terms of two different objects. During a collision with object A and B, A will be pushed in the opposite direction of B:
Equation 8
\[ V'^A = V^A - \frac{j * n}{mass^A} \\ V'^B = V^B + \frac{j * n}{mass^B} \] These two equations will push A and B apart along the direction unit vector \(n\) by the impulse scalar \(j\). All that is now required is to merge Equations 8 and 5. Our resulting equation will look something like this:
Equation 9
\[ (V^B - V^A + \frac{j * n}{mass^A} + \frac{j * n}{mass^B}) \cdot n = -e * (V^B - V^A) \cdot n \\ \therefore \\ (V^B - V^A + \frac{j * n}{mass^A} + \frac{j * n}{mass^B}) \cdot n + e * (V^B - V^A) \cdot n = 0 \] If you recall, the original goal was to isolate our magnitude. This is because we know what direction to resolve the collision in (assumed to be given by the collision detection), and all that is left to solve for is the magnitude along this direction. That magnitude is the unknown in our case, \(j\); we must isolate \(j\) and solve for it.
Equation 10
\[ (V^B - V^A) \cdot n + j * (\frac{1}{mass^A} + \frac{1}{mass^B}) + e * (V^B - V^A) \cdot n = 0 \\ \therefore \\ (1 + e)((V^B - V^A) \cdot n) + j * (\frac{1}{mass^A} + \frac{1}{mass^B}) = 0 \\ \therefore \\ j = \frac{-(1 + e)((V^B - V^A) \cdot n)}{\frac{1}{mass^A} + \frac{1}{mass^B}} \] Whew! That was a fair bit of math, but it is all over for now. (When expanding Equation 9, remember that \(n\) is a unit vector, so \(n \cdot n = 1\) and drops out.) It’s important to notice that in the final version of Equation 10 we have \(j\) on the left (our magnitude) and everything on the right is known. This means we can write a few lines of code to solve for our impulse scalar \(j\). And boy is the code a lot more readable than mathematical notation!
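The snippet below makes use of a small DotProduct helper; a minimal sketch of it, matching Equation 3, might be:

float DotProduct( Vec2 a, Vec2 b )
{
  // The sum of the component-wise products
  return a.x * b.x + a.y * b.y;
}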
void ResolveCollision( Object A, Object B )
{
  // Calculate relative velocity
  Vec2 rv = B.velocity - A.velocity

  // Calculate relative velocity in terms of the normal direction
  float velAlongNormal = DotProduct( rv, normal )

  // Do not resolve if velocities are separating
  if(velAlongNormal > 0)
    return;

  // Calculate restitution
  float e = min( A.restitution, B.restitution)

  // Calculate impulse scalar
  float j = -(1 + e) * velAlongNormal
  j /= 1 / A.mass + 1 / B.mass

  // Apply impulse
  Vec2 impulse = j * normal
  A.velocity -= 1 / A.mass * impulse
  B.velocity += 1 / B.mass * impulse
}
There are a few key things to note in the above code sample. The first is the check if(velAlongNormal > 0). This check is very important; it makes sure that you only resolve a collision if the objects are moving towards each other.

(Two objects collide, but velocity will separate them next frame. Do not resolve this type of collision.)

If objects are moving away from one another, we want to do nothing. This prevents objects that shouldn’t actually be considered colliding from resolving away from one another, which is important for creating a simulation that follows human intuition about what should happen during object interaction. The second thing to note is that the inverse mass is computed multiple times for no reason. It is best to just store your inverse mass within each object and pre-compute it once:
A.inv_mass = 1 / A.mass
Many physics engines do not actually store raw mass; they often store the inverse mass, and the inverse mass alone. It just so happens that most math involving mass is in the form of 1/mass.

The last thing to note is that we intelligently distribute our impulse scalar \(j\) over the two objects. We want small objects to bounce off of big objects with a large portion of \(j\), and the big objects to have their velocities modified by a very small portion of \(j\). In order to do this you could do:
float mass_sum = A.mass + B.mass
float ratio = B.mass / mass_sum
A.velocity -= ratio * impulse

ratio = A.mass / mass_sum
B.velocity += ratio * impulse
It is important to realize that the above code is equivalent to the ResolveCollision() sample function from before. As stated before, inverse masses are quite useful in a physics engine.

Sinking Objects

[...] Read more: How to Create a Custom 2D Physics Engine: The Basics and Impulse Resolution

Saturday, March 30, 2013

Postmortem: Building Hard-shell Hockey with NME

After finally creating my first game with NME and publishing it to the Android store, I thought I would share my experience so far.

To begin, I should explain what NME is and what it can do since I have only given it a brief mention before.
According to the official site NME is:
“The best way to build cross-platform 2D games or rich applications.”
“Using the innovative Haxe language compiler, NME deploys the same source code to C++, JavaScript and SWF bytecode without sacrificing reliability. As a result, NME provides fast hardware-accelerated performance for Windows, Mac, Linux, iOS, Android and BlackBerry, while also supporting Flash Player and HTML5. NME mirrors the traditional Flash API, so developers can leverage their existing experience to target new platforms, with better results.”
So far I have to say that statement is pretty accurate. But what about other similar tools such as Corona, Unity, libGDX, Marmalade and so on – and why did I choose NME over them?
One of the more obvious reasons why I chose NME was my past experience with the Flash API which certainly lowered the learning curve. However, it was the fact that NME is the only truly free solution that won me over. LibGDX came closest out of the other choices I mentioned but it still required a $400 license fee for iOS. Having witnessed the HTML5 vs. Flash war I feel it is much safer to bet on technologies that are free and open source.
Of course NME is not without its flaws. Bugs and issues with various platforms do exist although fortunately they are few and far between. Additionally the community is not huge and so one can’t expect the same level of documentation and tutorials. That being said, I have really noticed over the past year how polished NME is becoming and how the community is growing which is fantastic to see. I am very much looking forward to seeing shader support for mobiles being added too.
So, having decided upon NME as the tool to use, I realised I needed a game engine. When I first started there was not much choice, so I decided to port over an engine I had made in Flash. As of now, ports of the popular Flash game engines like HaxeFlixel and HaxePunk exist. Regardless, software architecture and game engine design have interested me, so I don’t feel that I wasted my time by creating my own, and so far I am very pleased with my results.
That being said, I did greatly underestimate how much work was involved in getting the game engine working properly.
The first challenge was getting the hang of rendering in a different way to what I had been accustomed to with Flash. In Flash, I would mainly use the DisplayList and, at times, blitting. This works great in web browsers but less so on mobile devices. Blitting actually causes the device to render via software, which kills performance. The DisplayList, whilst better (it is GPU accelerated), is still not the optimal way of rendering. The best options when considering performance on mobile are drawTiles and drawTriangles. This means using a third party tool to export an SWF into a tilesheet to render via drawTiles.
After understanding and implementing drawTiles, my next problem was displaying Android ads. I managed to find a library that used NME extensions to achieve this except that it did not work out of the box. Eventually I tracked down another person who had managed to get it up and running and they gave me a lot of tips. In the end this required delving into some of the NME framework and modifying the Java code template. It was after this that I decided to just focus on developing for Android as I did not want to waste a similar amount of time getting iOS ads working (although I believe they work out of the box on that platform) [...]

Read more: Postmortem: Building Hard-shell Hockey with NME

Friday, March 22, 2013

From Zero to Pitch: A Walkthrough for Game Designers

How can you take a rough game idea and turn it into something fun, interesting and engaging? This is definitely one of the most difficult obstacles that you must tackle as a game designer, as your job is to craft enjoyable experiences in a constrained environment. Would you believe me if I told you that after you finish reading and participating in the activities established in this article, you will have a game designed and ready to be developed? Yes, I know it sounds inconceivable, but trust me – this series of unconventional exercises will explain the workflow of designing a brand new game from zero to pitch.

Strengthen Your Game Design Muscle

I want to share with you one of the habits that helps me strengthen my game design muscle all the time: turning real life experiences into games. When I started developing video games, I acquired the habit of trying to take everyday experiences and imagine them in terms of a video game; this makes you very aware of the things that are happening around you and the fun or frustrating factors in them. For example, let’s say that you took a flight and when you arrived at the airport your luggage got lost. This is a very frustrating scenario, but now take that experience and try to imagine how the luggage got lost and who was involved in that particular situation. Could you make a game based on this?
(Can you take something apparently dull and turn it into something fun?)
What about a game where the player has to manage the conveyor belts at an airport, and loses points when passengers get too frustrated because they don’t get their luggage? Or what if you experienced some turbulence during your flight and you got a little bit nervous… could you make a game where you had to fly an airplane and gain points by doing abrupt manoeuvres and scaring the passengers? Of course you can! This is the raw idea that you need to write down on paper and start working on. As a matter of fact, there’s this little booklet called the Tiny Game Design Tool that can make this process a little bit easier, by helping you jot the idea down as soon as possible. This tool is basically a printable PDF that you can carry in your wallet, which helps you establish, in very little time, the following basic elements of your game:
  • Emotion, Mechanic and Theme
  • Main Character
  • Objects
  • Obstacles
  • Level Design

Play With the Elements

(The alchemy of creativity. Image taken from Everything is a Remix Part 3.)
We have a raw idea. This idea is like clay: you can mold it, split it into two, make a shape with it, and so on. Now we have to play with this idea… but how? Well, another creative exercise that is very useful for constantly working your game design muscle is to put into practice the three basic elements of creativity: copy, transform and combine. I took these elements from the third episode of Kirby Ferguson’s documentary Everything is a Remix, and they will help us exercise our mind so that it can always be wide open to creativity.

Copy

Get practice and experience by copying existing games, but also apply small modifications to make the game easier or more difficult. This process of repetition and modification is called iteration, and is very common in software and video game development. Some examples of games that follow this exercise are:
  • Plasma Pong, which is a Pong variant with fluid dynamics and a new game mechanic that lets the ball stick to your paddle.
  • FPS-MAN, which is a take on Pac-Man but with a change in perspective. Now you look at the action from the point of view of Pac-Man and it changes the game experience completely.
  • Portal: The Flash Version, which takes the basic mechanics of Portal but constrains them to a two-dimensional space. This is also known as a demake.

Transform

Did you have an interesting experience today, this week or this month? If you didn’t… borrow one! Take a newspaper and look at the front page news; you will notice that these stories are generally based on events that have something interesting to tell. Pick a story you like and turn it into a video game. Ask yourself questions: Who are the people involved? Where did the action happen? Could the personalities involved be interesting characters for my game? A recent game that tackles this type of exercise is Riot, in which the player experiences both sides of a protest, as rioters and as police officers. The idea came to fruition when Leonard Menchiari wanted to capture the feeling of his own experiences at protests.
(A kid that has to reach an airplane while avoiding security guards, cameras and checkpoints.)
A more personal experience I had with this exercise was a small game I made for a jam where participants had to develop games based on TV news. My game was called “Escape from el Dorado”, and was based on a TV report about a kid that infiltrated the “El Dorado” airport in Bogotá and got into a plane that ended up in Chile. This story made me think immediately about how I could implement stealth mechanics in an airport setting.

Combine

Take an existing game and combine it with another game, or change its genre. What would happen? What if I took a game like Super Mario Bros. and applied a skill tree just like the ones found in Role Playing Games? A recent example of combination is Bombermine, which takes the classic Bomberman game and mixes it with multiplayer elements.

Planting the Characters’ Seeds

Now that we’ve gone from a raw idea to something that is starting to take shape, slowly but steadily we are covering the basics. Now let’s design the characters. First we’re going to start with a game design tool called Game Seeds. This game is generally played in groups, but for this particular exercise we will play a single-player variant focused on character development. Tip: I assume that by now you know the basics of playing Game Seeds. If not, please take a couple of minutes to read my guide here.
We’re going to start with this fifteen card grid as our foundation. Please pick just six cards that have game mechanics that appeal to you. Now, in three steps we’re going to fill the Game Seeds sheet found below with the information given to us by the cards:
This is the Game Seeds sheet. Click to download the printable version.

Step 1: Fill the Hero Profile

Now that you have selected the six cards, go through each one and write down the icons that appear the most for each category. (In the case of a draw, choose the one you like the most.) For example, in the “species” category of your six cards you might have three human icons, two robot icons and one creature icon; based on this your character will be human.

Example

From the fifteen card grid I chose Fight, Escape, Race, Duel, Destroy and Navigate. Using those cards this was my character:
  • Creature (3 occurrences) or Robot (3 occurrences) (a draw, therefore I pick Robot)
  • Androgyne (4 occurrences)
  • Lives in the past (4 occurrences)
  • Nomad (5 occurrences)
  • Tall (3 occurrences) or Short (3 occurrences) (a draw, therefore I pick Tall) and Heavy (4 occurrences)

Step 2: Fill the Hero Attributes

Now take a look at the bottom of the cards and mark the Hero Attributes section with an “X” for each attribute that appears.

Example

Using the cards I chose I get the following attributes:
  • STR (STRENGTH) (6)
  • HEA (HEALTH) (6)
  • DEX (DEXTERITY) (4)
  • WIS (WISDOM) (2)
  • CHAR (CHARISMA) (0)
So Strength and Health are the main attributes of my character, and it is neither wise nor charming! Once you’ve done this, add your personal touch to the character in the Character Development section. This will define all the details that are going to give personality to your character.

Step 3: Pick the Mechanic; Draw Your Hero

Finally, pick your favourite game mechanic from the six cards you chose. This will be your main gameplay mechanic. Now you can draw a sketch of your hero based on all the attributes and his profile. This is my hero sketch, a tall and heavy androgyne robot in a steampunk world who was built using parts from a bank vault:
(This is the point where you realize I suck at drawing.)

The Basic Elements of Game Design

Now that we have a basic foundation of our game and a main character, it’s time to play with the elements that will form the game. I recommend you start with the elemental tetrad shown above because it’s going to be very straightforward for you when defining your game and the experience you want to give to your players. Use what you have done so far as a guide to define the following:
  • Aesthetics: This is how the game looks, sounds and feels, and it’s also what has the most direct relationship to the player.
  • Limbo is very good at capturing someone’s attention just by its aesthetics.
  • Mechanics: These are the rules of your game. Use the game mechanic you chose in the previous activity and expand on it, asking yourself what the main goal of your game is and how players are going to achieve it.
  • Pong has a very limited set of rules, but that doesn’t mean it’s boring.
  • Story: What tale do you want to tell through your game? Is it going to be a micro-narrative as in NES games, or is it going to be a complex and branching story?
  • David Cage always pushes the envelope in terms of storytelling, with games like Indigo Prophecy and Heavy Rain.
  • Technology: This is the medium in which the aesthetics, mechanics and story will take place. What kind of technology are you going to use that will make the game possible?
  • Maybe your game works better as a board game, or as a game for smartphones.

Look at Your Creation Through Lenses

Our game is growing and finally taking shape, and now it’s time to polish the rough corners. To do this we are going to use the questions from the Deck of Lenses and focus them on the areas of our game that we think need improvement.
Read more: From Zero to Pitch: A Walkthrough for Game Designers

Tuesday, March 12, 2013

How to Speed Up A* Pathfinding With the Jump Point Search Algorithm

Pathfinding is ubiquitous in games, so it’s important to understand the implications of using algorithms such as A*. In this tutorial we’re going to cover a relatively new method for searching grid-based worlds: Jump Point Search, which can speed up A* by orders of magnitude. Note: Although this tutorial is written using AS3 and Flash, you should be able to use the same techniques and concepts in almost any game development environment. This implementation is based on the original paper and article on JPS found here: Jump Point Search. The Lua-based implementation, Jumper, was used for help with some parts of the implementation.

Jump Point Search Demo

Click the SWF to give it focus, then move your mouse over non-blocking areas of the map to have the NPCs try to get to it. Hit Space to switch between A*, Jump Point Search, and both. No Flash? Check out the YouTube video instead:

Setup

The demo implementation above uses AS3 and Flash with the Starling Framework for GPU accelerated rendering and the polygonal-ds library for data structures.

Pathfinding

Pathfinding is often used in video games and you are sure to bump into it at some point during your game development career. Its primary use is to give intelligent looking movement behavior to artificial entities (NPCs), to avoid them bumping into things (often). In some games the player avatar is also subject to pathfinding (strategy games, many third person RPGs and adventure games). So you might assume that the problem of pathfinding is solved, but unfortunately that’s not the case; there is no silver bullet which you can use and just be done with it. And even in big AAA games, you will still find funny things like this:
There may not be a silver bullet, but there is a bullet: the A* (A star) algorithm. In this tutorial we are going to see a brief overview of A* and how to speed it up using another algorithm, Jump Point Search. First we need a way to represent our game world in a way that a pathfinding algorithm can use it.

World Representations

One of the most important things to consider when thinking about pathfinding for your game is world representation. How is the data of the passable areas and obstacles organized with programming structures in memory? The simplest representation you can use is a grid-based structure, where path nodes are organized in a grid and can be represented by a 2D array. We are going to use this representation in this tutorial. Specifically it will be an eight-way grid representation: allowing movement in straight and diagonal directions.
(The black pixels in the image represent the blocking cells.)
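As a purely illustrative sketch (C++-style and made-up names here, even though the demo itself is AS3), such a grid can be as simple as a 2D array of flags plus a table of the eight neighbor offsets:

// Illustrative only: a grid world as a 2D array of walkable flags.
const int WIDTH  = 64;
const int HEIGHT = 64;
bool walkable[HEIGHT][WIDTH]; // true = open cell, false = blocking cell

// Eight-way movement: the neighbors of (x, y) are the eight surrounding cells.
const int DX[8] = { 1, 1, 0, -1, -1, -1, 0, 1 };
const int DY[8] = { 0, 1, 1, 1, 0, -1, -1, -1 };

bool IsWalkable(int x, int y)
{
    return x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT && walkable[y][x];
}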
Your requirements might be different, so this structure might not suit you. The good thing is that, with some processing (usually done offline), you can convert one pathfinding representation into another. Alternatives to the grid-based approach include polygons (obstacles represented by polygons) or navigation meshes (navigable areas represented by polygons); these can represent the same data with fewer nodes. Another kind of data that can be stored in the map representation is cost: how much it costs to travel from one node to another. This can be used by the AI to determine a path that, for example, prefers roads over regular terrain (by making the cost of the road less than that of the terrain). Jump Point Search is specifically designed for an eight-way, grid-based map representation, so we’ll use that. Also, in its vanilla form it doesn’t support weighted maps. (In the final section I’ll discuss a possible way to remedy this.)

A* Pathfinding Refresher

Now that we have a world representation, let’s take a quick look at implementing A*. It’s a weighted graph search algorithm which uses heuristics (little “hints”) about how best to search the area from the start node to the end node. I highly recommend that you check out this visualization of pathfinding algorithms: PathFinding.js – visual. Playing with it can boost your intuition of what the algorithm is actually doing – plus it’s fun! For pathfinding using A* in rectangular grids we do the following:
1. Find node closest to your position and declare it start node and put it on 
   the open list. 
2. While there are nodes in the open list:
   3. Pick the node from the open list having the smallest F score. Put it on 
      the closed list (you don't want to consider it again).
   4. For each neighbor (adjacent cell) which isn't in the closed list:
      5. Set its parent to current node.
      6. Calculate G score (distance from starting node to this neighbor) and 
         add it to the open list
      7. Calculate F score by adding heuristics to the G value.

  • A* Pathfinding for Beginners (In-depth article that explains F and G scores among other things.)

Heuristics essentially make a guess at the chance that the node being evaluated will lead to the goal. Heuristics can make a big difference in the efficiency of pathfinding algorithms, as they tend to limit the number of nodes that need to be visited. We are going to use Manhattan distance for our purposes (meaning that nodes closer to the goal will have a smaller number):

private function manhattanDistance(start:Node, end:Node):int {
    return Math.abs(end.x - start.x) + Math.abs(end.y - start.y);
}
    
This is more or less it. We stop the algorithm when we find the goal node, and then track back using the parent variable of each node to construct the path (a minimal sketch of that backtracking step follows below).

Search algorithms can be used for other things as well. A* is a general weighted graph search algorithm, and can be used on any such graph. This can cover other fields in AI, such as finding the optimal steps to achieve a certain objective: throw a bomb, or run for shelter and try to sneak behind an enemy? In game development we need to do things fast; when updating our games at 60 frames per second, every millisecond counts. Even though A* performs reasonably well for some uses, there is still the need to make it faster or use less memory.
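As a rough, hypothetical sketch of that backtracking step (C++-style here rather than the demo’s AS3; the Node type with x, y and parent fields is an assumption made for illustration):

#include <vector>
#include <algorithm>

// Hypothetical node type; the demo's own Node class will differ.
struct Node
{
    int x, y;
    Node* parent = nullptr; // filled in during the A* search (step 5 above)
};

// Walk the parent chain from the goal back to the start, then reverse it
// so the resulting path runs start -> goal.
std::vector<Node*> ReconstructPath(Node* goal)
{
    std::vector<Node*> path;
    for (Node* n = goal; n != nullptr; n = n->parent)
        path.push_back(n);
    std::reverse(path.begin(), path.end());
    return path;
}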

Optimizations

Choosing the representation is the first thing that will have an impact on pathfinding performance, along with your choice of pathfinding algorithm. The size of the graph being searched has a big correlation with how your pathfinding performs (which makes sense; it’s easier to find your way around your room than around a big city).

Then you would consider higher-level optimizations, which usually involve clustering the data into smaller regions, searching those, and later refining the path within the smaller regions travelled through. For example, if you want to go to a restaurant in a neighboring city, you first consider how you get from your city to that one, and once you are in that city you limit your “searching” to the area where the restaurant is located, ignoring the rest. This would include things like swamps, dead-end elimination and HPA*.

On the lowest level you have to do the searching. You choose your data representation and possible abstractions, and then plug them into an algorithm that will pick out nodes, travelling here and there searching for the goal. These algorithms are usually based on the A* search algorithm, with possible modifications. In simpler cases you can get away with using straight A*, which offers simplicity. I’ve provided a grid-based implementation in the source download.

Jump Point Search

[...] Read more: How to Speed Up A* Pathfinding With the Jump Point Search Algorithm