Tuesday, May 21, 2013

Let’s Build a 3D Graphics Engine: Points, Vectors, and Basic Concepts

This entry is part 1 of 1 in the series Let's Build a 3D Graphics Engine
The 3D game engines behind today’s biggest games are staggering works of mathematics and programming, and many game developers find that understanding them in their entirety is a difficult task. If you lack experience (or, like me, a college degree), this task becomes even more arduous. In this series, I aim to walk you through the basics of graphics systems in 3D engines. More specifically, in this tutorial we will be discussing points and vectors, and all of the fun that comes with them. If you have a basic grasp of algebra (variables and variable math) and computer science (the basics of any object-oriented programming language), you should be able to make it through most of these tutorials, but if you’re having trouble with any of the concepts, please ask questions! Some of these topics can be terribly difficult.

Basics of Coordinate Systems

Let’s start from the basics. Three-dimensional graphics require the concept of a three-dimensional space. The most widely used of these spaces is called Cartesian space, which gives us the benefit of Cartesian coordinates (the basic \((x,y)\) notation and 2D grid-spaced graphs that are taught in most high schools).

Pictured: the bane of many high schoolers’ existence.

Three-dimensional Cartesian space gives us an x, y, and z axis (describing position based on horizontal placement, vertical placement, and depth, respectively). The coordinates for any point within this space are shown as a tuple (in this case a 3-tuple, since there are three axes). On a 2-dimensional plane a tuple is depicted as \((x,y)\), and in 3-dimensional space it is depicted as \((x,y,z)\). The use of this 3-tuple is that it shows a point’s location relative to the space’s origin (which itself is typically shown as \((0,0,0)\)).

Tip: A tuple is an ordered list (or sequence) of elements in computer science or mathematics. So, \((K,y,l,e)\) would be a 4-tuple, showing a sequence of characters that make up my name.

Within this space, we are going to define a point as a representation of a 3-tuple. This can also be shown as: \[P = (x,y,z)\] In addition to this definition of a point, we must define its parts. Each of the elements within this 3-tuple is a scalar (a number) that defines a position along a basis vector. Each basis vector must have unit length (that is, a length of exactly 1), so 3-tuples such as \((1,1,1)\) and \((2,2,2)\) could not be basis vectors, as they are too long (\((1,1,1)\) has length \(\sqrt{3}\), for example). We define three basis vectors for our space: \[\begin{aligned} X & = (1,0,0)\\ Y & = (0,1,0)\\ Z & = (0,0,1) \end{aligned}\]

Source: http://www.thefullwiki.org/Arithmetics/Cartesian_Coordinate.

The Coordinate System

Now let’s talk about the mathematical definition of our coordinate system, how it affects our graphics system, and the calculations we can make.

Representing Points

The origin point of our coordinate system can be depicted as the point \(O\), which represents the 3-tuple (0,0,0). This means that the mathematical representation of our coordinate system can be depicted as: \[\{O;X,Y,Z\}\] By this statement, you can say that \((x,y,z)\) represents a point’s position in relation to the origin. This definition also means that any point \(P\), \((a, b, c)\), can be represented as: \[P = O + aX + bY + cZ\] From here on, I will reference scalars in lower-case and vectors in upper-case – so \(a\), \(b\), and \(c\) are scalars, and \(X\), \(Y\), and \(Z\) are vectors. (They are actually the basis vectors we defined earlier.) This means that a point whose tuple is (2,3,4) could be represented as: \[\begin{aligned} (2,3,4) & = (2,0,0) + (0,3,0) + (0,0,4)\\ & = (0,0,0) + (2,0,0) + (0,3,0) + (0,0,4)\\ & = (0,0,0) + 2(1,0,0) + 3(0,1,0) + 4(0,0,1)\\ & = O + 2X + 3Y + 4Z\\ \end{aligned}\] So, we’ve taken this abstract concept of “a point in 3D space” and defined it as four separate objects added together. This kind of definition is very important whenever we want to put any concept into code.
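To make the decomposition above concrete, here is a small Python sketch (the helper names are my own, not part of any engine) that rebuilds the point \((2,3,4)\) from the origin and the three basis vectors:

```python
# Origin and basis vectors for our Cartesian space
O = (0, 0, 0)
X = (1, 0, 0)
Y = (0, 1, 0)
Z = (0, 0, 1)

def scale(s, v):
    """Multiply each component of vector v by scalar s."""
    return tuple(s * c for c in v)

def add(*vs):
    """Component-wise sum of any number of 3-tuples."""
    return tuple(sum(cs) for cs in zip(*vs))

# P = O + 2X + 3Y + 4Z
P = add(O, scale(2, X), scale(3, Y), scale(4, Z))
print(P)  # (2, 3, 4)
```

The scalars 2, 3, and 4 stretch each unit-length basis vector, and adding the results to the origin lands exactly on the point’s tuple.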

Mutually Perpendicular

The coordinate system that we will be using also has the valuable property of being mutually perpendicular: there is a 90-degree angle between each pair of axes where they meet. Our coordinate system will also be defined as “right-handed”.

Source: http://viz.aset.psu.edu/gho/sem_notes/3d_fundamentals/html/3d_coordinates.html.

In mathematical terms, this means that: \[X = Y \times Z\] …where \(\times\) represents the cross product operator. In case you aren’t sure what a cross product is, it can be defined by the following equation (assuming you are given two 3-tuples): \[(a,b,c) \times (d,e,f) = (bf - ce, cd - af, ae - bd)\] These statements might seem tedious, but later on, they will allow us to do a variety of different calculations and transformations much more easily. Luckily, you don’t have to memorize all of these equations when building a game engine – you can encode these statements once, and then build higher-level systems on top of them. Well, until you have to edit something fundamental within your engine and have to refresh yourself on all of this again!
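As a sanity check, here is the cross product formula above written out in Python (the function name is mine), verifying that our basis vectors really do satisfy the right-handed rule \(X = Y \times Z\):

```python
def cross(u, v):
    """Cross product of two 3-tuples, matching
    (a,b,c) x (d,e,f) = (bf - ce, cd - af, ae - bd)."""
    a, b, c = u
    d, e, f = v
    return (b * f - c * e, c * d - a * f, a * e - b * d)

X, Y, Z = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(cross(Y, Z))  # (1, 0, 0) -- this is X, so the system is right-handed
```

The same check works cyclically: \(Z \times X = Y\) and \(X \times Y = Z\).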

Points and Vectors

With all of the basics of our coordinate system locked down, it’s time to talk about points and vectors and, more importantly, how they interact with one another. The first thing to note is that points and vectors are distinctly different things: a point is a physical location within your space; a vector is the displacement between two points. To make sure that the two don’t get confused, I’ll write points in capital italics, as \(P\), and vectors in capital boldface, as \(\mathbf{V}\). There are two main axioms that we are going to deal with when using points and vectors, and they are:
  • Axiom 1: The difference of two points is a vector, so \(\mathbf{V} = P – Q\)
  • Axiom 2: The sum of a point and a vector is a point, so \(Q = P + \mathbf{V}\)
Tip: An axiom is a point of reasoning, often seen as evident enough to be accepted without argument.
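To see the two axioms in action, here’s a quick Python sketch using plain 3-tuples (the function names are my own, not from any engine):

```python
def subtract_points(p, q):
    """Axiom 1: the difference of two points is a vector, V = P - Q."""
    return tuple(pc - qc for pc, qc in zip(p, q))

def add_vector_to_point(p, v):
    """Axiom 2: the sum of a point and a vector is a point, Q = P + V."""
    return tuple(pc + vc for pc, vc in zip(p, v))

P = (5, 2, 8)
Q = (1, 0, 3)
V = subtract_points(P, Q)              # (4, 2, 5): the vector from Q to P
assert add_vector_to_point(Q, V) == P  # adding that vector back to Q lands on P
```

Note the direction convention: \(P - Q\) gives the vector that carries you *from* \(Q\) *to* \(P\).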

Building the Engine

With these axioms stated, we now have enough information to create the building block classes that are at the heart of any 3D game engine: the Point class and the Vector class. If we were going to build our own engine using this information, there would be some other important steps to take when creating these classes (mostly having to do with optimization or dealing with already existing APIs), but we are going to leave these out for the sake of simplicity. The class examples below are all going to be in pseudocode so that you can follow along with your programming language of choice. Here are outlines of our two classes:
Point Class
{
 Variables:
  num tuple[3]; //(x,y,z)

 Operators:
  Point AddVectorToPoint(Vector);
  Point SubtractVectorFromPoint(Vector);
  Vector SubtractPointFromPoint(Point);

 Function:
  //later this will call a function from a graphics API, but for now
  //this should just be printing the point's coordinates to the screen
  drawPoint; 
}
Vector Class
{
 Variables:
  num tuple[3]; //(x,y,z)

 Operators:
  Vector AddVectorToVector(Vector);
  Vector SubtractVectorFromVector(Vector);
}

[...]
Read more: Let’s Build a 3D Graphics Engine: Points, Vectors, and Basic Concepts

Wednesday, May 1, 2013

How to Create a Custom 2D Physics Engine: The Core Engine

This entry is part 2 of 2 in the series How to Create a Custom Physics Engine
In this part of my series on creating a custom 2D physics engine for your games, we’ll add more features to the impulse resolution we got working in the first part. In particular, we’ll look at integration, timestepping, using a modular design for our code, and broad phase collision detection.

Introduction

In the last post in this series I covered the topic of impulse resolution. Read that first, if you haven’t already!
Let’s dive straight into the topics covered in this article. These topics are all of the necessities of any half-decent physics engine, so now is an appropriate time to build more features on top of the core resolution from the last article.

Integration

Integration is simple to implement, and many resources on the internet provide good information on iterative integration. This section will show how to implement a proper integration function, and point to some further reading, if desired.
First it should be known what acceleration actually is. Newton’s Second Law states:
\[Equation 1:\\
F = ma\]
This states that the sum of all forces acting upon some object is equal to that object’s mass m multiplied by its acceleration a. Here m is in kilograms, a is in meters per second squared, and F is in newtons.
Rearranging this equation a bit to solve for a yields:
\[Equation 2:\\
a = \frac{F}{m}\\
\therefore\\
a = F * \frac{1}{m}\]
The next step involves using acceleration to step an object from one location to another. Since a game is displayed as a sequence of discrete frames that create the illusion of motion, the position of each object at each of these discrete steps has to be calculated. For more in-depth coverage of these equations, please see Erin Catto’s Integration Demo from GDC 2009 and Hannu’s addition to symplectic Euler for more stability in low-FPS environments.
Explicit Euler (pronounced “oiler”) integration is shown in the following snippet, where x is position and v is velocity. Please note that 1/m * F is acceleration, as explained above:
// Explicit Euler
x += v * dt
v += (1/m * F) * dt
dt here refers to delta time. Δ is the symbol for delta, and can be read literally as “change in”; so whenever you see dt (written mathematically as Δt) it can be read as “change in time”, and dv would be “change in velocity”. Explicit Euler will work, and is commonly used as a starting point. However, it has numerical inaccuracies that we can get rid of without any extra effort. Here is what is known as Symplectic Euler:
// Symplectic Euler
v += (1/m * F) * dt
x += v * dt
Note that all I did was rearrange the order of the two lines of code – see the aforementioned article from Hannu.
This post explains the numerical inaccuracies of Explicit Euler, but be warned that he starts covering RK4, which I don’t personally recommend: gafferongames.com: Euler Inaccuracy.
These simple equations are all that we need to move all objects around with linear velocity and acceleration.
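The symplectic Euler step above can be packaged into a small function. This is a minimal Python sketch (the function name and the falling-object scenario are mine); note that velocity is updated first, and the position update then uses the new velocity:

```python
def symplectic_euler_step(x, v, force, inv_mass, dt):
    """Advance position x and velocity v by one timestep dt.
    Velocity is updated first; position then uses the NEW velocity."""
    v = v + (force * inv_mass) * dt   # v += (1/m * F) * dt
    x = x + v * dt                    # x += v * dt
    return x, v

# Drop a 2 kg object from height 10 m under gravity, one 1/100 s step
x, v = 10.0, 0.0
x, v = symplectic_euler_step(x, v, force=-9.81 * 2.0, inv_mass=0.5, dt=0.01)
```

After one step the velocity is -0.0981 m/s and the position has moved down by that velocity times dt, exactly as the two pseudocode lines prescribe.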

Timestepping

Since games are displayed at discrete time intervals, there needs to be a way of manipulating the time between these steps in a controlled manner. Have you ever seen a game that will run at different speeds depending on what computer it is being played on? That’s an example of a game running at a speed dependent on the computer’s ability to run the game.
We need a way to ensure that our physics engine only runs when a specific amount of time has passed. This way, the dt that is used within calculations is always the exact same number. Using the exact same dt value in your code everywhere will actually make your physics engine deterministic, and is known as a fixed timestep. This is a good thing.
A deterministic physics engine is one that will always do the exact same thing every time it is run assuming the same inputs are given. This is essential for many types of games where gameplay needs to be very fine-tuned to the physics engine’s behavior. This is also essential for debugging your physics engine, as in order to pinpoint bugs the behavior of your engine needs to be consistent.
Let’s first cover a simple version of a fixed timestep. Here is an example:
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units of seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
  const float currentTime = GetCurrentTime( )

  // Store the time elapsed since the last frame began
  accumulator += currentTime - frameStart

  // Record the starting of this frame
  frameStart = currentTime

  while(accumulator > dt)
    UpdatePhysics( dt )
    accumulator -= dt

  RenderGame( )
This waits around, rendering the game, until enough time has elapsed to update the physics. The elapsed time is recorded, and discrete dt-sized chunks of time are taken from the accumulator and processed by the physics. This ensures that the exact same value is passed to the physics no matter what, and that the value passed to the physics is an accurate representation of actual time that passes by in real life. Chunks of dt are removed from the accumulator until the accumulator is smaller than a dt chunk.
There are a couple of problems that can be fixed here. The first involves how long it takes to actually perform the physics update: What if the physics update takes too long and the accumulator goes higher and higher each game loop? This is called the spiral of death. If this is not fixed, your engine will quickly grind to a complete halt if your physics cannot be performed fast enough.
To solve this, the engine really needs to just run fewer physics updates if the accumulator gets too high. A simple way to do this would be to clamp the accumulator below some arbitrary value.
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
  const float currentTime = GetCurrentTime( )

  // Store the time elapsed since the last frame began
  accumulator += currentTime - frameStart

  // Record the starting of this frame
  frameStart = currentTime

  // Avoid spiral of death and clamp dt, thus clamping
  // how many times the UpdatePhysics can be called in
  // a single game loop.
  if(accumulator > 0.2f)
    accumulator = 0.2f

  while(accumulator > dt)
    UpdatePhysics( dt )
    accumulator -= dt

  RenderGame( )
Now, if a game running this loop ever encounters some sort of stalling for whatever reason, the physics will not drown itself in a spiral of death. The game will simply run a little more slowly, as appropriate.
The next thing to fix is quite minor in comparison to the spiral of death. This loop takes dt chunks from the accumulator until the accumulator is smaller than dt. This is fine, but there’s still a little bit of time left over in the accumulator. This poses a problem.
Assume the accumulator is left with 1/5th of a dt chunk every frame. On the sixth frame the accumulator will have enough remaining time to perform one more physics update than all the other frames. This will result in one frame every second or so performing a slightly larger discrete jump in time, and might be very noticeable in your game.
To solve this, the use of linear interpolation is required. If this sounds scary, don’t worry – the implementation will be shown. If you want to understand the implementation there are many resources online for linear interpolation.
// linear interpolation between t1 and t2
// for a from 0 to 1 (a = 0 gives t2, a = 1 gives t1)
t1 * a + t2 * (1.0f - a)
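In Python, with a hypothetical `lerp` helper, the blend looks like this:

```python
def lerp(t1, t2, a):
    """Linear interpolation: a = 0 returns t2, a = 1 returns t1."""
    return t1 * a + t2 * (1.0 - a)

print(lerp(10.0, 0.0, 0.25))  # 2.5 -- a quarter of the way from 0.0 toward 10.0
```

The two weights `a` and `(1.0 - a)` always sum to 1, so the result never over- or undershoots the two endpoints.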
Using this we can interpolate (approximate) where we might be between two different time intervals. This can be used to render the state of a game in between two different physics updates.
With linear interpolation, the rendering of an engine can run at a different pace than the physics engine. This allows a graceful handling of the leftover accumulator from the physics updates.
Here is a full example:
const float fps = 100
const float dt = 1 / fps
float accumulator = 0

// In units seconds
float frameStart = GetCurrentTime( )

// main loop
while(true)
  const float currentTime = GetCurrentTime( )

  // Store the time elapsed since the last frame began
  accumulator += currentTime - frameStart

  // Record the starting of this frame
  frameStart = currentTime

  // Avoid spiral of death and clamp dt, thus clamping
  // how many times the UpdatePhysics can be called in
  // a single game loop.
  if(accumulator > 0.2f)
    accumulator = 0.2f

  while(accumulator > dt)
    UpdatePhysics( dt )
    accumulator -= dt

  const float alpha = accumulator / dt;

  RenderGame( alpha )

void RenderGame( float alpha )
  for shape in game do
    // calculate an interpolated transform for rendering
    Transform i = shape.current * alpha + shape.previous * (1.0f - alpha)
    shape.previous = shape.current
    shape.Render( i )
Here, all objects within the game can be drawn at variable moments between discrete physics timesteps. This will gracefully handle all error and remainder time accumulation. This is actually rendering ever so slightly behind what the physics has currently solved for, but when watching the game run all motion is smoothed out perfectly by the interpolation.
The player will never know that the rendering is ever so slightly behind the physics, because the player will only know what they see, and what they’ll see is perfectly smooth transitions from one frame to another.
You may be wondering, “why don’t we interpolate from the current position to the next?”. I tried this and it requires the rendering to “guess” where objects are going to be in the future. Often, objects in a physics engine make sudden changes in movement, such as during collision, and when such a sudden movement change is made, objects will teleport around due to inaccurate interpolations into the future.

Modular Design

There are a few things every physics object is going to need, though the specifics may change slightly from object to object. We need a clever way to organize all this data, ideally one that requires as little code as possible. In this case some modular design would be of good use.
Modular design probably sounds a little pretentious or over-complicated, but it does make sense and is quite simple. In this context, “modular design” just means we want to break a physics object into separate pieces, so that we can connect or disconnect them however we see fit.

Bodies

A physics body is an object that contains all information about some given physics object. It will store the shape(s) that the object is represented by, mass data, transformation (position, rotation), velocity, torque, and so on. Here is what our body ought to look like:
struct body
{
  Shape *shape;
  Transform tx;
  Material material;
  MassData mass_data;
  Vec2 velocity;
  Vec2 force;
  real gravityScale;
};
This is a great starting point for the design of a physics body structure. There are some intelligent decisions made here that tend towards strong code organization.
The first thing to notice is that a shape is contained within the body by means of a pointer. This represents a loose relationship between the body and its shape. A body can contain any shape, and the shape of a body can be swapped around at will. In fact, a body can be represented by multiple shapes, and such a body would be known as a “composite”, as it would be composed of multiple shapes. (I’m not going to cover composites in this tutorial.)
Body and Shape interface. The shape itself is responsible for computing bounding shapes, calculating mass based on density, and rendering.
The mass_data is a small data structure to contain mass-related information:
struct MassData
{
  float mass;
  float inv_mass;

  // For rotations (not covered in this article)
  float inertia;
  float inverse_inertia;
};
It is nice to store all mass- and inertia-related values in a single structure. The mass should never be set by hand – mass should always be calculated from the shape itself. Mass is a rather unintuitive type of value, and setting it by hand will take a lot of tweaking time. It’s defined as:
\[ Equation 3:\\Mass = density * volume\]
Whenever a designer wants a shape to be more “massive” or “heavy”, they should modify the density of a shape. This density can be used to calculate the mass of a shape given its volume. This is the proper way to go about the situation, as density is not affected by volume and will never change during the runtime of the game (unless specifically supported with special code).
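Here is a sketch of how a shape might compute its own mass data from a material’s density. The function name is mine, and I’m assuming the common 2D convention where a circle’s area stands in for “volume”, and where a mass of 0 denotes a static (effectively infinite-mass) object whose inverse mass is stored as 0:

```python
import math

def compute_circle_mass(radius, density):
    """Mass = density * volume; in 2D the circle's area plays the role
    of volume. Returns (mass, inverse mass). A mass of 0 is treated as
    a static object, so its inverse mass is stored as 0."""
    mass = density * math.pi * radius * radius
    inv_mass = 0.0 if mass == 0.0 else 1.0 / mass
    return mass, inv_mass

mass, inv_mass = compute_circle_mass(radius=2.0, density=1.5)
```

A designer who wants this circle heavier just raises `density`; the mass and inverse mass follow automatically.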
Some examples of shapes such as AABBs and Circles can be found in the previous tutorial in this series.

Materials

All this talk of mass and density leads to the question: where does the density value lie? It resides within the Material structure:
struct Material
{
  float density;
  float restitution;
};
Once the values of the material are set, this material can be passed to the shape of a body so that the body can calculate the mass [...]

Read more: How to Create a Custom 2D Physics Engine: The Core Engine

Monday, April 8, 2013

Tutorial: Debugging Native Extensions

I've written before about how to create your own native extensions for Adobe AIR, but what if you need to debug your extension? If you're doing anything but the simplest of projects, you're going to want to be able to debug the code on the native side, in addition to your AS3 code. The ActionScript side is (hopefully!) already taken care of by your IDE, so this tutorial will focus on the native code. It's a pretty short tutorial, but also incredibly useful!

For this tutorial, I'm using Visual Studio 2012 Express, which you can get for free from Microsoft, and a slightly modified version of the NativeAdd extension from the tutorial linked above.

First, you'll want to have your AIR project running. Then switch over to Visual Studio and set a breakpoint (F9, or click the side bar) on the line you want to investigate. Next, select Debug > Attach to Process from the Visual Studio menu. If you're using Visual Studio 2010 and you don't see this option, first click Tools > Settings > Expert Settings. This will bring up a window with a list of running processes on your PC. In this list, find the adl.exe process, select it, and click the Attach button.

Once the debugger has attached itself to the process, go back to your AIR application and have it call into the native function where you added your breakpoint. In this case, I'm using the NativeAdd extension from a previous tutorial. I modified it a bit so that when the stage is clicked, it calls the doAdd function on the native side. With the debugger attached, when I click the stage, my breakpoint gets hit.

You can now take a look at the values of your native code variables and monitor them as you step through the code using the Visual Studio debugger tools, such as the watch window.
[...] Read more: Tutorial: Debugging Native Extensions

Saturday, April 6, 2013

How to Create a Custom 2D Physics Engine: The Basics and Impulse Resolution

There are many reasons you might want to create a custom physics engine: first, learning and honing your skills in mathematics, physics and programming are great reasons to attempt such a project; second, a custom physics engine can tackle any sort of technical effect the creator has the skill to create. In this article I would like to provide a solid introduction on how to create a custom physics engine entirely from scratch.

Physics provides a wonderful means for allowing a player to immerse themselves within a game. It makes sense that a mastery of a physics engine would be a powerful asset for any programmer to have at their disposal. Optimizations and specializations can be made at any time due to a deep understanding of the inner workings of the physics engine.

By the end of this tutorial the following topics will have been covered, in two dimensions:
  • Simple collision detection
  • Simple manifold generation
  • Impulse resolution
Here’s a quick demo:

Prerequisites

This article involves a fair amount of mathematics and geometry, and to a much lesser extent actual coding. A couple of prerequisites for this article are:
  • A basic understanding of simple vector math
  • The ability to perform algebraic math

Collision Detection

There are quite a few articles and tutorials throughout the internet, including here on Tuts+, that cover collision detection. Knowing this, I would like to run through the topic very quickly as this section is not the focus of this article.

Axis Aligned Bounding Boxes

An Axis Aligned Bounding Box (AABB) is a box whose edges are aligned with the axes of the coordinate system in which it resides. This means it is a box that cannot rotate, and is always squared off at 90 degrees (usually aligned with the screen). In general it is referred to as a “bounding box” because AABBs are used to bound other, more complex shapes.

An example AABB.

The AABB of a complex shape can be used as a simple test to see whether the more complex shapes inside the AABBs can possibly be intersecting. However, in the case of most games the AABB is used as a fundamental shape, and does not actually bound anything else. The structure of your AABB is important. There are a few different ways to represent an AABB, but this is my favorite:
struct AABB
{
  Vec2 min;
  Vec2 max;
};
This form allows an AABB to be represented by two points. The min point represents the lower bounds of the x and y axis, and max represents the higher bounds – in other words, they represent the top left and bottom right corners. In order to tell whether two AABB shapes are intersecting you will need to have a basic understanding of the Separating Axis Theorem (SAT). Here’s a quick test taken from Real-Time Collision Detection by Christer Ericson, which makes use of the SAT:
bool AABBvsAABB( AABB a, AABB b )
{
  // Exit with no intersection if found separated along an axis
  if(a.max.x < b.min.x or a.min.x > b.max.x) return false
  if(a.max.y < b.min.y or a.min.y > b.max.y) return false

  // No separating axis found, therefore there is at least one overlapping axis
  return true
}
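The same test translates directly to Python. This is a sketch using `namedtuple` stand-ins for `Vec2` and `AABB` (the lowercase function name and the example boxes are my own):

```python
from collections import namedtuple

Vec2 = namedtuple("Vec2", ["x", "y"])
AABB = namedtuple("AABB", ["min", "max"])

def aabb_vs_aabb(a, b):
    """Separating axis test: if the boxes are separated along either
    the x or the y axis, they cannot be intersecting."""
    if a.max.x < b.min.x or a.min.x > b.max.x:
        return False
    if a.max.y < b.min.y or a.min.y > b.max.y:
        return False
    # No separating axis found, so the boxes overlap
    return True

box_a = AABB(Vec2(0, 0), Vec2(2, 2))
box_b = AABB(Vec2(1, 1), Vec2(3, 3))  # overlaps box_a
box_c = AABB(Vec2(5, 5), Vec2(6, 6))  # separated from box_a on both axes
```

Only two axes ever need to be checked for AABBs, which is what makes this test so cheap.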

Circles

A circle is represented by a radius and a point. Here is what your circle structure ought to look like:
struct Circle
{
  float radius;
  Vec2 position;
};
Testing whether two circles intersect is very simple: take the radii of the two circles and add them together, then check whether this sum is greater than the distance between the two circles’ centers. An important optimization to make here is to get rid of any need to use the square root operator:
float Distance( Vec2 a, Vec2 b )
{
  return sqrt( (a.x - b.x)^2 + (a.y - b.y)^2 )
}

bool CirclevsCircleUnoptimized( Circle a, Circle b )
{
  float r = a.radius + b.radius
  return r > Distance( a.position, b.position )
}

bool CirclevsCircleOptimized( Circle a, Circle b )
{
  float r = a.radius + b.radius
  r *= r
  return r > (a.position.x - b.position.x)^2 + (a.position.y - b.position.y)^2
}
In general multiplication is a much cheaper operation than taking the square root of a value.
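Here is the optimized test in Python, taking the centers and radii as plain numbers (the function name and example values are my own). Comparing squared distances against the squared radius sum gives the same answer as the unoptimized version without ever calling `sqrt`:

```python
def circle_vs_circle(ax, ay, ar, bx, by, br):
    """Circles intersect when the distance between their centers is less
    than the sum of their radii; compare squared values to avoid sqrt."""
    r = ar + br
    dist_sq = (bx - ax) ** 2 + (by - ay) ** 2
    return dist_sq < r * r

print(circle_vs_circle(0.0, 0.0, 1.0, 1.5, 0.0, 1.0))  # True: centers 1.5 apart, radii sum to 2
print(circle_vs_circle(0.0, 0.0, 1.0, 3.0, 0.0, 1.0))  # False: centers 3.0 apart, radii sum to 2
```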

Impulse Resolution

Impulse resolution is a particular type of collision resolution strategy. Collision resolution is the act of taking two objects that are found to be intersecting and modifying them in such a way as to not allow them to remain intersecting.

In general an object within a physics engine has two main degrees of freedom (in two dimensions): movement in the xy plane and rotation. In this article we implicitly restrict rotation and use just AABBs and Circles, so the only degree of freedom we really need to consider is movement along the xy plane. By resolving detected collisions we place a restriction upon movement such that objects cannot remain intersecting one another.

The idea behind impulse resolution is to use an impulse (an instantaneous change in velocity) to separate objects found colliding. In order to do this the mass, position, and velocity of each object must be taken into account somehow: we want large objects colliding with smaller ones to move a little bit during collision, and to send the small objects flying away. We also want objects with infinite mass to not move at all.

Simple example of what impulse resolution can achieve.

In order to achieve such effects and follow along with natural intuition of how objects behave, we’ll use rigid bodies and a fair bit of math. A rigid body is just a shape defined by the user (that is, by you, the developer) that is implicitly defined to be non-deformable. Both AABBs and Circles in this article are non-deformable, and will always be either an AABB or a Circle. No squashing or stretching allowed. Working with rigid bodies allows a lot of math and derivations to be heavily simplified. This is why rigid bodies are commonly used in game simulations, and why we’ll be using them in this article.

Our Objects Collided – Now What?

Assuming we have two shapes found to be intersecting, how does one actually separate the two? Let’s assume our collision detection provided us with two important pieces of information:
  • Collision normal
  • Penetration depth
In order to apply an impulse to both objects and move them apart, we need to know what direction to push them in and by how much. The collision normal is the direction in which the impulse will be applied. The penetration depth (along with some other things) determines how large an impulse will be used. This means the only value that needs to be solved for is the magnitude of our impulse. Now let’s go on a long trek to discover how we can solve for this impulse magnitude. We’ll start with our two objects that have been found to be intersecting:
Equation 1
\[ V^{AB} = V^B - V^A \] Note that in order to create a vector from position A to position B, you must do: endpoint - startpoint. \(V^{AB}\) is the relative velocity from A to B. This equation ought to be expressed in terms of the collision normal \(n\) – that is, we would like to know the relative velocity from A to B along the collision normal’s direction:
Equation 2
\[ V^{AB} \cdot n = (V^B - V^A) \cdot n \] We are now making use of the dot product. The dot product is simple; it is the sum of component-wise products:
Equation 3
\[ V_1 = \begin{bmatrix}x_1 \\y_1\end{bmatrix}, V_2 = \begin{bmatrix}x_2 \\y_2\end{bmatrix} \\ V_1 \cdot V_2 = x_1 * x_2 + y_1 * y_2 \] The next step is to introduce what is called the coefficient of restitution. Restitution is a term that means elasticity, or bounciness. Each object in your physics engine will have a restitution represented as a decimal value. However, only one decimal value will be used during the impulse calculation. To decide what restitution to use (denoted by \(e\) for epsilon), you should always use the lowest restitution involved in the collision for intuitive results:
// Given two objects A and B
e = min( A.restitution, B.restitution )
Once \(e\) is acquired we can place it into our equation solving for the impulse magnitude. Newton’s Law of Restitution states the following:
Equation 4
\[V' = e * V \] All this is saying is that the velocity after a collision is equal to the velocity before it, multiplied by some constant. This constant represents a “bounce factor”. Knowing this, it becomes fairly simple to integrate restitution into our current derivation:
Equation 5
\[ V^{AB} \cdot n = -e * (V^B - V^A) \cdot n \] Notice how we introduced a negative sign here. In Newton’s Law of Restitution, \(V'\), the resulting vector after the bounce, is actually going in the opposite direction of \(V\). So how do we represent opposite directions in our derivation? Introduce a negative sign. So far so good. Now we need to be able to express these velocities while under the influence of an impulse. Here’s a simple equation for modifying a vector by some impulse scalar \(j\) along a specific direction \(n\):
Equation 6
\[ V' = V + j * n \] Hopefully the above equation makes sense, as it is very important to understand. We have a unit vector \(n\) which represents a direction. We have a scalar \(j\) which represents how long our \(n\) vector will be. We then add our scaled \(n\) vector to \(V\) to result in \(V’\). This is just adding one vector onto another, and we can use this small equation to apply an impulse of one vector to another. There’s a little more work to be done here. Formally, an impulse is defined as a change in momentum. Momentum is mass * velocity. Knowing this, we can represent an impulse as it is formally defined like so:
Equation 7
\[ Impulse = mass * \Delta Velocity \\ \Delta Velocity = \frac{Impulse}{mass} \therefore V' = V + \frac{j * n}{mass}\] The three dots in a small triangle (\(\therefore\)) can be read as “therefore”. It is used to show that the thing beforehand can be used to conclude that whatever comes next is true. Good progress has been made so far! However, we need to be able to express an impulse using \(j\) in terms of two different objects. During a collision between objects A and B, A will be pushed in the opposite direction of B:
Equation 8
\[ V'^A = V^A - \frac{j * n}{mass^A} \\ V'^B = V^B + \frac{j * n}{mass^B} \] These two equations will push A away from B along the direction unit vector \(n\) (which points from A to B) by the impulse scalar \(j\), the magnitude of the impulse along \(n\). All that is now required is to merge Equations 8 and 5. Our resulting equation will look something like this:
Equation 9
\[ (V^B - V^A + \frac{j * n}{mass^A} + \frac{j * n}{mass^B}) \cdot n = -e * (V^B - V^A) \cdot n \\ \therefore \\ (V^B - V^A + \frac{j * n}{mass^A} + \frac{j * n}{mass^B}) \cdot n + e * (V^B - V^A) \cdot n = 0 \] If you recall, the original goal was to isolate our magnitude. This is because we know what direction to resolve the collision in (assumed given by the collision detection), and it only remains to solve for the magnitude along this direction. That magnitude, the unknown in our case, is \(j\); we must isolate \(j\) and solve for it.
Equation 10
\[ (V^B - V^A) \cdot n + j * (\frac{n}{mass^A} + \frac{n}{mass^B}) \cdot n + e * (V^B - V^A) \cdot n = 0 \\ \therefore \\ (1 + e)((V^B - V^A) \cdot n) + j * (\frac{n}{mass^A} + \frac{n}{mass^B}) \cdot n = 0 \\ \therefore \\ j = \frac{-(1 + e)((V^B - V^A) \cdot n)}{\frac{1}{mass^A} + \frac{1}{mass^B}} \] (In the last step we used the fact that \(n\) is a unit vector, so \(n \cdot n = 1\).) Whew! That was a fair bit of math! It is all over for now, though. It’s important to notice that in the final version of Equation 10 we have \(j\) on the left (our magnitude) and everything on the right is known. This means we can write a few lines of code to solve for our impulse scalar \(j\). And boy is the code a lot more readable than mathematical notation!
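To make the final formula concrete, here is a small worked example (the numbers and function name are my own, Python used purely for illustration):

```python
# Solve j = -(1 + e) * ((V^B - V^A) . n) / (1/mass^A + 1/mass^B) for 2D vectors.
def impulse_scalar(va, vb, n, mass_a, mass_b, e):
    rvx, rvy = vb[0] - va[0], vb[1] - va[1]      # relative velocity V^B - V^A
    vel_along_normal = rvx * n[0] + rvy * n[1]   # (V^B - V^A) . n
    return -(1 + e) * vel_along_normal / (1 / mass_a + 1 / mass_b)

# A moves right at 5, B moves left at 3; the normal points from A to B.
j = impulse_scalar(va=(5.0, 0.0), vb=(-3.0, 0.0), n=(1.0, 0.0),
                   mass_a=2.0, mass_b=4.0, e=0.5)
# velAlongNormal = -8, so j = -(1.5 * -8) / (0.5 + 0.25) = 12 / 0.75
print(j)  # 16.0
```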
void ResolveCollision( Object A, Object B )
{
  // Calculate relative velocity
  Vec2 rv = B.velocity - A.velocity

  // Calculate relative velocity in terms of the normal direction
  // (normal is the collision normal, pointing from A to B, supplied
  // by the collision detection step)
  float velAlongNormal = DotProduct( rv, normal )

  // Do not resolve if velocities are separating
  if(velAlongNormal > 0)
    return;

  // Calculate restitution
  float e = min( A.restitution, B.restitution)

  // Calculate impulse scalar
  float j = -(1 + e) * velAlongNormal
  j /= 1 / A.mass + 1 / B.mass

  // Apply impulse
  Vec2 impulse = j * normal
  A.velocity -= 1 / A.mass * impulse
  B.velocity += 1 / B.mass * impulse
}
There are a few key things to note in the above code sample. The first is the early-out check, if(velAlongNormal > 0). This check is very important: it makes sure that you only resolve a collision if the objects are moving towards each other. If two objects collide but their velocities will separate them next frame, do not resolve that collision; if objects are moving away from one another we want to do nothing. This prevents objects that shouldn’t actually be considered colliding from resolving away from one another, and it is important for creating a simulation that follows human intuition about what should happen during object interaction. The second thing to note is that inverse mass is computed multiple times for no reason. It is best to just store your inverse mass within each object and pre-compute it one time:
A.inv_mass = 1 / A.mass
Many physics engines do not actually store raw mass; often they store the inverse mass, and the inverse mass alone. It just so happens that most math involving mass is in the form of 1/mass. The last thing to note is that we intelligently distribute our impulse scalar \(j\) over the two objects. We want small objects to bounce off of big objects with a large portion of \(j\), and the big objects to have their velocities modified by a very small portion of \(j\). In order to do this you could do:
// Distribute the raw impulse scalar (j before dividing by the inverse mass sum)
float mass_sum = A.mass + B.mass

// The lighter object should receive the larger share of the impulse,
// so each object's ratio is weighted by the other object's mass
float ratio = B.mass / mass_sum
A.velocity -= ratio * impulse

ratio = A.mass / mass_sum
B.velocity += ratio * impulse
Provided that impulse here is built from the raw impulse scalar (that is, \(j\) before it is divided by the sum of the inverse masses), the above code is equivalent to the ResolveCollision() sample function from before. Like stated before, inverse masses are quite useful in a physics engine.
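As a sanity check, here is a quick numerical sketch (the numbers and variable names are my own, Python used purely for illustration) showing that weighting the raw impulse by the *other* body’s mass fraction gives the same velocity changes as the inverse-mass form:

```python
mass_a, mass_b, e = 2.0, 4.0, 0.5
vel_a, vel_b = 5.0, -3.0              # velocities along the collision normal
vel_along_normal = vel_b - vel_a      # -8.0: the objects are approaching

# Inverse-mass form: j is divided by the sum of inverse masses, then each
# body's velocity changes by its own inverse mass times j.
j = -(1 + e) * vel_along_normal / (1 / mass_a + 1 / mass_b)
dv_a = -(1 / mass_a) * j
dv_b = (1 / mass_b) * j

# Mass-ratio form: the raw (un-divided) impulse, with each body's share
# weighted by the other body's mass fraction.
j_raw = -(1 + e) * vel_along_normal
mass_sum = mass_a + mass_b
dv_a_ratio = -(mass_b / mass_sum) * j_raw
dv_b_ratio = (mass_a / mass_sum) * j_raw

print(dv_a, dv_b)  # -8.0 4.0 -- the lighter object A gets the bigger kick
```

Note that the post-collision relative velocity, (vel_b + dv_b) - (vel_a + dv_a), comes out to 4.0, which is exactly -e * vel_along_normal, as Newton’s Law of Restitution requires.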

Sinking Objects

[...] Read more: How to Create a Custom 2D Physics Engine: The Basics and Impulse Resolution

Saturday, March 30, 2013

Postmortem: Building Hard-shell Hockey with NME

NME postmortem
After finally creating my first game with NME and publishing it to the Android store, I thought I would share my experience so far.

To begin, I should explain what NME is and what it can do since I have only given it a brief mention before.
According to the official site NME is:
“The best way to build cross-platform 2D games or rich applications.”
“Using the innovative Haxe language compiler, NME deploys the same source code to C++, JavaScript and SWF bytecode without sacrificing reliability. As a result, NME provides fast hardware-accelerated performance for Windows, Mac, Linux, iOS, Android and BlackBerry, while also supporting Flash Player and HTML5. NME mirrors the traditional Flash API, so developers can leverage their existing experience to target new platforms, with better results.”
So far I have to say that statement is pretty accurate. But what about other, similar tools such as Corona, Unity, libGDX and Marmalade, and why did I choose NME over them?
One of the more obvious reasons why I chose NME was my past experience with the Flash API which certainly lowered the learning curve. However, it was the fact that NME is the only truly free solution that won me over. LibGDX came closest out of the other choices I mentioned but it still required a $400 license fee for iOS. Having witnessed the HTML5 vs. Flash war I feel it is much safer to bet on technologies that are free and open source.
Of course NME is not without its flaws. Bugs and issues with various platforms do exist although fortunately they are few and far between. Additionally the community is not huge and so one can’t expect the same level of documentation and tutorials. That being said, I have really noticed over the past year how polished NME is becoming and how the community is growing which is fantastic to see. I am very much looking forward to seeing shader support for mobiles being added too.
So, having decided upon NME as the tool to use I realised I needed a game engine. When I first started there was not much choice so I decided to port over an engine I had made in Flash. As of now, ports of the popular Flash game engines like HaxeFlixel and HaxePunk exist. Regardless, software architecture and game engine design has interested me so I don’t feel that I wasted my time by creating my own and so far am very pleased with my results.
That being said, I did greatly underestimate how much work was involved in getting the game engine working properly.
The first challenge was getting the hang of rendering in a different way from what I had been accustomed to with Flash. In Flash, I would mainly use the DisplayList and at times blitting. This works great in web browsers but less so on mobile devices. Blitting actually causes the mobile device to render via software, which kills performance. The DisplayList, whilst better (it is GPU accelerated), is still not the optimum way of rendering. When considering performance on mobiles, the best options are drawTiles and drawTriangles. This means using a third-party tool to export a swf into a tilesheet to render via drawTiles.
After understanding and implementing drawTiles, my next problem was displaying Android ads. I managed to find a library that used NME extensions to achieve this except that it did not work out of the box. Eventually I tracked down another person who had managed to get it up and running and they gave me a lot of tips. In the end this required delving into some of the NME framework and modifying the Java code template. It was after this that I decided to just focus on developing for Android as I did not want to waste a similar amount of time getting iOS ads working (although I believe they work out of the box on that platform) [...]

Read more: Postmortem: Building Hard-shell Hockey with NME

Friday, March 22, 2013

From Zero to Pitch: A Walkthrough for Game Designers

How can you take a rough game idea and turn it into something fun, interesting and engaging? This is definitely one of the most difficult obstacles that you must tackle as a game designer, as your job is to craft enjoyable experiences in a constrained environment. Would you believe me if I told you that after you finish reading and participating in the activities established in this article, you will have a game designed and ready to be developed? Yes, I know it sounds inconceivable, but trust me – this series of unconventional exercises will explain the workflow of designing a brand new game from zero to pitch.

Strengthen Your Game Design Muscle

I want to share with you one of the habits that helps me strengthen my game design muscle all the time: turning real-life experiences into games. When I started developing video games, I acquired the habit of taking everyday experiences and imagining them in terms of a video game; this makes you very aware of the things that are happening around you and the fun or frustrating factors in them. For example, let’s say that you took a flight and when you arrived at the airport your luggage got lost. This is a very frustrating scenario, but now take that experience and try to imagine how the luggage got lost and who was involved in that particular situation. Could you make a game based on this?
Can you take something apparently dull and turn it into something fun?
What about a game where the player has to manage the conveyor belts at an airport and loses points when passengers’ frustration rises past a certain level because they don’t get their luggage? Or what if you experienced some turbulence during your flight and got a little bit nervous… could you make a game where you have to fly an airplane and gain points by doing abrupt manoeuvres and scaring the passengers? Of course you can! This is the raw idea that you need to write down on paper and start working on. As a matter of fact, there’s a little booklet called the Tiny Game Design Tool that can make this process a little bit easier, by helping you jot the idea down as soon as possible. This tool is basically a printable PDF that you can carry in your wallet which helps you establish, in very little time, the following basic elements of your game:
  • Emotion, Mechanic and Theme
  • Main Character
  • Objects
  • Obstacles
  • Level Design

Play With the Elements

The alchemy of creativity. (Image taken from Everything is a Remix Part 3.)
We have a raw idea. This idea is like clay: you can mold it, split it into two, make a shape with it, and so on. Now we have to play with this idea… but how? Well, another creative exercise that is very useful for constantly working your game design muscle is to put into practice the three basic elements of creativity: copy, transform and combine. I took these elements from the third episode of Kirby Ferguson’s documentary Everything is a Remix, and they will help us exercise our minds so that they can always be wide open to creativity.

Copy

006_copy
Get practice and experience by copying existing games, but also apply small modifications to make the game easier or more difficult. This process of repetition and modification is called iteration and is very common in software and video game development. Some examples of games that follow this exercise are:
  • Plasma Pong, which is a Pong variant with fluid dynamics and a new game mechanic that lets the ball stick to your paddle.
  • FPS-MAN, which is a take on Pac-Man but with a change in perspective. Now you look at the action from the point of view of Pac-Man and it changes the game experience completely.
  • Portal: The Flash Version, which takes the basic mechanics of Portal but constrains them to a two-dimensional space. This is also known as a demake.

Transform

006_transform
Did you have an interesting experience today, this week or this month? If you didn’t… borrow one! Take a newspaper and watch the front page news; you will notice that these stories are generally based on events that have something interesting to tell. Pick a story you like and turn it into a video game. Ask yourself questions: Who are the people involved? Where did the action happen? Could the personalities involved be interesting characters for my game? A recent game that tackles this type of exercise is Riot, in which the player experiences both sides of a protest, as rioters and as police officers. This idea came to fruition when Leonard Menchiari wanted to capture the feeling of his own experience of protests.
A kid that has to reach an airplane while avoiding security guards, cameras and checkpoints.
A more personal experience I had with this exercise was a small game I made for a jam where participants had to develop games based on TV news. My game was called “Escape from el Dorado”, and was based on a TV report about a kid that infiltrated the “El Dorado” airport in Bogotá and got into a plane that ended up in Chile. This story made me think immediately about how I could implement stealth mechanics in an airport setting.

Combine

006_combine
Take an existing game and combine it with another game, or change its genre. What would happen? What if I took a game like Super Mario Bros. and applied a skill tree just like the ones found in Role Playing Games? A recent example of combination is Bombermine, which takes the classic Bomberman game and mixes it with multiplayer elements.

Planting the Characters’ Seeds

We’ve gone from a raw idea to something that is starting to take shape, and slowly but steadily we are covering the basics. Now let’s design the characters. First we’re going to start with a game design tool called Game Seeds. This game is generally played in groups, but for this particular exercise we will play a single-player variant focused on character development. Tip: I assume that by now you know the basics of playing Game Seeds. If not, please take a couple of minutes to read my guide here.
Click here to see the full-size image.
We’re going to start with this fifteen card grid as our foundation. Please pick just six cards that have game mechanics that appeal to you. Now, in three steps we’re going to fill the Game Seeds sheet found below with the information given to us by the cards:
This is the Game Seeds sheet. Click here to download the printable version.

Step 1: Fill the Hero Profile

006_seeds
Now that you have selected the six cards, go through each one and write down the icons that appear the most for each category. (In the case of a draw, choose the one you like the most.) For example, in the “species” category of your six cards you might have three human icons, two robot icons and one creature icon; based on this your character will be human.

Example

From the fifteen card grid I chose Fight, Escape, Race, Duel, Destroy and Navigate. Using those cards this was my character:
  • Creature (3 occurrences) or Robot (3 occurrences) (a draw, therefore I pick Robot)
  • Androgyne (4 occurrences)
  • Lives in the past (4 occurrences)
  • Nomad (5 occurrences)
  • Tall (3 occurrences) or Short (3 occurrences) (a draw, therefore I pick Tall) and Heavy (4 occurrences)

Step 2: Fill the Hero Attributes

Now take a look at the bottom of the cards and mark the Hero Attributes section with an “X” for each attribute that appears.

Example

Using the cards I chose I get the following attributes:
  • STR (STRENGTH) (6)
  • HEA (HEALTH) (6)
  • DEX (DEXTERITY) (4)
  • WIS (WISDOM) (2)
  • CHAR (CHARISMA) (0)
So Strength and Health are the main attributes of my character, and it is neither wise nor charming! Once you’ve done this, add your personal touch to the character in the Character Development section. This will define all the details that are going to give personality to your character.

Step 3: Pick the Mechanic; Draw Your Hero

Finally, pick your favourite game mechanic from the six cards you chose. This will be your main gameplay mechanic. Now you can draw a sketch of your hero based on all the attributes and his profile. This is my hero sketch, a tall and heavy androgyne robot in a steampunk world who was built using parts from a bank vault:
This is the point where you realize I suck at drawing.

The Basic Elements of Game Design

006_4_elements
Now that we have a basic foundation of our game and a main character, it’s time to play with the elements that will form the game. I recommend you start with the elemental tetrad shown above because it’s going to be very straightforward for you when defining your game and the experience you want to give to your players. Use what you have done so far as a guide to define the following:
  • Aesthetics: This is how the game looks, sounds and feels, and it’s also what has the most direct relationship to the player.
  • Limbo is very good at capturing someone’s attention just by its aesthetics.
  • Mechanics: These are the rules of your game. Use the game mechanic you chose in the previous activity and expand on it, asking yourself what the main goal of your game is and how players are going to achieve it.
  • Pong has a very limited set of rules, but that doesn’t mean it’s boring.
  • Story: What tale do you want to tell through your game? Is it going to be a micro-narrative as in NES games, or is it going to be a complex and branching story?
  • David Cage always pushes the envelope in terms of storytelling, with games like Indigo Prophecy and Heavy Rain.
  • Technology: This is the medium in which the aesthetics, mechanics and story will take place. What kind of technology are you going to use that will make the game possible?
  • Maybe your game works better as a board game, or as a game for smartphones.

Look at Your Creation Through Lenses

Our game is growing and finally taking shape, and now it’s time to polish the rough corners. To do this we are going to use the questions from the Deck of Lenses and focus them on the areas of our game that we think need improvement.
Read more: From Zero to Pitch: A Walkthrough for Game Designers

Tuesday, March 12, 2013

How to Speed Up A* Pathfinding With the Jump Point Search Algorithm

Pathfinding is ubiquitous in games, so it’s important to understand the implications of using algorithms such as A*. In this tutorial we’re going to cover a relatively new method for searching grid-based worlds: Jump Point Search, which can speed up A* by orders of magnitude. Note: Although this tutorial is written using AS3 and Flash, you should be able to use the same techniques and concepts in almost any game development environment. This implementation is based on the original paper and article on JPS found here: Jump Point Search. The Lua-based implementation, Jumper, was used for help with some parts of the implementation.

Jump Point Search Demo

Click the SWF to give it focus, then move your mouse over non-blocking areas of the map to have the NPCs try to get to it. Hit Space to switch between A*, Jump Point Search, and both. No Flash? Check out the YouTube video instead:

Setup

The demo implementation above uses AS3 and Flash with the Starling Framework for GPU accelerated rendering and the polygonal-ds library for data structures.

Pathfinding

Pathfinding is often used in video games and you are sure to bump into it at some point during your game development career. Its primary use is to give artificial entities (NPCs) intelligent-looking movement behavior, so that they avoid bumping into things (most of the time). In some games the player avatar is also subject to pathfinding (strategy games, many third-person RPGs and adventure games). You might assume that the problem of pathfinding is solved, but unfortunately that’s not the case; there is no silver bullet which you can use and just be done with it. Even in big AAA games, you will still find funny things like this:
There may not be a silver bullet, but there is a bullet: the A* (A star) algorithm. In this tutorial we are going to see a brief overview of A* and how to speed it up using another algorithm, Jump Point Search. First we need a way to represent our game world so that a pathfinding algorithm can use it.

World Representations

One of the most important things to consider when thinking about pathfinding for your game is world representation. How is the data of the passable areas and obstacles organized with programming structures in memory? The simplest representation you can use is a grid-based structure, where path nodes are organized in a grid and can be represented by a 2D array. We are going to use this representation in this tutorial. Specifically it will be an eight-way grid representation: allowing movement in straight and diagonal directions.
The black pixels in the image represent the blocking cells.
Your requirements might be different, so this structure might not suit you. The good thing is that with some processing (usually done offline) you can convert one pathfinding representation to another. Alternatives to the grid-based approach include polygons (obstacles represented by polygons) or navigation meshes (navigable areas represented by polygons); these can represent the same data with fewer nodes. Another thing that can be stored in the map representation is cost: how much it costs to travel from one node to another. This can be used for the AI to determine a path that, for example, prefers roads over regular terrain (by making the cost of the road less than that of the terrain). Jump Point Search is specifically designed for the eight-way grid-based map representation, so that’s what we’ll use. Also, in its vanilla form it doesn’t support weighted maps. (In the final section I’ll discuss a possible way to remedy this.)

A* Pathfinding Refresher

Now that we have a world representation, let’s take a quick look at implementing A*. It’s a weighted graph search algorithm which uses heuristics (little “hints”) to guide the search from the start node to the end node. I highly recommend that you check out this visualization of pathfinding algorithms: PathFinding.js – visual. Playing with it can boost your intuition of what the algorithm is actually doing – plus it’s fun! For pathfinding using A* in rectangular grids we do the following:
1. Find node closest to your position and declare it start node and put it on 
   the open list. 
2. While there are nodes in the open list:
   3. Pick the node from the open list having the smallest F score. Put it on 
      the closed list (you don't want to consider it again).
   4. For each neighbor (adjacent cell) which isn't in the closed list:
      5. Set its parent to current node.
      6. Calculate G score (distance from starting node to this neighbor) and 
         add it to the open list
      7. Calculate F score by adding heuristics to the G value.

  • A* Pathfinding for Beginners (In-depth article that explains F and G scores among other things.)

A heuristic is essentially a guess at the chance that the node being evaluated will lead to the goal. Heuristics can make a big difference in the efficiency of pathfinding algorithms, as they tend to limit the number of nodes that need to be visited. We are going to use Manhattan distance for our purposes (meaning that nodes closer to the goal will have a smaller value):
    private function manhattanDistance(start:Node, end:Node):int {
        return Math.abs(end.x - start.x) + Math.abs(end.y - start.y);
    }
    
This is more or less it. We stop the algorithm when we find the goal node, and then track back using each node’s parent variable to construct the path. Search algorithms can be used for other things as well: A* is a general weighted graph search algorithm, and can be used on any such graph. This can cover other fields in AI, such as finding the optimal steps to achieve a certain objective: throw a bomb, or run for shelter and try to sneak behind an enemy? In game development we need to do things fast; when updating our games at 60 frames per second, every millisecond counts. Even though A* performs reasonably well for some uses, there exists the need to make it faster or use less memory.
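The numbered steps above can be sketched as a compact, runnable implementation (my own illustration in Python rather than the demo’s AS3, on an eight-way grid where diagonal steps cost roughly √2):

```python
import heapq

# grid[y][x] == 1 marks a blocking cell; nodes are (x, y) tuples.
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar(grid, start, goal):
    h, w = len(grid), len(grid[0])
    open_heap = [(manhattan(start, goal), 0.0, start)]  # (F, G, node)
    parent = {start: None}
    g_score = {start: 0.0}
    closed = set()
    while open_heap:
        _, g, node = heapq.heappop(open_heap)  # node with the smallest F score
        if node == goal:                       # goal reached: walk parents back
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        x, y = node
        for dx in (-1, 0, 1):                  # eight-way neighbors
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if not (0 <= nx < w and 0 <= ny < h) or grid[ny][nx]:
                    continue
                ng = g + (1.414 if dx and dy else 1.0)  # diagonal ~ sqrt(2)
                if (nx, ny) not in g_score or ng < g_score[(nx, ny)]:
                    g_score[(nx, ny)] = ng
                    parent[(nx, ny)] = node
                    # F = G + heuristic
                    heapq.heappush(open_heap,
                                   (ng + manhattan((nx, ny), goal), ng, (nx, ny)))
    return None  # no path exists

grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
path = astar(grid, (0, 0), (3, 0))
print(path)  # [(0, 0), (1, 0), (2, 0), (3, 0)]
```

Note that with diagonal movement allowed, Manhattan distance can overestimate the remaining cost, so (as with the demo) the returned path is not guaranteed to be strictly optimal.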

Optimizations

Choosing the representation is the first thing that will have an impact on pathfinding performance, along with your choice of pathfinding algorithm. The size of the graph being searched has a big effect on how your pathfinding performs (which makes sense; it’s easier to find your way in your room than in a big city). Then you would consider higher-level optimizations, which usually involve clustering data into smaller regions, searching those, and later refining the paths within the smaller regions actually travelled. For example, if you want to go to a restaurant in a neighboring city, you first consider how to get from your city to that one, and once you are in that city you limit your “searching” to the area where the restaurant is located, ignoring the rest. This would include things like swamps, dead end elimination and HPA*. On the lowest level you have to do the searching: you choose your data representation and possible abstractions and then plug them into an algorithm that will pick out nodes, travelling here and there searching for the goal. These algorithms are usually based on the A* search algorithm, with possible modifications. In simpler cases you can get away with using straight A*, which offers simplicity. I’ve provided a grid-based implementation in the source download.

Jump Point Search

[...] Read more: How to Speed Up A* Pathfinding With the Jump Point Search Algorithm

    Thursday, February 28, 2013

    Making it in Indie Games: Starter Guide

    image Every now and then someone will ask me for advice on making it as a professional indie game developer. First, it’s a huge honor to be asked that. So I want to say “Thank you!” Second… damn, if I really want to help out it’s a serious endeavor. Of course, I could always say “Give it your best! Work hard! Be true to yourself!” and it wouldn’t be a terrible reply… just not a terribly useful one, either. So here it is. Here is what I’m going to link when that rare situation arises again, because it’s too much work to write it up more than once! This is advice that I feel may actually be practical to someone who is just starting out as an indie game developer. Hope it helps! INDIEPENDENT So yeah, what does being “indie” even mean? Is “indie” short for independent? Is this game “indie”? Is “indie” a genre? IT’S CONFUSING - WHY DO WE NEED THE WORD “INDIE” AT ALL. To answer the last question, I offer the following scenarios. Scenario 1: a person is looking to make games, and perhaps start their own studio. They type “game development” into a search engine. The results, to say the least, are underwhelming. Dry. Academic. Programming-centric. (Try it yourself and see.) Scenario 2: the person instead types “indie games” into a search engine. Instead of pages upon pages of conferences, bachelor’s degrees, and programming tools, that person is met instead with pages upon pages of games to play and vibrant communities filled with people who are doing exactly what he or she wants to be doing. Some of them went to school, but many did not. A wealth of different ideas and tools are used. There are even documentaries about making games! It’s not just something where you get a degree and wait in line for a job. You can start making games RIGHT NOW. The word “indie” is more than just a way to describe a type of developmental process… like any label, it actually provides an avenue for people to explore that process and then flourish within it. It has a real purpose. 
It serves real lessons on game creation and entrepreneurialism. It offers real motivation! Of course, it can be irritating to see the term misused, or become a vehicle for pretentiousness and arrogance. Like any label, “indie” also breeds a certain amount dogmatism, croneyism, and other -isms. But the net result is really worth something. As someone who once gave up on professional game-making because I thought it meant a 9-to-5, I can tell you that it’s genuinely valuable. As for what games are “truly” indie, we’ll never fully agree, and that’s probably for the best. But I can tell you the criteria I’ve devised for The Independent Gaming Source to determine whether a game is fit for coverage: 1. “Independent”, as in no publisher. 2. Small studio (roughly 20 members or less). I choose that definition because it’s the most useful one. Someone who is looking to become an “indie” game developer is interested in what is possible under those constraints and how those types of studios operate. It excludes companies like Valve and Double Fine, which are certainly independent but too large to be “indie”. It also excludes “feels indie”-type games that are not self-published. Under that definition you still run into gray areas, but hey, just because we don’t know when “red” turns into “purple” doesn’t mean the words aren’t useful. Just think about someone who wants to make a game with a small team and self-publish it… what should they type into Google for inspiration, advice, community, etc.? “Indie” is still as good a word as any, in my opinion. So, should I go to school to learn how to make games? The most important thing to know about video game development and schooling is that no one, whether it’s an indie studio or big company, cares about degrees. How could it, when some of its most prominent members are drop-outs or never-beens? John Carmack, Cliff Bleszinski, Jonathan Blow, and Team Meat are all prominent members of this club. 
A degree is a piece of paper that says you can do something in theory - game developers want to know that you have enough passion to do real work, regardless of whether you’re being graded on it. And if you’re thinking of going indie, it won’t matter what other people think - you’ll simply need that passion to succeed or else you won’t. You’re the only one holding the door open in that case. This isn’t to dissuade you from going to college, per se (I studied computer science in college, and while it was far from a perfect experience, I also gained a lot from both the curriculum and the friends I made there). The point is make something - games, mods, art, and music. If school helps you with that, great. If it doesn’t, then you need to rethink how you’re spending your most valuable resources: time and money (both of which can be exorbitant costs for schooling). If I go to school, what should I study? At a regular university, I would suggest majoring in computer science, even if you “just want to be a designer”. The design of games is very much tied to how they are made. At an art school, illustration, concept art, and 3d modeling courses are probably the most useful for games. At a game school, they will hopefully try to involve you in all aspects of game creation, from programming to design. I would stay far away from design-only schools or curricula - those are either scams or are better suited to academia than actual game-making. Also, it’s worth finding out whether or not the school owns what you make while you’re a student there. See also: Jonathan Blow - How to Program Independent Games (read the comments as well as watch the video) Okay, you say make something. How do I start? My best advice for those starting out is not to get ahead of themselves. It’s easy to start worrying about tools, teams, platforms, deals, marketing, awards, and whatever else before you’ve even gotten a sprite moving around the screen. Those stars in your eyes will blind you. 
They’ll freeze you up. You need to be actively making games all the time.
If we were talking about painting, I’d tell you to pick up a painting kit and a sketchpad at your local art store ASAP and just have at it. You’d proceed to put absolute crap down on the pad and get frustrated. But it’d also be kind of fun - so you’d keep doing it. Along the way you’d read some theory and study other people’s work. With good taste and a critical eye, you would keep doing that until the day you painted something good.
We’re talking about games, though. I recommend Game Maker and Unity as two all-purpose game-making suites. They both have a good balance of power versus ease-of-use, they’re both affordable or have free demos, and they both have a wealth of tutorials and plug-ins online. Both are used by professional developers (Unity in particular).
Grab one of those and start running through the tutorials. When you run into trouble, ask for help. Give help back when you begin figuring things out. Get active in a game-making community. But above all else, keep making games. It’s the only way to truly answer all of those questions in your head right now.

LASTLY, MY TOP 10 TIPS
    Read more: Making it in Indie Games: Starter Guide

    Monday, February 11, 2013

    Creating Sprite Sheets in Five Minutes With Texture Packer

    So you’ve started working on your project, you’ve got the basics down, and now you are at the part where you actually want to implement some graphics… but you’re not quite sure how to do that. Let me introduce you to Texture Packer.

    What Is Texture Packer?

    Created by the guys over at Code n’ Web, Texture Packer is a great little piece of software that allows you to pack textures, assets, and pretty much anything you want into a nifty little sheet that is compatible with tons of frameworks!
    A free version is available, but some features are disabled.
    I love Texture Packer because it’s easy to use, it’s super flexible and it works with the engines you already use. It does an amazing job of letting you optimize your sheets with various options such as color depth, auto scaling, dithering and trimming invisible pixels. It also exports to many formats compatible with Cocos 2D, Unity, Corona, any engine that supports JSON, and more.

    The Main Interface

    Before we start to make our own sheet, let’s look at the interface for a second.
    Texture Packer: Sprite Sheets tool
    The main Texture Packer interface.
    All the settings on the left might look a bit menacing but don’t worry, they are actually really easy to set up. And even better, you only have to set them once; you can then save them and re-use them later for any other sheet you want to create!
    As you can see, the sheet’s settings are on the left, the list with our assets is on the right (which is empty right now), and the sheet preview itself in the center.

    Let’s Get Started!

    The first thing to do is open your folder with your magical art, and drag and drop it into Texture Packer’s assets list.
    As you can see, they instantly appear in your sheet. You can click on a specific asset in that list and it will highlight in the sheet. (This works the other way around as well.)
    Tip: Right under the preview sheet there are some zoom options; feel free to play with those until the sheet fits nicely on screen for a better preview.
    Let’s take a look at the mess of options on the left. There are only three categories you have to worry about:
    • Output settings: These take care of where your sheet will be exported and the compression that’s applied to it.
    • Geometry settings: These take care of the sheet’s size and asset scaling.
    • Layout settings: These are all about assets’ placement in your sheet and overall optimization.
    We’ll go through each of these in turn.

    Output Settings

    Data Format

    This is the format for the sheet’s data file containing all the information you need, such as coordinates, dimensions, and rotation.
    Currently, the following options are available: Cocos2D, Corona, Sparrow/Starling, LibGDX, JSON, Unity3D, CSS, Gideros, CEGUI/OGRE, AndEngine, AppGameKit, Slick2D, Moai, BHive, XML, BatteryTech SDK, EaselJS, Kwik2 and plain text.

    Data File

    This is the path where the data file will be saved; it’s good practice to keep it in the same folder as the sheet itself.
    Tip: It’s a good idea to set the output folder so the sheet and its data file are exported directly into your engine’s directory.
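Whichever engine you target, the data file is just structured text that your code reads back at load time. As a minimal sketch, here is how you might read a Texture Packer-style JSON (hash) data file in Python; the file contents and sprite names below are made up for illustration, not taken from a real export:

```python
import json

# A made-up example of the JSON (hash) layout: each entry in "frames"
# records where a sprite sits inside the packed sheet.
data = json.loads("""
{
  "frames": {
    "hero_idle.png": {"frame": {"x": 0,  "y": 0, "w": 64, "h": 64}},
    "hero_run.png":  {"frame": {"x": 64, "y": 0, "w": 64, "h": 64}}
  }
}
""")

def frame_rect(name):
    """Return (x, y, w, h) for a named sprite in the sheet."""
    f = data["frames"][name]["frame"]
    return (f["x"], f["y"], f["w"], f["h"])

print(frame_rect("hero_run.png"))  # (64, 0, 64, 64)
```

In a real project you would `json.load()` the exported file from disk instead of an inline string, then use each rectangle to blit the right region of the sheet image.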

    Texture Format

    From here you select the format of the sheet’s image. PNG is recommended, unless you need something else specific for your project.

    Png Opt. Level, DPI, and Premultiply Alpha

    Unless you know what you are doing, leave these settings untouched: Png Opt Level at “none”, DPI at 72 and Premultiply Alpha unchecked.

    Image Format

    In other words, color depth. It’s best to leave this untouched as well; the default RGBA8888 is the highest-quality option. If you’re going for a specific look, or the size of your sheets really matters for your project, you can choose a lower option such as RGBA4444.
    The “A” in those abbreviations stands for “Alpha” (transparency), and you will notice that if you pick an option that says only “RGB” it will have a black background, meaning there is no alpha info there. If you need it, you can create an alpha map by selecting “ALPHA” from that list; this may come in handy in some cases so keep it in mind.
    The difference between RGBA8888 and RGBA4444 is evident in the colors of the main body, above.

    Dithering

    If you haven’t touched the option from the previous step, dithering will be unavailable, but don’t worry about it. If you selected a lower option in the previous step, such as RGBA4444, you may notice visible banding where one color tone ends and the next begins… that’s ugly, so dithering is here to help you by mixing colors using some complex algorithms.
    Try out the “FloydSteinberg” or “Atkinson” options and see how they look.
    The effect of the dither is easiest to see in the shirt’s fabric.
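If you are curious what an option like “FloydSteinberg” actually does under the hood, here is a rough sketch of the general error-diffusion idea in Python, applied to a grayscale image. This is the textbook algorithm, not Texture Packer’s exact implementation:

```python
def floyd_steinberg(pixels, levels=2):
    # Quantize a grayscale image (list of rows, values 0-255) down to
    # `levels` shades, diffusing each pixel's rounding error onto its
    # unvisited neighbors with the classic 7/16, 3/16, 5/16, 1/16 weights.
    h, w = len(pixels), len(pixels[0])
    img = [[float(v) for v in row] for row in pixels]  # work on a copy
    step = 255 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = round(old / step) * step  # snap to the nearest shade
            img[y][x] = new
            err = old - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```

Run on a flat mid-gray image with `levels=2`, it produces a scatter of pure black and white pixels that averages out to roughly the original gray - which is exactly the trick dithering plays on your eye.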

    Texture File

    Just like the “Data File” option, from here you choose the path where the sheet image will be exported.
    There is one more setting in the Output category called “Auto SD”. This allows you to automatically export a smaller version of the same sheet you are working on at the same time. This is useful when you work on an iOS app that requires two separate sheets in order to work on both normal and retina displays – Texture Packer has you covered.

    Geometry Settings

    Size

    This is the place where you can change the size of the sheet. You can pick a maximum size or a fixed size. There are predefined, recommended values in the drop-down menu (all powers of two), but you can also manually input any values you want.
    Unless you are working on a project that requires a specific size for the sheet, I suggest you enable the “Allow free sizes” option. This ensures that the sheet size is big enough to accommodate all the assets without leaving any wasted blank pixels. (Also, keep the “Pack” option on “Best”.)

    Scale

    From here you can easily resize your entire sheet from the drop-down menu (which also allows you to add your own values). Right below the scaling number there is another option for the scaling method. The “smooth” and “fast” options are the most common; “fast” does the scaling without blending any pixel colors (this could be used to scale up your pixelated graphics for a retro game), and “smooth” scales everything smoothly (obviously).
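“Fast” scaling with no color blending is what programmers usually call nearest-neighbor scaling: each source pixel simply becomes a solid block of pixels. A tiny Python sketch of the idea, for integer scale factors:

```python
def scale_nearest(pixels, factor):
    # Nearest-neighbor upscaling: each source pixel becomes a
    # factor x factor block, with no blending between colors --
    # which is why pixel art stays crisp instead of going blurry.
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in pixels for _ in range(factor)]

print(scale_nearest([[1, 2], [3, 4]], 2))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A “smooth” scaler would instead interpolate between neighboring pixel colors, which looks better for photos but smears pixel art.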
    [...]
    Read more: Creating Sprite Sheets in Five Minutes With Texture Packer

    Pixel Art Tutorial


    Note:
    This tutorial was created in 2007 for my personal website. Some small tweaks have been made since then, but nothing too significant.

    In this 10-step tutorial, I’ll teach you how to create a “sprite”, which is a stand-alone two-dimensional character or object. The term comes from video games, of course.
    Creating pixel art is a skill I picked up because I needed graphics for my games. After a lot of practice, I became kinda handy with it, and started to see it more as actual art rather than just a tool. These days, pixel art is quite popular in game development and illustration.
    This pixel tutorial was created many years ago to teach people the basic concepts behind pixel art, but I’ve streamlined it a lot since its first incarnation. There are other pixel tutorials around, but I find them to be overly-complicated and too wordy. Pixel art is not a science. You should never have to calculate a vector when doing pixel art.

    1. TOOLS

    One of the nice things about pixel art is that you don’t really need any fancy tools – your computer’s built-in paint program is probably good enough! That said, there are programs made specifically for pixel pushing, like Pro Motion, or, for Mac users, Pixen. Can’t say I’ve actually tried them, but I’ve heard good things. For this tutorial, I’m going to use Photoshop, which is an expensive beast, but it’s good for all kinds of art and its numerous features are pretty useful for pixelling.

    USING PHOTOSHOP FOR PIXEL ART
    When using Photoshop, your main weapon is going to be the pencil tool (shortcut “B”), which is the alternate for the brush tool. The pencil lets you color individual pixels, without any anti-aliasing.

    Two other tools that will come in handy are the marquee tool (shortcut “M”) and the magic wand (shortcut “W”) for selecting and dragging or copying and pasting. Remember that by holding “Shift” or “Alt” while you make your selection you can add or subtract from your current selection. This comes in handy when trying to grab areas that aren’t perfectly square.
    You’ll also use the eyedropper (shortcut “I”) to grab colors. Color conservation is important in pixel art for a number of reasons, so you will want to grab the same colors and reuse them.
    Finally, make sure you learn the shortcuts. They can save you a lot of time and energy. A good one to know is “X”, which switches your primary and secondary colors.

    2. LINES

    Pixels are essentially little blocks of color. The first thing you need to learn is how to use these blocks effectively to make any kind of line that you want. We’ll discuss the two most basic types of lines, straight and curved.
    STRAIGHT LINES
    I know what you’re thinking – this seems too easy to even bother with! But with pixels, even straight lines can be problematic. What we want to avoid are “jaggies” – little breaks in the line that make the line look uneven. Jaggies crop up when one piece of the line is larger or smaller than the surrounding pieces.

    CURVED LINES
    For curvature, make sure that the decline or incline is consistent all the way through. In this next example, the clean-looking curve goes 6 > 3 > 2 > 1, whereas the curve with the jaggy goes 3 > 1 < 3.

    CONCLUSION
    Being comfortable making any line in pixels is crucial to doing pixel art. Later on, we’ll learn how to use anti-aliasing to make our lines look really smooth.

    3. CONCEPTUALIZING

    The first thing you need is a good idea! Try to visualize what you want to pixel, either in your head or on paper. A little work in the pre-planning department can let you concentrate on the actual pixelling.
    THINGS TO THINK ABOUT
    1. What will the sprite be used for? Is this for a website, or a game? Will I have to animate this later, or is this it? If the sprite will be animated later on, you may want to keep it smaller and less detailed. Conversely, you can pack as much detail as you like into a static sprite that you will never have to deal with again. But think about where the sprite is going to go, and what will work best.
    2. What constraints are being placed on me? Earlier, I had said that color conservation is important. One of the reasons is that your palette of colors may be limited, either by hardware (less likely these days) or for coherency. Or accuracy, if you are emulating a specific style (C64, NES, etc.) Also, consider the dimensions of your sprite and how it will fit with its surroundings.
    LET’S WRASSLE!

    For this tutorial, I didn’t really have any constraints, but I wanted to make sure the sprite was large so that you could clearly see what was happening with each step. To that end, I decided to use the Lucha Lawyer, the ass-kickin’est wrestling attorney around, as my model! He could be in a fighting game, or something, with moves like the “Habeus Corpse Blaster”.

    4. THE OUTLINE

    A black outline will provide a good, basic structure to your sprite, so it’s the perfect place to start. The reason we choose black is that it’s nice and dark. Later on, I’ll show you how you can change the color of the outline for more realism.
    TWO APPROACHES
    There are two ways to approach the outline. You can draw the outline freehand and then clean it up, or you can start by placing the pixels as you want them from the start. You know, like, “click, click, click”.
    I think which approach you should use depends on the size of the sprite and your skill at pixelling. If a sprite is very large, it’s much easier to sketch the outline freehand to get the general shape and then clean it up later than to try and get it right the first time through.
    In this tutorial, we’re creating a fairly large sprite, so I’ll demonstrate the first method. It’s also easier to illustrate with text and pictures.
    STEP 1: CRUDE OUTLINE
    Using your mouse or tablet, sketch out a crude outline for your sprite. Make sure it’s not TOO crude, though – it should resemble more or less the final product that you want.

    In this case, I’m basing my outline almost entirely on my sketch.
    STEP 2: CLEAN UP THE OUTLINE
    First, crank up the zoom to around 6x or 8x magnification so that we can see each pixel clearly. Then clean up that outline! In particular, you want to trim away stray pixels (the outline should only be one pixel thick all the way through), get rid of any jaggies, and add any small details that were passed over in Step 1.

    Even large sprites rarely exceed 200 by 200 pixels. The phrase “doing more with less” never rings more true than when pixelling. And you will soon find that one pixel can make all the difference.
    Keep your outline simple. The details will emerge later on, but for now, concentrate on defining the “big pieces”, like muscle segmentation, for instance. It may not look like much now, but be patient.

    5. COLOR

    With the outline done, we have a coloring book of sorts that we can fill in. Paint buckets and other fill tools will make it even easier for us. Picking colors can be a little more difficult, however, and color theory is a topic that is beyond the scope of the tutorial. However, here are a few basic concepts that are good to know.
    HSB COLOR MODEL

    HSB stands for (H)ue, (S)aturation, and (B)rightness. It’s one of a number of computer color models (i.e. numerical representations of color). Other examples are RGB and CMYK, which you have probably heard of. Most paint programs use HSB for color-picking, so let’s break it down:
    Hue – What you understand “color” to be. You know, like “red”, “orange”, “blue”, etc.
    Saturation – How intense or vivid the color is. 100% saturation gives you the brightest color, and as saturation decreases, the color becomes more gray.
    Brightness (or “luminosity”) – Lightness of a color. 0% brightness is black.
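HSB is the same model programmers usually call HSV, so you can play with it in code. A quick sketch using Python’s standard colorsys module, converting the degrees and percentages a paint program shows you into the 0–1 ranges colorsys expects:

```python
import colorsys

def hsb_to_rgb(h_deg, s_pct, b_pct):
    # colorsys calls this model HSV and works in 0..1 ranges, so we
    # convert hue (degrees) and saturation/brightness (percentages).
    r, g, b = colorsys.hsv_to_rgb(h_deg / 360, s_pct / 100, b_pct / 100)
    return tuple(round(c * 255) for c in (r, g, b))

print(hsb_to_rgb(0, 100, 100))  # pure red -> (255, 0, 0)
print(hsb_to_rgb(0, 100, 0))    # 0% brightness is always black -> (0, 0, 0)
```

Notice how 0% brightness gives black no matter the hue, and 0% saturation gives a pure gray - exactly the behavior described above.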
    CHOOSING COLORS
    What colors you choose is ultimately up to you, but here are a few things to keep in mind:
    1. Less saturated and less bright colors tend to look more “earthy” and less cartoony.
    2. Think about the color wheel – the further apart two colors are on it, the more they will contrast. On the other hand, colors like red and orange, which sit close together on the color wheel, look good together.

    3. The more colors you use, the more distracted your sprite will look. To make a sprite stand out, use only two or three main colors. (Think about what just red and brown did for Super Mario back in the day!)
    APPLYING COLORS
    The actual application of color is pretty easy. If you’re using Photoshop, you want to first select the area you’re going to fill with the magic wand (shortcut “W”) and then fill it by pressing “Alt-Backspace” (primary color) or “Ctrl-Backspace” (secondary color).

    6. SHADING

    Shading is a crucial step on our quest for pixel demi-god status. Here’s where the sprite either gets some pop, or it devolves into a horrible mess. Heed my words, however, and you’re sure to succeed.
    STEP 1: CHOOSING A LIGHT SOURCE
    First, we have to pick a light source. If your sprite is part of a larger scene, there might be all kinds of local light sources (like lamps, fire, lamps on fire, etc.) shining on it. These can mix in very complex ways on the sprite. For most cases, however, picking a distant light source (like the sun) is a better idea. For games, you will want to create a sprite that is as generally lit as possible so that it can be used anywhere.
    I usually choose a distant light source that is somewhere above the sprite and slightly in front of it, so that anything that is on top or in front is well-lit and the rest is shaded. This lighting looks the most natural for a sprite.

    STEP 2: SHADING
    Once we have defined a light source, we start shading areas that are farthest from the light source with a darker color. Our “up and to the front” lighting model dictates that the undersides of the head, the arms, the legs, etc., should be shaded.
    Remember that the play between light and shadow defines things that are not flat. Crumple up a piece of white paper into a ball and then unroll it and lay it on a table – how can you tell that it’s not flat anymore? It’s because you can see the little shadows around the crinkles. Use shading to bring out the folds in clothing, and to define musculature, hair, fur, cracks, etc.

    STEP 3: SOFT SHADOWS
    A second shade, lighter than the first, should be used for soft shadows. These are areas that are indirectly lit. It can also be used to transition from the dark to the light, especially on curved surfaces.

    STEP 4: HIGHLIGHTS
    Places that are being hit directly by the light source can have highlights applied onto them. Highlights should be used in moderation (much less than shadows), because they are distracting.
    Always apply highlights after shadows, and you will save yourself some headache. Without the shadows already in place, you will be inclined to make the highlights too large.

    DO’S AND DON’TS
    Shading is where most beginners get tripped up. Here are some rules you should always follow when shading:
    1. Don’t use gradients. The ultimate newb shading mistake. Gradients look dreadful, and don’t even begin to approximate the way light really plays off a surface.
    2. Don’t use “pillow-shading”. Pillow shading is when one shades from the outline inward. It’s called “pillow-shading” because it looks pillowy and undefined.
    3. Don’t use too many shades. It’s easy to think that “more colors equals more realistic”. In the real world, however, we tend to see things in big patches of light and dark – our brains filter out everything in between. Use at most two shades of dark (dark and really dark), and two shades of light (light and really light) on top of your foundation color.
    4. Don’t use colors that are too similar. There’s no reason to use two colors that are very similar to one another. Unless you want really blurry-looking sprites!
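If you like thinking in code, the “two darks, base, two lights” rule above can be sketched as a little palette generator. The brightness and saturation offsets here are my own guesses for illustration, not a canonical recipe (bumping saturation slightly in the shadows is a common pixel-art trick):

```python
import colorsys

def shade_ramp(h_deg, s_pct, b_pct):
    # Build five shades of one base color: really dark, dark, base,
    # light, really light. Offsets below are illustrative assumptions.
    ramp = []
    for db in (-40, -20, 0, 15, 30):
        b = max(0, min(100, b_pct + db))
        s = max(0, min(100, s_pct + (10 if db < 0 else -5)))
        r, g, bl = colorsys.hsv_to_rgb(h_deg / 360, s / 100, b / 100)
        ramp.append(tuple(round(c * 255) for c in (r, g, bl)))
    return ramp

# e.g. a ramp built from an orange-brown base color
print(shade_ramp(30, 70, 60))
```

Five distinct, clearly separated shades per base color is plenty; anything finer and you run straight into don’ts 3 and 4.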

    7. DITHERING

    Color conservation is something that pixel artists have to worry about a lot. One way to get more shades without using more colors is to use a technique called “dithering”. Similar to “cross-hatching” or “stippling” in the traditional art world, you take two colors and interlace them to get, for all intents and purposes, an average of the two colors.
    SIMPLE EXAMPLE
    Here’s a simple example of using two colors to create four different shades using dithering:
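In code, the 50/50 mix is just a checkerboard of the two colors. A tiny sketch, using plain grayscale values for illustration; denser or sparser patterns give you the in-between mixes:

```python
def checker_mix(color_a, color_b, w=8, h=8):
    # Interlace two colors in a checkerboard -- the simplest dither
    # pattern. Viewed from a distance, the tile reads as the average
    # of the two colors.
    return [[color_a if (x + y) % 2 == 0 else color_b
             for x in range(w)] for y in range(h)]

tile = checker_mix(0, 255)  # black and white -> perceived mid-gray
```

Half the tile’s pixels are one color and half the other, so the perceived shade sits exactly between them - one extra “color” for free.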

    ADVANCED EXAMPLE

    Compare the top picture, which was made using the Photoshop gradient tool, and the bottom, which was created with just three colors using dithering. Notice the different patterns that were used to create the intermediary colors. Try experimenting with different patterns to create new textures.
    APPLICATION
    Dithering can give your sprite that nice retro feel, since a lot of old video games relied heavily on dithering to get the most out of their limited palettes (look to the Sega Genesis for lots of examples of dithering). It’s not something that I use very often, but for learning’s sake, here it is applied (possibly over-applied) to our sprite.

    You can use dithering as much or as little as you want. Honestly, it’s a technique that I’ve only seen a few people use really well.

    8. SELECTIVE OUTLINING

    Selective outlining, or “selout”, is kind of like shading the outline. Rather than using black all the way around, we apply a color that is closer to the color of the sprite itself. Also, we vary the brightness of this outline along the edge of the sprite, letting the light source dictate which colors we apply.
    Up until this point, we’ve kept the outline black. And there’s nothing wrong with that, really, since it looks pretty good and it keeps the sprite well-separated from its surroundings. But by using black, we are sacrificing more realism than we might like to, since it gives the sprite a “cartoony” look. Selout is a great way to get around that:

    You’ll notice I also used selout to soften the folds of his muscles. Finally, the sprite begins to look like a coherent whole rather than a whole bunch of separate pieces.
    [...]
    Read more: Pixel Art Tutorial