Monday, March 26, 2012

Currently Under Construction

Things have been a bit quiet here but rest assured I've been occupied with a number of things, some of which will have source code up here in less than a week.

Card Kingdom
The Card Kingdom project is a lot of fun and has a lot of potential to get better.  I've been working on gameplay code, camera systems, and helping with frustum culling.  I may very well be working on A.I. systems as well (which I look forward to, especially for the final boss).

JL Math
I wrote a post on SIMD processing, but I want to prove I've truly dived into it.  Making an interface that works well with both SSE2 and the FPU is taking a bit longer than I originally expected, but you should see something of an alpha release soon.  The math library, JL Math, is named after my initials because it's an individual project and I didn't have a witty name for it.  Ideally the interface could be expanded to work with AltiVec sometime down the road too.  Right now it supports Vector2, Vector4, Matrix4, and Quaternions.

FMOD and Component Entity Models
I've been experimenting with the excellent FMOD library and trying to find ways to cleanly integrate the system into a component based engine.  One of the excellent things about FMOD is that you can often get most of the functionality you want with just a few function calls.  Under the hood this industry standard library does an awful lot for you (all of it configurable, of course).
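
To give a sense of how little code it takes, here's a rough sketch of playing a one-shot sound with the FMOD Ex C++ API (the file name and channel count are just placeholders, and error checking is omitted).  A sound component mostly ends up wrapping calls like these:

#include <fmod.hpp>

FMOD::System* system = NULL;
FMOD::System_Create(&system);                // create the FMOD Ex system object
system->init(64, FMOD_INIT_NORMAL, NULL);    // 64 virtual channels, default settings

FMOD::Sound* sound = NULL;
system->createSound("explosion.wav", FMOD_DEFAULT, NULL, &sound);

FMOD::Channel* channel = NULL;
system->playSound(FMOD_CHANNEL_FREE, sound, false, &channel);

system->update();                            // call once per frame from the engine loop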

Concurrency
In my ongoing efforts to increase my experience with multithreaded architectures, I'm hoping to fit in one more non-school side project before I graduate.  So I bought Anthony Williams' excellent book, C++ Concurrency in Action, and am auditing an Operating Systems course to solidify the concepts.  It'll take some careful planning to decide how I want to apply this knowledge, but I'm currently hoping to do something with particles that builds upon the SIMD math library in an effort to do everything in parallel.

Expect to hear more on all of these developments in the near future.

Sunday, March 11, 2012

Some Assembly Required

I recently took a class in Computer Organization. A number of topics pertinent to game programmers were discussed in the course, including:

- Assembly Instructions
- The Memory Hierarchy
- Pipelining/Parallelism

Like most classes that explore assembly code, this one used the MIPS instruction set. MIPS is widely considered a good first assembly language to learn, at least before x86, thanks to its small and regular instruction set. As someone who enjoys tackling low level problems, getting a better understanding of how C/C++ code is actually translated into machine instructions was very interesting to me.

One of the projects for the class was writing Connect 4 entirely in assembly.

This was surprisingly informative and fun to write.  Since I hope to explore asm { } blocks inside C++ and x86 in the near future (something along the lines of the snippet below), this was a good first step in that direction.  You can download the source code here.  If you have RSIM on your machine, you will almost certainly need to adjust the paths in the makefile to the appropriate directories.
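
Just to give a flavor of where that's headed, here's a tiny, hypothetical MSVC-style example (32-bit builds only; the compiler spells the keyword __asm rather than plain asm):

int addTwo(int a, int b)
{
    int result;
    __asm {
        mov eax, a       // load the first argument into EAX
        add eax, b       // add the second argument
        mov result, eax  // store the sum back into a C++ variable
    }
    return result;
}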

Friday, March 2, 2012

The Challenges In Building A SIMD Optimized Math Library

What is SIMD?
SIMD stands for Single Instruction Multiple Data, and it does exactly what it says.  By packing several integer or floating point values into a single wide register, the CPU can perform the same arithmetic or logical operation on all of them at once.
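
A minimal SSE example of the idea, adding four pairs of floats with a single instruction:

#include <xmmintrin.h> // SSE intrinsics

__m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f); // note: arguments are listed from highest lane to lowest
__m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
__m128 sum = _mm_add_ps(a, b);                 // {1+5, 2+6, 3+7, 4+8}, all computed at once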


Why Should You Care?
These days, everything is designed for parallelism.  The beauty of intrinsic functions is that they exploit such parallelism at the instruction level.  Using dedicated vector registers and instructions, the CPU can apply the same operation to three additional values at little extra cost.  Game engines have been using intrinsics for a while now.  I noticed while browsing the Doom 3 source code (the game shipped back in 2004) that it has full support for SIMD intrinsics.  Even if you don't care to write high performance code, you're only kidding yourself if you think this technology is going away.  In all likelihood, in the industry you're going to have to deal with intrinsics, even if they've been wrapped in classes that are provided for you.  This isn't just an exercise in reinventing the wheel for the sake of doing it - it's a reality of modern day game architecture we all have (or want) to use.

But I Don't Know Assembly
You don't have to.  Although a background in MIPS/x86 assembly definitely helps with understanding how SIMD math really ticks, intrinsic functions are provided that do most of it for you.  This is a much more convenient and maintainable way to do things than resorting to assembly instructions you have to roll yourself.

Specific Uses In Games
SIMD types can be used for a number of things, but since the __m128 data type holds 4 floating point values (which must be aligned on a 16 byte boundary), they are particularly good for data sets whose components can all be treated the same way.  Things like vectors (x, y, z, w), matrices (4 vectors), colors (r, g, b, a) and more perform very well and are easily adapted to SIMD types.  You can also use intrinsics to represent 4 float values which are not all treated homogeneously.  For example, a bounding sphere may use the last value to represent the radius of the sphere, but then you have to be careful not to treat that scalar the way you treat the position vector.  The same holds true for quaternions.  And with more exotic functionality, such as a random number generator that internally uses intrinsics, the downside is that the code becomes much more complex and difficult to understand.
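
As a rough, MSVC-flavored sketch (the names mirror the ones used further down, but this is not the actual JL Math declaration), a SIMD-backed vector type might look like this:

#include <xmmintrin.h>

__declspec(align(16)) struct Vector4
{
    union
    {
        __m128 quad;                     // four packed floats in one SSE register
        struct { float x, y, z, w; };    // named access for scalar code paths
    };
};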

Challenges/Advice
The Alignment Problem: SIMD types have to be aligned on a 16 byte boundary.  A number of game developers know that the default allocators in C++ make no such guarantee, and as such, are used to overriding the new/delete operators for the types that need it.  Console developers, who may very well use custom memory allocators for everything already, should have no problem making the adjustment (if one is needed at all).  Students and hobbyists on the PC might find their program crashing due to unaligned access and be left scratching their heads.  To start, I recommend just using _aligned_malloc or posix_memalign to get things going.  Consider making a macro like DECLARE_ALIGNED_ALLOCATED_OBJ(ALIGNMENT) and using that everywhere you need a static overload, so it can eventually be changed later from one place.
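
Something along these lines (a sketch of the macro idea, shown with the MSVC routines; swap in posix_memalign/free on POSIX platforms):

#include <malloc.h>     // _aligned_malloc / _aligned_free on MSVC
#include <new>
#include <xmmintrin.h>

#define DECLARE_ALIGNED_ALLOCATED_OBJ(ALIGNMENT)                 \
    static void* operator new(size_t size)                       \
        { return _aligned_malloc(size, (ALIGNMENT)); }           \
    static void operator delete(void* p)                         \
        { _aligned_free(p); }

__declspec(align(16)) struct BoundingSphere
{
    DECLARE_ALIGNED_ALLOCATED_OBJ(16)
    __m128 centerAndRadius; // x, y, z in the first three lanes, radius in the last
};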

Pipeline Flushes: This one is tricky, because your code will run fine, but might actually be slower than if you were still using standard floating point types!  You have to try very hard to avoid mixing floating point math and SIMD math, or the pipeline will experience significant stalls.  If this sounds hard, it's because it is.  A lot of game developers are used to using floats everywhere for everything.  As a result, a large codebase is a huge, huge pain to refactor to accommodate SIMD math in as many places as possible.
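
To illustrate the kind of mixing I mean (a contrived sketch; lengthSquared3 is assumed here to be an ordinary float-returning function):

// Scalar and SIMD math interleaved: the value bounces between register files.
float lenSq = v.lengthSquared3();                        // result comes back as a plain float
float scale = 1.0f / (lenSq + 1.0f);                     // scalar math on the FPU
__m128 scaled = _mm_mul_ps(v.quad, _mm_set_ps1(scale));  // then broadcast back into SSE to continue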

I recommend wrapping the SIMD type in a class called something like SimdFloat, which, for all intents and purposes, acts just like a float.  Internally, however, it holds a full four-float quad to avoid those costly pipeline flushes.  The implications of this are significant: things like dot products, length squared functions, and others are now actually returning quad data types.  This will take some getting used to.  You can help alleviate it by writing conversions to and from a regular float, but they carry the potential for abuse if overused.  If the additional memory footprint matters (a SimdFloat is four times the size of a float), store a plain float and convert it to a SimdFloat on the stack as soon as possible, then use that for the rest of the function.
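
A bare-bones sketch of the idea (not the actual JL Math class):

#include <xmmintrin.h>

class SimdFloat
{
public:
    SimdFloat() {}                                         // uninitialized, just like a raw float
    explicit SimdFloat(float f) : quad(_mm_set_ps1(f)) {}  // broadcast the scalar to all four lanes

    // Conversion back to a plain float; convenient, but easy to overuse.
    float getFloat() const { float f; _mm_store_ss(&f, quad); return f; }

    __m128 quad;
};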


Code Clarity: This can be alleviated, but clarity is ultimately going to take a hit somewhere.  SIMD math typically involves a lot more temporary values on the stack, and the code can seem to dance around the problem rather than going straight at it.  For example, consider a typical, trivial computation of a dot product:

inline float32 dot3(const Vector4& v) const {
    return x*v.x + y*v.y + z*v.z; 
}


Now becomes something like:

inline SimdFloat dot3(const Vector4& v) const {
    SimdFloat sf;
    quad128 p = _mm_mul_ps(quad, v.quad); // p[0] = (x * v.x), p[1] = (y * v.y), p[2] = (z * v.z)
    quad128 sumxy = _mm_add_ss(p, _mm_shuffle_ps(p, p, _MM_SHUFFLE(1, 1, 1, 1)));   // lane 0 = p[0] + p[1]
    quad128 sumxyz = _mm_add_ss(sumxy, _mm_shuffle_ps(p, p, _MM_SHUFFLE(2, 2, 2, 2))); // lane 0 = p[0] + p[1] + p[2]
    sf.quad = sumxyz; // only lane 0 holds the dot product
    return sf;
}


Don't say I didn't warn you :P.  Since writing this by hand every time can be quite error prone, most implementations use the Vector4 SIMD type, or its equivalent, as frequently as possible.  As long as the functions are inlined you will usually be just fine.

On the topic of code clarity, some implementations actually forbid operator overloads.  Before doing this, I recommend making sure you really need to.  It's true that the operators have to return temporaries (or, for assignment, a reference), and this has a cost.  Successful SIMD libraries, like the one used in Havok, do not allow the overloads and justify the restriction as being speed critical.  Default construction doesn't set a zero vector, and the assignment operator, while provided, doesn't return a reference either.  This does indeed avoid computational costs, but it comes at the cost of clarity.  I highly recommend providing both operators and the equivalent functions, so that code which really needs the speed can avoid the operators and everything else can stay as clear as possible.  Consider, for example, computing a LERP value:

SimdFloat t; // assume its value is between 0 and 1 so we don't need to clamp it
Vector4 lerp = a + ((b - a) * t);


And again without operator overloading:

SimdFloat t;
Vector4 lerp;
lerp.setSub(b, a);
lerp.mul(t);
lerp.add(a);


Again, it's all about finding the right balance with your team.  Expect resistance if you want to forbid operator overloading.  And remember, you can always write a comment directly above the computation showing what it would look like if operator overloading were available.


Portability: Math libraries have the wonderful combination of being incredibly speed critical, pervasive, and in need of portability.  Some platforms don't support these SIMD types at all, and others differ in their implementation (e.g. AltiVec vs. SSE2).  This makes the development of a common interface considerably challenging.  Read up on existing implementations and consider referring to the one used in Havok by downloading their free trial (even if you have no interest in using their Physics/Animation libraries).
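
One common way to tackle this (a sketch, not how Havok or JL Math actually does it; JL_MATH_SSE2 is a made-up configuration macro) is to hide the platform type behind a typedef and a small set of wrapper functions chosen at compile time:

#if defined(JL_MATH_SSE2)
    #include <emmintrin.h>
    typedef __m128 quad128;
    inline quad128 quadAdd(quad128 a, quad128 b) { return _mm_add_ps(a, b); }
#else
    // FPU fallback: same interface, plain floats underneath.
    struct quad128 { float f[4]; };
    inline quad128 quadAdd(quad128 a, quad128 b)
    {
        quad128 r;
        for (int i = 0; i < 4; ++i) r.f[i] = a.f[i] + b.f[i];
        return r;
    }
#endif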