
Sunday, December 25, 2011

Prim's Maze Algorithm in Minecraft

Overview
I previously gave a sneak peek of this mini project here.  For an A.I. class we were given an assignment that challenged us to generate random, procedurally generated worlds in Minecraft.  There was more to later versions of the project, but I'm stripping it down to just the terrain generation in the given source code for now.  I believe this is what Minecraft modders will find the most useful (compared to the other, more application-specific code I wrote in the project).  Hopefully the source code is understandable enough that other modders will be able to plug this functionality into their own projects.  The handful of Java files are well commented and complemented by a description in this post.  Here's a video demo showing off the algorithm, created via a chat command in the game:



Randomized Prim's Algorithm
The randomized variant of Prim's Algorithm is best viewed in this instance not as a graph or tree, but as a grid of walls.  We start with a full grid of walls, open one cell in the maze, and then proceed to carve our way out.  We do this by adding all of that cell's walls to a "wall list".  While there are still walls in this list, we pick a random one.  If the cell on the opposite side of the wall is not yet in the maze, we make a passage and add that cell's walls to the wall list.  Otherwise, we simply remove the wall from the list.  We repeat this process until the wall list is empty.

Using the Plugin
I'm assuming users already know how to compile Bukkit plugins themselves.  If not, I've provided a working jar file which can be placed directly into the plugins/ directory of an existing CraftBukkit server.  The given archive also provides the plugin.yml file, which CraftBukkit needs to load the plugin properly.  The /gen command has the following usage:


/gen [name] [mazeSize] [mazeWallHeight] [mazeSpawnZoneSize] [mazeBorderHeight] [autoTeleport] [iterations]
Default Name: OptimusPrim
Default Maze Size: Random (32, 64, 96, 128, 256, 512).  If you input one yourself it must be a nonzero multiple of 16.
Default Maze Wall Height: 3
Default Maze Spawn Zone Size: 16
Default Maze Border Height: 16
Default Auto Teleport: true (the way java parses this is "true"/"false" not 0/1)
Default Iterations: -1 (-1 iterates until done, custom positive values iterate until a specified cutoff.  Used for dev testing only to show "stepwise" increments of the algorithm.)


If you want to reload an existing world you can use the /visit command.

/visit worldname

This is very handy if you don't want to auto-teleport to a new world.  Terrain generation can be cumbersome for very large mazes, and there may be instances where you want to create a world with the intention of exploring it later (e.g. for comparative purposes).  AutoTeleport is on by default for the /gen command, but if you supply arguments up to and including autoTeleport and set it to false, you'll need to use the /visit command afterwards to spawn in your newly generated world.

The Plugin/Source
The plugin is compatible with the latest version of Minecraft at the time of writing (v1.0).  If you want to rebuild the source you'll need to reference the proper version of Bukkit and setup your own server.  Please refer to their wiki if you need help setting this up.




Monday, December 19, 2011

Learning A Functional Language

Overview
Paradigms matter -- a lot.  If all you've used is an imperative language, I recommend giving a functional language a spin.  You probably won't use it on your next major project, and you'll have no issues continuing to use the languages you already know and love (C/C++/C#).  I'm a member of this camp myself and am by no means an evangelist for any particular paradigm.  However, I sincerely believe you'll be a better programmer if you force yourself to think outside the box of the imperative/OOP paradigms once in a while.  In general, the more paradigms you know, the more ways you'll have at your disposal to solve various problems.  And since languages like C++ and Python are multiparadigm, there's often nothing stopping you from adopting a more functional style in the languages you already use.

If nothing else, spending a week or two using Haskell or Lisp will make you more mindful of the inherent problems with mutable state.  I've seen some severe hyperbole thrown around claiming this predictability removes the need for debugging altogether.  That's completely false, but in many instances functional languages involve more predictable routines, which can decrease the time spent scratching your head in the debugger.  Allow me to explain:

Referential Transparency
Referential transparency is a property of functions/expressions: a referentially transparent expression can be replaced by its value without changing the program's behavior.  In layman's terms, functions have no side effects, and there is no mutable state.  The benefit of this style is a significant increase in predictability.  Given a function, if I pass the same arguments to that function I will always get the same result.  It won't matter at which point the function is called, and the result will never depend on potentially corrupted data.

How many times have you had to debug a function in C++ that caused a crash when the problem really originated elsewhere?  If the answer is "way too many," perhaps you should consider writing more self-contained and fault-tolerant functions.

Easily Implemented Parallelism
When there are no global variables or states which could be simultaneously accessed by multiple threads, a lot of the pains inherent from multithreaded programming can be avoided from the get-go.  Since everything is headed in this direction, it doesn't hurt to brush up on it for a change.

Caveats
A lot of comparable programs in functional languages might feel handicapped by this purity.  For example, certain applications which need to represent state (e.g. an ATM which lets users withdraw/deposit funds) will likely be more complex if expressed as a composition of self-contained functions.  In addition, responsible imperative programming may have very good reasons to cache global variables, which, in many cases, greatly increases maintainability and speed.  The lazy evaluation frequently implemented in functional languages also makes predicting time and space costs very difficult.

Potential
What's interesting about functional languages is that they could make for a decent scripting layer.  In my experience using Haskell, there are numerous cases where you merely specify what you want, rather than how to get it.  Unfortunately, widespread unfamiliarity with this style means most of the 'rapid prototyping' benefits sought from these languages will be lost while learning one.  In addition, virtually every game script is heavily reliant on object state.  That's why it goes into script in the first place.

I'm a few days into my adventures with Haskell and hope to produce another post with code and examples explaining how curried functions, lambdas, and folds can be very useful for any kind of programming.

Additional Reading
Real World Haskell (Online Book)
Learn You a Haskell (Online Tutorial)
Functional Programming In Games (PDF Thesis Paper)
Why Functional Programming Matters (PDF Explanation)
Why learning Haskell/Python makes you a worse programmer (counter argument)

Sunday, November 27, 2011

Speaking of Battlefield

During the recent release of a certain FPS game, I remembered that 4 years ago when I was still in high school I whipped together a map using the BF2 Editor.  It certainly wasn't the best toolset around, but it was a very fun multiplayer game that did a lot of things right.  The intensity of certain maps, from Karkand to Wake Island, was very difficult to match.  Since there is some disappointment over the lack of mod tools given with BF3, it's worth remembering that the previous game sported some very, very good modifications.

In their defense, there are legitimate security concerns that will need to be addressed.  In addition, based on what I've read about the Frostbite Engine (ppt), it is very heavily threaded and uses no scripting language of any kind.  I don't doubt that modding such an environment will be more difficult, but given the raw persistence and skill we've seen from modders in the past, are threads and intrinsics really just too hard to justify a lack of tools?

Before I spiral into a totally incoherent rant, let me just present this old project of mine.  I was 16/17 at the time and tried to make a genuinely balanced map with no aerial vehicles of any kind.  Before I entered college (for a Game Design and Development degree) my programming knowledge was rather minimal and almost entirely self-taught.  Now I'm a much better programmer, a much better designer, and I occasionally make my own tools from the ground up for my own projects.  I think I could handle it.  But the broader point is, even if I couldn't, it still wouldn't be a waste to release the tools.  Circa ye olden days of 2007 I had none of these honed skills, and I was still able to produce stuff like this:




This map had virtually no significance to the modding community but studios sometimes forget that it doesn't have to.  At the time, projects like this were for me.  Becoming a professional game developer is a long journey that involves failing a number of times before you get something right, let alone make something significant that other people will care about.  Nonetheless, this journey is very important for the industry.  It's a process the guys at DICE understand all too well.  So if you remember what it was like trying to break into the games industry, you'll know how critical the modding community is to aspiring developers.  It probably wasn't too long ago that you had a portfolio site like this one too.


Wednesday, November 23, 2011

Update to CDS

Doxygen Documentation Makes Learning the API Much Easier
With newfound (but short-lived) free time over the break, I got around to committing a major new release of the CDS library.  In addition to a host of new functions and finished structures, I made everything compatible with ANSI C and implemented Doxygen documentation.  Other annoyances, such as const correctness, have been ironed out, and all of the examples have been rewritten in a way that tries to teach new users how everything works.  CDS is still relatively new and could benefit from more iterations and tweaking.  However, at this time it is pretty solid given the time constraints.  Functionality, style, and typenames are much more consistent now.  As usual, you can always find the latest version of CDS on GitHub.



You can refer to my first post on CDS for more information here. Enjoy!

Below is a simple example program demonstrating the usage of a cds_dynamic_array:

int main(void) {
     /* create the array */
     cds_dynamic_array *array = NULL;
     unsigned int size = CDS_DEFAULT_DYNAMIC_ARRAY_SIZE; /* 4 by default */
     cds_result cr = cds_dynamic_array_create(&array, size); /* notice the address of the pointer is passed in */
    
     /* checks for an error, prints and returns 1 if something failed
      * we do this (in this example) repeatedly for safety
      */
     if (cds_error_check(cr)) return 1;
 
     /* add the strings to the container */
     char * presidents[] = {"washington", "jefferson", "roosevelt", "reagan", "clinton", "obama"};
     unsigned int numPresidents = 6;
     unsigned int i;
     for (i = 0; i < numPresidents; ++i) {
          cr = cds_dynamic_array_push_back(array, presidents[i]); /* will need to reallocate when i == 4 */
          if (cds_error_check(cr)) return 1;
     }
 
     /* print out the array */
     cds_log("Displaying the contents of the dynamic array...\n");
     logArray(array, 1);
 
     /* try adding past the current size of the array */
     void *curValue;
     cds_log("Adding some more strings to the container...\n");
     char *otherPresidents[] = {"adams", "bush", "coolidge", "kennedy"};
     unsigned int numOtherPresidents = 4;
     /* since the default size is 4, we've grown once, 4 * 1.5 = 6 */
     /* therefore, on the next insertion, the array will need to grow */
     /* we can use a custom growth function, to resize things differently */
     for (i = 0; i < numOtherPresidents; ++i) {
          cr = cds_dynamic_array_push_back_gf(array, otherPresidents[i], &arrayDoubleGrowth);
          if (cds_error_check(cr)) return 1;
     }
 
     /* lets try printing the array using an iterate function */
     cds_log("Printing the contents of the array using the iterate function...\n");
     cr = cds_dynamic_array_iterate(array, &printString2);
     if (cds_error_check(cr)) return 1;
 
     /* accessing the last element first just for the sake of printing it */
     unsigned int count = cds_dynamic_array_count(array);
     cr = cds_dynamic_array_get(array, count - 1, &curValue);
     cds_log("Removing the last element in the collection: %s", (char *)curValue);
 
     /* remove the element at the end of the array */
     cr = cds_dynamic_array_pop_back(array);
     if (cds_error_check(cr)) return 1;
 
     /* this will remove the element from the container, but give it as a pointer */
     cr = cds_dynamic_array_pop_back_data(array, &curValue);
     if (cds_error_check(cr)) return 1;
     cds_log("Removed: %s from the back of the array\n", (char *) curValue);
 
     /* remove functions will rely on pointer addresses by default
      * we know that roosevelt's address is in the array so this will work */
     cr = cds_dynamic_array_remove(array, presidents[2]);
     if (cds_error_check(cr)) return 1;
 
     /* if we want to remove with value equality, we must use the cmp functions 
      * so while "adams" is in the array, this is a new address
      */
     char adamsStr[] = "adams";
     cr = cds_dynamic_array_remove(array, adamsStr);
     char resultString[CDS_RESULT_MAX_STR_LEN];
     cds_result_string(cr, resultString);
     cds_log("Attempt to remove the same string at a different address: %s\n", resultString);
 
     /* if we use our strcmp equivalent, this can be done with the proper function */
     cr = cds_dynamic_array_remove_cmp(array, adamsStr, &cmpStr);
     cds_result_string(cr, resultString);
     cds_log("Attempt to remove the same string using the comparison function: %s\n", resultString);
 
     logArray(array, 0);
 
     /* you can also remove elements by index (with safe bounds checking) */
     cds_log("Removing from index 2...\n");
     cr = cds_dynamic_array_remove_at(array, 2);
     logArray(array, 0);
 
     /* The dynamic array supports multiple removal behaviors.
      * By default it is CDS_SHIFT_DOWN, which maintains the relative order of elements:
      * remove_at(2) with CDS_SHIFT_DOWN: a b c d e f becomes a b d e f
      * It also supports replacing the removed element with the element at the end.
      * This disrupts the relative order of elements, but is more efficient than shifting large arrays:
      * remove_at(2) with CDS_REPLACE_WITH_LAST: a b c d e f becomes a b f d e
      */
     cds_log("Removing from index 2 using CDS_REPLACE_WITH_LAST\n");
     cr = cds_dynamic_array_remove_at_rb(array, 2, CDS_REPLACE_WITH_LAST);
     logArray(array, 0);
 
     /* Reverse the relative order of elements */
     cds_log("Reversing the order of elements in the dynamic array...\n");
     cr = cds_dynamic_array_reverse(array);
     logArray(array, 0);
 
     cds_log("Deleting the dynamic array...\n");
     cr = cds_dynamic_array_delete(&array);
     if (cds_error_check(cr)) return 1;
     cds_log("Deletion successful...\n");
     return 0;
}

Friday, October 28, 2011

Blasteroids

Blasteroids is a reinterpretation of the popular game Asteroids.  It was built as a Flash project for school.  It controls like the original: you propel in the direction you're pointing, you wrap around the screen, and you press spacebar to shoot a projectile.  It features new artwork and special seeker asteroids that explode and deal extra damage on impact.  Shooting them early can trigger an explosion which will damage any surrounding asteroids.  The game also features selectable difficulty levels chosen before the game begins.

A remake of an old classic




Flash embedding and Blogger don't always seem to get along.  On top of that many browsers are dropping support for Flash.  It was more relevant when I was in college almost a decade ago.  To deal with this I've added a zip of the .fla and associated files to dropbox.  You can download it here to run it locally.

Resonance

My sophomore year I spent ~9 weeks working on a project called Resonance.  Our lead's pitch was green-lit, and we quickly went to work on the project.  I was one of the two lead programmers on the game, working on low-level rendering, animation, physics, and the like.  My fellow programmer also dedicated a significant amount of time helping me with these things and writing the game logic for all the game objects.  You can find more details about the game, including our design document and first playable, on the lead's blog here.  Here's some footage of the first level in the game, which was mostly designed by myself (with feedback from the group and our professor).

 

Said level was built with the game editor I created.  Of course, I owe a lot of credit to the designers and artists who helped make it possible.  Here's some footage of the editor, with a voice over by yours truly explaining some details about the way it operates and the lessons I learned from creating it.
NOTE: I apologize for the audio quality.  You will likely have to turn up your volume to hear me clearly in these two videos.

 

And here's part two of the editor related videos showing the completed level used for the gameplay footage above, only this time loaded into the game editor.


A lot of the lessons learned from this project have directly influenced my XNA Game Engine (which I may introduce here sometime in the near future).

Tuesday, October 25, 2011

Randomized Prim's Algorithm and MineCraft

I recently implemented a procedural maze generation algorithm for Minecraft.  Using the Bukkit/CraftBukkit API with my own chunk generator, I was able to create some awfully vast and daunting mazes inside the game.  Because the API writes things one chunk at a time, I had to create the maze on the first chunk, store it statically, and convert the coordinates of later chunks to create the blocks correctly.  The results were pretty encouraging:

A Bird's Eye View of a Procedurally Generated Maze

I used a variant of the Randomized Prim's Algorithm.  The basic algorithm, as described on Wikipedia, is actually one of the clearest and most concise explanations I have found.  The problem I had with most implementations I found was that they used lines for walls rather than blocks, which simply won't work in a 3D world of this sort (but is fine for things like online Java applets).  Nonetheless, resources like Think Labyrinth and Daedalus are excellent references for exploring this line of algorithms.

Sunday, September 25, 2011

Introducing CDS (The C Data Structures Library)

CDS was a side project to demonstrate I can write efficient data structures in a procedural language like C.  It currently supports all types (with void pointers) with the following data structures:
  • dynamic array (cds_dynamic_array)
  • singly linked list (cds_slist)
  • doubly linked list (cds_dlist)
  • stack (cds_stack)
  • queue (cds_queue)
  • binary search tree (cds_binary_tree)
  • hash table (cds_hash_table) 
CDS tries to use consistent practices amongst the entire code base.  Every custom type, function, and macro starts with the cds_ or CDS_ prefix.  Data structure functions are typically structured in the following way: cds_datastructurename_function().  Therefore, to push_back an element onto a dynamic array you would call cds_dynamic_array_push_back(array, data);  This tends to make function calls and type names long, but is good practice, keeps things consistent, and avoids polluting the namespace.

Every data structure function returns a cds_result enum.  Every error code that can be thrown by any of the provided functions is a cds_result.  This keeps results more informative, consistent, and performance friendly.  Rather than returning -1 or NULL when an error occurs, a function returns more specific information (e.g. CDS_BAD_ALLOC).  This is better practice because a number of things can potentially go wrong with a given operation, and a specific code eases the amount of time spent guessing.

Common operations include:
    create: allocates the given structure
    clear: removes the elements from the given structure
    delete: deletes the given structure (but not its elements)
    delete_all: deletes the given structure and its pointers to other elements
    add/push_back/insert/etc: adds the pointer to the structure
    remove/pop_back/pop/etc: removes a pointer from the structure
    count: get the current count of the structure
    find: search for a pointer address or do a value comparison via a cds_cmp_func
    iterate: passes every data type in the container safely to the given cds_visit_func

Assertions are present but not fully apparent in all of the code.  They can be compiled out of release code since they use optional macros.  There is support for custom memory allocators and log functions that can be interchanged easily at run/compile time, so long as they match the function signatures of their stdlib equivalents.  Other intricacies include custom growth behaviors, type-specific comparisons, and more through function pointers, enums, and macros.

This code is still unfinished and in a relatively early state.  Since school and work keep getting in the way I'm releasing it now but intend on improving it later.  The code is free for non commercial use and commercially with permission (contact email given in the README).  Use at your own risk!


Here's a sample program example demonstrating cds_stack.  You'll see similar semantics for the rest of the data structures too.

UPDATE: I've revisited the library and am no longer so strict on the "every function returns an error result" philosophy.  Sometimes it is excessive and  actually makes code more error prone.  However, a vast majority of the functions still follow the old format.
UPDATE 2: CDS has received a significant new release, with full ANSI C compatibility, Doxygen Documentation, new functions, rewritten examples, and a host of bug fixes.  You can read about it here.

/* Create the stack */
cds_log("Running the stack test...\n");
cds_stack *stack = NULL;
cds_result cr = cds_stack_create(&stack);
if (cds_error_check(cr))
        return 1;
 
/* Add the values to the stack */
cds_log("Adding the usual values...\n");
int values[] = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
unsigned int n = 10;
unsigned int i;
for (i = 0; i < n; ++i) {
        cr = cds_stack_push(stack, values + i);
        if (cds_error_check(cr))
                return 1;
}
 
logStack(stack, 1);
 
/* Clean up the stack */
cds_log("Popping everything off the stack...\n");
int *top = NULL;
/* if you want to allow null values in your stack 
 * use cds_stack_count() instead */
while ((top = cds_stack_top(stack)) != NULL) {
        cds_log("Pop: %d\n", *(int *)top);
        cr = cds_stack_pop(stack);
        if (cds_error_check(cr))
                return 1;
}
 
/* if you call pop on an empty stack it returns CDS_UNDERFLOW */
cds_log("The stack is now clear.  Count: %u\n", cds_stack_count(stack));
cds_log("If you try to pop an empty stack, it will return: ");
cr = cds_stack_pop(stack);
char resultString[CDS_RESULT_MAX_STR_LEN];
cds_result_string(cr, resultString);
cds_log("%s\n", resultString);
 
/* you can also iterate on a stack like all the other data structures 
 * but we don't do that here
 */
cds_log("Deleting the stack...\n");
cr = cds_stack_delete(&stack);
if (cds_error_check(cr))
        return 1;
cds_log("Deletion successful...\n");

Tuesday, September 20, 2011

Procedural Terrain Generation

Windows Forms Launcher Window
I've often been interested in groups of algorithms that offer procedural generation of gameplay environments.  There is a rather wide range for these algorithms, both in computational time and complexity.  Some, such as Diamond Square, are rather trivial to implement.  Others, such as the algorithms behind the MMO Love, are obviously much more in depth.  Since I'm about to explore a hybrid of these algorithms (Perlin Noise and others), I figured I'd look back at an old project I worked on concerning terrain generation.  The idea of the project was to act as a tool to demonstrate how certain algorithms worked and compare them (as well as get some more experience with OpenGL 4).  The final launcher, shown on the right, has a drop-down for the algorithm, a radio select for the rendering mode, and algorithm-specific settings.  The user could launch multiple windows and compare the differences.  This served as a rather effective tool for differentiating methods and showing the effect of algorithm variables on the output (e.g. generating a terrain map with a decay rate of 0.3 instead of 0.7).  With a simple rotating camera and variable zoom levels, the user could inspect the final results of their work.
Diamond Square with high variation and low decay

It was nice to have effective proof of the weaknesses these algorithms have.  For example, Diamond Square in its normal form is prone to artifacts (especially with typical linear congruential number generation) and frequently builds mountains.  We also found sequential random additions decent at generating rolling hills, though the hills were rarely smooth and often resembled cones.

All in all, procedural worlds won't replace the mapmaker anytime soon, but they are a powerful tool, and can serve as a nice base that can be tweaked and remodeled later.  I'd recommend game programmers become familiar with Diamond Square and Perlin Noise at the very least.







With a greater decay and lower variation
Wireframe View



Monday, September 19, 2011

SRT Moved to GitHub

Since Git as a VCS has become more prominent and the popularity of GitHub has (rightfully) skyrocketed, I finally got around to putting the Simple SDL RayTracer onto GitHub.  Not everyone in the industry uses SVN, so it's high time I got comfortable with something else too.  If you've been holding off on Git because the clients have been bad, I don't really blame you.  But they have gotten much better since I first tried them out, especially on Windows, and are frequently offered as plugins for IDEs, if that's the sorta thing you're into.

Sunday, August 14, 2011

Introducing SRT

A few weeks ago I finished an early version of a ray tracer written in completely standard ANSI C.  I undertook this small project for a number of reasons.  It dawned on me that most of the work I had been doing involved an environment with an IDE and an object-oriented language.  While this isn't an inherently bad thing, it is healthy to stay proficient in as many paradigms as possible.  Therefore, I decided I was going to do a project in straight C with MinGW and Notepad++ to remind myself that you can do everything without polymorphism, inheritance, and encapsulation.  The end goal was a simple ray tracer with an interface that mimicked OpenGL.  Since I decided to use SDL for software pixel operations, I called the project SRT, for "SDL Ray Tracer."  The acronym works just as well for "Simple Ray Tracer."  Feel free to browse the project in its early form on Google Code at the link below.  Hopefully the project will be seen on GitHub someday too.




Basic features of SRT include the following:
  • Raycasting support for multiple shapes, including spheres, 3D planar polygons, and oriented bounding boxes (OBBs)
  • Support for typical 3D transformations, vector, and matrix operations.  The "Projection" and "ModelView" matrices can be translated, rotated, and scaled.
  • Material and lighting support, with full RGBA channels for ambient, diffuse, and specular colors.  Materials also include scalar values for reflectivity and transmission (transparency).  Phong illumination included by default.
  • Optional support for super sampling, control over recursive depth and other parameters for scene lighting.
  • Although only tested on Windows, should support all the platforms and resolutions that SDL supports.
  • Typedefs and macros provide support for single or double floating point precision, and assertions which can be omitted from "release" code.
  • Future releases will hopefully support multiple light sources, orthographic projections, manipulation of the matrix stack, texture mapping, and thread support.
A typical program for SRT might resemble the following.  Note the resemblance to a typical OpenGL program.
-- Init Function --
srtCreateWindow(800, 600, "SRT: SDL Ray Tracer");
srtPerspective(45.0, 800.0 / 600.0, 1.0);

-- Draw Function--
/* clear buffer */
srtClearColor3(0.05f, 0.05f, 0.05f);
srtClear();
srtBeginScene();

/* move camera 4.5 units back */
srtMatrixMode(SRT_PROJECTION);
srtTranslate(0.0, 0.0, 4.5);

/* load identity modelview */
srtMatrixMode(SRT_MODELVIEW);
srtLoadIdentity();

/* setup a light */
srtLightAmbient4(0.065f, 0.065f, 0.065f, 0.15f);
srtLightDiffuse4(0.35, 0.35, 0.35, 0.35);
srtLightSpecular4(0.25, 0.25, 0.25, 0.85);
srtLightPosition3(-1.0, 1.1, -0.75);

/* setup a material */
srtMatAmbient4(1.0, 0.0, 0.0, 1.0);
srtMatDiffuse4(0.45, 0.45, 0.45, 0.75);
srtMatSpecular4(0.3, 0.3, 0.3, 0.35);
srtMatShininess(2.8);
srtMatReflectivity(0.45);

/* sphere: x, y, z, radius */
srtCreateSphere4(1.1, -0.85, -4.5, 0.65);

/* finish the scene */
srtEndScene();
srtPaintBuffer();
srtFlip();

Tuesday, June 7, 2011

The Importance of Databases in Game Development

Early in my game development core sequence classes I had to take an introductory database class.  It focused mostly on writing robust queries and the SQL language itself.  There was still some talk about database management, normalization, and the like in MySQL, but it mostly focused on queries.  At the time, I honestly didn't see the benefit of the class, and I certainly wasn't alone in that assessment.  This was very short-sighted for a number of reasons.  Let me explain:

First, everything, and I mean everything, is moving onto the cloud.  Developers are pushing a lot more than the player profiles and online stats of games 10 years ago.  Data as complex as custom game content means more developers, in more areas of a game's code, need to be comfortable writing SQL queries that are reasonably efficient.  One poorly written query can be devastating to a server if it is hit enough times.

Secondly, social networking games are a rapidly expanding market.  In these games, typically every piece of persistent data has to be put into a database somewhere.  Therefore, when prototyping, developers have to understand good database design principles and be familiar with the libraries used on their platform.

Last but not least, databases are being utilized in very cool ways to accomplish robust logging and resource management.

So why might a database be a worthwhile log listener for your logging system?  What makes it any better than a text or html file?  Technically a text file can be revisioned properly to store all the latest information, but this becomes troublesome when large teams are handling the same content at once.  A database alleviates a lot of these problems and headaches.  In addition, SQL queries can serve as an excellent debugging tool.  Sure, a text file can store the same data, but it is unreasonable for a programmer to hack together new parsing code every time they are looking for shared issues.  A well-designed database will allow programmers to use joins and subqueries to find meaningful relationships (and problems) in their application.  Tom Niwinski discusses how they used databases in such a manner in his excellent feature on Gamasutra.  He describes how they had an AI-controlled character roam their game world and log errors to the database.

Picture this scenario: a large team installs the latest nightly build and has automated bots roam the world overnight.  In the morning they check for new entries in the database, and all of these separate instances can be joined and compared for common issues.  A log entry might specify an error, such as a player falling through the world, along with the world position; level designers can then go straight to that position in the map and resolve the issue.

In his excellent book Game Engine Architecture, Jason Gregory explains how databases can be used as resource managers.  They are more than capable of storing unique global identifiers, metadata for game content, and a revision history for an object's life cycle.  Primary keys and heavily normalized tables lend themselves well to cross-referenced integrity and truly singular identifiers.

So, in summary: databases matter, in more ways than you might think.

Thursday, April 28, 2011

Recursive Ray Tracing

Adding real recursion to the mix (and not just basic shadow rays) gives the added benefit of effects like transmission and reflection.  By building off our Phong model, we were able to recursively reflect a ray up to a given cutoff depth and modify the final color with a "reflective" float between zero and one.  Reflections are one area of 3D graphics where a ray traced model really excels.  With just a few more lines of code we were able to get results like those shown in the picture.
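To make the recursive step concrete, here is a sketch in C#-style pseudocode.  Scene, Hit, Shade(), backgroundColor, maxDepth, and Material.Reflective are all illustrative names, not our project's actual API:

```
Color Trace(Ray ray, int depth)
{
    Hit hit;
    if (!scene.Intersect(ray, out hit))
        return backgroundColor;

    Color local = Shade(hit);   // the Phong result from the previous post

    if (depth < maxDepth && hit.Material.Reflective > 0.0f)
    {
        // Bounce the ray off the surface normal and recurse
        Ray bounced = new Ray(hit.Point, Reflect(ray.Direction, hit.Normal));
        Color reflected = Trace(bounced, depth + 1);

        // Blend by the "reflective" float in [0, 1]
        local = Color.Lerp(local, reflected, hit.Material.Reflective);
    }
    return local;
}
```

The cutoff depth matters: two mirrors facing each other would otherwise recurse forever.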

Friday, April 15, 2011

Ray Tracing with Phong Shading

Phong shading is very popular, supported by most graphics libraries by default, and can be implemented very easily and efficiently. That's because the model rests on a relatively simple equation. Compare this to my last post on Ray Tracing (which had no lighting model at all) and you'll see the difference is quite profound.
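For reference, the model boils down to ambient + diffuse + specular contributions per light.  A minimal single-light sketch follows; the helper is illustrative rather than our project's code, all vectors are assumed normalized, and ka/kd/ks are the usual material coefficients:

```csharp
// Hedged single-light sketch of the Phong equation.
public static float Phong(float ka, float kd, float ks, float shininess,
                          Vector3 n, Vector3 toLight, Vector3 toViewer)
{
    float ambient = ka;
    float diffuse = kd * Math.Max(0.0f, Vector3.Dot(n, toLight));

    // Reflect the light direction about the normal for the specular term
    Vector3 r = 2.0f * Vector3.Dot(n, toLight) * n - toLight;
    float specular = ks * (float)Math.Pow(Math.Max(0.0f, Vector3.Dot(r, toViewer)), shininess);

    return ambient + diffuse + specular;
}
```

In practice you would scale each term by the light and material colors and sum over all lights.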

The polygonal floor just uses a procedural shader that checks the row and column of the intersection point.  Hopefully we'll get around to real texture mapping some time in the future.
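The floor check can be as simple as flooring the hit point's coordinates into cells and alternating on parity.  A hedged sketch (the cellSize parameter and the choice of x/z axes are assumptions for illustration):

```csharp
// Hypothetical checkerboard shader helper: derive a cell's row and
// column from the intersection point and alternate colors on parity.
public static bool IsDarkCell(float x, float z, float cellSize)
{
    int col = (int)Math.Floor(x / cellSize);
    int row = (int)Math.Floor(z / cellSize);
    return (col + row) % 2 == 0;
}
```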

Friday, April 8, 2011

Update on Polygon RayCasting

My previous post on 2D Polygon Raycasting had an error at the very end.  It turns out to be much more useful to return the surface normal of the object that is hit, not the reflection off that normal.  The reflection is trivial to recover later if you still need it, by simply reflecting the ray direction off the normal.  Finding the edge normal given a reflection and a ray direction, however, is much more annoying to calculate.

The final code then turns out to be the following:

public static bool RayCast(Ray ray, Polygon polygon, float tmax, out float t, out Vector2 pt, out Vector2 normal)
{
    t = float.MaxValue;
    pt = ray.Origin;
    normal = ray.Direction;

    // temp holder for segment distance
    float distance;
    int crossings = 0;

    for (int j = polygon.NumVertices - 1, i = 0; i < polygon.NumVertices; j = i, i++)
    {
        if (RayIntersectsSegment(ray, polygon.v[j], polygon.v[i], float.MaxValue, out distance))
        {
            crossings++;
            if (distance < t && distance <= tmax)
            {
                t = distance;
                pt = ray.GetPoint(t);

                Vector2 edge = polygon.v[j] - polygon.v[i];
                normal = Vector2.Normalize(RightPerp(edge));
                // the edge normal is returned directly; no reflection here
            }
        }
    }
    return crossings > 0 && crossings % 2 == 0;
}

Tuesday, April 5, 2011

A Glimpse At Shape Tessellation

Tessellation, sometimes referred to as triangulation, is an important concept in computer graphics: subdividing a surface into triangles, and those triangles into smaller ones, for finer detail.  I plan to write up a full post on this later.  For now, here are some pictures of an old assignment I completed last quarter.  You can see support for the cube, cone, cylinder, and sphere.

Monday, April 4, 2011

A Trip Down Memory Lane

An old project I made freshman year with a group of 4 aspiring developers.  Not bad for beginners with a mere 10 weeks from start to finish.

Game Engines -- Not For The Squeamish

Game engines are a beautiful area of study and have always been an interest of mine.  Currently, as an independent study, I am writing a full 2D game engine on top of XNA.  It's a lot of work, I like it, and I'm happy to be pursuing such a challenge.  However, in most cases you shouldn't be writing game engines.  There is a lot of general-purpose "engine" code lying around out there that has never been used in any meaningful game.  If you are looking to write your first game, going in with the idea that you will write an engine first is very misguided, for a number of reasons.  For a post that nicely summarizes why, see ScientificNinja's thoughts on the matter.

Write Games, Not Engines

2D Polygon - Raycasting

Synopsis
Ever write a 2D game that supported convex polygons? Ever struggle to find good sources for ray casting onto polygons in 2D space?

Ironically, in 3D you'll find a number of different sources for ray-polygon intersection.  Unless BSP trees are in use, it usually involves some procedure resembling the following:

1) Derive a plane for the polygon, generally using the surface normal.
2) Find the intersection point of the ray and the plane (if any).
3) See if the polygon contains the point. There are a number of ways to do this, most notably using a binary search and checking whether the new point forms a CCW triangle.
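Steps 1 and 2 can be sketched as a standard ray-plane intersection.  This is a hedged illustration, not code from any particular engine; the plane is given as dot(n, p) = d for a unit normal n:

```csharp
// Sketch of steps 1-2 in 3D: intersect ray p(t) = origin + t * dir
// with the polygon's plane dot(n, p) = d.
public static bool RayPlane(Vector3 origin, Vector3 dir, Vector3 n, float d, out float t)
{
    float denom = Vector3.Dot(n, dir);
    if (Math.Abs(denom) < 1e-6f)
    {
        // Ray is (nearly) parallel to the plane
        t = float.MaxValue;
        return false;
    }
    t = (d - Vector3.Dot(n, origin)) / denom;
    // Step 3 (point-in-polygon) would then run on origin + t * dir
    return t >= 0.0f;
}
```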

In 2D, the test can't really work this way. Instead we need to test the ray against each segment of the polygon. You might assume that as soon as any segment intersection is found we can conclude there is a valid hit, but this is not the case. There are two reasons we can't do an "early out" test that terminates with a true result on the first intersected segment:

1) If the polygon contains the ray origin, we treat it as no intersection. A typical PolygonContainsPoint() implementation could serve as a reasonably fast prerequisite test, but it is faster to fold the check into the ray cast itself. If you read up on the crossing number (even-odd) algorithm you'll see it involves casting onto every segment and counting the crossings: an even count means the origin is outside the polygon, while an odd count means it resides inside.
2) When ray casting against a polygon we usually want far more information than a boolean result: the distance along the ray, the contact point, and the surface normal at the intersection. In all these cases we want the closest intersection point, and to find it definitively we need to test every segment anyway. Running a contains method first wouldn't double the iterations (it uses a binary search), but it would still mean more than one sweep of the polygon.
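The even-odd idea in (1) can reuse the same per-segment sweep as the main raycast. Here is a sketch built on this post's own Ray and Polygon types and the RayIntersectsSegment routine shown below; the +X cast direction is an arbitrary choice:

```csharp
// Even-odd containment via the same segment sweep used by the raycast.
public static bool PolygonContainsPoint(Polygon polygon, Vector2 point)
{
    Ray ray = new Ray(point, Vector2.UnitX);
    int crossings = 0;
    float t;

    for (int j = polygon.NumVertices - 1, i = 0; i < polygon.NumVertices; j = i, i++)
    {
        if (RayIntersectsSegment(ray, polygon.v[j], polygon.v[i], float.MaxValue, out t))
            crossings++;
    }
    // An odd number of crossings means the point is inside
    return crossings % 2 == 1;
}
```

Note the same caveat as the main test: a ray grazing exactly through a vertex can be counted twice.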

Ray-Segment Test
The ray-segment test in 2D can be solved parametrically, as explained here. First we check whether the ray direction and the segment are parallel, in which case we can quickly rule out a collision*. Then we solve for s and t: s must lie between 0 and 1 for the point to be on the segment, and t must be between zero (or some other user-defined minimum) and the maximum t value.

* This is false for collinear segments and rays. I have not found a clean and efficient way to deal with this case; if you have a suggestion, comment below. Distance checks will work, but they are an ugly way to check whether a point is within the range of tmax. The normal in this case would probably have to be the ray direction reflected off the vertex normal.

And of course, the code for the test. Note that LeftPerp(v) = (v.Y, -v.X), and the special Equals call simply checks whether a value is within epsilon distance of zero. float.Epsilon is rarely the most appropriate tolerance value (what is appropriate varies widely with your units and scaling), but I found it sufficient for understanding the algorithm itself.  Still: numerical robustness matters!

public static bool RayIntersectsSegment(Ray ray, Vector2 pt0, Vector2 pt1, float tmax, out float t) {
    Vector2 seg = pt1 - pt0;
    Vector2 segPerp = LeftPerp(seg);
    float perpDotd = Vector2.Dot(ray.Direction, segPerp);
    if (Equals(perpDotd, 0.0f, float.Epsilon))
    {
        t = float.MaxValue;
        return false;
    }

    Vector2 d = pt0 - ray.Origin;

    t = Vector2.Dot(segPerp, d) / perpDotd;
    float s = Vector2.Dot(LeftPerp(ray.Direction), d) / perpDotd;

    return t >= 0.0f && t <= tmax && s >= 0.0f && s <= 1.0f;
}
Ray-Polygon Test
The final ray-polygon test brings all of this together in a relatively simple way. As we iterate through the polygon we test every edge. If there is an intersection we increment the number of crossings, and if the distance is less than the current minimum we have a new closest intersection, so we calculate the normal and contact point for that edge. If the number of crossings is even and greater than zero at the end of the test, a true result is returned along with the closest distance, contact point, and normal.
Contact point shown in light green, ray reflection normal in purple
Update: The purple line above is actually drawing the reflection of the ray off the normal, not the edge normal itself.  The below code has been corrected to return the edge normal.  In practice, it is much more useful than the reflection (which can always be calculated afterwards using the normal if you need it).

public static bool RayCast(Ray ray, Polygon polygon, float tmax, out float t, out Vector2 pt, out Vector2 normal)
{
    t = float.MaxValue;
    pt = ray.Origin;
    normal = ray.Direction;

    // temp holder for segment distance
    float distance;
    int crossings = 0;

    for (int j = polygon.NumVertices - 1, i = 0; i < polygon.NumVertices; j = i, i++)
    {
        if (RayIntersectsSegment(ray, polygon.v[j], polygon.v[i], float.MaxValue, out distance))
        {
            crossings++;
            if (distance < t && distance <= tmax)
            {
                t = distance;
                pt = ray.GetPoint(t);

                Vector2 edge = polygon.v[j] - polygon.v[i];
                // We would use LeftPerp() if the polygon was
                // in clockwise order
                normal = Vector2.Normalize(RightPerp(edge));
            }
        }
    }
    return crossings > 0 && crossings % 2 == 0;
}
In this implementation, it is assumed that the polygon is a convex array of vertices in counter-clockwise order. This is why we take the right perpendicular of the edge; if your implementation uses clockwise polygons you'll want LeftPerp() instead. Here's a reference for the helper functions:
public static Vector2 LeftPerp(Vector2 v)
{
    return new Vector2(v.Y, -v.X);
}

public static Vector2 RightPerp(Vector2 v)
{
    return new Vector2(-v.Y, v.X);
}
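And if you do still want the reflection, it can be recovered from the returned normal with the standard formula r = d - 2(d·n)n. A small helper sketch (assumes n is unit length):

```csharp
// r = d - 2 * (d . n) * n, with n assumed to be unit length
public static Vector2 Reflect(Vector2 d, Vector2 n)
{
    return d - 2.0f * Vector2.Dot(d, n) * n;
}
```

XNA also ships a built-in Vector2.Reflect that computes exactly this.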

Happy raycasting!

Ray Tracing

For my Computer Graphics class this quarter I've been tasked with building a relatively robust ray tracer. This has involved math for transformations, setting up an image plane, ray-casting onto certain shapes, and much, much more. The requirements for the project build up as we progress throughout the quarter. Here's our latest early checkpoint with no lighting and no recursive ray casting.

It's worth noting that super sampling is used in the screenshot on the right. Super sampling basically involves casting multiple close, but marginally displaced, rays per pixel and averaging the results. Click here to see the scene without super sampling enabled.
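As a sketch, the averaging might look like the following. The sampler delegate stands in for the actual per-ray trace (not shown here), and the uniform grid of sub-pixel offsets is just one of several displacement schemes:

```csharp
// Hedged supersampling sketch: average a grid x grid set of sub-pixel
// samples (grid = 2 gives 2x2 sub-samples per pixel).
public static Color SamplePixel(Func<float, float, Color> sampler, int px, int py, int grid)
{
    float r = 0, g = 0, b = 0;
    for (int i = 0; i < grid; i++)
    {
        for (int j = 0; j < grid; j++)
        {
            // Aim each sub-ray at the center of its grid cell
            float sx = px + (i + 0.5f) / grid;
            float sy = py + (j + 0.5f) / grid;
            Color c = sampler(sx, sy);
            r += c.R; g += c.G; b += c.B;
        }
    }
    int n = grid * grid;
    return new Color((int)(r / n), (int)(g / n), (int)(b / n));
}
```

Jittering each offset randomly within its cell instead of using the cell center trades aliasing for noise, which is usually less objectionable.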

Our next checkpoint will involve implementing Phong shading. I'll try to post screenshots and give tips on the implementation when it is finished.

Thursday, March 31, 2011

Drawing Lines in XNA With SpriteBatch

Lines are a vital primitive that can greatly aid one's work during development. They can be used to draw the actual geometry of an object, show the distance between them, and let testers spot bugs earlier in development.

If you have a drawing interface that only relies on SpriteBatch and don't feel like messing with Vertex Buffers, you can cheat drawing lines using a single texture. By using a 1x1, white pixel, we can create a simple interface to draw lines of any size and color.

First you'll need to create the texture. Having one static copy may be preferable to creating the texture inside every instance. A current project of mine keeps a copy of this texture in a "PrimitiveManager", which oversees a number of different needs. We'll just keep a copy of the pixel in the Line class here for simplicity's sake.

public class Line {
    public static Texture2D Pixel;

    public static void Initialize(GraphicsDevice graphicsDevice) {
        Pixel = new Texture2D(graphicsDevice, 1, 1, false, SurfaceFormat.Color);
        // Fill the single texel with white so a color overlay tints it directly
        Pixel.SetData(new[] { Color.White });
    }
}

This creates our simple 1x1 white pixel. By using a color overlay, we can have it render in any color we like. It supports transparency too.

What kind of parameters would we like to use when drawing lines? Most lines are defined by a start point, an endpoint, a color, and a line thickness.

Since we are using a 1x1 pixel, the distance between the start and endpoint will be the scale in one direction, and the line thickness in the other. The rotation passed into SpriteBatch will be the Atan2() of the x and y values from this difference.

To draw lines we are going to set our Origin to be {0.0f, 0.5f}. This is the center left of the pixel, and will thereby apply scaling in one direction.

public class Line {
    public static Texture2D Pixel;
    private static Vector2 Origin = new Vector2(0.0f, 0.5f);

    public Color color;
    public Vector2 start;
    public Vector2 end;
    public float lineThickness;
    // following is not needed unless you are doing front-to-back
    // or back-to-front drawing
    public float layer;

    public static void InitializeLineDrawing(GraphicsDevice graphicsDevice) {
        Pixel = new Texture2D(graphicsDevice, 1, 1, false, SurfaceFormat.Color);
        // The texel must be white for the color overlay to work
        Pixel.SetData(new[] { Color.White });
    }

    public void Draw(SpriteBatch spriteBatch) {
        Vector2 d = end - start;
        float angle = (float)Math.Atan2(d.Y, d.X);
        // add 1 for single points and due to the way
        // the origin is set up
        float distance = d.Length() + 1.0f;
        spriteBatch.Draw(Pixel, start, null, color, angle, Origin,
            new Vector2(distance, lineThickness), SpriteEffects.None, layer);
    }
}

Why do we add 1 to our distance calculation? There are two reasons:
1) A single point, with a start and end vector in the same space, should draw one pixel on screen. If we don't add 1 to our existing result, single point lines will not be drawn at all because the distance between them will be zero.
2) The origin is offset from the center, and our single pixel texture already occupies a known distance of one. If you look carefully, without this addition, lines will always be drawn one unit short.

In general, there are a number of ways to go about implementing this; my code is really just a bare skeleton. The origin could simply be passed into the draw call every time with a hard-coded new Vector2(0.0f, 0.5f). You might want an unload method that disposes of the texture. You could also precompute the distance and angle between the two points and cache them, perhaps as private fields; properties work wonderfully for this. Whenever Start or End changes, compute these values once and avoid running an expensive square root every frame. Atan2() isn't that expensive, but if you are going to cache the distance you might as well do the same with the angle.
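The caching-via-properties idea might look like this sketch (a fragment meant to live inside the Line class above; the field and method names are my own):

```csharp
// Cache distance and angle, recomputing only when an endpoint changes.
private Vector2 start, end;
private float distance, angle;

public Vector2 Start
{
    get { return start; }
    set { start = value; Recompute(); }
}

public Vector2 End
{
    get { return end; }
    set { end = value; Recompute(); }
}

private void Recompute()
{
    Vector2 d = end - start;
    distance = d.Length() + 1.0f;          // same +1 adjustment as Draw()
    angle = (float)Math.Atan2(d.Y, d.X);
}
```

Draw() can then use the cached distance and angle directly instead of recomputing them every frame.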