
Sunday, September 2, 2012

Gameplay Architecture Part 2: Message Passing

Rationale
Games are very event-driven applications that will sometimes (inevitably) end up resembling a web of complex states.  In a project of any considerable size it simply isn't feasible to understand every component of the codebase.  This is especially true at higher levels of gameplay code built on top of numerous parallel subsystems written by many different people over an extended period of time.

Messages which either inform pieces of code what has happened (e.g. DOOR_OPENED) or what to do (e.g. DISABLE_RENDERING) are part of the solution towards an architecturally sound and flexible codebase.  I had a few goals when implementing this system:

1) Integrates into the component/entity model system I wrote previously
2) Keeps class hierarchies flat, or avoids them altogether
3) Can queue messages efficiently and dispatch them later (e.g. send this message 10 seconds from now)
4) Components/Entities can listen in on these queues by their message type

Based on previous experience and these goals, this is what I came up with.

Simple Example
Message passing would be relatively simple if we could do everything on the stack.  As we will explore later, it is highly desirable to register components to listen in on messages by type, store messages in a priority queue, and dispatch them later.  This adds considerable difficulty to the problem, which will be addressed here in time.  First, let's explore the simplest case: a message sent on the stack to a known component.  Consistent with the component-entity model explained previously, here are the main interfaces for components/entities to send/receive messages.

const U32 DID_NOT_RECEIVE_MESSAGE = 0;
const U32 RECEIVED_MESSAGE = 1;

class jlMessage {
public:
      jlMessageType m_type;  // hashed message type
      void * m_data;         // untyped message data pointer
      U32 m_dataAndEntitySendMode;  // sizeof(*m_data) and flags
      F32 m_timeUntilDispatch; // when <= zero, dispatched from the queue
};

class jlComponent {
public:
    // Defers call to virtual receiveMessage function, non virtual base behavior for its benefits 
    // (e.g. tracing with certain debug builds that prints every message that is being sent)
    U32 sendMessage(jlMessage& msg);
protected:
     // Returns 1 if the message is received, 0 if not
     virtual U32 receiveMessage(jlMessage& msg);
}; 

class jlEntity {
public:
    // Iterates over attached components, calls sendMessage on them;
    // uses the message's entity send mode to determine which of the
    // calls below is made
    U32 sendMessage(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
    // attached components only
    U32 sendMessageToComponents(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
    // attached components + all children
    U32 sendMessageToChildren(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
    // attached components + all ancestors
    U32 sendMessageToAncestors(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
};



The non-virtual base behavior has numerous benefits for debugging
Let's take a look at a practical example: inside some script that has detected a collision, we might wish to deal damage to an entity.


// inside some script which deals damage to an entity
void jlPhysicalCharacter::processCollisionWithFirePhantom() {
    jlDamageData dmgData;
    dmgData.m_amount = 5.0f;
    dmgData.m_damageType = FIRE;
    jlMessage damageMessage(JL_DAMAGE_MSG);
    damageMessage.setData(&dmgData, sizeof(dmgData));
    getHealthComponent()->sendMessage(damageMessage);
}

// inside the health component receiveMessage() function
U32 jlHealthComponent::receiveMessage(jlMessage& msg) {
    if (msg.getType() == JL_DAMAGE_MSG) {
         jlDamageData *data = static_cast<jlDamageData *>(msg.getData());
         JL_ASSERT(data);
         m_health -= data->m_amount;
         return RECEIVED_MESSAGE;
    } else {
        return DID_NOT_RECEIVE_MESSAGE;
    }
}

Message Data
The message is structured like this for a few reasons.  A unique type id makes message handling convenient in a switch statement, and tells us what to cast to and how to interpret the data.  Since my implementation uses an untyped void pointer, this is not something we can ignore.  One alternative is to use a giant union of message data for all of your needs, like so:

struct AlternativeMessageData {
 union {
  struct {
   Matrix4 m_matrix;
  };
  struct {
   char m_string[64];
  };
  struct {
   Point3 m_point;
   Vector3 m_vector;
   Quaternion4 m_quat;
   void* m_ptr;
  };
  // etc etc...
 };
};

The advantage of this approach is that you never have a dangling reference, and queuing messages becomes as simple as queuing the message itself: messages have a consistent size and built-in copy semantics.  And whereas the pointer approach means defining new payload types (adding to compile times), the one union works in all cases.


However, the unioned data approach has a number of downsides.  A giant union adds considerably to the size of the message struct.  This makes them less efficient to update and move around in the queue itself.  In certain instances, the data itself is not needed.  If a component receives a DISABLE_RENDERING message they likely don't need any more data to know what to do.  In these instances the unioned data can be excessive.  Such a system is also less readable in my opinion.  I found myself constantly going back and forth between code trying to find out what was really encoded where in the union.


With a pointer and knowledge of its size, you can define your own types and never have to wonder if the damage value was really encoded in m_floats[2] or m_floats[3] ever again.  Admittedly this requires jumping through a number of additional hoops, but it is a better solution in my opinion.
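Since the message carries its payload as a void pointer plus a packed size field, setData has to record both.  Here is a minimal sketch of how that packing might look; the 24/8 bit split and the setEntitySendMode name are my assumptions, not the article's actual API (the article only shows the two-argument setData call):

```cpp
#include <cassert>
#include <cstdint>

typedef uint32_t U32;

// Hypothetical sketch: packing the payload size and the entity send mode
// into the single m_dataAndEntitySendMode field. The bit layout (24 bits
// of size, 8 bits of mode) is an assumption for illustration.
struct Message {
    void* m_data;
    U32   m_dataAndEntitySendMode;

    Message() : m_data(nullptr), m_dataAndEntitySendMode(0) {}

    void setData(void* data, U32 size) {
        m_data = data;
        // Keep the mode bits, replace the size bits.
        m_dataAndEntitySendMode =
            (m_dataAndEntitySendMode & 0xFF000000u) | (size & 0x00FFFFFFu);
    }
    void setEntitySendMode(U32 mode) {
        // Keep the size bits, replace the mode bits.
        m_dataAndEntitySendMode =
            (m_dataAndEntitySendMode & 0x00FFFFFFu) | ((mode & 0xFFu) << 24);
    }
    U32 getDataSize() const { return m_dataAndEntitySendMode & 0x00FFFFFFu; }
    U32 getSendMode() const { return m_dataAndEntitySendMode >> 24; }
};
```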

Entity Processing
Entities send the same messages, just with different kinds of iteration
The way the entity processes messages is pretty predictable given the signature.  We iterate through all attached components and call sendMessage on each one, accumulating the number of receivers and comparing it against the minimum expected.

In our messages we specify an EntitySendMode.  This is useful, since it allows us to queue messages without forcing the user to conform to one type of call.  If they want to send the message up or down the tree they merely need to specify the desired mode before queuing the message.
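The iteration and accumulation described above can be sketched as follows; the component and message types are stripped down to the bare minimum here, and the names are illustrative rather than the article's exact implementation:

```cpp
#include <cassert>
#include <vector>

typedef unsigned U32;

const U32 DO_NOT_REQUIRE_RECEIVER = 0;

// Hypothetical component with the non-virtual send / virtual receive split
// described earlier; receiveMessage returns 1 on receipt, 0 otherwise.
// The message is reduced to a bare type id for brevity.
struct Component {
    virtual ~Component() {}
    U32 sendMessage(int msgType) { return receiveMessage(msgType); }
protected:
    virtual U32 receiveMessage(int) { return 0; }
};

struct HealthComponent : Component {
protected:
    U32 receiveMessage(int msgType) override { return msgType == 1 ? 1u : 0u; }
};

// Sketch of jlEntity::sendMessageToComponents: iterate attached components,
// accumulate how many received the message, and check the result against
// the minimum number of required receivers.
struct Entity {
    std::vector<Component*> m_components;

    U32 sendMessageToComponents(int msgType,
                                U32 minReceivers = DO_NOT_REQUIRE_RECEIVER) {
        U32 received = 0;
        for (Component* c : m_components)
            received += c->sendMessage(msgType);
        assert(received >= minReceivers && "fewer receivers than required");
        return received;
    }
};
```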

Queuing Messages
Don't Overcomplicate Things, It Is Just A Heap
So far things have been kept simple.  All we've really done is take a simple struct with a void pointer to some data, passed it to a consistent interface, cast it back out, and processed it appropriately.  All of this still happens when messages are queued, but we need to do considerably more work behind the scenes to provide a safe and efficient priority queue of messages.

The easy part is providing the priority queue.  You can simply use an array heap with the messages sorted by their remaining time, and keep popping messages off until you reach one whose delay has not yet expired; at that point nothing left in the heap is due for dispatch.  Any simple resource on binary heaps, keyed on the time until dispatch, should be sufficient.
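That pop loop might look like the sketch below, using std::priority_queue as the array heap.  Note one simplification: the article stores a countdown (m_timeUntilDispatch) that is decremented, while this sketch keys on an absolute dispatch time, which is an equivalent formulation; all names here are illustrative:

```cpp
#include <queue>
#include <vector>

typedef float F32;

// Minimal queued-message record, keyed on an absolute dispatch time.
struct QueuedMsg {
    int m_type;
    F32 m_dispatchTime;
};

// Comparator for a min-heap: the earliest dispatch time sits on top.
struct ByTime {
    bool operator()(const QueuedMsg& a, const QueuedMsg& b) const {
        return a.m_dispatchTime > b.m_dispatchTime;
    }
};

// Pop every message whose time has come; stop at the first one still
// pending, since the heap guarantees everything below it is later still.
template <typename Dispatch>
void dispatchDue(std::priority_queue<QueuedMsg, std::vector<QueuedMsg>, ByTime>& q,
                 F32 now, Dispatch dispatch) {
    while (!q.empty() && q.top().m_dispatchTime <= now) {
        dispatch(q.top());
        q.pop();
    }
}
```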

Except It Is Pool Allocated

In addition to the standard heap, we need to ensure our message data stays safe.  To do this, when pushing a message we make a deep copy of its data.  My implementation achieves this with a pool allocator: the pool allocates space, the message data on the stack is memcpy'd into the newly allocated space, and the message struct copied into the queue has its data pointer readjusted to point at the data in the pool.  Popping the message does the usual heap manipulation and deallocates the message data from the pool.
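The deep copy on push can be sketched like this.  poolAlloc and poolFree are stand-ins for the pool allocator (plain malloc/free here purely for illustration), and the struct and function names are mine, not the article's:

```cpp
#include <cstdlib>
#include <cstring>

// Stripped-down message: just the payload pointer and its size.
struct Msg {
    void*    m_data;
    unsigned m_dataSize;
};

// Stand-ins for the pool allocator described in the article; a real
// implementation would allocate fixed-size blocks from a pool instead.
static void* poolAlloc(size_t n) { return std::malloc(n); }
static void  poolFree(void* p)   { std::free(p); }

// Deep-copy the caller's stack-resident payload into pool-owned storage,
// then repoint the queued copy at it so the payload stays valid after
// the caller's stack frame is gone.
Msg enqueueCopy(const Msg& src) {
    Msg queued = src;
    queued.m_data = poolAlloc(src.m_dataSize);
    std::memcpy(queued.m_data, src.m_data, src.m_dataSize);
    return queued;
}
```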

Writing A Pool Allocator
Pool allocation might scare some people, but it is one of the easiest custom memory allocators to roll yourself.  The idea is simple: we have blocks of the same size kept in a buffer.  On init, every block goes into a "free list", which is just a linked list of the places in the pool that are currently unused.  An allocation takes the address (or offset) stored at the head of the free list and removes it from the list; a deallocation takes the address/offset of the freed memory and adds it back to the list.  This makes allocation and deallocation as fast as a linked list add/remove.  Of course, if we malloc these free list nodes it defeats the whole point of a custom memory allocator, so we keep the nodes in the buffer itself.  This is not dangerous, since the free blocks are unused anyway.
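A minimal version of such a pool, with the free list's "next" pointers stored inside the unused blocks themselves, might look like this sketch (my own illustration, not the article's implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Fixed-size-block pool: every block lives in one buffer, and free blocks
// double as linked-list nodes holding the pointer to the next free block.
class Pool {
public:
    Pool(size_t blockSize, size_t blockCount)
        : m_buffer(blockSize * blockCount), m_free(nullptr) {
        assert(blockSize >= sizeof(void*));  // block must fit a next pointer
        // On init, thread every block onto the free list.
        for (size_t i = 0; i < blockCount; ++i) {
            void* block = &m_buffer[i * blockSize];
            *static_cast<void**>(block) = m_free;
            m_free = block;
        }
    }
    void* alloc() {            // O(1): pop the free-list head
        if (!m_free) return nullptr;  // pool exhausted
        void* block = m_free;
        m_free = *static_cast<void**>(block);
        return block;
    }
    void free(void* block) {   // O(1): push back onto the free list
        *static_cast<void**>(block) = m_free;
        m_free = block;
    }
private:
    std::vector<unsigned char> m_buffer;
    void* m_free;  // head of the free list
};
```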

Listening For Queued Messages By Type
With message queuing solved, we can fairly easily provide an interface that lets us register components/entities to listen for messages of a given type.  In addition to the existing queue, we store a map that lets us efficiently look up all the listeners for a given message type.  When popping a message off the queue, we simply look up the linked list of listeners by the message's type, walk that list, and send the message to every item in it.
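The type-to-listeners map can be sketched like so.  For brevity this uses std::function where the article stores the receiver's type in pool-allocated nodes and casts manually, so treat the shape, not the storage, as the point:

```cpp
#include <functional>
#include <unordered_map>
#include <vector>

// Sketch of the by-type listener registry: a map from message type id to
// the listeners registered for that type. Dispatching a popped message
// means looking up its type and walking the listener list.
struct Dispatcher {
    std::unordered_map<int, std::vector<std::function<void(const void*)>>> m_listeners;

    void listen(int type, std::function<void(const void*)> fn) {
        m_listeners[type].push_back(std::move(fn));
    }
    void dispatch(int type, const void* data) {
        auto it = m_listeners.find(type);
        if (it == m_listeners.end()) return;  // nobody cares about this type
        for (auto& fn : it->second) fn(data);
    }
};
```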

Since I wanted to avoid virtual functions, I stored the type I needed to cast to in the nodes.  And yes, the nodes in the map should be pool allocated too.  

Other Implementations/Alternatives
- If you want to make things easier on yourself, you can use the unioned message data struct.  This allows an implementation that doesn't have to worry about any custom memory allocation.
- If you are less concerned about performance and more concerned about flexibility, you can have both components and entities derive from a "message receiver" class with a virtual sendMessage function.  That way you don't need to store the receiver type yourself and worry about your type casts.
- You can forego the usage of an interface entirely and just use delegates/function pointers.
- You don't really need the message data size and time delay outside of the message queue itself; they could be passed directly when queuing and stored separately.  An SoA layout would greatly improve data cache utilization when updating the remaining time on each message.
