
Sunday, April 26, 2020

Doom: Eternal Recap

I worked on Doom: Eternal from January 2019 until it shipped in March 2020, and I am still actively supporting it post-launch today.  The commercial and critical reception for Doom: Eternal has been very positive despite a pandemic forcing the closure of many retail outlets (yes, people still buy games that way).  Some reviews have called it "one of the most intense shooters ever" and others have even given the game perfect 10 out of 10 scores!

My personal experience differed from the one I had with Red Dead Redemption 2 for numerous reasons.  The team was much smaller, the genre was a first person shooter rather than an open world game, and rather than being on the project from the very start I joined relatively late in the development cycle.  This meant I had to carry existing features across the finish line more often than architect them from the ground up, and I had to become acquainted with these systems very quickly.  The project was originally scheduled to ship in November of the same year, so this "whatever needs fixing, fix it" attitude was more or less my experience working on the game.

Rather than linking to other videos I captured and edited some footage myself this time.  The compilation below does a good job of quickly demonstrating features I contributed to on the project.

You will likely want to watch on YouTube and manually set the quality to 720p to get an acceptable resolution compared to the embedded video.  I captured the footage on an old 1680x1050 display and apologize for the inconvenience.

 




Weapons
I spent a lot of time working on the player’s Ballista weapon.  This included work on all 3 fire modes, the AI gore wounds used when hit by it, damage related issues, and the impact VFX.  This resulted in adding a few collision filtering features to our physics system, special shapetests for the primary fire, adding support for ‘sliced’ full body gore wounds, and splitting up the VFX into multiple instances for different positions and normals.

I did a lot of work on various weapons in the game aside from the Ballista.  This ranged from various damage issues and hitscan logic to queueing up the equipment launcher, smoke bombs for the playable mancubus, and many issues related to the upgrades for each weapon.



Gore
I fixed numerous features for gore systems and added a pattern for the destroyer blade gore cut behavior that could be re-used to replace demons with custom rigs, split ragdoll constraints, and shoot the pieces off with a random range of speed and direction.  I also worked on other gore behaviors like the blood punch, syncing wounds for multiplayer, and the deferred blood pools left behind by a corpse.


Multiplayer
My previous industry experience at Rockstar Games always used a peer to peer networking system which would migrate ownership of entities to different player machines during active gameplay.  Doom: Eternal was a fun learning and growth experience for me because it instead used a server authoritative method.  This meant I needed to learn new methods to deal with things that needed to be client authoritative, such as hitscan weapon hits, and how to validate them on the server.  Animation issues could also be much more difficult.  Previously, during something like a ledge grab with a peer to peer setup, we could simply disable all network blending, but with a server authoritative setup things got much murkier and more complicated than that.

I supported many gameplay features specific to our PVP gametype known as Battlemode.  This included demon character work (e.g. the revenant jetpack or the marauder shotgun), summon wheel abilities, 3rd person ledge grab animations, 1st person spectating, and a considerable amount of work adding features to our visual scripting system for the Battlemode tutorials.


Misc
  • Soft body repulsors that prevented characters from visually clipping into each other
  • Contributions to a heat map performance tool to find bottlenecks in a level
  • Contributions to player customization from getting skins to appear in cutscenes, animated deaths, and on remotely controlled clones in multiplayer
  • Gore nest secret encounter systems support
  • Boss voice over audio for the doom hunter boss fight
  • Numerous bug fixes for VFX, damage, stats and milestones not working as intended
  • Rune prototyping
  • Various UI and menu support


Sunday, August 11, 2019

QuakeCon 2019

I went to my first QuakeCon in July and had a great time meeting fans and being a part of the largest LAN party in North America.  The keynote for Doom: Eternal can be seen below.


Wednesday, June 19, 2019

Doom Eternal E3 Content

We released 2 trailers and 1 extended piece of gameplay footage at the Bethesda E3 Conference.  Looking forward to shipping this game in November!


Single Player Trailer




Battlemode Trailer






Full E3 Presentation

If you want to see just the gameplay you can skip to here.

Sunday, June 2, 2019

Console Command Variables & Functions Using Tagged Variants

An in-game console and the ability to modify parts of your application from a file of command line arguments can be very useful for development.  I've written a fairly simple C++ solution that allows you to set any static variable through this interface and invoke functions from a file.  An actual in-game console would be the next step after this and admittedly quite a bit of work; you can go pretty far with this stuff with features like autocomplete, resetting back to default values, etc.  To use it yourself in Visual Studio, right click your project, go to Properties > Debugging, and insert the path to the file in the Command Arguments.
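The repository has the full implementation; as a rough sketch of the tagged-variant idea (the names here are mine for illustration, not the repository's actual API), you register pointers to static variables under a name alongside a type tag, then parse "name=value" lines from the arguments file:

```cpp
// Hypothetical sketch: a console-variable registry using a tagged variant.
// Each entry stores a type tag plus an untyped pointer to a static variable.
#include <cstdlib>
#include <map>
#include <string>

class CVarRegistry {
public:
    enum Type { TYPE_INT, TYPE_FLOAT, TYPE_BOOL };

    // Register a pointer to a static/global variable under a name.
    void registerVar(const std::string& name, int* v)   { set(name, TYPE_INT, v); }
    void registerVar(const std::string& name, float* v) { set(name, TYPE_FLOAT, v); }
    void registerVar(const std::string& name, bool* v)  { set(name, TYPE_BOOL, v); }

    // Parse a single "name=value" command and assign to the variable it names.
    bool execute(const std::string& cmd) {
        size_t eq = cmd.find('=');
        if (eq == std::string::npos) return false;
        std::map<std::string, TaggedVar>::iterator it = m_vars.find(cmd.substr(0, eq));
        if (it == m_vars.end()) return false;
        const std::string value = cmd.substr(eq + 1);
        switch (it->second.type) {  // the tag tells us how to interpret the pointer
            case TYPE_INT:   *static_cast<int*>(it->second.ptr)   = std::atoi(value.c_str()); break;
            case TYPE_FLOAT: *static_cast<float*>(it->second.ptr) = static_cast<float>(std::atof(value.c_str())); break;
            case TYPE_BOOL:  *static_cast<bool*>(it->second.ptr)  = (value == "1" || value == "true"); break;
        }
        return true;
    }

private:
    struct TaggedVar { Type type; void* ptr; };
    void set(const std::string& name, Type t, void* p) {
        TaggedVar tv; tv.type = t; tv.ptr = p;
        m_vars[name] = tv;
    }
    std::map<std::string, TaggedVar> m_vars;
};
```

Driving this from a file is then just reading it line by line and calling execute on each line.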

It's got a GitHub repository you can find here:




Saturday, January 26, 2019

Joining iD Software as a Gameplay Programmer

After over 6 years of working at Rockstar Games I'm excited to announce that I have accepted an offer with iD Software. We've moved to the Dallas area and I will begin working on Doom Eternal on Monday!







Sunday, November 25, 2018

Red Dead Redemption 2 Recap

I worked on Red Dead Redemption 2 from milestone 1 in June of 2012.  It has been 6.5 years, and it's surreal to see the game finally shipped and well received by critics.  Here are some of the features I made major contributions to during the course of the project.

The embedded videos have been resized to fit the Blogger post so they may not be as high quality as the actual video on YouTube itself.  Compared to the full YouTube videos these embeds have start and end times appropriate for quickly explaining the features.

Human Body Looting

Red Dead Redemption 2 features realistic and seamless dead body looting that lifts up the corpse slightly and picks at appropriate spots (waist, chest, etc) to get items.  We had lots of challenges to deal with related to ragdolls, slopes, and custom animation implementations to get such a feature to work.  Support for fallback swipes uses a generic asynchronous helper which does probes and route checks.





Animal Skinning

One of the most well received features I worked on is the animal skinning.  Compared to the previous entry in the series animal skinning in this game is seamless, gritty, and realistic.  Very small animals (e.g. rats) can be put straight into your satchel.  Slightly larger animals like rabbits need to be skinned with a pelt you put away like so.  Doing this seamlessly presented many challenges dealing with dynamic terrain, obstacles, target poses, slopes, and foot sliding.



Medium sized animals like deer have a carriable pelt after the skinning that you can put onto your horse.  We also had to support this in first person, which presented some challenges with cameras and forced us to deliver high fidelity for a feature that lets the player get this close to the action.




For the very large animals you can only get a large pelt, and while we do allow fade transitions to speed it along, the entries into it are still seamless.



Hogtie

A major tool for bounty hunting, missions, and ambient fun is the ability to hogtie (tie up) a character to carry them around without killing them.  Consistent with other improvements over the first Red Dead Redemption, the hogtie, cut free, pickup, drop, stow, and unstow actions are seamless.  Compared to the other carriables like dead humans and objects, hogtied characters are alive and thus support directional idles for various situations like being in shallow water or gunshots whizzing by.





Lasso

I spent a good deal of time getting the lasso working in a new codebase.  This included the basic blend trees, casting out, and reaction behaviors.  We have a fully physically simulated rope for the lasso in Redemption 2, so I'm really happy with the quality of it.




Carrying

You can pickup, drop, throw, stow, and unstow a lot of different things in Red Dead Redemption 2.  This presented complicated problems related to IK, reservations, mover fixups on sloped surfaces, and syncing it in multiplayer.  The final product supports pickup actions with humans (dead, knocked out, hogtied, dead and hogtied), animals (dead), and objects (moneybags, pelts, etc).  Fallbacks are supported for messy situations such as the corpse being under a wagon.

Another challenge we had to deal with was seamless transitions to and from cutscenes to carrying gameplay (see below).



Plant Picking

As part of our collection and looting mechanics we also have plant picking, which features separate prompts for eating and storing in the player's satchel.  Similar to the other interactions, we don't have camera cuts like in the first Red Dead Redemption and it's all seamless.  The player can walk away and continue this action with an upper-body performance when appropriate.




If you have time you can watch a video of all the herbs you can pick below.





Vehicle Locks

A minor but fun ambient feature in Red Dead Redemption 2 involves saving prisoners from the back of wagons and looting coaches with locks on the back.  To avoid making noise you can sometimes use a lockbreaker item to do it silently.  I worked on the basic vehicle extension for this which would spawn and attach the right lock object models and correctly sync this to the vehicle door state when broken.




Scenario Script Interface (Ambient AI)

Our scenario system in Red Dead Redemption 2 is used to give believable schedules and behaviors to NPCs in our world.  Typically we have points that are placed by designers and saved out into a binary file.  The script interface I added allowed scripters to add, remove, and modify scenario points on the fly which was very handy for missions and ambient scripts.

Volume Editor

We have an in-house tool that allows designers to specify volumes in their scripts to be used for queries and triggers.  I provided some support for this editor early in development to make it easier to use and add some features they needed.

Horse Saddlebag Searching

A relatively minor but still fun feature allows the player to loot the horse saddlebags of horses not owned by the player or the gang.



Dead Horse Saddlebag Searching

Searching a dead horse saddlebag is quite a bit different than our other dead interactions because we only animate the saddlebag itself.  This presented a very tricky penetration and animation problem which required a convex hull, swept circle, a static frame pose, and a lot of IK to alleviate.




Damage Tracking

Part of the challenge with hunting in Red Dead Redemption 2 is not just finding, killing, and skinning animals but retrieving high quality pelts and corpses.  This means the way the animal is killed matters in terms of where they are shot, the weapon used, and the ammo used.  From a programming perspective this meant we had to classify damage for an animal and then correlate that damage to the item received based on that damage quality.




Other

Minor contributions to early dueling prototypes, weapon prototypes, loot tables, blocking areas, ransacking, etc.

Saturday, September 22, 2018

Almost done

I've been working on this project for over 6 years and I'm happy to say it's set to ship in less than 40 days!




Saturday, April 13, 2013

Using Templates to Prevent/Catch Mistakes

I made a forum post on gamedev.net about templates and how they can be used to catch and prevent mistakes programmers inevitably make all the time. I felt it was worth linking to it here.

Sunday, September 2, 2012

Gameplay Architecture Part 2: Message Passing

Rationale
Games are very event driven applications which will sometimes (inevitably) end up resembling a web of complex states.  In a project of any considerable size it simply isn't feasible to understand every component of the codebase.  This is especially true at higher levels of gameplay code built on top of numerous parallel subsystems written by many different people over an extended period of time.

Messages which either inform pieces of code what has happened (e.g. DOOR_OPENED) or what to do (e.g. DISABLE_RENDERING) are part of the solution towards an architecturally sound and flexible codebase.  I had a few goals when implementing this system:

1) Integrates into the component/entity model system I wrote previously
2) Keeps class hierarchies flat, or avoids them altogether
3) Can queue messages efficiently and dispatch them later (e.g. send this message 10 seconds from now)
4) Components/Entities can listen in on these queues by their message type

Based on previous experience and these goals, this is what I came up with.

Simple Example
Message passing might be relatively simple if we could do everything on the stack.  As we will explore later, it is highly desirable to register components to listen in on messages by type, store them in a priority queue, and dispatch them later.  This adds a considerable amount of difficulty to the problem, which will be addressed here in time.  First, let's explore the simplest case of a message sent on the stack to a known component.  Consistent with the component-entity model explained previously, here are the main interfaces for components/entities to send/receive messages.

const U32 DID_NOT_RECEIVE_MESSAGE = 0;
const U32 RECEIVED_MESSAGE = 1;

class jlMessage {
public:
      jlMessageType m_type;  // hashed message type
      void * m_data;         // untyped message data pointer
      U32 m_dataAndEntitySendMode;  // sizeof(*m_data) and flags
      F32 m_timeUntilDispatch; // when <= zero, dispatched from the queue
};

class jlComponent {
public:
    // Defers call to virtual receiveMessage function, non virtual base behavior for its benefits 
    // (e.g. tracing with certain debug builds that prints every message that is being sent)
    U32 sendMessage(jlMessage& msg);
protected:
     // Returns 1 if the message is received, 0 if not
     virtual U32 receiveMessage(jlMessage& msg);
}; 

class jlEntity {
public:
    // Iterates over attached components, calls sendMessage on them
    // Uses message entity send mode to determine which is called below
    U32 sendMessage(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
    // attached components
    U32 sendMessageToComponents(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
    // attached components + all children
    U32 sendMessageToChildren(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
    // attached components + all ancestors
    U32 sendMessageToAncestors(jlMessage& message, U32 minReceivers = DO_NOT_REQUIRE_RECEIVER);
};



The non-virtual base behavior has numerous benefits for debugging
Let's take a look at a practical example: inside some script that detected a collision, we might wish to deal damage to an entity.


// inside some script which deals damage to an entity
void jlPhysicalCharacter::processCollisionWithFirePhantom() {
    jlDamageData dmgData;
    dmgData.m_amount = 5.0f;
    dmgData.m_damageType = FIRE;
    jlMessage damageMessage(JL_DAMAGE_MSG);
    damageMessage.setData(&dmgData, sizeof(dmgData));
    getHealthComponent()->sendMessage(damageMessage);
}

// inside the health component receiveMessage() function
U32 jlHealthComponent::receiveMessage(jlMessage& msg) {
    if (msg.getType() == JL_DAMAGE_MSG) {
        jlDamageData *data = static_cast<jlDamageData *>(msg.getData());
        JL_ASSERT(data);
        m_health -= data->m_amount;
        return RECEIVED_MESSAGE;
    } else {
        return DID_NOT_RECEIVE_MESSAGE;
    }
}

Message Data
The message is structured like so for a few reasons.  A unique type id makes message handling convenient in a switch statement, and lets us know what to cast to and how to interpret the data.  In my implementation I am using an untyped void pointer, so this is not something we can really ignore.  One alternative is to use a giant union of message data for all of your needs, like so:

struct AlternativeMessageData {
 union {
  struct {
   Matrix4 m_matrix;
  };
  struct {
   char m_string[64];
  };
  struct {
   Point3 m_point;
   Vector3 m_vector;
   Quaternion4 m_quat;
   void* m_ptr;
  };
  // etc etc...
 };
};

The advantage of this approach is that you never have a dangling reference and queuing messages becomes as simple as queuing the message itself.  They have a consistent size and built in copy semantics.  Defining new types will add to compile times while the union works in all cases.


However, the unioned data approach has a number of downsides.  A giant union adds considerably to the size of the message struct.  This makes them less efficient to update and move around in the queue itself.  In certain instances, the data itself is not needed.  If a component receives a DISABLE_RENDERING message they likely don't need any more data to know what to do.  In these instances the unioned data can be excessive.  Such a system is also less readable in my opinion.  I found myself constantly going back and forth between code trying to find out what was really encoded where in the union.


With a pointer and knowledge of its size, you can define your own types and never have to wonder if the damage value was really encoded in m_floats[2] or m_floats[3] ever again.  Admittedly this requires jumping through a number of additional hoops, but it is a better solution in my opinion.

Entity Processing
Entities send the same messages, just with different kinds of iteration
The way the entity processes messages is pretty predictable given the signature.  We iterate through all components attached and call sendMessage to each one.  We accumulate the amount received and compare it to the minimum expected receivers.

In our messages we specify an EntitySendMode.  This is useful, since it allows us to queue messages without forcing the user to conform to one type of call.  If they want to send the message up or down the tree they merely need to specify the desired mode before queuing the message.

Queuing Messages
Don't Overcomplicate Things, It Is Just A Heap
So far things have been kept simple.  All we've really done is take a simple struct with a void pointer to some data, passed it to a consistent interface, casted it out and processed it appropriately.  All of this is still done when messages are queued, but we need to do considerably more behind the scenes to provide a safe and efficient priority queue of messages.

The easy part is providing the priority queue.  You can simply use an array heap and sort the messages by their remaining time.  You can keep popping messages off the heap until you find one that hasn't run out of time; at that point, the following messages have not exceeded their delay yet and should not be dispatched.  Any simple resource on heaps where the data is sorted by the time until dispatch should be sufficient.
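The pop-until-not-ready loop can be sketched with a standard binary heap; this is an illustrative stand-in (std::priority_queue instead of the hand-rolled array heap, and a bare type id instead of the full jlMessage) rather than the actual implementation:

```cpp
// Sketch of a delayed-message queue on a binary heap, ordered so the message
// with the earliest dispatch time is always on top.
#include <queue>
#include <vector>

struct QueuedMessage {
    unsigned int type;   // hashed message type
    float dispatchTime;  // absolute time at which to dispatch
};

struct LaterDispatch {
    // std::priority_queue is a max-heap, so invert the comparison to get
    // the soonest message on top.
    bool operator()(const QueuedMessage& a, const QueuedMessage& b) const {
        return a.dispatchTime > b.dispatchTime;
    }
};

class MessageQueue {
public:
    void push(unsigned int type, float now, float delay) {
        QueuedMessage m = { type, now + delay };
        m_heap.push(m);
    }
    // Pop every message whose dispatch time has passed, appending its type to
    // outTypes; returns how many messages were dispatched this call.
    int dispatchUpTo(float now, std::vector<unsigned int>& outTypes) {
        int count = 0;
        while (!m_heap.empty() && m_heap.top().dispatchTime <= now) {
            outTypes.push_back(m_heap.top().type);
            m_heap.pop();
            ++count;
        }
        return count;  // remaining messages haven't exceeded their delay yet
    }
private:
    std::priority_queue<QueuedMessage, std::vector<QueuedMessage>, LaterDispatch> m_heap;
};
```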

Except It Is Pool Allocated

In addition to your standard heap, we need to ensure our message data is safe.  To do this, when pushing our message we need to make a deep copy of our message data.  My implementation achieves this with a pool allocator.  The pool allocates space, the message data on the stack is memcopied to the newly allocated space, and the message struct which is copied into the queue has its data pointer readjusted to point to the data in the pool.  Popping the message does the usual heap manipulation and deallocates the message data from the pool.

Writing A Pool Allocator
Pool allocation might scare some people but it is one of the easiest custom memory allocators to roll yourself.    The idea is simple.  We have blocks of the same size kept in a buffer.  On init, every block goes into a "free list".  This is just a linked list of places in the pool that are currently unused.  When making an allocation we take the address (or offset) stored at the head of the "free list" and remove it from the free list.  When making a deallocation, we take the memory address/offset of the freed memory and add it to the free list.  This makes allocation and deallocation as fast as a linked list add/remove.  Of course, if we malloc these free list nodes it defeats the whole point of a custom memory allocator.  Therefore, we need to keep these nodes in the buffer itself.  This is not dangerous, since the free blocks are unused anyways. 
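The steps above (free list threaded through the unused blocks themselves, O(1) push/pop) can be sketched like this; it is a minimal illustrative version, not the message system's actual allocator, and it skips alignment concerns a production pool would handle:

```cpp
// Minimal fixed-size pool allocator with an intrusive free list: each free
// block's first bytes store a pointer to the next free block, so the free
// list costs no extra memory.
#include <cassert>
#include <cstddef>
#include <vector>

class PoolAllocator {
public:
    PoolAllocator(size_t blockSize, size_t blockCount)
        : m_buffer(blockSize * blockCount) {
        assert(blockSize >= sizeof(void*)); // a free-list node must fit in a block
        // On init, every block goes into the free list.
        m_freeHead = NULL;
        for (size_t i = 0; i < blockCount; ++i) {
            void* block = &m_buffer[i * blockSize];
            *static_cast<void**>(block) = m_freeHead;
            m_freeHead = block;
        }
    }
    void* allocate() {
        if (!m_freeHead) return NULL;             // pool exhausted
        void* block = m_freeHead;
        m_freeHead = *static_cast<void**>(block); // pop the free-list head
        return block;
    }
    void deallocate(void* block) {
        *static_cast<void**>(block) = m_freeHead; // push onto the free list
        m_freeHead = block;
    }
private:
    std::vector<unsigned char> m_buffer; // all blocks live in this one buffer
    void* m_freeHead;                    // head of the intrusive free list
};
```

Queuing a message then just allocates a block, memcopies the stack data into it, and points the queued message's data pointer at the block; popping deallocates it.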

Listening For Queued Messages By Type
With the message queuing solved, we can somewhat trivially provide an interface that lets us register components/entities to listen for messages of a given type.  In addition to our existing queue, we can store a map that lets us efficiently look up all the listeners for a given message type.  Then when popping a message off the queue, we simply look up the linked list of listeners by its type, walk said list, and send the message to every item in it.

Since I wanted to avoid virtual functions, I stored the type I needed to cast to in the nodes.  And yes, the nodes in the map should be pool allocated too.  

Other Implementations/Alternatives
- If you want to make things easier on yourself, you can use the unioned message data struct.  This will allow you to write an implementation that doesn't have to worry about any custom memory allocation.
- If you are less concerned about performance and more concerned about flexibility, you can have both components and entities derive from a "message receiver" class that has a virtual sendMessage function.  That way you don't need to store the receiver type yourself and worry about your type casts.
- You can forego the usage of an interface entirely and just use delegates/function pointers.
- You don't really need the message data size and time delay outside of the message queue itself.  These could be passed directly when queuing is needed and stored separately.  SoA would greatly improve data cache utilization when updating the remaining time on each message.

Saturday, August 18, 2012

Gameplay Architecture Part 1: Component Entity Models in C++


Why Component Entity Models?
Rationale
In the interest of re-usability, separation of concerns, and data driven needs, developers are starting to move away from maintenance prone class hierarchies and towards Component Entity Models.  Conceptually, such models are simple to understand.  Functionality is split into components, and game entities are defined by a composition of these components.  So at its core, an entity is just a container of its components, or an identifier that allows its components to relate and communicate with each other.

There are a number of different ways to go about implementing such a model, and a number of things about C++ can make certain aspects of the implementation rather tricky.  Be forewarned that the implementation I'm presenting does use templates and does stick rather closely to the model used in Unity.  While a bit tricky, the implementation here is not overly complicated, and has considerable benefits once it is set up.  Users familiar with the script interface in Unity can speak to how intuitive it is to use.

Establishing Relationships Between Components
In more radical models, the idea of an entity is removed completely in favor of pure aggregation.  Although not without its benefits, in my opinion this makes things unnecessarily complicated.  In addition, forcing a component to belong to one and only one entity solves a lot of problems, so in the interest of keeping things simple that is what I will be presenting here.  A one-to-many relationship between entities and components exists, and each component keeps a pointer to its entity.  This makes sending messages and modifying components significantly easier for a new component.  For example, a camera tracking behavior will know to search for and operate on camera components attached to the same entity.  Likewise, a health component will know which components to send a message to on something like a TAKE_DAMAGE event.

Accessing Components
At its core an Entity is a container of components.  We don't want any game logic to seep into the entities themselves, but instead want functionality to stay decoupled and separated into individual components.  Therefore, we need some way to access components.  The implementation I am showing here presents a few options, including:

- By Type (using templates for type safety)
- By A Unique Instance Id (given by a static instance counter)
- By Name (using a hashed string id for faster comparisons, but still provides a system where designers can name parts of their level)

Why Names/Name Id's?
It is very handy for designers to identify things in the world by name, save such things to a file, and have them work the same in each build.  Hashed string id's minimize the cost of comparisons, and can still be accessed in debug builds that make use of a giant string table.  This is a godsend for debugging compared to some random enum type id that could easily be reordered when merged!
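A hashed name id like this can be built from any well-distributed string hash; here is a minimal sketch using FNV-1a (the post's implementation uses CRC32, so the specific hash here is just illustrative):

```cpp
// Sketch: hash a string into a 32-bit id for cheap comparisons.
// FNV-1a shown; any well-distributed hash (e.g. CRC32) works the same way.
typedef unsigned int U32;

U32 hashStringId(const char* str) {
    U32 hash = 2166136261u;            // FNV-1a offset basis
    while (*str) {
        hash ^= static_cast<U32>(*str++);
        hash *= 16777619u;             // FNV-1a prime
    }
    return hash;
}
```

A debug build can additionally record every string passed through this function in a table keyed by its hash, so a raw id seen in the debugger can be turned back into a readable name.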

Why Instance Id's?
A GUID can be quite handy if you need a unique handle and your scripting language doesn't have pointers.  In certain cases, name ids may not cut it since designers will keep giving objects the same name.

Providing Your Own RTTI
Identifying objects by their runtime type is a bit trickier.  We could punt and just use dynamic_cast while iterating and rely on a non-NULL result to indicate a valid cast.  This will work and is more reliable than a typeid that won't respect class hierarchies.  However, enabling RTTI is generally considered undesirable because it must be enabled everywhere in your program.

Observe: given a type id we have rolled ourselves and a virtual function like the one below, we can determine if an object is derived from a given type:
virtual bool isDerivedFromType(U32 typeId) {
    return (TYPE_ID == typeId) ? true : PARENT::isDerivedFromType(typeId);
}


This will continue until it finds a matching type id or reaches the root class in the hierarchy.  In the root of the hierarchy our logic would have to be simplified to compare its object id and not go to any parent class (since it doesn't exist).

virtual bool isDerivedFromType(U32 typeId) {
    return (TYPE_ID == typeId); // no parent
}


Hashed String Id's > Error Prone Enums
But how do we provide a unique type id?  We could roll an enum of all possible types, but enums do not scale well at all.  A better implementation, that would put less of a burden on its users, would use the hashed string id of the class name.  Since we need a unique isDerivedFrom() function anyways, this is a perfectly justifiable use of the preprocessor.  A good hash function like MD5 or CRC32 should let you use this simple pattern without any conflicts.

// goes in the public declaration of every component


#define DECLARE_COMPONENT(COMPONENT_CLASS, PARENT_COMPONENT_CLASS) \
static const U32 COMPONENT_TYPE_ID; \
static const char * COMPONENT_TYPE_NAME; \
virtual bool isDerivedFromComponent(U32 componentTypeId) const { \
    return (COMPONENT_TYPE_ID == componentTypeId) ? true : PARENT_COMPONENT_CLASS::isDerivedFromComponent(componentTypeId); \
}


// Goes in the cpp of every component
#define DEFINE_COMPONENT(COMPONENT_CLASS) \
const U32 COMPONENT_CLASS::COMPONENT_TYPE_ID = crc32(#COMPONENT_CLASS); \
const char * COMPONENT_CLASS::COMPONENT_TYPE_NAME = #COMPONENT_CLASS
So at this point, a minimalist implementation could have its logic work like so when searching for the first component of a given type:
 
template <typename COMPONENT_TYPE> COMPONENT_TYPE * getComponent() {
    for (jlComponent *component = getComponents(); component != NULL; component = component->getNext()) {
        if (component->isDerivedFromComponent(COMPONENT_TYPE::GetComponentTypeId())) {
            return static_cast<COMPONENT_TYPE *>(component);
        }
    }
    return NULL; // no component of this type attached
}

Storing Components
Data Structures
Part of the entity interface
There are a number of ways we could potentially store components.  The first solution you might go to would be a dynamic array (e.g. std::vector) of component pointers.  After some thought, I have created a very specialized structure that uses handles and takes advantage of template specialization.  To keep things simple I will instead present a system that just uses an intrusive linked list.  Since components should only belong to one list anyways this isn't a terrible choice, but in general if you want something that is concurrent and cache friendly you will probably want to use something else.

class Component {
    // the RTTI type id and name are stored statically
    Entity *m_entity;         // component container entity
    Component *m_next;        // intrusive next pointer
    Component *m_prev;        // intrusive prev pointer
    U32 m_nameId;             // hashed name id
    U32 m_instanceId;         // unique instance id
    ComponentEntityFlags m_componentEntityFlags;  // flags like enabled/disabled, etc etc
};

class Entity {
    Transform m_transform;     // head of the component linked list
    Component *m_tail;         // tail of intrusive component linked list
    U32 m_layers;              // bitwise layers
    U32 m_nameId;              // hashed name id
    U32 m_instanceId;          // unique object instance id
    ComponentEntityFlags m_componentEntityFlags; // entity flags
};
Therefore, we can get a component by type using the following code:
template <typename COMPONENT>
const COMPONENT * jlEntity::getComponent() const {
    const COMPONENT *ptr = NULL;
    for (const jlComponent * cur = getComponents(); cur != NULL; cur = cur->getNext()) {
        if (cur->isDerivedFromComponent(COMPONENT::GetComponentTypeId())) {
            ptr = static_cast<const COMPONENT *>(cur);
            break;
        }
    }
    return ptr;
}




Getting multiple components by type also becomes trivial:

template <typename COMPONENT>
U32 jlEntity::getComponents(const COMPONENT *componentList[], U32 maxToAdd) const {
    U32 n = 0;
    for (const jlComponent *cur = getComponents(); cur && n < maxToAdd; cur = cur->getNext()) {
       if ( cur->isDerivedFromComponent(COMPONENT::GetComponentTypeId())) {
           componentList[n++] = static_cast<const COMPONENT *>(cur);
        }
    }    
    return n;
}

It could also be expanded using the bitwise filtering I posted about previously here.  That way you can search for all enabled or disabled components of a given type.

Scenegraph Integration
Operations can be expanded to trees of entities/components
Parent-Child Relationships
One of the great things about Unity is that its parent child relationships carry over to the object model itself.  This lets designers and programmers define objects in ways that are very beneficial and still have ways for high level code to intuitively communicate with related data.  For example, if the wheel of a car needs to inherit rotation and translation, why not take this relationship a bit further?  If we have links to the scene nodes, and they have links back to their entities, we can perform operations on components and entities while traversing down the tree.  This not only means we can search for components in a subtree, but also send messages down them (or up them).  Therefore, if the car gets an important event, it can quickly and easily send it down to the wheels.

A Consistent Message Passing System
If functionality is split into components, then a virtual function for sending and receiving messages is easy to implement.  If we have scene graph integration we can not only send messages to every component attached to the given entity, but to every component in the entity subtree.  This can be rather slow, but extremely useful when you need it.  It is also very flexible for expansion (e.g. a component isn't doing anything with a message now, but could later without changing any other code).

As I will explore in another post, the nitty gritty details of implementing such a system are actually rather involved, especially if you want to queue messages with different data efficiently.  Hopefully that post will be up within a few weeks.

Performance Considerations/Concerns
Speeding Up Component Retrieval
At first glance the above might look a bit scary.  Sure it uses little memory, but a linear search that isn't cache coherent is scary.  Even with a dynamic array, we are still ultimately dealing with pointers and indirection anyways.  Also, since components do many different things, they will be allocated in many different ways.  This is one of the downsides of component entity models in general, but all is not lost.  Here are some of your options:

Exploiting Coupling
If every entity must have a transform this can serve as the head in your linked list.  If the Transform is really just an id into a procedural interface, you may be able to get away with storing the component itself inside the entity.  

Cache Frequently Used Components
In Unity, frequently used components are stored as members with separate references.  When combined with template specialization this can make operations on a type extremely fast.  For example, if we store a pointer to the transform our getComponent overload could resemble the following:


template <>
FORCE_INLINE const Transform * Entity::getComponent<Transform>() const {
    return m_transform;
} 

However, this comes at the price of extra space in every entity.  In addition, C++ templates do not respect class hierarchies, so if I make a subclass of Transform, I need to make specializations for every operation on that type (addComponent, removeComponent, etc etc).  This is a considerable pain and maintenance overhead.

Minimize Components To A Light Interface
Try to avoid providing Update, Render, and a bunch of other virtual functions for everything unless you really hate I-CACHE efficiency.  Keep things light, make components nothing more than a way to access existing information if that is all you need.  Avoid storing the actual data for components inside the interface if it presents too much of an overhead.  If you have an existing scene graph that uses some crazy struct of arrays scheme, make the Transform interface store nothing more than a handle into those arrays.  Components do not have to present crippling changes to every piece of code.  You can still keep your existing procedural implementations if you want.  Components need only be an interface.

Allocate Entities Into A Pool
Since entities should not have subclasses and all share the same size, they are a perfect candidate for pool allocation.  We cannot make the same assumptions with components, since they will obviously have subclasses of varying sizes.  If the range isn't significant, you can allocate blocks capable of storing the largest component, or use separate pools for components of different sizes.

Questions?