
MissingLink (+ source)

10/05/2012

MissingLink is the result of a 10-member team effort: 8 Norwegian artists and 2 French programmers (at the beginning we were 4 programmers, but 2 of them left the project just before the start). Working as a team was a great experience, but it was not always easy.

Introduction

MissingLink was a really ambitious game and we spent a few weeks at the beginning of the year specifying it precisely and setting our objectives. Some artists wanted to use the UDK game engine and have the programmers write scripts for it. We disagreed on this point: from our point of view as programmers, just writing scripts was not interesting, and we were afraid of not having enough work to do, since we were 4 programmers (2 English and 2 French). That is why we decided to create our own engine from scratch using C++ and DirectX. It also gave us full control over the engine, which we could shape as we wanted. Adrian, our team leader, provided the schedules and we were ready to jump into the project. Clément and I started to work on the engine; one of the two English programmers was supposed to work on the AI as lead programmer. We had no news from them from December onwards and, after several attempts to contact them, we decided to kick them out of the group. As a result, Clément and I ended up with a lot of extra work. So I worked with Clément on several pieces of the engine; we also polished the map a lot and did some work that is not really worth detailing in this report. I have decided to focus on my two main pieces of work: the physics engine and the artificial intelligence.

Physics

Something really important in an engine is how it manages physics. What we learnt last year was how to handle collisions manually and cancel the movement, which is very basic and not realistic at all. There are different kinds of collision-handling algorithms, some more accurate than others but often more expensive (slower). We started with one of them at the very beginning of the project so we could quickly have a prototype to play with. The principle of collision detection is to check whether a 3D model overlaps another one. To do it exactly, the game would have to check every vertex of an entity's 3D model against every vertex of every other entity in the game. This is inconceivable, even for a powerful computer, because it means millions of calculations. Moreover, many of those calculations are useless: only the vertices which represent the outer shape of the entity are relevant; the others (inside the model, for example) are not. Besides being very expensive, it is not efficient. Several techniques are available to optimise this process.

Axis-Aligned Bounding Box (AABB)

The easiest collision-handling algorithm is based on axis-aligned bounding boxes. For each entity, an algorithm goes through every vertex of the model and compares them against each other in order to find the smallest and the biggest ones (basically, the bottom near left co-ordinate and the top far right co-ordinate). With these two vertices, we can create a box which completely encloses the 3D model (figure 1.1).

Figure 1.1: AABB in model space (toymaker)

This process is done once at the beginning of the game; then, during runtime, only a few checks are needed. For two given entities A and B (from toymaker.info), the test, sketched in code after this list, is:

  • If the max x position of A is less than the min x position of B they do not collide
  • If the min x position of A is greater than the max x position of B they do not collide
  • and the same goes for y and z
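
To illustrate both steps, here is a minimal C++ sketch (the structure and function names are only examples, not the actual MissingLink code):

    #define NOMINMAX      // keep the Windows headers from defining min/max macros
    #include <algorithm>
    #include <vector>
    #include <d3dx9.h>    // D3DXVECTOR3

    struct AABB
    {
        D3DXVECTOR3 min;
        D3DXVECTOR3 max;
    };

    // Done once when the model is loaded: scan every vertex to find the two extremes.
    AABB ComputeAABB(const std::vector<D3DXVECTOR3>& vertices)
    {
        AABB box;
        box.min = box.max = vertices[0];
        for (size_t i = 1; i < vertices.size(); ++i)
        {
            box.min.x = std::min(box.min.x, vertices[i].x);
            box.min.y = std::min(box.min.y, vertices[i].y);
            box.min.z = std::min(box.min.z, vertices[i].z);
            box.max.x = std::max(box.max.x, vertices[i].x);
            box.max.y = std::max(box.max.y, vertices[i].y);
            box.max.z = std::max(box.max.z, vertices[i].z);
        }
        return box;
    }

    // Done at runtime: two boxes collide only if they overlap on all three axes.
    bool Collide(const AABB& a, const AABB& b)
    {
        if (a.max.x < b.min.x || a.min.x > b.max.x) return false;
        if (a.max.y < b.min.y || a.min.y > b.max.y) return false;
        if (a.max.z < b.min.z || a.min.z > b.max.z) return false;
        return true;
    }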

This is a cheap process; unfortunately, there are some drawbacks. First, as can be seen in figure 1.1, if we fire a bullet near the head of the robot, it will enter the box and a collision will be detected even though the bullet does not touch the model. It is even worse when we transpose the box into world space, where the axes are not the same. The bounding box could then look like figure 1.2.

Figure 1.2: AABB in world space (toymaker)


Transposing into world space consists of applying the translation, rotation and scaling transformations of the 3D model to the box computed in model space. The process is still cheap, but we lose the little accuracy we had because the box is no longer aligned with the entity. That is why we moved on to oriented bounding boxes.

Oriented Bounding Box (OBB)

In order to keep the box aligned with the model, we just need to transpose its 8 corners into world space. This aligns the box with the model and still allows us to carry out the collision detection cheaply. Obviously, we still have the low accuracy we had with the AABB. Figure 1.3 shows an AABB converted into an OBB.

Figure 1.3: OBB in world space (toymaker)
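
For reference, the corner transformation can be done with D3DX along these lines (a sketch only; the function name is illustrative):

    #include <d3dx9.h>

    // Build the OBB by transforming the 8 corners of the model-space AABB
    // (given by its min and max points) with the entity's world matrix.
    void ComputeOBB(const D3DXVECTOR3& mn, const D3DXVECTOR3& mx,
                    const D3DXMATRIX& world, D3DXVECTOR3 corners[8])
    {
        corners[0] = D3DXVECTOR3(mn.x, mn.y, mn.z);
        corners[1] = D3DXVECTOR3(mx.x, mn.y, mn.z);
        corners[2] = D3DXVECTOR3(mn.x, mx.y, mn.z);
        corners[3] = D3DXVECTOR3(mx.x, mx.y, mn.z);
        corners[4] = D3DXVECTOR3(mn.x, mn.y, mx.z);
        corners[5] = D3DXVECTOR3(mx.x, mn.y, mx.z);
        corners[6] = D3DXVECTOR3(mn.x, mx.y, mx.z);
        corners[7] = D3DXVECTOR3(mx.x, mx.y, mx.z);

        // Translation, rotation and scaling are all contained in the world matrix.
        for (int i = 0; i < 8; ++i)
            D3DXVec3TransformCoord(&corners[i], &corners[i], &world);
    }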

Once a collision is detected, we simply cancel the movement so that the player cannot go through walls, the floor or any other object. Some damage can also be applied. The result was a static level, and not as accurate as we wanted, so we decided to switch to something more professional and impressive. The only physics engines I had heard about were PhysX and Havok. Most of the time, a physics engine is part of a game engine and not delivered on its own (Unity, Unreal Engine, and many more). We opted for PhysX because we had played the video game Trine, which uses PhysX, and the result was exactly what we were looking for.

PhysX

PhysX is a physics engine that was acquired by Ageia in 2004; Ageia itself was then acquired by Nvidia in 2008. It is delivered for free as an SDK with documentation and some samples. It is also cross-platform and can run on Mac OS, Windows, Linux, Wii, PlayStation 3 and Xbox 360. Because it is only a physics engine, it can be included in our own engine without rewriting everything from scratch.

I spent some time creating a small demo and making it work properly. The documentation is not very well written, so I did some research on the Internet to find help. Because it is a fairly young and not widely used technology, it is rare to find a solution on a website and I spent hours fixing trivial bugs. In the end, the result is very enjoyable.
The way this engine works is described in figure 1.4. On each update, the PhysX engine simulates the scene and then the entities can fetch their new position and orientation.

Figure 1.4: PhysX design

It is necessary to create a scene and define its properties (a code sketch follows this list):

  • Its gravity, which can point in any direction (to be realistic or not)
  • A reference to the contact report, which is called when two models collide
  • A reference to the trigger report, which is called when a model collides with a trigger
  • The default material (properties such as friction) for any shape created by PhysX
  • A plane used as the ground floor, so that entities do not fall forever
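
With the PhysX 2.8 SDK we used, the scene set-up roughly looks like the following sketch (written from memory of the API, not copied from the MissingLink source; the gravity and friction values are examples):

    #include <NxPhysics.h>

    NxPhysicsSDK* gSDK   = NULL;
    NxScene*      gScene = NULL;

    void InitPhysX(NxUserContactReport* contactReport, NxUserTriggerReport* triggerReport)
    {
        gSDK = NxCreatePhysicsSDK(NX_PHYSICS_SDK_VERSION);

        // Scene properties: gravity (any direction) and the two report callbacks.
        NxSceneDesc sceneDesc;
        sceneDesc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
        gScene = gSDK->createScene(sceneDesc);
        gScene->setUserContactReport(contactReport);
        gScene->setUserTriggerReport(triggerReport);

        // Default material used by every shape that does not define its own.
        NxMaterial* defaultMaterial = gScene->getMaterialFromIndex(0);
        defaultMaterial->setRestitution(0.0f);
        defaultMaterial->setStaticFriction(0.5f);
        defaultMaterial->setDynamicFriction(0.5f);

        // Infinite static plane acting as the ground floor, so nothing falls forever.
        NxPlaneShapeDesc planeDesc;
        NxActorDesc planeActorDesc;
        planeActorDesc.shapes.pushBack(&planeDesc);
        gScene->createActor(planeActorDesc);
    }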

After that, each time we want an entity to be ruled by PhysX, we just have to create an actor for it and define its properties.

Actor

An actor is abstract; it only represents a position. That is why we need to attach to it a body and one or more shapes which describe how the entity behaves against the others. A shape only needs dimensions to let PhysX know how big it is; it can be a box, a sphere or a capsule. It is given to the actor along with the mass, the position and the orientation. Then, it depends on the entity whether it is static or not: the walls and the floor are static; the enemies, desks, cages, etc., are not. This determines whether the actor is movable. The only difference is that a static actor does not have a body while a non-static actor does; the body mainly defines the initial velocity and the angular damping, which reduces the angular force a collision could generate. Finally, the group (explained in the Collision section) is set and the actor is added to the scene.

The player is a bit different: it is represented by a capsule (other entities use a box or a sphere), it has no friction (we do not want the player to get stuck while walking along a wall) and, finally, only its rotation around the Y axis is allowed; the X and Z axes are frozen, which prevents the player from falling over and keeps him on his feet.
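
To give an idea of what this looks like in code, here is a sketch of a dynamic box actor and of the player capsule using the PhysX 2.8 descriptors (the masses and sizes are illustrative, not the real values):

    #include <NxPhysics.h>

    // A movable entity (desk, cage, enemy...): a body, a box shape, a mass and a group.
    NxActor* CreateBoxActor(NxScene* scene, const NxVec3& pos, const NxVec3& halfSize,
                            NxReal mass, NxActorGroup group)
    {
        NxBoxShapeDesc boxDesc;
        boxDesc.dimensions = halfSize;      // half extents of the box

        NxBodyDesc bodyDesc;
        bodyDesc.mass = mass;
        bodyDesc.angularDamping = 0.5f;     // damps the spin generated by collisions

        NxActorDesc actorDesc;
        actorDesc.shapes.pushBack(&boxDesc);
        actorDesc.body = &bodyDesc;         // no body at all would make the actor static
        actorDesc.globalPose.t = pos;

        NxActor* actor = scene->createActor(actorDesc);
        actor->setGroup(group);             // used later by the contact / trigger reports
        return actor;
    }

    // The player: a capsule whose X and Z rotations are frozen so he stays on his feet.
    NxActor* CreatePlayerActor(NxScene* scene, const NxVec3& pos)
    {
        NxCapsuleShapeDesc capsuleDesc;
        capsuleDesc.radius = 0.5f;
        capsuleDesc.height = 1.0f;
        // A zero-friction material would also be assigned here so the player
        // does not stick to walls while walking along them.

        NxBodyDesc bodyDesc;
        bodyDesc.mass = 80.0f;
        bodyDesc.flags |= NX_BF_FROZEN_ROT_X | NX_BF_FROZEN_ROT_Z;

        NxActorDesc actorDesc;
        actorDesc.shapes.pushBack(&capsuleDesc);
        actorDesc.body = &bodyDesc;
        actorDesc.globalPose.t = pos;

        NxActor* actor = scene->createActor(actorDesc);
        actor->setGroup(1);                 // the player group used by the reports
        return actor;
    }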

Finally, we just have to call the Simulate function and fetch the simulation results. Each entity gets back its position and orientation before being displayed on the screen.
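
The per-frame update then boils down to something like this sketch (again, assuming the PhysX 2.8 API):

    #include <NxPhysics.h>

    // Called once per frame: advance the simulation and wait for the results.
    void UpdatePhysics(NxScene* scene, float dt)
    {
        scene->simulate(dt);
        scene->flushStream();
        scene->fetchResults(NX_RIGID_BODY_FINISHED, true);   // true = block until done
    }

    // After fetchResults(), each entity reads its new pose back from its actor.
    void SyncEntityWithActor(NxActor* actor)
    {
        NxVec3 position    = actor->getGlobalPosition();
        NxQuat orientation = actor->getGlobalOrientationQuat();
        // ...convert to the engine's own vector/quaternion types before rendering.
    }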

Collision

PhysX handles physical collisions automatically, so there is nothing more to do for them. Nevertheless, we often want to be notified when a specific collision occurs, for various reasons such as applying damage, playing a sound, etc. Once again, PhysX provides a really easy-to-use feature: the contact report. First, we just have to create an instance of the ContactReport class and give it to the scene. Then, we have to tell PhysX which collisions we want to be aware of. This is where the group parameter is important. In MissingLink, all the objects are in group 0, the player is in group 1, enemies are in group 2 and the plane is in group 3. In this way, we can enable the report for specific interactions; in our case, the report is enabled between groups 1 and 2. So, each time the player collides with an enemy or a bullet, a report is generated and lets us handle it properly. An important thing to notice is that we use the report only to know when the player is hit by a bullet. We do not use it for collisions between the player and the enemies themselves because, if the player touches an enemy back-to-back, a report would still be generated and we could not know whether to apply damage or not. To fix this, we used triggers.
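
As an illustration, the contact report and its registration could look like the following sketch (the group numbers are the ones described above; the rest is written from memory of the PhysX 2.8 callback interface):

    #include <NxPhysics.h>

    enum { GROUP_OBJECTS = 0, GROUP_PLAYER = 1, GROUP_ENEMIES = 2, GROUP_PLANE = 3 };

    class ContactReport : public NxUserContactReport
    {
    public:
        // Called by PhysX for every pair of actors whose groups were flagged below.
        virtual void onContactNotify(NxContactPair& pair, NxU32 events)
        {
            if (events & NX_NOTIFY_ON_START_TOUCH)
            {
                NxActor* first  = pair.actors[0];
                NxActor* second = pair.actors[1];
                // e.g. find out whether one of them is the player and the other a bullet,
                // then apply damage, play a sound, etc.
            }
        }
    };

    // Give the report to the scene and tell PhysX which group pairs we care about.
    void EnableContactReport(NxScene* scene, ContactReport* report)
    {
        scene->setUserContactReport(report);
        scene->setActorGroupPairFlags(GROUP_PLAYER, GROUP_ENEMIES, NX_NOTIFY_ON_START_TOUCH);
    }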


Trigger

As I said in the previous section, we used triggers to detect whether an entity is under attack, but not only that: they are also used by the doors and by the artificial intelligence. A trigger is a shape we add to an actor. The only difference with the shapes described above is that we activate a flag: NX_TRIGGER_ENABLE. This flag tells PhysX to use the shape as a trigger: no physics of any kind is applied, but a TriggerReport is generated. The purpose of this report is exactly the same as the ContactReport (a trigger report provides the two colliding shapes whereas the contact report provides the two actors) and it lets us handle the event.
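
A sketch of such a trigger shape and of the matching report, assuming the same PhysX 2.8 interfaces (the sizes and offsets are examples):

    #include <NxPhysics.h>

    // A trigger is only a flagged shape: it raises reports but has no physical response.
    void AddFrontTrigger(NxActorDesc& actorDesc)
    {
        static NxBoxShapeDesc triggerDesc;                  // must still exist when createActor() is called
        triggerDesc.dimensions = NxVec3(1.0f, 1.0f, 0.5f);  // e.g. a thin box placed in front of the entity
        triggerDesc.localPose.t = NxVec3(0.0f, 0.0f, 1.0f);
        triggerDesc.shapeFlags |= NX_TRIGGER_ENABLE;
        actorDesc.shapes.pushBack(&triggerDesc);
    }

    // Called by PhysX when a shape enters, stays in or leaves a trigger shape.
    class TriggerReport : public NxUserTriggerReport
    {
    public:
        virtual void onTrigger(NxShape& triggerShape, NxShape& otherShape, NxTriggerFlag status)
        {
            if (status & NX_TRIGGER_ON_ENTER)
            {
                // e.g. open a door, or mark the entity as facing another one.
            }
        }
    };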

In our game, we put a trigger in front of each enemy and in front of the player, so that if a trigger report is generated we are sure the entity is facing the other one. For the doors, we created a big trigger shape enclosing the door; if an entity enters this shape, we activate the door and keep it activated as long as there is an entity inside the shape. Finally, enemies have one more trigger with a sphere shape. It is used to detect any object in the world, such as walls or non-pushable entities (every object heavier than them), in order to change the path and avoid them.

Figures 1.5 and 1.6 show the physics in action.

Figure 1.5: The caveman is facing the locker.

Figure 1.6: The man collides with the locker, which falls down.


Artificial Intelligence

Because we have some enemies in the level, we have to animate them: "bring them to life."
I had never used any artificial intelligence algorithms, so it was a good opportunity to learn more about them. The first feature implemented was path finding: given a destination, it is possible to compute a valid path. The second feature is how the entity reacts if it detects an object on the pathway.

A* Algorithm

As stated above, the first feature implemented was path finding. I had to choose which algorithm to use, and only two fitted what I was looking for: Dijkstra's algorithm and the A* algorithm. The first one finds the best solution, which means it will find the shortest path to the destination. A* will find a path but, depending on the heuristic, not necessarily the best one. Once again, quality has a cost and Dijkstra is, obviously, slower than A*; in this game, it is not mandatory to find the shortest path, but we could have a lot of enemies, each with a different path, so I chose the A* algorithm.

In both cases, these algorithms are graph-based: every possible destination is a node, linked to the others by one or more arcs describing the time or the distance between two nodes. So, the first step is to build a graph representing the level of the game. It could be hand-written in a file and loaded at the start-up of the game, but that is not convenient at all because it would have to be done for each level (and updated whenever a level is edited); we would lose all flexibility. What I did instead relies on the fact that a level necessarily has a floor mesh: each time we load a floor entity, we have a new possible node. In the game, the centre of the entity is the node (or summit) and the distance between two summits is an arc. The floor mesh being a square, and the only mesh used for the floor, the distance between two nodes is always the same. When we have finished loading the floor entities, we end up with a vector containing every summit of the level, and so our graph.
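
A simplified sketch of this graph construction (the Summit structure and the TILE_SIZE constant are only illustrative, not the real data structures):

    #include <vector>
    #include <d3dx9.h>

    // One summit of the graph: the centre of a floor tile and the adjacent tiles.
    struct Summit
    {
        D3DXVECTOR3          position;
        std::vector<Summit*> neighbours;   // arcs; every arc has the same length (one tile)
    };

    const float TILE_SIZE = 2.0f;          // edge length of the square floor mesh

    // Called for every floor entity loaded from the level: one tile gives one node.
    void AddFloorNode(std::vector<Summit*>& graph, const D3DXVECTOR3& centre)
    {
        Summit* node = new Summit;
        node->position = centre;

        // Link the new node with every tile already loaded that sits right next to it.
        for (size_t i = 0; i < graph.size(); ++i)
        {
            D3DXVECTOR3 d = graph[i]->position - centre;
            if (D3DXVec3Length(&d) <= TILE_SIZE * 1.01f)
            {
                node->neighbours.push_back(graph[i]);
                graph[i]->neighbours.push_back(node);
            }
        }
        graph.push_back(node);
    }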

Once the graph is built, we can compute a path for each enemy. A* works with two sets: an open set and a closed set. The closed set contains the currently valid nodes and the open set contains all the nodes that may or may not be valid but are worth keeping in consideration for later. The algorithm processes each summit of the graph and checks whether the current summit is adjacent to the last valid node. Then, the cost of this summit is computed and the node is added to the open set; if the node is already in the open set, we compare costs to keep the shortest path to this node. A node has two costs: first, the gCost, which is the gCost of the previous node plus the distance between the two nodes; in other words, it represents the cost from the start point to this node. Second, the hCost, which is the distance "as the crow flies" between this node and the destination. The fCost is the sum of the gCost and the hCost. Once we have added all the adjacent summits to the open set, we take the best node out of it: we compare the fCost of every node in the open set, which gives us the node closest to the destination (not necessarily the best solution). The selected node is then added to the closed set and the process is repeated until the destination is reached or no solution is left. If there is a solution, the path is the closed set in reverse order.
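
For reference, here is a condensed A* loop using the three costs described above. It is a generic sketch, using parent links to rebuild the path, which differs slightly from the closed-set reversal used in MissingLink; the node flags and costs are assumed to be reset before each search.

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Node
    {
        float x, z;                    // centre of the floor tile
        std::vector<Node*> neighbours;
        float gCost, hCost, fCost;     // cost from start, heuristic to goal, sum of both
        Node*  parent;
        bool   inOpen, inClosed;       // assumed reset to false before each search
    };

    static float Distance(const Node* a, const Node* b)
    {
        return std::sqrt((a->x - b->x) * (a->x - b->x) + (a->z - b->z) * (a->z - b->z));
    }

    // Returns the path from start to goal (empty if no path exists).
    std::vector<Node*> FindPath(Node* start, Node* goal)
    {
        std::vector<Node*> open;
        start->gCost = 0.0f;
        start->hCost = Distance(start, goal);
        start->fCost = start->hCost;
        start->parent = NULL;
        open.push_back(start);
        start->inOpen = true;

        while (!open.empty())
        {
            // Take the node with the smallest fCost out of the open set.
            size_t best = 0;
            for (size_t i = 1; i < open.size(); ++i)
                if (open[i]->fCost < open[best]->fCost) best = i;
            Node* current = open[best];
            open.erase(open.begin() + best);
            current->inOpen = false;
            current->inClosed = true;

            if (current == goal)
            {
                std::vector<Node*> path;          // walk the parent links back to the start
                for (Node* n = goal; n != NULL; n = n->parent) path.push_back(n);
                std::reverse(path.begin(), path.end());
                return path;
            }

            for (size_t i = 0; i < current->neighbours.size(); ++i)
            {
                Node* next = current->neighbours[i];
                if (next->inClosed) continue;
                float g = current->gCost + Distance(current, next);
                if (!next->inOpen || g < next->gCost)
                {
                    next->gCost = g;
                    next->hCost = Distance(next, goal);
                    next->fCost = next->gCost + next->hCost;
                    next->parent = current;
                    if (!next->inOpen) { open.push_back(next); next->inOpen = true; }
                }
            }
        }
        return std::vector<Node*>();              // no solution
    }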

At each update, the algorithm moves the entity from its current node towards the next node in the path. When that node is reached, the destination becomes the following node, and so on, until the entity reaches the final destination. Once it is reached, the entity goes back to the start node (the path is re-computed because it could be wrong for the way back; more details in the next section) and this is repeated forever.

Obstacle detection

Another important aspect of the artificial intelligence is the capacity of the program to adapt itself when facing difficulties. What happens if the entity encounters a non-pushable obstacle? First, we have to detect such an obstacle. This is where the trigger shape described in the physics chapter is useful: when another entity collides with the trigger shape, a report is generated and provides us with information about the entity encountered. If this entity cannot be pushed away, it has to be avoided (basically, we just check whether it is static or heavier than the entity).

Since the obstacle keeps the enemy from reaching its next destination, we just need to add this destination to the "toAvoid" vector of this entity (it is not removed from the summit vector because the obstacle can obstruct one enemy but not another one coming from a different path; this is also why the path cannot simply be reused for the way back, since the start node is no longer the same). Then, the entity goes back to the previous destination and re-computes a path from this node towards the final destination.
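
A tiny sketch of this reaction, reusing the Node and FindPath names from the A* sketch above (the way FindPath takes the toAvoid list into account, for example by pre-marking those nodes as closed, is assumed and not shown):

    #include <algorithm>
    #include <vector>

    // Called from the trigger report when the enemy bumps into something it cannot push.
    void OnPathBlocked(std::vector<Node*>& toAvoid, Node* blockedNode,
                       Node* previousNode, Node* finalDestination,
                       std::vector<Node*>& path)
    {
        // Remember the node for this enemy only; the global summit vector is untouched.
        if (std::find(toAvoid.begin(), toAvoid.end(), blockedNode) == toAvoid.end())
            toAvoid.push_back(blockedNode);

        // Go back to the previous node and compute a new path from there,
        // skipping every node listed in toAvoid.
        path = FindPath(previousNode, finalDestination);
    }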

On the other hand, if the obstacle can be pushed, the entity just keeps going straight forward. If it is the player, the enemy attacks.

Conclusion

In conclusion, this game was the first I have made with a team and it was a great experience. Of course, we had some difficulties and "bad blood" with other teammates, but I guess this is always the case in a team. With the loss of the two other programmers, only Clément and I remained. Fortunately, we have known each other since last year and we can work together without problems; each week we agreed on the tasks to be sorted out before the next meeting, and I am very glad about what we did. From a personal point of view, I successfully integrated the PhysX engine, which is used by professional video games (Mirror's Edge, Trine), and implemented a working artificial intelligence.

Obviously, there are some things which could be improved or added, but the aim was to develop a working demo and I think we reached our goal. As a future plan, I would improve my artificial intelligence algorithm, and more specifically the way an entity reacts when it detects another entity (obstacle, player, etc.). The first feature I was thinking about is to compute a path towards the player when it is detected and, if the enemy loses sight of it, simply to go back to the original path.

It was really pleasant to work with Clément (I will not speak about the creative part because it is out of my scope) and Adrian did his job as team leader pretty well.

Includes

  • C++
  • Visual Studio 2010
  • NVIDIA PhysX
  • A* algorithm (path finding for all enemies)
  • DirectX9
  • Map Editor (XML level) (not made by me)
  • XACT
  • Particle System

Before: all the furniture is on the desk.

After: all the furniture is on the floor.

YouTube Video – MissingLink Demo

Release date: May 2011
Download MissingLink (.exe)
Download Source Code (.zip)

DirectX runtimes
PhysX runtimes
