Saturday, 28 March 2015

Game Review: Frozen Cortex

The rogue-like FTL: Faster Than Light was an uncompromisingly difficult game, where up to two hours of play could be undone by one moment of chaos. So many times I ploughed through that game only to declare the universe intrinsically unfair when a simple mistake cost me a good hour of careful planning and forced me to start over. The illusion was that next time I'd get it right. I'd shield the doors earlier, or make sure to stock up on extra missiles, or avoid saving the crew of a burning ship.

The futility of the game combined with the (somewhat illusory) sense that I was in control kept me coming back. It was only when I could get through to the end of the game regularly that I finally lost interest.

Which brings me to Frozen Cortex.

Frozen Cortex is what you get when you cross American Football with Frozen Synapse - which is in turn what you get when you cross chess with guns. I really enjoyed the idea of Frozen Synapse, but I could never fully immerse myself in it (Hell is Internet multiplayer...). I haven't had that trouble with Frozen Cortex, having already racked up close to 30 hours of game time.

The funny thing is that there's really not a lot to Frozen Cortex. The entire game consists of short matches, with each player controlling 5 robots on rectangular arenas populated with various obstacles. It's simple enough to grasp, simple enough to control, and a tiny mistake can undo hours of meticulous planning. Any given match can be thrown away by incorrectly guessing what an opponent will do, and that makes knock-out mode all the more difficult.

Because the games are so short, minor mistakes have major consequences. Screw up on your own offense, and it's virtually guaranteed to be game over. Misread your opponent's offense and it can be similarly difficult to come back during your own. The game's annoying habit of "pausing" the turn halfway through a touchdown pass often just rubs it in that you didn't realise the opponent had that option open, and there's nothing to do but slam the PRIME button and hope there's still enough time to recover.

And there are many ways it can cost you. A slightly mistimed run means a key player is knocked out. Throwing to a player too close to an opponent gives them no time to act before they are tackled. Passes get intercepted. Your player slightly mistimes an interception. A robot moves to the wrong side of an obstacle. And of course, all of these are compounded by the scoring pads littered throughout the arena. They are only worth 2 points (compared to a 7-point "touchdown"), but a good run can cross several pads at a time. One particularly bad example was a robot that could sneak around an obstacle for an unguarded, heavily-padded run to the end-zone.

It's that futility, like in FTL: Faster Than Light, that keeps me coming back. I didn't see that play. I'll be mindful of it next time. It's with that attitude that I've been able to beat the game in season mode (and open up interesting variations), but I haven't quite gotten through a knockout comp yet. The game is cruel, uncompromising, punishing, and that's what makes it such a rewarding experience.

Monday, 15 December 2014

Slipping Back into Old Habits

I have a lot more time to think about coding issues than I have time to code. In some ways this is quite handy, as it allows me to plan through certain problems and foresee the potential pitfalls of a particular solution. I think there are two key problems with this, however. First, even if I come up with a solution, it's still only half the job done. Second, it gets me bogged down in coding problems that are often beyond my coding abilities. This all became apparent in the last month, when I got stuck on a particular design problem. A month later, I'm still no closer to a solution, and worse still, I'm not making progress in any other area. And instead of just coding a workaround for the time being, I became fixated on solving the problem.

(I needed compile-time polymorphism where I previously had run-time polymorphism. I read through books and websites on templating, tried rudimentary attempts at adjusting my code, but always I was trying to code up a perfect solution in my head, weighing up potential considerations and pitfalls. And at the end of it, all I have is the lament I just didn't code around it.)

What I find so disappointing is that I know all this. There's nothing particularly revelatory in the observation that I dwell on problems instead of working around them, but it is disappointing that I still haven't moved on from the problems I had when I was at university learning to code. I get so hung up on finding the right solution that I haven't learnt how to code around the problem.

This game coding happens in the limited spare time I have. It's a hobby, and comes after other aspects of work and life. So maintaining motivation is always a struggle, especially in the face of seeming blockers. And from what I've come to know about the role pressure plays in inhibiting the creative process, the limited time often translates into fixating on particular solutions instead of just getting on with something else.

Given that I'm still adjusting back into C++, it's easy for me to think of failures as a lack of knowledge of the language itself. Sometimes that's apparent, and I know as I'm reading through various C++ material that I'm skimming for a particular solution context rather than internalising the language feature as a potential design tool in the future. But I think it would be a cop-out to blame my current understanding of C++ when the language is so flexible in the kinds of solutions that are permitted.

It's always, I think, going to be a question of motivation. That is, am I able to code something that's good enough for the task at hand? One thing that's problematic is that my initial scope is far too ambitious – that is, I'm trying to create a framework that could be used to make any number of 2D games. So in that respect I'm stuck trying to figure out generic solutions.

And therein lies the problem. How do I know I'm going to get to use any of the design work I've put in so far? For example, all the time I spent writing wrapper objects for SDL code has, as far as I can tell, had very little benefit so far. Indeed, the main reason I did it was to have a sort of platform agnosticism – the theory was that I could refactor the core features of the engine out to a different library without having those decisions permeate down through game code. Yet if I'm never going to do that, then as happy as I am with some of the wrapper code I wrote, a lot of that effort was wasted.

(Funnily enough, as I was trying to learn SDL, one thing I got frustrated with was that the tutorials were so direct in their use of the libraries. I took it as an act of clarity on the part of the tutorial writers that they didn't try to muddle things up by enclosing their code in some simple patterns. But as I was coding up my input system, I wondered just what benefit I was getting out of mapping SDL structures onto my own.)

This too, I think, highlights the perils of solo development. As I'm working on this project, I'm accountable to one person: myself. What I design, what I build, what my desired end-product is – all of that is my decision. And since I'm starting from scratch and working completely unguided, it's inevitable that I'm going to not only make bad decisions, but that those bad decisions will lead to a lot of wasted time. You live, you (hopefully) learn.

What I think I've learnt is something about my own limitations. If I have a certain disposition to coding a certain way, then it would be madness to think I can overcome it just by saying "don't". What I hoped with prototype-driven design was that I could better focus on what needs to be done, rather than on what I think might need to be done. That now seems like only half of the equation; what's missing are build-and-refactor techniques, so I don't get bogged down where an elegant solution is not immediately apparent.

I know at times I need to get out of my head – I have notebooks that are filling up with ideas faster than I'm adding lines of code to the codebase. What will remain to be seen is if I can work with my own personal practices and habits. Like losing weight, if you try to do it on willpower, then you’re going to fail miserably.

Sunday, 23 November 2014

The How of A*

(For the Why of A*, go here)

Here is a basic implementation of A*. Note that this isn’t necessarily an optimal solution, but it is a working one.

For my solution, I have two classes: Node and PathfindingAStar. The two classes are tightly coupled, with PathfindingAStar depending on Node having certain properties in order to work. Node itself is the bridge between the pathfinding subsystem and the wider system.

Here is Node.h:

#include <map>
#include <vector>

//Direction is an enum defined elsewhere in the engine

typedef struct {
    unsigned int x;
    unsigned int y;
} NodePos;

class Node {
public:
    Node(unsigned int p_x, unsigned int p_y);
    NodePos getPosition();
    bool isSameNode(Node* p_node);
    void setNeighbourAt(Direction p_dir, Node* p_neighbour, double p_cost = 1.0);
    Node* getNeighbourAt(Direction p_dir);
    double getMovementCost(Direction p_dir);
    std::vector<Direction> getAllAvailableDirections();
    bool isOccupied();
    void setOccupied(bool p_occupied);
    void setParent(Node* p_parent);
    Node* getParent();
    double getG();
    void setG(double p_distanceTravelled);
    double getH();
    void setH(double p_heuristic);
    double getF(); //f = g + h

    unsigned int getX();
    unsigned int getY();

private:
    std::map<Direction, double> m_movementCost;
    NodePos m_pos;
    std::map<Direction, Node*> m_neighbours;
    Node *m_parent;
    bool m_occupied;
    double m_h;
    double m_g;
};
The Node caters for two purposes. Firstly, it holds information about the world itself that is vital for pathfinding to operate. The position in the world, the neighbours of the node, the movement costs, and whether the node is occupied are all created and maintained by the game code. Neighbours cannot be populated upon creation, because those neighbours won't necessarily have been created themselves. Whether the node is currently occupied will need to be maintained by the game code.

Secondly, the node holds information relevant to the current pathfinding cycle. m_g and m_h are used by PathfindingAStar for the node's distance from the start node and its estimated distance to the goal node. Similarly, m_parent is used to keep track of where the navigation came from.

As far as methods in Node.cpp go, there are two methods that are worth exploring, both to do with the available directions.

void Node::setNeighbourAt(Direction p_dir, Node* p_neighbour, double p_cost)
{
    m_neighbours[p_dir] = p_neighbour;
    m_movementCost[p_dir] = p_cost;
}

std::vector<Direction> Node::getAllAvailableDirections()
{
    std::vector<Direction> directions;
    std::map<Direction, double>::iterator itr;
    for (itr = m_movementCost.begin(); itr != m_movementCost.end(); itr++)
        directions.push_back(itr->first);
    return directions;
}
The pathfinding algorithm will need to know what directions are available to move in, and how much it will "cost" to move in each direction. An alternative would be to simply associate a cost with each node, but this way gives more flexibility to the game code. So there are two maps: one for the cost of moving in a direction, and one for the neighbour Node in that direction.

Since directions are set when neighbours are set, there's never any need to worry about the two maps going out of sync, so either map could have been iterated over to build a list for PathfindingAStar to use. The returned list could just as easily have been a private member of Node, generated as part of setting the neighbours; doing so might be a worthwhile efficiency gain in the future.
Here is the source code for PathfindingAStar.h:

#include <list>

#include "Node.h"

class PathfindingAStar {
public:
    static PathfindingAStar* getInstance();
    std::list<Node*> findPath(Node* p_startNode, Node* p_goalNode);

private:
    std::list<Node*> finalisePath(Node* p_currentNode);

    static PathfindingAStar *singleton;
};
PathfindingAStar has one public method of substance – to find the path. The rest is just housekeeping. There only ever needs to be one instance of the class, so the Singleton pattern is implemented. There's nothing particularly special about the class from an OO point of view, as all of the work is done in the single method findPath.

std::list<Node*> PathfindingAStar::findPath(Node* p_startNode, Node* p_goalNode)
{
    std::vector<Node*> m_openList;   //Nodes that are active
    std::vector<Node*> m_closedList; //Nodes that have been used

    //Reset the start node for the current cycle: no parent, no distance
    //travelled, and h estimated as the Manhattan distance to the goal
    p_startNode->setParent(0);
    p_startNode->setG(0.0);
    double h = (double) abs((int) p_goalNode->getX() - (int) p_startNode->getX());
    h += (double) abs((int) p_goalNode->getY() - (int) p_startNode->getY());
    p_startNode->setH(h);
    m_openList.push_back(p_startNode);

    while (!m_openList.empty()) //If there are still nodes to explore
    {
        //Get the node with the smallest f value
        Node* currentNode = m_openList.front();
        //Remove node from the open list
        m_openList.erase(m_openList.begin());

        if (currentNode->isSameNode(p_goalNode))
        {
            //Build the final path and return out of the function
            std::list<Node*> goalList = finalisePath(currentNode);
            return goalList;
        }

        m_closedList.push_back(currentNode);

        std::vector<Direction> neighbours = currentNode->getAllAvailableDirections();

        bool isFound;
        for (unsigned int i = 0; i < neighbours.size(); i++)
        {
            isFound = false;
            Node *n = currentNode->getNeighbourAt(neighbours[i]);
            if (n->isOccupied())
                continue;

            //If the node is already on the open list, check whether this
            //path to it is shorter, and re-sort it into position if so
            for (unsigned int j = 0; j < m_openList.size(); j++)
            {
                Node *n2 = m_openList[j];
                if (n2 == n)
                {
                    double g = currentNode->getG() + currentNode->getMovementCost(neighbours[i]);
                    if (g < n2->getG())
                    {
                        n2->setG(g);
                        n2->setParent(currentNode);
                        m_openList.erase(m_openList.begin() + j);
                        std::vector<Node*>::iterator itr2 = m_openList.begin();
                        for (; itr2 != m_openList.end(); itr2++)
                            if (n->getF() <= (*itr2)->getF())
                                break;
                        m_openList.insert(itr2, n);
                    }
                    isFound = true;
                    break;
                }
            }
            if (isFound)
                continue;

            //A node on the closed list needs no further exploration
            for (unsigned int j = 0; j < m_closedList.size(); j++)
            {
                if (m_closedList[j] == n)
                {
                    isFound = true;
                    break;
                }
            }
            if (isFound)
                continue;

            //work out g
            n->setG(currentNode->getG() + currentNode->getMovementCost(neighbours[i]));
            //work out h
            h = (double) abs((int) p_goalNode->getX() - (int) n->getX());
            h += (double) abs((int) p_goalNode->getY() - (int) n->getY());
            n->setH(h);
            n->setParent(currentNode);

            //add to open list in order of f
            std::vector<Node*>::iterator itr2 = m_openList.begin();
            for (; itr2 != m_openList.end(); itr2++)
                if (n->getF() <= (*itr2)->getF())
                    break;
            m_openList.insert(itr2, n);
        }
    }

    //The frontier is exhausted with no path found, so return an empty list
    std::list<Node*> dummyList;
    return dummyList;
}
The first few lines of code set up the search. Since the start node has no parent, its parent is set to 0, its g to 0, and its h to the Manhattan distance to the goal. The start node is then pushed onto the frontier (open) list. The reason there are two lists is so that once a node is fully explored, it never needs to be explored again. It subsequently shifts from the frontier list to the closed list.

The main algorithm is a while loop, which runs until either the goal Node is found or the frontier list is exhausted. The Node at the front of the frontier list (kept sorted by f) is removed and checked for whether it is in fact the goal Node, in which case the algorithm constructs the list of Nodes and returns it. Otherwise, the Node is added to the closed list and each of its available directions is explored in turn.

For each neighbour Node, it needs to be checked whether it's already a) on the frontier, or b) on the closed list. In either of these cases the Node needs no further exploration, except in one special case for a Node on the frontier: if the distance travelled to the Node in this iteration is less than the distance previously recorded for it, then the Node and the frontier list need to be updated – i.e. we have found a shorter path to the same position. If the neighbour Node is not yet explored, its g and h are calculated (in this case, h is calculated using the Manhattan heuristic), its parent is set to the current Node, and it is inserted into the frontier list sorted by f. Finally, if the frontier list is exhausted, there was no path from the start Node to the goal Node, so an empty list is created and returned. Thus the empty() function on the list acts as the indicator of whether a path was successfully generated.

The code for building the path list is as follows:

std::list<Node*> PathfindingAStar::finalisePath(Node* p_currentNode)
{
    std::list<Node*> goalList;
    while (p_currentNode != NULL)
    {
        goalList.push_front(p_currentNode);
        p_currentNode = p_currentNode->getParent();
    }
    //don't need the first node - we're already standing on it
    goalList.pop_front();
    return goalList;
}
The code is fairly straightforward. Since each Node knows its parent, it's simply a matter of traversing back to the first Node, whose parent was set to NULL.
And there it is – a basic implementation of the A* algorithm. It’s by no means the most efficient possible version of the algorithm, nor does it do any of the C++ things one really ought to do when using the language. The important thing is that it works, and it’s reasonably clear how.

Saturday, 22 November 2014

The Why of A*

For pathfinding, breadth-first search (BFS) is almost always preferable to depth-first search (DFS). The general problem with a breadth-first search is that it is resource-intensive. Depth-first follows a single path that grows as it is explored; breadth-first explores multiple paths at once, with the number of paths growing exponentially.

Both DFS and BFS are brute-force attempts at finding a path, but they express themselves in different ways. A depth-first search will in all likelihood find a path that doesn't go directly from A to B, but instead follows the rules which govern how the next position to explore is chosen. A breadth-first search might find a more direct path from A to B, but the result will take much more time to compute – something that's unforgivable in a game which requires immediate feedback.

There are ways of directing a BFS so that it doesn’t take as much computational power. A* is the best of these, but it’s worth at least glancing over why this is the case. A* works on the following equation:
f = g + h
Where g is the total distance travelled, h is the estimated distance left to travel, and the smallest f is chosen next. An undirected BFS would be the equivalent of ignoring h altogether, so it’s worth exploring the role of h.
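To make the equation concrete, here's a hedged illustration (the function names are mine, and it assumes the common Manhattan-distance heuristic on a grid):

```cpp
#include <cstdlib>

// Estimated distance left to travel: on a grid with 4-way movement,
// the Manhattan distance never overestimates the real remaining cost.
double manhattanH(int x, int y, int goalX, int goalY) {
    return std::abs(goalX - x) + std::abs(goalY - y);
}

// A node 3 steps from the start with an estimated 4 steps left scores
// f = 7; the frontier node with the smallest f is expanded next.
double f(double g, double h) {
    return g + h;
}
```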

One strategy with a BFS is to just take into account the shortest distance to the goal. So whatever node is “closest” (and there are various strategies for working this out) should be the next node to search. This strategy works well if there are no obstacles directly between the start node and the goal node, as the closest node will always bring the path closer to the goal. The strategy fails, however, if there are obstacles, because the path will inevitably explore those dead ends like a DFS would.

A* gets around this problem by also taking into account just how far the path has travelled. In other words, what makes a path the most fruitful to explore is the estimated total of the journey. A path that goes down a dead end will have an overall larger journey than a path that took a deviation before the dead end path.

With the A* algorithm, the final output is a path with the least number of moves between A and B (provided the heuristic never overestimates the remaining distance, which the Manhattan heuristic on a grid does not), calculated while exploring as little of the search tree as necessary. The algorithm, like any algorithm, has its limitations, but it ought to be used where the start and goal positions are known and a path between them needs to be computed.

Monday, 3 November 2014

Phase 3b: Flip Squares

With Snake, I could contain the notion of a Level within the screen itself, starting at (0,0) for both the top-left coordinate and the offset for rendering the game objects. Going forward, this was not sustainable. Furthermore, I had no way of capturing or processing mouse input on the screen – in effect limiting the engine to whatever could be done with the keyboard alone. Finally, there was no way of putting text onto the screen.

Since Snake wasn't really suited to these tasks, I decided to make a new prototype. I remembered an old puzzle from Black & White whereby you had to make a set of randomised trees all the same type. The rule was fairly simple – you had a 3x3 set of trees, and if you pulled on a tree, it changed that tree and every neighbouring tree. So there were patterns of 3, 4, or 5 changing depending on the location. Simple enough to code that the game itself wouldn't take up much of the development, yet useful for getting all three missing aspects of the system coded up.
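The rule itself can be sketched in a few lines (my naming here, not the prototype's code):

```cpp
// Toggling square (r, c) on a 3x3 board flips it and its orthogonal
// neighbours - 3 squares from a corner, 4 from an edge, 5 from the centre.
void flip(bool board[3][3], int r, int c) {
    const int dr[] = {0, 1, -1, 0, 0};
    const int dc[] = {0, 0, 0, 1, -1};
    for (int i = 0; i < 5; i++) {
        int nr = r + dr[i], nc = c + dc[i];
        if (nr >= 0 && nr < 3 && nc >= 0 && nc < 3)
            board[nr][nc] = !board[nr][nc];
    }
}
```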

The Camera was a crucial class to get right, and if I'm being honest, working out its logic is something I'd been trying to get right in my head for months. Indeed, it was the bane of phase 1 and phase 2. With Level finally sorted out, it was more a matter of getting Camera right. The Level introduced the idea of WorldCoordinate – a simple structure that holds two doubles. So Camera could have a position relative to the Level by holding a world coordinate and the tile dimensions.

Camera needed to do a number of other things. For one, Camera had to be accessible by every AnimationObject so that the destination rect could be relative to the Camera rather than the level. This way textures could be updated out of sight of the game code. The other major thing was to keep track of the mouse, to keep track of where it is on screen and to be able to translate back into a WorldCoordinate for use in-game.
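The translation in both directions amounts to a couple of equations. This is only a hypothetical sketch of the idea – none of these member names are from the engine, and it assumes square tiles measured in pixels:

```cpp
struct WorldCoordinate { double x; double y; };

struct Camera {
    WorldCoordinate pos;  // world position of the camera's top-left corner
    double tileW, tileH;  // pixel dimensions of one world tile

    // World -> screen: where a world coordinate lands in the window,
    // used to set the destination rect of each AnimationObject
    void worldToScreen(WorldCoordinate w, int& sx, int& sy) const {
        sx = (int) ((w.x - pos.x) * tileW);
        sy = (int) ((w.y - pos.y) * tileH);
    }

    // Screen -> world: translate a mouse position back into the world
    WorldCoordinate screenToWorld(int sx, int sy) const {
        WorldCoordinate w;
        w.x = pos.x + sx / tileW;
        w.y = pos.y + sy / tileH;
        return w;
    }
};
```

The two functions are inverses of each other, which is what makes a mouse click on screen resolvable to a tile in the Level.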

When designing my Input class in the abstract, I thought all I would need to do was keep track of the relative movement of the mouse. This proved to have limited utility when running in windowed mode – the absolute position of the cursor would sometimes move out of the window altogether, while certain objects in the game couldn't be reached at all. It's this kind of phenomenon that has helped me understand how important prototypes are to getting the engine running right.

In order to get the prototype up and running, I avoided working out how to display a cursor, and instead used an overlay object for the squares themselves. It worked well enough, and forced me to revisit the equations in the Camera class to get the WorldCoordinate translation right. By this stage, I had a working prototype, as seen here.

The last thing I wanted to do was get text rendering on the screen. For the prototype, I wanted to display something simple – the number of moves taken in trying to solve the puzzle. With TextureObject (my wrapper of SDL_Texture), the textures were meant to live for the lifetime of the application, so they could be managed by TextureManager. With text, creation and destruction needed to follow its use. Obviously I didn't want this to be manual, so when I created TextObject (the text equivalent of AnimationObject) I handled the deletion of text objects in code.

It turned out getting text on screen was mostly straightforward. The only problem was the difference between relative and absolute positioning of text. Obviously I would want the option of having both, but I haven’t come up with a good solution yet. For now I’ve created two objects: TextObject and TextGameObject with game objects being camera-relative while TextObject has an absolute position. When I get to working on the HUD, I might encounter this problem with AnimationObject, so there’s a refactor issue.

In the absence of another prototype idea, I extended the game itself with an algorithm to brute-force a solution. Taking a purely random approach to puzzle generation meant that occasionally really easy puzzles would be thrown up (I added code to stop 0-step puzzles). So I wrote an algorithm to brute-force a solution – to work out just what moves are needed to solve the puzzle from any given position. This didn't advance the engine in any way, but it was a good coding exercise nonetheless.