Sunday, 23 November 2014

The How of A*

(For the Why of A*, go here)

Here is a basic implementation of A*. Note that this isn’t necessarily an optimal solution, but it is a working one.

For my solution, I have two classes: Node and PathfindingAStar. The two classes are tightly coupled, with Pathfinding depending on Node to have certain properties in order to work. Node itself is the bridge between the pathfinding subsystem and the wider system.

Here is Node.h:

#include <map>
#include <vector>

//Direction is defined elsewhere in the game code

typedef struct
{
 unsigned int x;
 unsigned int y;
}NodePos;

class Node
{
public:
 Node(unsigned int p_x, unsigned int p_y);
 ~Node();
 NodePos getPosition();
 bool isSameNode(Node* p_node);
 void setNeighbourAt(Direction p_dir, Node* p_neighbour, double p_cost = 1.0);
 Node* getNeighbourAt(Direction p_dir);
 unsigned int getMovementCost(Direction p_dir);
 std::vector<Direction> getAllAvailableDirections();
 bool isOccupied();
 void setOccupied(bool p_occupied);
 void setParent(Node* p_parent);
 Node* getParent();
 double getG();
 void setG(double p_distanceTravelled);
 double getH();
 void setH(double p_heuristic);
 double getF();
 unsigned int getX();
 unsigned int getY();
private:
 std::map<Direction, double> m_movementCost;
 NodePos m_pos;
 std::map<Direction, Node*> m_neighbours;
 Node *m_parent;
 bool m_occupied;
 double m_h;
 double m_g;
};
The Node caters for two purposes. Firstly, it holds information about the world itself that is vital for Pathfinding to operate. The position in the world, the neighbours of the node, the movement costs, and whether the node is occupied are all created and maintained by the game code. Neighbours cannot be populated upon creation, because the neighbouring nodes won’t necessarily have been created yet. Whether the node is currently occupied will also need to be maintained by the game code.

Secondly, the node holds information relevant to the current pathfinding cycle. m_g and m_h are used by Pathfinding to record the node’s distance travelled from the start node and its estimated distance to the goal node. Similarly, m_parent is used to keep track of where the navigation came from.
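
Since there is no m_f member in the header, f is never stored; getF() can simply derive it from the two values Pathfinding maintains, along these lines:

double Node::getF()
{
 //f is recomputed on demand from the distance travelled and the heuristic
 return m_g + m_h;
}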

As far as methods in Node.cpp go, there are two methods that are worth exploring, both to do with the available directions.

void Node::setNeighbourAt(Direction p_dir, Node* p_neighbour, double p_cost)
{
 m_neighbours[p_dir] = p_neighbour;
 m_movementCost[p_dir] = p_cost;
}
std::vector<Direction> Node::getAllAvailableDirections()
{
 std::vector<Direction> directions;
 directions.clear();
 std::map<Direction, double>::iterator itr;
 std::pair<Direction, double> pair;
 for(itr = m_movementCost.begin(); itr != m_movementCost.end(); itr++)
 {
  pair = *itr;
  directions.push_back(pair.first);
 }
 return directions;
}
The Pathfinding algorithm will need to know what directions are available to move in, and how much it will “cost” to move in each direction. An alternative would be to simply have a cost associated with a node, but this way gives more flexibility to the game code. So there are two maps, one for the cost of moving in a direction, and one for the neighbour Node in that direction.

Since directions are set when neighbours are set, there’s never any need to worry about the two maps going out of sync, so either map could have been iterated over to build a list for Pathfinding to use. This returned list could just as easily have been a private member of Node, generated as part of setting the neighbours; doing so may be a worthwhile efficiency gain in the future.
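
As an illustration, game code building a square grid might wire the nodes together something like this. None of these names come from the engine itself – grid, width, height and the NORTH/SOUTH/EAST/WEST values are made up for the example:

//Hypothetical grid setup: link each node to the node to its left and the node
//above it (and vice versa), using the default movement cost of 1.0
for (unsigned int y = 0; y < height; y++)
{
 for (unsigned int x = 0; x < width; x++)
 {
  Node* node = grid[y][x];
  if (x > 0)
  {
   node->setNeighbourAt(WEST, grid[y][x - 1]);
   grid[y][x - 1]->setNeighbourAt(EAST, node);
  }
  if (y > 0)
  {
   node->setNeighbourAt(NORTH, grid[y - 1][x]);
   grid[y - 1][x]->setNeighbourAt(SOUTH, node);
  }
 }
}
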
Here is the source code for PathfindingAStar.h:

#include "Node.h"

class PathfindingAStar
{
public:
    static PathfindingAStar* getInstance();
    ~PathfindingAStar();
    std::list<Node*> findPath(Node* p_startNode, Node* p_goalNode);
private:
    std::list<Node*> finalisePath(Node* p_currentNode);
    PathfindingAStar();
    static PathfindingAStar *singleton;
};
Pathfinding has one public method – to find the path. The rest is just housekeeping. There only ever needs to be one instance of the class, so the Singleton pattern is implemented. There’s nothing particularly special about the class from an OO point of view, as all of the work is done in the single findPath method.
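
getInstance is the usual lazy-initialisation Singleton boilerplate, roughly:

PathfindingAStar* PathfindingAStar::singleton = 0;

PathfindingAStar* PathfindingAStar::getInstance()
{
 //Create the single instance the first time it is asked for
 if (singleton == 0)
 {
  singleton = new PathfindingAStar();
 }
 return singleton;
}

The real work is all in findPath: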

std::list<Node*> PathfindingAStar::findPath(Node* p_startNode, Node* p_goalNode) 
{
 std::vector<Node*> m_openList; //Nodes that are active
 std::vector<Node*> m_closedList; //Nodes that have been used
 //Clear the list
 m_openList.clear();
 m_closedList.clear();
 //Reset the node to parent for the current cycle
 p_startNode->setParent(0);
 //and reset all values of f, g & h
 p_startNode->setG(0.0);
 double h = (double) abs((int) p_goalNode->getX() - (int) p_startNode->getX());
 h += (double) abs((int) p_goalNode->getY() - (int) p_startNode->getY());
 p_startNode->setH(h);
 m_openList.push_back(p_startNode);

 while (!m_openList.empty()) //If there are still nodes to explore
 {
  //Get the node with the smallest f value
  Node* currentNode = m_openList.front();
  //Remove node from the open list
  std::vector<Node*>::iterator itr = m_openList.begin();
  m_openList.erase(itr);


  if (currentNode->isSameNode(p_goalNode)) 
  {
   std::list<Node*> goalList = finalisePath(currentNode);
   //return out of the program
   return goalList;
  }

  std::vector<Direction> neighbours = currentNode->getAllAvailableDirections();

  bool isFound;

  for (unsigned int i = 0; i < neighbours.size(); i++) {
   isFound = false;
   Node *n = currentNode->getNeighbourAt(neighbours[i]);
   if (n->isOccupied()) {
    continue;
   }
   //Make sure node isn't in open list
   for (unsigned int j = 0; j < m_openList.size(); j++) {
    Node *n2 = m_openList.at(j);
    if (n2 == n) 
    {
     double g = currentNode->getG() + currentNode->getMovementCost(neighbours[i]);
     if (g < n2->getG()) 
     {
      //A cheaper route to this frontier node has been found, so update its
      //g and parent first, then move it to the position its new f warrants
      n2->setG(g);
      n2->setParent(currentNode);
      std::vector<Node*>::iterator itr2 = m_openList.begin();
      m_openList.erase(itr2 + j);
      itr2 = m_openList.begin();
      for (; itr2 != m_openList.end(); itr2++) 
      {
       Node* other = *itr2;
       if (n->getF() <= other->getF()) 
       {
        break;
       }
      }
      m_openList.insert(itr2, n);
     }
     isFound = true;
     break;
    }
   }
   if (isFound)
    continue;
   for (unsigned int j = 0; j < m_closedList.size(); j++) 
   {
    Node *n2 = m_closedList.at(j);
    if (n2 == n) 
    {
     isFound = true;
     break;
    }
   }
   if (isFound)
    continue;

   n->setParent(currentNode);
   //work out g
   n->setG(currentNode->getG() + currentNode->getMovementCost(neighbours[i]));
   //work out h
   h = (double) abs((int) p_goalNode->getX() - (int) n->getX());
   h += (double) abs((int) p_goalNode->getY() - (int) n->getY());
   n->setH(h);

   //add to open list in order
   std::vector<Node*>::iterator itr2 = m_openList.begin();
   for (; itr2 != m_openList.end(); itr2++) 
   {
    Node* n2 = *itr2;
    if (n->getF() <= n2->getF()) 
    {
     break;
    }
   }
   m_openList.insert(itr2, n);
  }

  m_closedList.push_back(currentNode);
 }
 std::list<Node*> dummylist;
 return dummylist;
}
The first few lines of code set up the search. Since the start node has no parent, its parent is set to 0, and its g and h are reset. The start node is then pushed onto the frontier list. The reason there are two lists is that once a node has been fully explored, it never needs to be explored again; it subsequently shifts from the frontier list to the closed list.

The main algorithm is a while loop, which runs until either the goal Node is found or the frontier list is exhausted. The Node at the front of the frontier list (which is kept sorted by f) is removed from the frontier and checked for whether it is in fact the goal Node, in which case the algorithm constructs the list of Nodes and returns it. Otherwise, the algorithm goes through each available direction on the current Node, and the current Node is then added to the closed list.

For each neighbour Node, it needs to be checked whether it’s already a) on the frontier, or b) on the closed list. In either of these cases, the Node needs no further exploration, except for one special case of a Node on the frontier: if the distance travelled to reach the Node in this iteration is less than the distance previously recorded for it, then the Node and the frontier list need to be updated – i.e. a shorter path to the same position has been found. If the neighbour Node is not yet explored, its g and h are calculated (in this case, h is calculated using the Manhattan heuristic), its parent is set to the current Node, and it is inserted into the frontier list sorted by f. Finally, if the frontier list is exhausted, there was no path from the start Node to the goal Node, so an empty list is created and returned. Thus the empty() function on the returned list acts as the indicator of whether a path was successfully generated.
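
From the game code’s point of view, using the pathfinder looks something like this (startNode and goalNode are hypothetical pointers into whatever grid the game keeps):

std::list<Node*> path = PathfindingAStar::getInstance()->findPath(startNode, goalNode);
if (path.empty())
{
 //No route exists between the two nodes
}
else
{
 //Walk the returned nodes in order, from the first step after the start to the goal
 for (std::list<Node*>::iterator itr = path.begin(); itr != path.end(); itr++)
 {
  Node* step = *itr;
  //move towards step->getPosition()
 }
}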

The code for building the path list is as follows:

std::list<Node*> PathfindingAStar::finalisePath(Node* p_currentNode) 
{
 std::list<Node*> goalList;
 while (p_currentNode != NULL) 
 {
  goalList.push_front(p_currentNode);
  p_currentNode = p_currentNode->getParent();
 }
 //don't need the first node
 goalList.pop_front();
 return goalList;
}
The code is fairly straightforward. Since each Node knows its parent, it’s simply a matter of traversing back to the start Node, whose parent was set to 0, and then popping the start Node itself off the front since it isn’t needed in the path.

And there it is – a basic implementation of the A* algorithm. It’s by no means the most efficient possible version of the algorithm, nor does it do any of the C++ things one really ought to do when using the language. The important thing is that it works, and it’s reasonably clear how.

Saturday, 22 November 2014

The Why of A*

For pathfinding, breadth-first search (BFS) is almost always preferable to depth-first search (DFS). The general problem of a breadth-first search is that it is resource-intensive. Depth-first follows a single path that grows as the path is explored. Breadth-first explores multiple paths at once, with the number of paths growing exponentially.

Both DFS and BFS are brute-force attempts at finding a path, but they express themselves in different ways. A depth-first search will in all likelihood find a path that doesn’t go directly from A to B, but instead follows whatever rules govern how the next position to explore is chosen. A breadth-first search might find a more direct path from A to B, but the result will take much more time to compute – something that’s unforgivable in a game which requires immediate feedback.

There are ways of directing a BFS so that it doesn’t take as much computational power. A* is the best of these, but it’s worth at least glancing over why this is the case. A* works on the following equation:
f = g + h
Where g is the total distance travelled, h is the estimated distance left to travel, and the smallest f is chosen next. An undirected BFS would be the equivalent of ignoring h altogether, so it’s worth exploring the role of h.
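
On a grid, a common and cheap choice for h is the Manhattan distance – the number of horizontal plus vertical steps to the goal if obstacles are ignored. A minimal, self-contained sketch of f computed that way:

#include <cmath>

//Estimated distance left to travel, ignoring any obstacles in the way
double manhattan(double p_x, double p_y, double p_goalX, double p_goalY)
{
 return std::abs(p_goalX - p_x) + std::abs(p_goalY - p_y);
}

//f is the distance travelled so far plus the estimate of what remains
double fValue(double p_g, double p_x, double p_y, double p_goalX, double p_goalY)
{
 return p_g + manhattan(p_x, p_y, p_goalX, p_goalY);
}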

One strategy with a BFS is to just take into account the shortest distance to the goal. So whatever node is “closest” (and there are various strategies for working this out) should be the next node to search. This strategy works well if there are no obstacles directly between the start node and the goal node, as the closest node will always bring the path closer to the goal. The strategy fails, however, if there are obstacles, because the path will inevitably explore those dead ends like a DFS would.

A* gets around this problem by also taking into account just how far the path has travelled. In other words, what makes a path the most fruitful to explore is the estimated total of the journey. A path that goes down a dead end will have an overall larger journey than a path that took a deviation before the dead end path.

With the A* algorithm, the final output is a path with the least number of moves between A and B, calculated while exploring as little of the search tree as necessary. The algorithm, like any algorithm, has its limitations, but it ought to be used where the start and end points are known and a path between them needs to be computed.

Monday, 3 November 2014

Phase 3b: Flip Squares

With Snake, I could contain the notion of a Level within the screen itself, starting at (0,0) for both the top-left coordinate and the offset for rendering the game objects. Going forward, this was not sustainable. Furthermore, I had no way of capturing or processing mouse input on the screen – limiting the engine, in effect, to whatever could be done solely with the keyboard. Finally, there was no way of putting text onto the screen.

Since Snake wasn't really suited to these tasks, I decided to make a new prototype. I remembered an old puzzle from Black & White whereby you had to make a set of randomised trees all the same type. The rule was fairly simple – you had a 3x3 set of trees, and if you pulled on a tree, it changed that tree and every neighbour of it. So there were patterns of 3, 4, or 5 trees changing depending on the location (corner, edge, or centre). Simple enough that the game code itself wouldn't take up much of the development, yet useful for getting all three missing aspects of the system coded up.
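
Assuming each square only has two states (as the name Flip Squares suggests), the core rule is just a toggle of a cell and its orthogonal neighbours. A sketch of that rule, on a hypothetical 3x3 bool array rather than the game's own classes:

const int SIZE = 3;
bool board[SIZE][SIZE];

//Flip the chosen square and each orthogonal neighbour that exists
void flip(int p_x, int p_y)
{
 board[p_y][p_x] = !board[p_y][p_x];
 if (p_x > 0) board[p_y][p_x - 1] = !board[p_y][p_x - 1];
 if (p_x < SIZE - 1) board[p_y][p_x + 1] = !board[p_y][p_x + 1];
 if (p_y > 0) board[p_y - 1][p_x] = !board[p_y - 1][p_x];
 if (p_y < SIZE - 1) board[p_y + 1][p_x] = !board[p_y + 1][p_x];
}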

The Camera was a crucial class to get right, and if I'm being honest, working out the logic for it is something I'd been trying to get straight in my head for months. Indeed, it was the bane of phase 1 and phase 2. With Level finally sorted out, it was more a matter of getting Camera right. The Level introduced the idea of WorldCoordinate – a simple structure that holds two doubles – so Camera could have a position relative to the Level by holding a world coordinate and the tile dimensions.

Camera needed to do a number of other things. For one, Camera had to be accessible by every AnimationObject so that the destination rect could be relative to the Camera rather than the level. This way textures could be updated out of sight of the game code. The other major thing was to keep track of the mouse – where it is on screen – and to be able to translate that back into a WorldCoordinate for use in-game.
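
The translation itself boils down to a pair of conversions between world units and screen pixels. A rough sketch of the idea, not the actual Camera class – everything here beyond WorldCoordinate is a made-up name:

typedef struct
{
 double x;
 double y;
}WorldCoordinate;

//Hypothetical camera: a position in world units plus the tile size in pixels
struct CameraSketch
{
 WorldCoordinate position; //top-left of the view, in world units
 int tileWidth;
 int tileHeight;

 //World units to screen pixels, for placing a destination rect
 void worldToScreen(WorldCoordinate p_world, int& p_screenX, int& p_screenY)
 {
  p_screenX = (int) ((p_world.x - position.x) * tileWidth);
  p_screenY = (int) ((p_world.y - position.y) * tileHeight);
 }

 //Screen pixels (e.g. the mouse) back into world units
 WorldCoordinate screenToWorld(int p_screenX, int p_screenY)
 {
  WorldCoordinate world;
  world.x = position.x + (double) p_screenX / tileWidth;
  world.y = position.y + (double) p_screenY / tileHeight;
  return world;
 }
};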

When designing my Input class in the abstract, I thought all I would need to do was keep track of the relative movement of the mouse. This proved to have limited utility when running in windowed mode – the absolute position of the cursor could drift out of the window altogether while certain objects in the game remained unreachable. It’s this kind of phenomenon that has helped me understand how important prototypes are to getting the engine running right.
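
For what it’s worth, reading the absolute cursor position is a single call in SDL2, which can then be run back through the Camera:

//Somewhere in the input polling code (SDL2)
int mouseX = 0;
int mouseY = 0;
SDL_GetMouseState(&mouseX, &mouseY); //absolute cursor position within the window
//mouseX and mouseY can then be handed to the Camera to recover a WorldCoordinate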

In order to get the prototype up and running, I avoided working out how to display a cursor, and instead used an overlay object for the squares themselves. It worked well enough, and it forced me to revisit the equations in the Camera class for getting the translation of WorldCoordinate right. By this stage, I had a working prototype, as seen here.

The last thing I wanted to do was get text rendering on the screen. For the prototype, I wanted to display something simple – the number of moves taken in trying to solve the puzzle. With TextureObject (my wrapper of SDL_Texture), the textures were meant to live for the lifetime of the application, so they could be managed by TextureManager. With text, creation and destruction needed to happen around its actual use. Obviously I didn’t want this to be manual, so when I created TextObject (the text equivalent of AnimationObject) I handled the deletion of text objects in code.
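
Roughly what that management covers, assuming SDL2 with SDL_ttf, with error checks left out and the font and renderer assumed to already exist:

#include <SDL.h>
#include <SDL_ttf.h>

//Build a texture from a string; the caller must SDL_DestroyTexture() it again
//whenever the text changes or is no longer needed
SDL_Texture* createTextTexture(SDL_Renderer* p_renderer, TTF_Font* p_font, const char* p_text)
{
 SDL_Color colour = { 255, 255, 255, 255 };
 SDL_Surface* surface = TTF_RenderText_Solid(p_font, p_text, colour);
 SDL_Texture* texture = SDL_CreateTextureFromSurface(p_renderer, surface);
 SDL_FreeSurface(surface); //the surface is only needed to build the texture
 return texture;
}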

It turned out getting text on screen was mostly straightforward. The only problem was the difference between relative and absolute positioning of text. Obviously I would want the option of having both, but I haven’t come up with a good solution yet. For now I’ve created two objects: TextObject and TextGameObject with game objects being camera-relative while TextObject has an absolute position. When I get to working on the HUD, I might encounter this problem with AnimationObject, so there’s a refactor issue.

In the absence of another prototype idea, I refactored the game itself with an algorithm to brute-force a solution. Taking a purely random approach to generating puzzles meant that occasionally really easy ones would be thrown up (I did code it to stop 0-step puzzles). So I wrote an algorithm that would brute-force a solution – to work out exactly which moves are needed to solve the puzzle from any given position. This didn't advance the engine in any way, but it was a good coding exercise nonetheless.
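
One convenient property of this kind of flip puzzle (again assuming binary squares) is that pressing a square twice cancels out and the order of presses doesn’t matter, so a brute force only has to try the 2^9 = 512 combinations of presses. A sketch along those lines – not the game’s actual code:

//Flip a square and its orthogonal neighbours on the given board
void flipOn(bool p_board[3][3], int p_x, int p_y)
{
 p_board[p_y][p_x] = !p_board[p_y][p_x];
 if (p_x > 0) p_board[p_y][p_x - 1] = !p_board[p_y][p_x - 1];
 if (p_x < 2) p_board[p_y][p_x + 1] = !p_board[p_y][p_x + 1];
 if (p_y > 0) p_board[p_y - 1][p_x] = !p_board[p_y - 1][p_x];
 if (p_y < 2) p_board[p_y + 1][p_x] = !p_board[p_y + 1][p_x];
}

//Try every combination of presses and report the first one that leaves the
//whole board in a single state; returns false if the position is unsolvable
bool solve(bool p_board[3][3], bool p_presses[3][3])
{
 for (int mask = 0; mask < 512; mask++)
 {
  bool copy[3][3];
  for (int y = 0; y < 3; y++)
   for (int x = 0; x < 3; x++)
    copy[y][x] = p_board[y][x];
  //Apply the presses encoded in this mask (one bit per square)
  for (int i = 0; i < 9; i++)
   if (mask & (1 << i))
    flipOn(copy, i % 3, i / 3);
  //Check whether every square now matches the top-left square
  bool solved = true;
  for (int y = 0; y < 3; y++)
   for (int x = 0; x < 3; x++)
    if (copy[y][x] != copy[0][0])
     solved = false;
  if (solved)
  {
   for (int i = 0; i < 9; i++)
    p_presses[i / 3][i % 3] = (mask & (1 << i)) != 0;
   return true;
  }
 }
 return false;
}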

Saturday, 1 November 2014

Internet Debating: The Complete Idiot Hypothesis

If you argue a point and someone doesn't get it, then it must be asserted that the reason they didn't get it is they are a complete idiot.