Monday, 15 December 2014

Slipping Back into Old Habits

I have a lot more time to think about coding issues than I have time to code. In some ways this is quite handy, as it allows me to plan through certain problems and foresee potential pitfalls of a particular solution. There are two key problems with this, however. First, even if I come up with a solution, it's still only half the job done. Second, it gets me bogged down in coding problems that are often beyond my coding abilities. This all became apparent in the last month, when I got stuck on a particular design problem. A month later, I'm still no closer to a solution, and worse still, I'm not making any progress in any other area. Instead of just coding a workaround for the time being, I became fixated on solving the problem.

(I needed compile-time polymorphism where I previously had run-time polymorphism. I read through books and websites on templating and made rudimentary attempts at adjusting my code, but I was always trying to code up a perfect solution in my head, weighing up potential considerations and pitfalls. At the end of it, all I have is the lament that I just didn't code around it.)
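For the record, the distinction I was wrestling with can be sketched in a few lines. This is only an illustration (the renderer types here are invented for the example, not my actual classes): run-time polymorphism dispatches through a virtual call, while a template binds to any type with the right interface when the code is compiled.

```cpp
#include <cassert>
#include <string>

// Run-time polymorphism: dispatch happens through a virtual call.
struct Renderer {
    virtual ~Renderer() {}
    virtual std::string name() const = 0;
};
struct SdlRenderer : Renderer {
    std::string name() const { return "SDL"; }
};

std::string describe(const Renderer& r) { return r.name(); }

// Compile-time polymorphism: the same relationship as a template.
// Any type with a name() member works; binding happens at compile time,
// so no virtual call and no common base class are needed.
struct StubRenderer {
    std::string name() const { return "stub"; }
};

template <typename R>
std::string describeT(const R& r) { return r.name(); }
```

The template version is what I was trying (and failing) to migrate towards; the catch is that every caller of `describeT` has to become a template too, which is exactly the kind of ripple effect that bogged me down.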

What I find so disappointing is that I know all this. There's nothing particularly revelatory in the observation that I dwell on problems instead of working around them, but it is disappointing that I still haven't moved on from the problems I had when I was at university learning to code. I get too hung up on finding the right solution, and I still haven't learnt how to just code around a problem.

This game coding happens in the limited spare time I have. It's a hobby, and comes after other aspects of work and life. So maintaining motivation is always a struggle, especially in the face of seeming blockers. And from what I've come to know about the role pressure plays in inhibiting the creative process, the limited time often translates into fixating on particular solutions instead of just getting on with something else.

Given that I'm still adjusting back into C++, it's easy for me to think of failures as a lack of knowledge of the language itself. Sometimes that's apparent, and I know as I'm reading through various C++ material that I'm skimming for a particular solution context rather than internalising the language feature as a potential design tool in the future. But I think it would be a cop-out to blame my current understanding of C++ when the language is so flexible in the kinds of solutions that are permitted.

It's always, I think, going to be a question of motivation. That is, am I able to code something that's good enough for the task at hand? One thing that's problematic is that my initial scope is far too ambitious – that is, I'm trying to create a framework that could be used to make any number of 2D games. So in that respect I'm stuck trying to figure out generic solutions.

And therein lies the problem. How do I know I'm going to get to use any of the design work I've put in so far? For example, all the time I spent writing wrapper objects for SDL code has, as far as I can tell, had very little benefit so far. Indeed, the main reason I did that was to have a sort of platform agnosticism – the theory was that I could refactor the core features of the engine onto a different library without having to permeate those decisions down through game code. Yet if I'm never going to do that, then as happy as I am with some of the wrapper code I wrote, a lot of that effort was wasted.

(Funnily enough, as I was trying to learn SDL, one thing I got frustrated with was that the tutorials were so direct in their use of the libraries. I took it as an act of clarity on the part of the tutorial writers that they didn't muddle things up by enclosing their code in some simple patterns. But as I was coding up my input system, I wondered just what benefit I was getting out of mapping SDL structures onto my own.)

This too, I think, highlights the perils of solo development. As I'm working on this project, I'm accountable to one person: myself. What I design, what I build, what my desired end-product is – all that is my decision. And since I'm starting from scratch and working completely unguided, it's inevitable that I'm going to not only make bad decisions, but that those bad decisions will lead to a lot of wasted time. You live, you (hopefully) learn.

What I think I've learnt is something about the limitations of myself. If I have a certain disposition to coding a certain way, then it would be madness to think I can just overcome it by saying "don't". What I hoped with prototype-driven design was that I could better focus on what needs to be done, rather than what I would think might need to be done. That, now, seems like only addressing half of the equation, and what's missing are techniques for build-and-refactor so I don’t get bogged down where an elegant solution is not immediately apparent.

I know at times I need to get out of my head – I have notebooks that are filling up with ideas faster than I'm adding lines of code to the codebase. What remains to be seen is whether I can work with my own personal practices and habits. Like losing weight, if you try to do it on willpower alone, you're going to fail miserably.

Sunday, 23 November 2014

The How of A*

(For the Why of A*, go here)

Here is a basic implementation of A*. Note that this isn’t necessarily an optimal solution, but it is a working one.

For my solution, I have two classes: Node and PathfindingAStar. The two classes are tightly coupled, with PathfindingAStar depending on Node having certain properties in order to work. Node itself is the bridge between the pathfinding subsystem and the wider system.

Here is Node.h:

#ifndef NODE_H
#define NODE_H

#include <map>
#include <vector>

//Direction is assumed here to be a simple enum of the four cardinal directions
enum Direction { NORTH, SOUTH, EAST, WEST };

typedef struct {
 unsigned int x;
 unsigned int y;
} NodePos;

class Node {
public:
 Node(unsigned int p_x, unsigned int p_y);
 NodePos getPosition();
 bool isSameNode(Node* p_node);
 void setNeighbourAt(Direction p_dir, Node* p_neighbour, double p_cost = 1.0);
 Node* getNeighbourAt(Direction p_dir);
 double getMovementCost(Direction p_dir);
 std::vector<Direction> getAllAvailableDirections();
 bool isOccupied();
 void setOccupied(bool p_occupied);
 void setParent(Node* p_parent);
 Node* getParent();
 double getG();
 void setG(double p_distanceTravelled);
 double getH();
 void setH(double p_heuristic);
 double getF(); //f = g + h

 unsigned int getX();
 unsigned int getY();

private:
 std::map<Direction, double> m_movementCost;
 NodePos m_pos;
 std::map<Direction, Node*> m_neighbours;
 Node *m_parent;
 bool m_occupied;
 double m_h;
 double m_g;
};

#endif
The Node caters for two purposes. Firstly, it holds information about the world itself that is vital for pathfinding to operate. The node's position in the world, its neighbours, the movement costs, and whether the node is occupied are all created and maintained by the game code. Neighbours cannot be populated upon creation, because the neighbours won't necessarily have been created themselves. Whether the node is currently occupied will need to be maintained by the game code.

Secondly, the node holds information relevant to the current pathfinding cycle. m_g and m_h are used by PathfindingAStar for the node's distance travelled from the start node and estimated distance to the goal node. Similarly, m_parent is used to keep track of where the navigation came from.

As far as methods in Node.cpp go, there are two methods that are worth exploring, both to do with the available directions.

void Node::setNeighbourAt(Direction p_dir, Node* p_neighbour, double p_cost) {
 m_neighbours[p_dir] = p_neighbour;
 m_movementCost[p_dir] = p_cost;
}

std::vector<Direction> Node::getAllAvailableDirections() {
 std::vector<Direction> directions;
 std::map<Direction, double>::iterator itr;
 for (itr = m_movementCost.begin(); itr != m_movementCost.end(); itr++) {
  directions.push_back(itr->first);
 }
 return directions;
}
The pathfinding algorithm will need to know what directions are available to move in, and how much it will “cost” to move in each direction. An alternative would be to simply have a cost associated with a node, but this way gives more flexibility to the game code. So there are two maps: one for the cost of moving in a direction, and one for the neighbour Node in that direction.

Since directions are set when neighbours are set, there's never any need to worry about the two maps going out of sync. So either map could have been iterated over to build a list for the pathfinding to use. The returned list could just as easily have been a private member of Node, generated as part of setting the neighbours, and doing so could be a point of efficiency in the future.
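That caching refactor would look something like this. This is only a sketch with a cut-down node class (the names here are illustrative, not the actual Node): the direction list is rebuilt whenever a neighbour is set, so the getter becomes a cheap accessor with no per-call allocation.

```cpp
#include <map>
#include <vector>

enum Direction { NORTH, SOUTH, EAST, WEST };

// Hypothetical cut-down node: the direction list is regenerated on the
// rare neighbour changes rather than on every pathfinding query.
class CachedNode {
public:
    void setNeighbourAt(Direction dir, double cost) {
        m_movementCost[dir] = cost;
        // Rebuild the cached list of available directions.
        m_directions.clear();
        for (std::map<Direction, double>::iterator it = m_movementCost.begin();
             it != m_movementCost.end(); ++it) {
            m_directions.push_back(it->first);
        }
    }
    const std::vector<Direction>& getAllAvailableDirections() const {
        return m_directions; // no vector built per call
    }
private:
    std::map<Direction, double> m_movementCost;
    std::vector<Direction> m_directions;
};
```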
Here is the source code for PathfindingAStar.h:

#include <list>

#include "Node.h"

class PathfindingAStar {
public:
 static PathfindingAStar* getInstance();
 std::list<Node*> findPath(Node* p_startNode, Node* p_goalNode);

private:
 std::list<Node*> finalisePath(Node* p_currentNode);
 static PathfindingAStar *singleton;
};
PathfindingAStar has one public method – findPath. The rest is just housekeeping. There only ever needs to be one instance of the class, so the Singleton pattern is implemented. There's nothing particularly special about the class from an OO point of view, as all of the work is done in that single search method.
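For completeness, the Singleton housekeeping looks something like the following. This is a minimal sketch (the class body here is an assumption, reduced to just the pattern itself): the constructor is private, and getInstance lazily creates the one shared instance.

```cpp
#include <cstddef>

// Minimal lazy-initialised Singleton, reduced to the pattern alone.
class PathfindingAStarSketch {
public:
    static PathfindingAStarSketch* getInstance() {
        if (singleton == NULL) {
            singleton = new PathfindingAStarSketch();
        }
        return singleton;
    }
private:
    PathfindingAStarSketch() {} // private: only getInstance can construct
    static PathfindingAStarSketch* singleton;
};

// The static member lives in the .cpp file.
PathfindingAStarSketch* PathfindingAStarSketch::singleton = NULL;
```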

std::list<Node*> PathfindingAStar::findPath(Node* p_startNode, Node* p_goalNode) {
 std::vector<Node*> m_openList; //Nodes that are active
 std::vector<Node*> m_closedList; //Nodes that have been used

 //Reset the start node for the current cycle:
 //no parent, no distance travelled
 p_startNode->setParent(NULL);
 p_startNode->setG(0.0);
 //h is the Manhattan distance to the goal
 double h = (double) abs((int) (p_goalNode->getX() - p_startNode->getX()));
 h += (double) abs((int) (p_goalNode->getY() - p_startNode->getY()));
 p_startNode->setH(h);
 m_openList.push_back(p_startNode);

 while (!m_openList.empty()) { //While there are still nodes to explore
  //Get the node with the smallest f value
  Node* currentNode = m_openList.front();
  //Remove the node from the open list and mark it as explored
  m_openList.erase(m_openList.begin());
  m_closedList.push_back(currentNode);

  if (currentNode->isSameNode(p_goalNode)) {
   //Goal reached - build the path and return out of the function
   std::list<Node*> goalList = finalisePath(currentNode);
   return goalList;
  }

  std::vector<Direction> neighbours = currentNode->getAllAvailableDirections();

  bool isFound;

  for (unsigned int i = 0; i < neighbours.size(); i++) {
   isFound = false;
   Node *n = currentNode->getNeighbourAt(neighbours[i]);
   if (n->isOccupied()) {
    continue; //Can't move into an occupied node
   }
   //If the node is already on the open list, check whether this path
   //to it is shorter; if so, update it and re-insert it in f order
   for (unsigned int j = 0; j < m_openList.size(); j++) {
    Node *n2 = m_openList[j];
    if (n2 == n) {
     double g = currentNode->getG() + currentNode->getMovementCost(neighbours[i]);
     if (g < n2->getG()) {
      n->setG(g);
      n->setParent(currentNode);
      m_openList.erase(m_openList.begin() + j);
      std::vector<Node*>::iterator itr2 = m_openList.begin();
      for (; itr2 != m_openList.end(); itr2++) {
       if (n->getF() <= (*itr2)->getF()) {
        break;
       }
      }
      m_openList.insert(itr2, n);
     }
     isFound = true;
     break;
    }
   }
   if (isFound)
    continue;
   //Nodes on the closed list need no further exploration
   for (unsigned int j = 0; j < m_closedList.size(); j++) {
    Node *n2 = m_closedList[j];
    if (n2 == n) {
     isFound = true;
     break;
    }
   }
   if (isFound)
    continue;

   //work out g
   n->setG(currentNode->getG() + currentNode->getMovementCost(neighbours[i]));
   //work out h
   h = (double) abs((int) (p_goalNode->getX() - n->getX()));
   h += (double) abs((int) (p_goalNode->getY() - n->getY()));
   n->setH(h);
   n->setParent(currentNode);

   //add to the open list, kept in order of f
   std::vector<Node*>::iterator itr2 = m_openList.begin();
   for (; itr2 != m_openList.end(); itr2++) {
    if (n->getF() <= (*itr2)->getF()) {
     break;
    }
   }
   m_openList.insert(itr2, n);
  }
 }
 //Frontier exhausted - no path exists, so return an empty list
 std::list<Node*> dummyList;
 return dummyList;
}
The first few lines of code set up the search. Since the start node has no parent, its parent is set to NULL. The start node is then pushed onto the frontier list. The reason there are two lists is so that once a node is fully explored, it never needs to be explored again: it shifts from the frontier list to the closed list.

The main algorithm is a while loop, which runs until either the goal Node is found or the frontier list is exhausted. The Node at the front of the frontier list (which is kept sorted by f) is checked for whether it is in fact the goal Node, in which case the algorithm constructs the list of Nodes and returns it. Otherwise, the Node is removed from the frontier list, added to the closed list, and each available direction on it is explored in turn.

For each neighbour Node, it needs to be checked whether it's already a) on the frontier, or b) on the closed list. In either of these cases the Node needs no further exploration, except in one special case of a Node on the frontier: if the distance travelled to the Node in this iteration is less than the distance previously recorded for it, then the Node and frontier list need to be updated – i.e. we have found a shorter path to the same position. If the neighbour Node is not yet explored, its g and h are calculated (in this case, h is calculated using the Manhattan heuristic), its parent is set as the current Node, and it is put onto the frontier list sorted by f. Finally, if the frontier list is exhausted, this means there was no path from the start Node to the goal Node, so an empty list is created and returned. Thus the empty() function on the returned list acts as the indicator of whether a path was successfully generated.

The code for building the path list is as follows:

std::list<Node*> PathfindingAStar::finalisePath(Node* p_currentNode) {
 std::list<Node*> goalList;
 while (p_currentNode != NULL) {
  goalList.push_front(p_currentNode);
  p_currentNode = p_currentNode->getParent();
 }
 //don't need the first node - it's the start position
 goalList.pop_front();
 return goalList;
}
The code is fairly straightforward. Since each Node knows its parent, it's simply a matter of traversing back to the first Node, whose parent was set as NULL.

And there it is – a basic implementation of the A* algorithm. It's by no means the most efficient possible version of the algorithm, nor does it do all of the C++ things one really ought to do when using the language. The important thing is that it works, and it's reasonably clear how.

Saturday, 22 November 2014

The Why of A*

For pathfinding, breadth-first search (BFS) is almost always preferable to depth-first search (DFS). The general problem with breadth-first search is that it is resource-intensive. Depth-first follows a single path that grows as it is explored; breadth-first explores multiple paths at once, with the number of paths growing exponentially.

Both DFS and BFS are brute-force attempts at finding a path, but they express themselves in different ways. A depth-first search will in all likelihood find a path, but not one that goes directly from A to B – it simply follows whatever rules govern how to choose the next position to explore. A breadth-first search will find a more direct path from A to B, but the result takes much more time to compute – something that's unforgivable in a game which requires immediate feedback.

There are ways of directing a BFS so that it doesn’t take as much computational power. A* is the best of these, but it’s worth at least glancing over why this is the case. A* works on the following equation:
f = g + h
Where g is the total distance travelled, h is the estimated distance left to travel, and the smallest f is chosen next. An undirected BFS would be the equivalent of ignoring h altogether, so it’s worth exploring the role of h.
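A small worked example of the equation (the coordinates here are invented for illustration, using the Manhattan distance as h): a node that is close to the goal but was reached via a long detour can still score worse than a node that is further away but was reached cheaply.

```cpp
#include <cstdlib>

// h: estimated distance left to travel (Manhattan distance on a grid).
int manhattan(int x1, int y1, int x2, int y2) {
    return std::abs(x2 - x1) + std::abs(y2 - y1);
}

// f = g + h: the estimated total journey through a node.
int f(int g, int h) { return g + h; }
```

With a goal at (5, 5): node A at (4, 5) reached with g = 9 gives f = 9 + 1 = 10, while node B at (2, 3) reached with g = 3 gives f = 3 + 5 = 8, so B is explored first despite being further from the goal.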

One strategy with a BFS is to just take into account the shortest distance to the goal. So whatever node is “closest” (and there are various strategies for working this out) should be the next node to search. This strategy works well if there are no obstacles directly between the start node and the goal node, as the closest node will always bring the path closer to the goal. The strategy fails, however, if there are obstacles, because the path will inevitably explore those dead ends like a DFS would.

A* gets around this problem by also taking into account just how far the path has travelled. In other words, what makes a path the most fruitful to explore is the estimated total of the journey. A path that goes down a dead end will have an overall larger journey than a path that took a deviation before the dead end path.

With the A* algorithm, the final output is a path with the least number of moves between A and B, calculated while exploring as little of the search tree as necessary. Like any algorithm it has its limitations, but it ought to be used where the start and end points are known and a path between them needs to be computed.

Monday, 3 November 2014

Phase 3b: Flip Squares

With Snake, I could contain the notion of a Level within the screen itself, starting at (0,0) for both the top left coordinate and the offset for rendering the game objects. Going forward, this was not sustainable. Furthermore, I had no way of capturing or processing mouse input on the screen – limiting the engine in effect to whatever could be done solely with keyboard. Finally, there was no way of putting text onto the screen.

Since Snake wasn't really suited to these tasks, I decided to make a new prototype. I remembered an old puzzle from Black & White whereby you had to make a set of randomised trees all the same type. The rules were fairly simple: you had a 3x3 set of trees, and if you pulled on a tree, it changed that tree and every neighbour of it. So depending on the location, there were patterns of 3, 4, or 5 trees changing. It was simple enough that the game code itself wouldn't take up much of the development, yet useful for getting all three missing aspects of the system coded up.
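The toggle rule itself is only a few lines. This is a sketch of the game logic as I've described it (the function names are mine for the example), with the neighbours taken to be the orthogonal ones – which is what produces the patterns of 3 (corner), 4 (edge), and 5 (centre):

```cpp
const int SIZE = 3;

// Flip (x, y) and its orthogonal neighbours on a 3x3 board.
void flipAt(bool board[SIZE][SIZE], int x, int y) {
    const int dx[] = {0, 1, -1, 0, 0};
    const int dy[] = {0, 0, 0, 1, -1};
    for (int i = 0; i < 5; i++) {
        int nx = x + dx[i];
        int ny = y + dy[i];
        if (nx >= 0 && nx < SIZE && ny >= 0 && ny < SIZE) {
            board[ny][nx] = !board[ny][nx];
        }
    }
}

// How many squares are currently flipped on.
int countFlipped(bool board[SIZE][SIZE]) {
    int n = 0;
    for (int y = 0; y < SIZE; y++)
        for (int x = 0; x < SIZE; x++)
            if (board[y][x]) n++;
    return n;
}
```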

The Camera was a crucial class to get right, and if I'm being honest, working out its logic is something I'd been trying to get right in my head for months. Indeed, it was the bane of phase 1 and phase 2. With Level finally sorted out, it was more a matter of getting Camera right. Level introduced the idea of WorldCoordinate – a simple structure that holds two doubles. So Camera could have a position relative to the Level by holding a world coordinate and the tile dimensions.

Camera needed to do a number of other things. For one, Camera had to be accessible by every AnimationObject so that the destination rect could be relative to the Camera rather than the level. This way textures could be updated out of sight of the game code. The other major thing was to keep track of the mouse, to keep track of where it is on screen and to be able to translate back into a WorldCoordinate for use in-game.
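The two translations can be sketched as follows. To be clear, this is an assumption about the maths rather than my actual Camera class (the struct names echo the ones above, but the bodies are illustrative): world coordinates are in tile units, and the camera's own world position is subtracted out before scaling by the tile dimensions.

```cpp
struct WorldCoordinate { double x; double y; };
struct ScreenPos { int x; int y; };

// Illustrative camera: holds a world position and tile dimensions,
// and converts between world units (tiles) and screen pixels.
struct CameraSketch {
    WorldCoordinate pos;   // top-left of the view, in world units
    int tileW, tileH;      // tile dimensions in pixels

    ScreenPos worldToScreen(const WorldCoordinate& w) const {
        ScreenPos s;
        s.x = (int)((w.x - pos.x) * tileW);
        s.y = (int)((w.y - pos.y) * tileH);
        return s;
    }
    // The reverse mapping, e.g. for turning a mouse position
    // back into a WorldCoordinate.
    WorldCoordinate screenToWorld(int sx, int sy) const {
        WorldCoordinate w;
        w.x = pos.x + (double)sx / tileW;
        w.y = pos.y + (double)sy / tileH;
        return w;
    }
};
```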

When designing my Input class in the abstract, I thought all I would need to do was keep track of the relative movement of the mouse. This proved to have limited utility when running in windowed mode – the absolute position of the cursor could move out of the window altogether while still not being able to reach certain objects in game. It's this kind of phenomenon that has helped me understand how important prototypes are to getting the engine right.

In order to get the prototype up and running, I avoided working out how to display a cursor, and instead used an overlay object for the squares themselves. It worked well enough, and forced me to revisit the equations in the Camera class for getting the translation of WorldCoordinate right. By this stage, I had a working prototype, as seen here.

The last thing I wanted to do was get text rendering on the screen. For the prototype, I wanted to display something simple – the number of moves taken in trying to solve the puzzle. With TextureObject (my wrapper of SDL_Texture), the textures were meant to live for the lifetime of the application, so they could be managed by TextureManager. With text, creation and destruction needed to happen with regard to its use. Obviously I didn't want this to be manual, so when I created TextObject (the text equivalent of AnimationObject) I handled the deletion of text objects in code.

It turned out getting text on screen was mostly straightforward. The only problem was the difference between relative and absolute positioning of text. Obviously I want the option of having both, but I haven't come up with a good solution yet. For now I've created two objects, TextObject and TextGameObject, with the game object being camera-relative while TextObject has an absolute position. When I get to working on the HUD, I might encounter the same problem with AnimationObject, so there's a refactor waiting there.

In the absence of another prototype idea, I refactored the game itself with an algorithm to brute-force a solution. Taking a purely random approach to puzzle generation meant that occasionally really easy puzzles would be thrown up (I coded a check to stop 0-step puzzles). So I wrote an algorithm to brute-force a solution – to work out just what moves are needed to solve the puzzle from any given position. This didn't advance the engine in any way, but was a good coding exercise nonetheless.
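The brute-force idea rests on a nice property of the puzzle: pressing a square twice cancels out, and the order of presses doesn't matter, so any solution is just a subset of the nine squares. Trying all 2^9 = 512 subsets is trivial. This is a sketch of that approach rather than my actual solver (the function names are mine for the example), using the toggle rule described above:

```cpp
const int N = 3;

// Press (x, y): flips it and its orthogonal neighbours.
void press(bool board[N][N], int x, int y) {
    const int dx[] = {0, 1, -1, 0, 0};
    const int dy[] = {0, 0, 0, 1, -1};
    for (int i = 0; i < 5; i++) {
        int nx = x + dx[i], ny = y + dy[i];
        if (nx >= 0 && nx < N && ny >= 0 && ny < N)
            board[ny][nx] = !board[ny][nx];
    }
}

// The win condition: every square the same type.
bool uniform(const bool board[N][N]) {
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            if (board[y][x] != board[0][0]) return false;
    return true;
}

// Minimum number of presses to make the board uniform, or -1 if none works.
int solve(const bool start[N][N]) {
    int best = -1;
    for (int mask = 0; mask < (1 << (N * N)); mask++) {
        bool board[N][N];
        for (int y = 0; y < N; y++)
            for (int x = 0; x < N; x++)
                board[y][x] = start[y][x];
        int moves = 0;
        for (int bit = 0; bit < N * N; bit++) {
            if (mask & (1 << bit)) {
                press(board, bit % N, bit / N);
                moves++;
            }
        }
        if (uniform(board) && (best == -1 || moves < best))
            best = moves;
    }
    return best;
}
```

A generator can then reject boards where `solve` returns 0 (or anything below a chosen difficulty).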

Saturday, 1 November 2014

Internet Debating: The Complete Idiot Hypothesis

If you argue a point and someone doesn't get it, then it must be asserted that the reason they didn't get it is they are a complete idiot.

Friday, 31 October 2014

Phase 3a: Snake Enhanced

The last couple of weeks have seen a continuation of the approach I took developing Snake. Since there is still more to be done getting the engine to a functional state, enhancing the existing game seemed like the quickest way of getting those enhancements in.

The two major enhancements I had for the system were to get animation working, and to get Level established as its own concept. For the first cut of Snake, these were two areas with a lot of manual code.

The need for an AnimationObject arose out of the unsuitability of RenderObject and TextureObject as complete game-object building blocks. Each game object was loading its correct texture, then using that to populate the RenderObject associated with rendering. That was fine except for the snake itself, whose segments changed depending on whether they were a head, body part, or tail. So the snake had to keep track of both the texture and the render object so it could manually shift the frame corresponding to the right body part.

AnimationObject was my solution. By taking the texture, there was no need for the game object to have continual reference to the texture object (or even retrieve it) in order to change frames. Any animation in the original version was simply the illusion of movement. With AnimationObject, I’d have actual animation at the speed I desired simply by creating the object. It also meant that if the object moved, I could update the game coordinates without having to write the code to manually update the destination rect on the RenderObject.

Instead of trying to put absolutely every scenario into the single AnimationObject, I instead made use of inheritance and polymorphism to get specific functions. I made a RunOnceAnimationObject and a LoopAnimationObject – both of which override the render() method to allow for timer-based animation. The base AnimationObject would be for pointing to a single frame of a texture, while RunOnce would go from start to finish and hold on the final frame, and Loop would loop indefinitely.
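The shape of that hierarchy can be sketched as follows. This is an illustration of the design rather than the real classes (frame selection here is driven by an elapsed-tick count instead of a real timer, and the base-class name is suffixed to flag that): the base points at a single frame, RunOnce plays through and holds on the last frame, and Loop wraps around.

```cpp
// Base: a single static frame of a texture.
class AnimationObjectSketch {
public:
    AnimationObjectSketch(int frameCount) : m_frameCount(frameCount) {}
    virtual ~AnimationObjectSketch() {}
    // Which frame of the texture to render after `elapsed` ticks.
    virtual int frameAt(int elapsed) const { return 0; }
protected:
    int m_frameCount;
};

// Plays start to finish, then holds on the final frame.
class RunOnceAnimationObject : public AnimationObjectSketch {
public:
    RunOnceAnimationObject(int frameCount) : AnimationObjectSketch(frameCount) {}
    int frameAt(int elapsed) const {
        return elapsed < m_frameCount ? elapsed : m_frameCount - 1;
    }
};

// Loops indefinitely.
class LoopAnimationObject : public AnimationObjectSketch {
public:
    LoopAnimationObject(int frameCount) : AnimationObjectSketch(frameCount) {}
    int frameAt(int elapsed) const {
        return elapsed % m_frameCount;
    }
};
```

The render() override in the real classes would use this frame index to pick the source rect on the texture.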

The satisfying thing about the AnimationObject classes was that it took no more than an evening to write the code and refactor the game code. The whole process was seamless, and worked with very little effort. Building the level code was more effort, and one of those points where I had to remind myself to design for the game rather than design for some abstract ideal of what I wanted to achieve. The problem is this: Level ideally loads from an XML file (in my case, I wanted those XML files generated by Tiled!), and what is in the XML file on any given tile conforms to a game object.

To solve this problem, I implemented the abstract factory pattern. Level takes a TileFactory (an abstract type) that has a generateTile(int id) method which returns a Tile pointer (also an abstract). Level populates a vector of Tile pointers, where tiles can be retrieved with an x and y coordinate. So what kinds of tiles are produced depends on the concrete TileFactory, and all the SnakeGameObjects that could be tiles had to do was inherit from Tile. This approach does have a limitation: using Level like this means any Tile accessed will only have the methods that exist on the Tile contract. For now, I put the method bool isCollision() onto Tile to allow for collision detection within the game object. In the future, enhancing Tile to cater for everything Level could do might prove unwieldy. For now, the class does enough to work.
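A minimal sketch of that arrangement, for reference. The Tile and TileFactory contracts follow the description above; the concrete types (WallTile, FloorTile, SnakeTileFactory) and the id mapping are hypothetical stand-ins for the game-side classes:

```cpp
// The contracts Level knows about.
class Tile {
public:
    virtual ~Tile() {}
    virtual bool isCollision() = 0;
};

class TileFactory {
public:
    virtual ~TileFactory() {}
    virtual Tile* generateTile(int id) = 0;
};

// Hypothetical game-side concrete types.
class WallTile : public Tile {
public:
    bool isCollision() { return true; }
};
class FloorTile : public Tile {
public:
    bool isCollision() { return false; }
};
class SnakeTileFactory : public TileFactory {
public:
    Tile* generateTile(int id) {
        if (id == 1) return new WallTile(); // id mapping is illustrative
        return new FloorTile();
    }
};
```

Level only ever sees TileFactory and Tile, so swapping in a different game's factory requires no change to Level itself.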

Here’s the enhanced game.

It might not look like much, but there are those enhancements that went on in the backing code – animation was handled automatically, and the world itself was controlled by an XML generated by the Tiled! application. In the spirit of prototype-driven design, I feel I was able to accomplish a number of vital tasks in a short period of time. Of course, there is still more to go, but that requires a different kind of prototype.

Saturday, 25 October 2014

Jonathan Blow on Programming Practices

Since the game programming is a hobby of mine – something I do in my spare time – I tend to have a lot more time to think about what I do than to actually do it. I have filled many pages of notebooks and have made many documents with notes ranging from design material, to specific solutions, and everything in between.

So it’s with that in mind that the following video from the Braid developer Jonathan Blow intrigued me, for it advocates effectively the opposite approach to what I've been taking.

One immediate thing that occurs to me is that I’m no Jonathan Blow. Just being able to sit down and write without inhibitions, I'm fairly certain, would get me a mess that doesn't really do much of anything.

The second thing that occurs to me is that it's taken me a year to get half an engine. So the approach I'm taking now is realistically-speaking unsustainable – at least for the goal I'm trying to work towards.

My hope by this stage of my career was that I'd absorbed enough of the good design patterns that the hard work would be done for me. What I seem to have absorbed, however, is knowledge of the existence of patterns minus the knowledge of their application to low-level functions. Since I've had no professional game development experience, this should come as no surprise – why would I have needed to understand patterns outside of their application in Java EE?

The same goes for my stagnated C++ skills, whereby I'm digging through Scott Meyers' Effective C++ in the hope I can mitigate a lot of the 'gotchas' in areas where Java manages things for you (such as dealing with pointers). My intuitive grasp of class design and function design is rooted in a Java mindset, and C++ is just far enough away from it that my coding style has to be more deliberate. The lack of private functions, especially, is testing my coding sensibilities.

Blow's list of do’s and don'ts seems on the face of it a good one, and especially pertinent if I replace his use of optimisation with "open design" to reflect the kind of design considerations I'm currently working with. I've spent so much time trying to get things just right – of trying to come up with a perfect class order that will enable extensible code. It’s great when it works, but the cost in time and effort has been considerable for that. As Blow pointed out, this kind of approach won’t pay off in the long term.

The other stand-out idea in his talk was the idea of writing specific code over general code. I know I'm especially guilty of this. Some of the time there is good reason to write general code, and to my mind it's especially important to get it right in the core. But it's something to reflect on that there is code which is fit for purpose without being so flexible that it fits every purpose I could desire. My Input class comes to mind, and part of that was, I think, a rationalisation for adding an abstraction atop SDL's abstraction of the interface. I got what I wanted, but it's easy to forget the cost of that.

One final lesson worth highlighting was the idea of writing code in a single block. This goes dead against the coding principles outlined in Robert C. Martin's Clean Code. In one sense, I can understand it in C++ because of how annoying it is to write function headers for private functions. But Blow's suggestion about encapsulation is a point well made.

Friday, 24 October 2014

Prototype-Driven Design

This is designed to be a normative approach based off the three [1, 2, 3] development approaches I wrote about earlier. Namely, how to take the use cases of a simple game to develop and refactor an extensible and versatile underlying code. This is a note to myself for next time.

Bootstrapping the framework
A game is a loop, and the loop has an exit condition. Hitting the exit condition exits the game. To be interesting, while the game is looping, the game has to do stuff. All that should be met with "duh". What it means, though, is that we have to organise an order of things to do in the loop. You need to create a window, load images, display them, capture input, change game state, etc. Organising this basic pattern needs to come first.

The first thing is to get the libraries loading successfully. This may take more time than it sounds, and doing something slightly unusual (for example, choosing to develop on Windows using MinGW) might not have a single point of clear instructions. Don't fret if this takes longer than expected; it's important to get it right.

There are a number of discrete systems that in concert make up a game. Many of these are “core”, and will be used every time in very nearly the same way. As such, each of these systems can be given a basic function (such as render updating the screen each frame), and each function placed as part of a list of functions that are always called. In mine, I called this the Engine façade, where the update() loop took care of everything the engine needed to do with a single call from the game loop.
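The façade idea reduces to something like the following. This is a deliberately hollow sketch (the class body and system names in the comment are illustrative, not my actual Engine): each core system gets one update function, the Engine calls them in a fixed order, and the game loop only ever makes a single call per frame.

```cpp
// Facade: the game loop sees one update() call per frame;
// the ordering of the core systems lives inside the engine.
class EngineSketch {
public:
    EngineSketch() : m_frames(0), m_running(true) {}
    void update() {
        // in a real engine: poll input, step the world, render, play audio...
        m_frames++;
    }
    bool isRunning() const { return m_running; }
    void quit() { m_running = false; } // the loop's exit condition
    int frames() const { return m_frames; }
private:
    int m_frames;
    bool m_running;
};
```

The game loop then collapses to `while (engine.isRunning()) engine.update();`, with game code deciding when to call quit().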

At this stage, it's okay to use scenario-based development. "A screen should appear on start-up", "a button press should quit the game", etc. Indeed, there's little more one can do at this stage. The important point is not to overdo it, however, because you a) cannot account for every scenario, and b) the scenarios you think are right may not be. Overdo it at this stage, and prepare to refactor.

The prototype approach
Take a simple game like Snake. It's a fairly simple game, so it won't take much to work out how to translate what happens on screen into code. It's also a fairly well-defined game, so there's not a lot of effort in setting the win/loss conditions and how they translate to a loop. It also tests out what you need in a game: capturing input, updating the game world accordingly, triggering sounds, playing music, and updating the screen.

The general approach goes like this.
  1. Try to build the game artefact to do what you want it to do.
  2. If the engine can’t support the functionality, add the functionality.
  3. If the engine only partially supports the functionality, enhance what’s there.
  4. If the engine encourages a lot of writing the same code multiple times, refactor.

Such a system also feeds back into itself, with later development overriding earlier development. The important point is not to get hung up on a decision made earlier if a better one comes along. After a short while you should have a working game, and hence a platform to immediately test how well the system works.

Like the bootstrap framework, this might take some time to set up. There’s a lot to do going from nothing at all to even a simple working game (without taking every possible shortcut). But what it allows for is very quick refactoring and enhancement. Once you have a fully functioning product, it should be a matter of hours or even minutes to enhance it for the better.

The development phase is similar to Test-Driven Development, though in this case a piece of game functionality is the unit test. You write game code to do something, but it won't work until you write the engine enhancement that makes it work.

The allure of design
The prototype approach requires a continual awareness of the allure of design. When making a component, it’s easy to come up with potential uses for that component – all of which takes time to design, develop, and debug. Furthermore, enhancing the system in such a way doesn't mean the work will actually achieve anything. The prototype-driven design in this sense is a way of keeping the developer honest and focused, and not wasting time on the mere potentiality of a function.

This is not a call, however, for the abandonment of design. It’s a call to prevent overdesign, to limit areas where splurges in design can lose sight of the bigger picture. A chaotic mess of code is what you end up with even with the best of intentions, so there’s no need to add to it if it can be avoided. What I am talking about here is a different kind of chaotic mess – a bloat of code of which only some applies to the end result.

The allure of design is part of the creativity that enables development in the first place, and to some extent it’s necessary for considering any potential solution. Otherwise we hit the other extreme – a constant rewriting of the same basic code whenever a new requirement arises. It’s a balancing act, yes, and the prototype-driven design approach should work to limit the potential for overdesign.

I suppose the difference I'm trying to articulate is the difference between extensible and bulletproof.

Scope-creep is a good indication that you are moving away from prototype-driven design and being drawn back into the allure of design. If a problem is getting unwieldy, it’s worth reminding yourself what the component was meant to achieve in the first place. If it wasn't something with a game-related end product, that should be a strong indication of overdesign and grounds to stop.

A guideline
The approach sketched above isn't much of a methodology, but a guideline to stay focused. It’s a way to catch that intuitive aesthetic about how code ought to be done and give it a focal point. Furthermore, it’s a guideline that, as far as I can tell, applies to one person: me. Or to put it another way, this is a sketch of an approach that keeps my distracted self focused on why I'm doing this in the first place. The desire to “get it right” has been coming at the expense of “getting it done”, and this approach seems promising as a way of somewhat alleviating that.

Thursday, 23 October 2014

Toward a 2D Game Engine: Reflections

Two things that have become really obvious through this process are that a) my C++ skills are really rusty, and b) my expectations are grossly unrealistic. My idea of how long something should take ought to be calibrated to my present abilities, not to ideal circumstances in which I possess all the relevant knowledge and an industrious disposition.

This is not the first time I've tried doing this; even back as far as university I tried and failed to do something similar. I've read a lot of books and articles trying to find the right answers (or meta-answers) to the questions I've had, but I've failed to be satisfied by what is written. Though well-meaning, the tutorials are especially bad, precisely because they break basic design principles for the sake of illustrating a point. The closest to what I wanted was the book Game Architecture and Design (Dave Morris and Andrew Rollings), which had a chapter on engine design. But, again, it’s a sketch of the problem – a 9,000 ft overview that has to somehow be distilled into discrete complementary units.

That, to me, is where the difficulty lies. It presses the question of why I should bother to begin with. After all, there are plenty of other people much smarter and more educated than I who have put the tools out there to take care of it all for me. To name two examples, there’s RPG Maker, as well as GameMaker Studio, which have a track record. But the trade-off is whether to spend the time learning how to use those tools over designing a system I know how to use. If I were more creative, perhaps those options would be better. But since I'm not overflowing with game ideas (I have 2.5 ideas that could be prototyped at this stage), building the engine itself is what I would consider an endpoint.

What I will consider the end of phase 3 – and I'm hoping this phase will be a completion of code rather than a period of enthusiasm – will be to get a few more core engine features developed, such as in-game text, the level/tile/camera system, and menus. This seems weeks away, where I mean “weeks” in a realistic rather than a pessimistic sense. Perhaps if I were more competent, I could say a matter of days. But it is a hobby, and a hobby has to work around the rest of life.

One concern, always, is the availability heuristic. That is, memories of what I've done previously are going to be tainted by the most pressing issues that come to mind – which in my case is the failure to make headway. One hope in writing all this out is that next time I go through this, I have a record highlighting what I think worked and what didn't. Writing it out gives me a meta-plan for the future, with processes and structures of what I ought (and ought not) to do.

I think this time, though time will tell, I have reusable code. The extra time I spent trying to get the design right may have been a headache (and a significant de-motivator), but I haven’t taken the shortcuts that evidently and inevitably crept through my old code. There was nothing in the rendering class, for example, that was coupled to anything in Snake. Snake depended on the Engine objects, but the engine objects were not constrained that way. If nothing else, I can take that away as a victory of good design.

The graphics that were loaded came from an XML configuration, just as sound and music did. The Engine was flexible enough for the game objects to be used by ID, and that would apply just as well in Snake as in any other game. Even making an animated food object in Snake required little more than updating the PNG, the XML, and what type of object it was. The engine objects took care of the rest. And, at least by my reckoning, that’s the way it should be. To embellish or replace the graphics wouldn't even need a recompilation!
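The actual configuration file isn't shown in these posts, so this is only a hypothetical sketch of the kind of XML described – textures with rows and columns for frame parsing, plus sound and music entries, all addressable by ID. Every element and attribute name here is invented.

```xml
<!-- Hypothetical sketch of the described configuration; the real
     element and attribute names may differ. -->
<resources>
  <texture id="snake" file="snake.png" rows="1" columns="3"/>
  <texture id="food"  file="food.png"  rows="1" columns="4"/>
  <sound   id="eat"   file="eat.wav"/>
  <music   id="theme" file="theme.ogg"/>
</resources>
```

With a layout like this, swapping the food graphic for a longer animation would mean changing the PNG and the `columns` count, with no recompilation.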

At the end of it all, I don’t know if I have better design principles than I did going in. I'm not even sure if what I did was right (though I'm very much of the opinion that what works is right), but from my perspective I don’t care. I'm not teaching anyone else. I'm not writing a textbook or a manual. This is to achieve what I set out to do, and I'm getting damn close to that goal!

Wednesday, 22 October 2014

Phase 3: Snake

Feeling somewhat dejected at my inability to move forward, I decided to give the Engine a test run by making a simple game: Snake. I had a bit of time off work, so I thought it could be something I’d do in a day or two and then build from there. To get it fully working ended up taking much longer, though that was partly because I spent my holidays having an actual break.

What it did teach me, however, was precisely what was easy and what was difficult to do with the Engine as it stood. One development constraint I imposed on myself, which ultimately made it take longer, was to keep the Snake game code wholly separate from the rest of my engine code. So my Play Game State acted as a bridge between the Engine and a separate SnakeGame class that did the equivalent thing – complete with having to mess around with how input is processed.

One thing the process immediately confirmed for me was how inadequate the RenderObject objects were for any sort of game code. In the TextureObject, I had code for pulling out individual frames. In RenderObject, I had code for sorting out source and destination rects for rendering. Between them, both did enough for basic rendering purposes, but it was a pain to do things like change which frame of an animation block to point to. So to have my snake change from a head to a body part, and a body part to a tail, I had to keep references to both the snake texture and the render object for the part, and do the frame shift in the game code.
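The frame shift described above ultimately comes down to mapping a frame index on a sprite sheet to a source rectangle. A sketch of that arithmetic, with invented names (the real TextureObject code may look quite different):

```cpp
// Given a sprite sheet divided into equal-sized frames laid out in
// rows and columns, map a frame index to its source rectangle.
// Struct and function names are illustrative, not from the engine.
struct Rect { int x, y, w, h; };

Rect frameRect(int frameIndex, int columns, int frameW, int frameH) {
    int col = frameIndex % columns;   // which column the frame sits in
    int row = frameIndex / columns;   // which row
    return { col * frameW, row * frameH, frameW, frameH };
}
```

The pain point in Snake was that this lived in TextureObject while the destination rect lived in RenderObject, so game code had to juggle both to switch a head frame to a body frame.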

The reason the objects were the way they were is that they served the purposes I built them for. I wanted an object that could encapsulate an SDL texture while also encapsulating how to break the image down into frames. I achieved what I wanted with the TextureObject, just as I achieved what I wanted with RenderObject – an object that could be thrown en masse at the Render class to blindly put onto the screen.

The TextureObject / RenderObject functionality was one of the concerns I had with The Grand Design approach I took earlier. Since SDL does a lot of the core functionality for me, what I'm doing is translating between those core SDL structures and my own Game structures. But because I’m working with the SDL structures as endpoints, I was never sure what value my own wrappers were adding. Furthermore, I was not sure how the wrappers would work in a game. Building Snake gave me a valuable insight into my initial design choices.

It was nice that, after I had finally gotten Snake up and working, I could refactor the code quite quickly. It took me an evening to replace RenderObject with AnimationObject as a game component, with AnimationObject doing what I previously had to do manually in each game object class when it came to rendering. It took me another evening to expand AnimationObject so that animation would just work without any intervention from the class. Very quickly I had built up a Run Once and a Loop animation, both of which worked with very little tweaking. Sometimes you've got to love the power of OO!
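A Loop and a Run Once animation differ only in what happens when the frame counter passes the last frame. This is a sketch of that idea under my own assumed names and millisecond-based timing – not the actual AnimationObject code:

```cpp
// Sketch of an AnimationObject: advance a frame index over elapsed time,
// either looping or running once and holding the last frame.
class AnimationObject {
public:
    enum class Mode { Loop, RunOnce };

    AnimationObject(int frames, int msPerFrame, Mode mode)
        : frames_(frames), msPerFrame_(msPerFrame), mode_(mode) {}

    // Called each tick with elapsed milliseconds; render code then asks
    // currentFrame() which source frame to draw.
    void update(int elapsedMs) {
        elapsed_ += elapsedMs;
        int frame = elapsed_ / msPerFrame_;
        if (mode_ == Mode::Loop)
            current_ = frame % frames_;            // wrap around
        else
            current_ = frame < frames_ ? frame     // play once...
                                       : frames_ - 1;  // ...then hold last frame
    }

    int currentFrame() const { return current_; }

private:
    int frames_, msPerFrame_, elapsed_ = 0, current_ = 0;
    Mode mode_;
};
```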

What was a pleasant surprise was just how much of The Grand Design just worked. Aside from enhancing the RenderObject / Render classes to allow for rotation (so I could point the head in the direction of movement), almost all of the code I did for Snake was in its own set of classes. There was at least some vindication of my earlier approach.

In terms of a development strategy, I find this the most useful process so far, mainly because now I have a target to aim for, and each point of refactoring is an enhancement of the Engine itself. With the AnimationObject, I was able to quickly address the inadequacies of both TextureObject and RenderObject for game code – both were far too low-level for any game object to have to deal with, and the general code that tied the two objects together was being written multiple times.

I think, going forward, this is the way to work. Next I want to get my Level / Camera / Tile combination finished, which I skipped over in Snake by manually generating a list of “Wall” objects that got iterated through every cycle. After that, it’s getting text written to the screen (an easily achievable enhancement to Snake – keeping track of the score and high score), and then I think it'll be enough to start working on a different game prototype to enhance the engine further.

Tuesday, 21 October 2014

Phase 2: Proof of the Pudding

After a busy phase of work (partly self-inflicted) in the first half of this year, I found myself again ready to get back to the project. I took stock of what I had done and what I still had to go. Then, like last time, I tried knocking off a TODO list.

This time, however, I tried to be a little more advanced in putting new features to the test. To have input fully working meant to be able to catch certain events in a particular way. It meant some substantial refactoring of my Engine façade at times, but I think the end result was worth it. Translating from SDL_Event to my own code may not have added much programmatically, but it was sufficient for what I wanted to do with it.
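The value of translating SDL_Event into my own code is the decoupling seam it creates: game code binds to engine-level actions, never to SDL key codes. A sketch of that layer, with all names and key codes invented for illustration:

```cpp
#include <unordered_map>

// Sketch of an input translation layer: raw platform key codes (what
// SDL_Event would carry) map to engine-level actions, so game code
// never touches SDL directly. Names and codes are illustrative.
enum class Action { None, Pause, Quit, MoveUp };

class InputMapper {
public:
    void bind(int rawKey, Action action) { bindings_[rawKey] = action; }

    // Unbound keys deliberately fall through to Action::None.
    Action translate(int rawKey) const {
        auto it = bindings_.find(rawKey);
        return it != bindings_.end() ? it->second : Action::None;
    }

private:
    std::unordered_map<int, Action> bindings_;
};
```

Programmatically it doesn't add much, as noted above, but it means a rebinding or a platform change touches only the bindings, not the game.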

One thing I found when trying to learn how to use SDL input from websites, blogs, and vlogs was that people would tightly couple the SDL_Event handling directly with their game. The very trap I was trying to avoid! Even the book I was using did it this way, which meant much of the implementation time was me trying to come up with the right patterns of implementation so that input would work the way I wanted it to.

What I started to do this time was analogous to test-driven development – as analogous as seeing things happen on a screen can be to explicit test cases anyway. I set myself particular outcomes I wanted to see, and used that as the driving point of design. To test my keypress feature, I wanted to be able to pause and resume music. This in turn exposed problems in my Audio class (not to mention what functions I exposed with the Engine façade), as well as problems with my Input class.

These use cases were a direct way of testing and expanding the capabilities of the engine. In the grand scheme of things, it wasn't that much different to what I was doing earlier, but it was more directed this way. For example, I wanted to know how to play around with the alpha channel, so I set the task of gradually fading in the splash screen. To get it working properly required tweaking some fundamental code in the Render class, but I was able to achieve the effect this way.

There comes a limit to this form of development, however. One of the pressing tasks for the engine, and something I’d been putting off until I had more work fleshed out, was the question of how to switch from game coordinates to screen coordinates. The basic logic behind it isn't too complicated, though getting one’s head around it in purely abstract terms was difficult for me.
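For what it's worth, the basic logic of the coordinate switch can be stated concretely: a screen position is a world position minus the camera's world-space offset, with the camera clamped so it never shows past the level edge. This is a generic sketch under invented names, not the engine's Camera class:

```cpp
// Sketch of world-to-screen translation. All names are illustrative.
struct Vec2 { int x, y; };

struct Camera {
    Vec2 pos;            // top-left of the view, in world coordinates
    int viewW, viewH;    // viewport size in pixels

    // Keep the camera inside a level of levelW x levelH pixels.
    void clampToLevel(int levelW, int levelH) {
        if (pos.x > levelW - viewW) pos.x = levelW - viewW;
        if (pos.y > levelH - viewH) pos.y = levelH - viewH;
        if (pos.x < 0) pos.x = 0;
        if (pos.y < 0) pos.y = 0;
    }

    // The core translation: subtract the camera's offset.
    Vec2 worldToScreen(Vec2 world) const {
        return { world.x - pos.x, world.y - pos.y };
    }
};
```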

What was complex about the Camera, though, is that it couldn't happen in isolation. I needed to make a concept of a level, something the Camera would translate from. To have a level meant having a system of Tiles – the fundamental units that made up the level. Tiles themselves needed to have certain properties such as knowing who its neighbours are, or whether the tiles were square or hexagonal. Again, I found myself falling into The Grand Design trap, getting very excited about accounting for the possibilities and trying to get it right the first time. Again, the enthusiasm soon waned.

I found myself a month later trying to take stock of where I was, looking back through my documentation for some hint of what to do next, but I couldn't get the motivation back. Between work and stuff going on at home, I just didn't have the inclination to put in the work to pick up where I left off.

Monday, 20 October 2014

Phase 1: The Grand Design

At its core, every game does the same things. It has to render images, play sounds and music, capture input, and do things with that input. A game loops through those various responsibilities, so the more structured and separated those responsibilities are, the more flexible the design is.

When looking through my old code, one of the things I noticed was how often I used a quick fix in the absence of a good design pattern. What this means is that ultimately the basic responsibilities, like rendering to the screen, end up tightly coupled with game code. So changing one would mean changing the other. And if I wanted to do a new kind of design, I would have to effectively start from scratch.

It’s with all this in mind that my first attempt was to start with making an extensible and flexible engine, with the engine itself operating separately from the game code.

More specifically, I started with a game idea. I then wrote a very broad outline of an order of development: first get a working platform to build on, then gradually build up the game artefacts until I finally had a “complete” project. That way, the list seemed fairly unambitious – simply a matter of getting the engine working how I wanted it, then it would be a race to the finish.

After a little research, I settled on using SDL, found a textbook and a bunch of resources online, and so was able to start with little things. But at all times, I was conscious of The Grand Design – my conception that it was all to be loosely coupled. The Render class was an endpoint – things were added to RenderLists, which the render cycle would systematically draw on a frame-by-frame basis. That, too, was hidden behind an Engine façade, with the game controller simply telling the Engine where it wanted a RenderObject to go. RenderObject was my own wrapper pairing a texture with Source and Dest SDL_Rects (i.e. what coordinates to take from the original image, and where they would go on the screen), and that was in turn loaded from TextureObject – my own wrapper for SDL_Texture so frames of a single file could be easily parsed. And so on.
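The RenderList idea can be sketched without SDL: game code only queues draw requests, and the render cycle drains the list once per frame. The actual SDL blitting is stubbed out as a comment, and all names are illustrative rather than the engine's real API:

```cpp
#include <vector>

// Sketch of a Render endpoint fed by RenderLists. Game code never draws
// directly; it queues requests, and drawFrame() drains them each frame.
struct Rect { int x, y, w, h; };

struct DrawRequest {
    int textureId;   // which TextureObject to draw from
    Rect src, dest;  // source frame and on-screen destination
};

class Render {
public:
    void queue(const DrawRequest& r) { list_.push_back(r); }

    // Draws everything queued this frame, then clears the list.
    // Returns the number of requests drawn.
    int drawFrame() {
        int drawn = static_cast<int>(list_.size());
        // for (const auto& r : list_)
        //     SDL_RenderCopy(...);  // a real engine would blit here
        list_.clear();
        return drawn;
    }

private:
    std::vector<DrawRequest> list_;
};
```

The decoupling payoff is that nothing in the request knows how drawing happens, which is what later made it possible to swap RenderObject handling without touching game code.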

The point isn't to recall everything I did, but to analyse the purpose of it. At each point, I had a rough idea of what I wanted in terms of architecture and function, then I set about building the specifics depending on the task. For example, since I knew that images needed to be loaded, I first wrote a class whose purpose was to load images from a directory. Then I expanded it to load from an XML file. Then I expanded the XML to contain extra relevant information, such as rows and columns, so it’s all there in a configuration file.

What I managed to achieve wasn't bad, but there was still a long way to go before it was usable as a game. I had lists of functionality that was built (a psychological motivator to show how far I had come), as well as what was yet to be built, with my goal being to check those specifics off. But it never really works out like that, and however I imagined progress going in my planning phase, I continually failed to meet what I thought were modest targets. Weeks of this and my enthusiasm died away.

What I was left with in the end was half-completed. It loaded and rendered images, played sounds and music, took basic input, and ran in different game states, yet there was very little to show for it. The sequence went roughly from a blue screen, to a transition from a splash screen, to a play state that played a piece of music I wrote and would quit if Q was pressed. Worse still, my TODO list was whittling down, yet I was still a long way off from being able to make something. Eventually, I stopped working on it.

Sunday, 19 October 2014

Working Toward a 2D Game Engine

Going back 12 years, as an 18-year-old faced with a choice of what university course to do, I put down a bunch of courses at my local university, with computer science as the main preference. After my exams were over and I faced the wait to see how I did, I came across a computer science degree at a different university that specialised in game development. I switched preferences, thinking it was a long shot to begin with. But somehow I did just well enough to make it in, and off I went to do a degree in Game Programming.

Yet when I finished my degree, I didn’t go into the games industry. All the companies I applied for weren't interested, and I was employed to develop Enterprise Java applications – something I've been doing ever since. In a way, from what I hear about the game industry, I don’t consider this a bad thing. But there is a part of me that wishes I was able to do something with that knowledge – at least make something of my own.

It seems every couple of years I have this urge, and I try hard to get something together, only to find I hit a stumbling block, lose interest, and then get on with something else. That cycle of drive, a half-arsed attempt, then ultimately giving up in futility makes me appreciate what people trying to lose weight go through. But clearly it wasn’t working, and like most people trying to lose weight, the exercise achieved little more than highlighting a sense of personal failure.

About a year ago, the urge once again hit me. Over the last 12 months, I’ve had three separate spurts of inspiration, which have had limited success. What I want to write about over the next few posts is what I tried at each point in time, and where I think were the merits and drawbacks of the approach I took.

Phase 1: The Grand Design
Phase 2: Proof of the Pudding
Phase 3: Snake

Phase 3a: Snake Enhanced
Phase 3b: Flip Squares

Wednesday, 25 June 2014

Amateurish Thoughts On Brazil 2014: Australia vs Spain

  • This was the kind of match I expected from the Socceroos before the tournament again, where nothing was particularly wrong with how they played, but that they were simply outclassed by a better team.
  • To their credit, Australia tried taking the game to Spain. But the attacks always seemed to break down in the final third, before there was even a chance on goal.
  • Without Cahill, Australia doesn't look to have any attacking option. Even the late addition of Bresciano showed that Australia might struggle for a while as they transition away from the "golden generation". At least for now, there are no obvious successors. Leckie had a good tournament.
  • All three Spanish goals were well-taken, including Villa's wonderful back-heel. I think this was easily Australia's best defensive performance of the tournament even if it still leaked three goals. At least this time, all goals came from quality play to unpick the defence.
  • Matt Ryan doesn't fill me with any confidence as a keeper. His movement and positioning looks way too tentative and unconvincing. Is he really Australia's best choice?
  • Spain finally looked like the team that won the last world cup - strong in attack, resolute in defence.

Thursday, 19 June 2014

Amateurish Thoughts on Brazil 2014: Australia vs Netherlands

  • The Socceroos looked like a unified team from the start, putting together long chains of possession that resulted in attacking opportunities. 
  • On the balance of things, it would be fair to say the Socceroos had the best of the first half, with 3 clear-cut chances. Bresciano and Spiranovic could (should?) have done better.
  • Robben's break and goal showed his talent. A strong run and finish, though I didn't quite get the central defender hedging his bets for a potential cross. Robben had so much space.
  • Cahill's immediate reply would be goal of the tournament (so far - can it be surpassed?) for me if not for the Van Persie header in the Netherlands' last game. A sublime finish from Cahill, showing once again he deserves to be crowned Australia's Greatest Footballer Of All Time!
  • The penalty could have gone either way (at least according to my reading of Law 12), but I was more than happy it was given. Those 4 minutes Australia had the lead were honestly something I was not expecting before the game started.
  • That said, Netherlands pretty much walked the ball into the Aussie net 4 minutes later. It was yet another time in the tournament so far that Australia looked utterly fragile in the defensive third.
  • The third Netherlands goal was the killer. Matt Ryan had good vision on the shot and it was hit almost directly at him, so all I can fathom is that the late swing on the shot made the difference. It's a shame that it came only seconds after Leckie squandered a golden chance to put Australia in front.
  • The selection of the Socceroos squad was aimed at building for the future, but it's hard to see past an Australian side without Tim Cahill and Mark Bresciano. I guess the game against Spain will show us our attacking prowess without Australia's Greatest Footballer Of All Time.

Sunday, 15 June 2014

Amateurish Thoughts On Brazil 2014: England vs Italy

  • England took a more direct approach than Italy, but they weren't really able to penetrate. Their speculative shooting from distance summed up England's strategy.
  • Italy's first goal was brilliant set piece work, with Pirlo's dummy perfect for opening the space. Marchisio's drive was very well placed.
  • England's immediate response was very well executed – Sterling's pass to Rooney, and Rooney's cross to Sturridge.
  • Incredible skill by Balotelli with the chip that was cleared off the line.
  • Italy's attacking by crossing looked quite ineffective up until the moment a cross found the head of Balotelli.
  • Apart from Sterling, Italy had England's measure in attack. Nothing England did really troubled the Italian defence after Italy took the advantage.
  • Will Wayne Rooney ever score an international goal? Also, his corner kick was hilarious.
  • It's a shame Pirlo's free kick deep in injury time cannoned off the crossbar.
  • A fair result. England simply didn't have the attacking options to trouble the resilient Italian defence.

Saturday, 14 June 2014

Amateurish Thoughts On Brazil 2014: Australia vs Chile

[Note: Socceroos fan]
  • Australia seemed to have more than enough people behind the ball, just not in a very effective way. Chile's first goal was well taken, but the defence just wasn't up to the task.
  • Chile's second goal was just good team football on their part. Australia had no answer to their open attacking style of play.
  • It took 25 minutes for Australia to string together a play that looked like they were playing as a team. From then on, they were generally competitive.
  • Attacking-wise, Australia are a one-trick pony. Put it close to Cahill's head and hope for the best. Worked twice, though once was offside.
  • Good end-to-end football, exciting play from both teams.
  • 3-1 was probably flattering to Chile - those first 20 minutes that really hurt the Socceroos. Outside of that, neither team dominated, though Chile looked more deadly going forward.

Amateurish Thoughts On Brazil 2014: Spain vs Netherlands

  • The closing down strategy both teams employed at the beginning made for a scrappy opening exchange, bogged down in midfield. Barely any goalmouth action.
  • Sneijder's initial break was so beautifully timed that it was deserving of a better finish.
  • Both sides used the offside trap, though it completely neutralised Spain while the Netherlands made it work.
  • Live, the penalty looked soft but understandable. The replay showed it to be a blatant dive - disappointing that the #1 team in the world needs to resort to such tactics. Though as the game went that was the only time Spain looked like scoring.
  • Van Persie's header (and the half-length cross) was unbelievable skill. An early contender for goal of the tournament?
  • Both of Robben's goals showed immaculate ball control. He was unlucky not to get the hat-trick with that superb volley from outside the box.
  • Odd to see Casillas make two crucial mistakes that led to goals.
  • The Dutch 3-4-3 formation was great for their counter-attack style of play. Spain's 4-3-3 didn't look like it troubled the Dutch much at all.
  • The second half especially had me grinning from ear to ear. What a display!

Friday, 13 June 2014

Amateurish Thoughts On Brazil 2014: Brazil vs Croatia

  • The counter-attack system looked really deadly for Croatia. Their goal may have had some luck, but it was created by how well they broke down the wing. 
  • It was refreshing to see a Brazilian side who kept their sense of flair in the game. That Brazil kept trying to create opportunities made the first half in particular incredibly exciting. Brazil going a goal down did wonders for the attacking effort they put in.
  • Neymar's first goal was pure class. I can see why he's been hyped as one of the potential stars of the tournament. If that's how he's going to play, I hope Brazil can go all the way to the final. 
  • The penalty looked soft on TV, though the angle shown made it look like there was something in it. No doubt Fred milked it. The angle I've seen in news reports makes it look a lot worse than how I remembered it live. 
  • The penalty itself was anticlimactic, at least as far as the spectacle goes, as the battle between creative attack and resilient defence lost all momentum. 
  • The game became really scrappy in the second half, lost the excitement and flair of the first half. Substitutions do that, I suppose. 
  • Oscar's goal at the death was so audacious, yet so brilliantly taken. I suppose that's the kind of shot you can do being a goal up in injury time. I hope the world cup has many more goals like this. 
  • Brazil were the better side on the day, and 3-1 seems like a fair reflection of the game. 
  • That both teams (at least nominally) used 4-2-3-1 was slightly odd. Though both teams seemed to have the quality of players to pull it off, with Brazil making better use of wing-backs (especially Alves) out wide to have more play in the middle of the park. I wonder how Australia will do with that tactic tomorrow.

Friday, 16 May 2014

Two "Solutions" For The Same Predicament

In the Australian budget released this week, Joe Hockey outlined two measures to try to get groups associated with higher unemployment into work. For those over 50 and unemployed, the government will provide a financial incentive to employers to hire them - up to $10,000 if they remain with the employer for 2 years. For those under 30, however, anyone who is unemployed will face 6 months a year without any benefit, including an initial 6 month waiting period.

The rationale, at least prima facie, seems to be this: when it comes to older workers, the problem is that the free market discriminates unfairly. When it comes to younger workers, however, the problem is the youth themselves, who by the implication of the exercise are simply unwilling to do what it takes to have sustained employment.

Both solutions are recognitions of the failures of the current private sector model we have for employment. The idea of seeing unemployment benefits as some sort of entitlement misses the reality of the job market. If everyone who wanted to work was able to get a job by the sheer desire to have a job, we wouldn't need to address the issue. But there are biases, employers discriminate, and these patterns of discrimination can be hugely problematic for those caught up in them. There's a reason that people who become long-term unemployed stay long-term unemployed.

This is a failure of governance, and a perennial failure at that. There's no point in laying the blame on Joe Hockey and the Liberal Party of Australia just because they happen to be the government in charge; it applied just as much to Wayne Swan and Labor before them, and will apply to whomever comes next. The free market, like any other solution, is a means to an end. We recognise the limits of the free market through practices such as a social safety net, government incentives, discrimination laws, etc.

I'll say now that I'm very sympathetic to the idea that young people should be in training. I'm also sympathetic to the idea that people should move for work. Yet what policy came with the policy to cut off unemployed youth from social security? And for that matter, what policy is there to help those evicted when they lose their income? Or even to ensure that the youth have stable employment such that they will be able to get housing to begin with?

Framing youth unemployment as an entitlement issue, I think, is a mistake. It's a survival issue - that a group of vulnerable people who are having a hard time entering the workforce to begin with are now being threatened with ruin for circumstances that are largely beyond their control. To say it's politically-charged rhetoric that shirks the responsibility of government would be a dispassionate way of describing the policy.

Whether it's the best of all possible systems, we live in a system where we depend on money. Take that away and you take away the ability to survive. And when it comes to the poor, it's not like the money is disappearing from the system. People who live on the edge tend to redistribute that money back into the economy because they need to spend all their money just to survive.

Over on Crikey, I saw this described as class warfare, but it goes beyond that. Health and education cuts are class warfare, allowing universities to price education away from the poor is class warfare, raising the age one qualifies for Newstart is class warfare, introducing mandatory co-payments for medical access is class warfare, allowing people to salary sacrifice their lifestyle is class warfare. This goes well beyond that - putting people potentially into harm's way for the crime of being young and not being able to find a willing employer.

Monday, 21 April 2014

William Lane Craig on The Problem Of Evil

I've been reading through The Cambridge Companion to Atheism, where William Lane Craig is the voice critiquing atheistic arguments and promoting theistic arguments. Of what he wrote, it's his critique of the problem of evil I want to explore.

Craig frames the problem of evil like so:
  1. If God exists, gratuitous evil does not exist.
  2. Gratuitous evil exists.
  3. Therefore, God does not exist.
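Put formally (my notation, not Craig's), with G for "God exists" and E for "gratuitous evil exists", the argument is a straightforward modus tollens:

```latex
\begin{array}{ll}
(1) & G \rightarrow \neg E \\
(2) & E \\
\hline
(3) & \therefore\ \neg G \qquad \text{(modus tollens from (1) and (2))}
\end{array}
```

The argument is deductively valid, which is why Craig's response has to target the truth of a premise rather than the logic.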
His contention is that (2) is the weak point of the argument. As Craig acknowledges, "Everybody admits that the world is filled with apparently gratuitous suffering", but does Craig sufficiently deal with the problem? Here are his responses.
1. We are not in a good position to assess with confidence the probability that God lacks morally sufficient reasons for permitting the suffering in the world.
"Once we contemplate God’s providence over the whole of history, then it becomes evident how hopeless it is for limited observers to speculate on the probability that some evil we see is ultimately gratuitous."
2. Christian theism entails doctrines that increase the probability of the coexistence of God and evil.
(i) The chief purpose of life is not happiness, but the knowledge of God.
"Many evils occur in life that may be utterly pointless with respect to producing human happiness; but they may not be pointless with respect to producing a deeper, saving knowledge of God."
(ii) Mankind has been accorded significant moral freedom to rebel against God and his purpose.
"The horrendous moral evils in the world are testimony to man’s depravity in this state of spiritual alienation from God."
(iii) God’s purpose spills over into eternal life.
"Given the prospect of eternal life, we should not expect to see in this life God’s compensation for every evil we experience. Some may be justified only in light of eternity."
(iv) The knowledge of God is an incommensurable good.
"[T]he person who knows God, no matter what he or she suffers, no matter how awful his or her pain, can still truly say, “God is good to me!” simply in virtue of the fact that he or she knows God."
3. There is better warrant for believing that God exists than that the evil in the world is really gratuitous.
"[I]f God exists, then the evil in the world is not really gratuitous."

I wonder just how viable each of those options is. (1) is a concession that we are ignorant on such matters altogether. If the objection held, then we'd just as easily be able to say that the evidence for an all-evil God or a morally-indifferent God is just as likely as an omnibenevolent God. So whatever other reasons one had to believe in a divine power, we'd have no reason to favour any particular nature for that divine power. Would believers be comfortable in accepting that the universe is just as easily the work of a malevolent deity as an omnibenevolent one?

(2) is a curious strategy, not least because it immediately conjures up the Epicurean objection: "Is he able, but not willing? Then he is malevolent." Even if knowledge of God were an incommensurable good, why would we need gratuitous suffering alongside it? If the suffering makes no difference, then it's powerful evidence against God's benevolence. One might be able to make the case if suffering increased the likelihood of knowledge of God, but that would require good evidence in its favour. Knowledge of the Christian God is largely spread by Christian evangelism rather than by suffering directly. Most people throughout our species' history have suffered (sometimes gratuitously) without there even being the idea of Christianity, let alone exposure to it. So responses (2i), (2ii), and (2iv) don't even apply to most of the suffering there has been.

Furthermore, most animal life evidently can suffer, and there's no question of a chimpanzee or an octopus having knowledge of God. Other animals react much the same to pain as we do, so why would a God allow them to suffer when none of the four responses even begins to address animal suffering? Craig's answer from elsewhere is "God has shielded almost the entire animal kingdom throughout its history from an awareness of being in pain!" But "almost" doesn't cover every animal besides humans; even if Craig is interpreting the evidence correctly, the other great apes have the relevant brain structures yet have no knowledge of God. So at best Craig has reduced the scope of animal suffering, not eliminated the problem.

The idea of heaven (2iii) seems to work against the notion of a benevolent God. Far from a saving grace, it highlights exactly what the problem of evil says - this world doesn't look like it was created by a benevolent God. If God could have created the world without gratuitous suffering, then why do we have gratuitous suffering? Also, why would a child need to die slowly and painfully of cancer before heaven rather than just getting into heaven without experiencing that suffering at all? Similarly, (2ii) asks the question of why a benevolent God would create us in such a depraved way. Quite a lot of atheists, for example, are quite civilised and don't contribute to the gratuitous suffering of our fellow humans. Their spiritual alienation from God doesn't lead to total depravity. Meanwhile there are believers who tortured others in the name of their faith. Did they have spiritual alienation? Besides, most suffering in the world has nothing to do with the actions of humanity - spiritually alienated or not.

For (3) to work, we would need greater confidence in the evidence for God's existence than in the evidence for gratuitous suffering. Since we have very good evidence for gratuitous suffering (Craig himself acknowledges its appearance), that sets a very high bar for the evidence for God. It doesn't help that the other arguments for God, as Stephen Law points out, are neutral on the moral characteristics of God. So even if those arguments are very persuasive, they aren't evidence against the actual gratuitous suffering in the world. Perhaps a morally-indifferent God or an omnimalevolent God would be a better fit for the data. And even if we set that aside (perhaps God is necessarily omnibenevolent), the fact that a morally-indifferent or omnimalevolent god would fit the data better suggests the evidence really doesn't favour God over the fact of gratuitous suffering.

Saturday, 12 April 2014

The Appearance of Legitimacy

Japan has suffered a setback in putting whale meat on the table, with its "scientific program" being labelled a ruse by an international court. Of course, the Japanese knew it was a ruse too (their disappointment was expressed in the denial of tradition, not in what they could have learnt from slaughtering whales), yet it was a ruse they needed to keep up for international obligations.

This same kind of legitimacy is presumably what Russia sought with the referendum in Crimea, or any dictator does with a "poll". It's the kind of ruse that fools nobody, yet it's enough to fend off simple criticism. Russia doesn't care about having a fair election any more than a dictator does, yet the burden is now on those who say it's unfair - a burden that really can't be met beyond suspicion.

The example I want to highlight, though, is scientific creationism. What should be said about all creationism is this - any starting point other than the science will exclude it from being science. It's that simple. The goal of science isn't to vindicate any doctrine, religious or otherwise, but to use observation to develop and test theories. Creationists fall afoul of this because they already have the answer.

Yet creationists want scientific legitimacy. While many will affirm that the bible is their starting point, they are also quick to criticise any scientific claim that seemingly contradicts that. They also crave people with qualifications - real qualifications if possible, but degree mills in the absence of those. They even have their own "scientific" journals where people submit "real" research.

What is interesting is exploring what the response to that should be. Science, of course, needs to be an open enterprise and people need to be able to explore avenues wherever they lead. At the same time, scientists need to guard against pseudoscientists who are looking to use the scientific process to serve their own ends.

What we end up with, sad to say, is Expelled: No Intelligence Allowed. The complaint was that Intelligent Design isn't being given a fair go by the scientific community, and proponents are finding that their support of Intelligent Design means losing academic credibility. It sounds appalling, which it would be if it were the case.

There is a perceived circularity with scientific orthodoxy. Intelligent Design, to be a legitimate view, needs to have academic support. But since the evolutionists are the ones in charge of what gets called science, Intelligent Design cannot get the academic support it needs. In other words, the orthodoxy rigs the game by excluding any person or paper that might be sympathetic to ID as simply being anti-science.

Of course, if this were really circular, then it would be utterly astounding that science progresses at all. Yet science does, and the ideas accepted by the biological community now are not the same as 50, 100, or 150 years ago when Darwin first published. The big deal is made of the orthodoxy because it's a convenient scapegoat standing in the way of perceived scientific legitimacy.

What Expelled did was tie cases of ID proponents being fired or denied tenure to the fact that they were ID proponents. That in turn was tied into the wider narrative of academia trying to exclude God from the picture. What this does is give a reason for the lack of legitimacy. They are serious scientists doing serious research promoting a serious view, but the atheistic evolutionists stand in their way. (One of the most baffling things about Expelled is how much of the film is about Richard Dawkins' atheism, from theologians discussing it to Ben Stein drilling Dawkins on what gods he doesn't believe in.)

The argument so far has been made without context. Put into its cultural and historical context, ID is an incarnation of creationism, an attempt to give it scientific legitimacy at least as far as what gets taught to students. ID is aimed at school boards, politicians, and the wider public. It craves scientific legitimacy not because God should be vindicated in science, but because scientific legitimacy is what counts as far as what is taught in science class. If ID were to limit itself to being an expression of natural theology, there'd be no issue. But as the wedge document confirms, the motive of ID proponents is ultimately to bring people to Jesus.

Thus scientists are put in an awkward position. If people want to use the appearance of scientific legitimacy for their nonscientific ideas, then scientists have to guard against it. But if they do guard against it, they are accused of guarding the orthodoxy against proper scrutiny. Proper science is brought down to the level of pseudoscience by virtue of pseudoscience being able to better posture itself as legitimate science persecuted by the orthodoxy.

The value of real science is that today's mainstream had to be earned through the scientific process, just as a real democracy requires an open political process. The pale imitation put on by dictatorships fools no-one, even though it's an attempt by dictators to appease their critics. The same goes for creationists pretending to do science. They aren't doing so because they want to find the truth - they know their truth already - but because it's what's expected of them.

The problem is that their pale imitation isn't the same thing as doing real science, and real scientists call them out on it. The irony of it all is that scientists standing up for science has come to be seen as an expression of ideology, while ideologues craving the appearance of scientific legitimacy pose as the persecuted minority standing up for Truth.

Friday, 21 March 2014

Review: God in the Age of Science? by Herman Philipse

Generally speaking, one can divide religious critique into two categories. The first is to attack religion as a political institution, whereby the social effects of religion are examined and subject to scrutiny. The second is to go after the truth status of religious claims. While these two categories have some overlap, it's worth remembering that truth and utility aren't the same thing.

It is unfortunate that critiques of the utility of religion are taken as the reason for critiques of the truth of religion - that it's not that God is a nonsense notion, but that atheists have some psychological hatred of theism as it is practised, which leads to the denial of God altogether. It's unfortunate because when the critiques of belief itself are dismissed as the outcome of one's impression of religion's utility, they remain largely unaddressed. Explain the "reasons" for atheism and you explain away the need to address atheism.

Herman Philipse's book focuses entirely on the second category. This category is further narrowed by the distinction between natural theology and revealed theology, with the focus almost exclusively on natural theology. The question the book explores is what to make of a concept like God in light of modern science, largely through an exploration of the case made by the philosopher Richard Swinburne.

To understand the way Philipse laid out the critique, it's worth exploring the three dilemmas Philipse proposes the theist has to answer:
Claims about God's existence are (a) factual claims, or (b) non-factual claims.
If (a), religious belief (c) needs to be backed up by reasons and evidence, or (d) it does not.
If (c), this can be done by (e) methods completely unlike those used by scientists and scholars, or (f) like those methods.

Although there are a few exponents of (b), the claims themselves are prima facie (a) claims. "God exists", is for most people an attempt to say something true about the world, and not just an attitude they take to it. For (d), there are a couple of chapters devoted to exploring the merits of Plantinga's argument for reformed epistemology. But the real concern is the answer to the third dilemma, with Richard Swinburne's cumulative inductive case for the existence of God taken as the paradigmatic example of how one ought to approach God in the age of science.

The chapters addressing Plantinga are instructive as to the tone of the rest of the book. While Plantinga has woven an elaborate logical defence of ad hoc claims, bare assertions, defeater-deflectors and defeater-defeaters, one might be curious as to what purpose Plantinga's argument achieves. At no point do we have any evidence that our brains possess a sensus divinitatis, let alone that it's actually at work in religious experiences, that it's faulty for most people but less faulty for monotheists, and reliable when it comes to Christian beliefs. Yet this idea gets two chapters of logical objections!

But the vast majority of the book is taken up with a critical analysis of Swinburne's ideas. His argumentation style, much like the opening of the book, often involves particular dilemmas, followed by why each horn of the dilemma is problematic. For dilemma 3 above, the danger of choosing (e) is choosing a methodology that has no respectability among intellectuals, while the danger of (f) is that it opens God up to empirical disconfirmation.

The exercise begins by seeing whether Swinburne succeeds in casting God as a theory in the way scientific theories are. Swinburne's approach is correct, but unfortunately God is not up to the task of being a proper scientific theory. There are obstacles to this, such as God being an irreducible analogy, or the use of personal terms to describe something that doesn't fit our use of personal language.

To examine Swinburne's inductive argument, Philipse sets aside his earlier criticisms before forcefully showing the problems with Swinburne's approach. Some of the errors are quite technical, such as whether some of Swinburne's arguments are successful C-inductive arguments, but there's a lot of food for thought at each stage. The end result (predictably) is that Swinburne's case simply doesn't have the predictive power attributed to it.
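For readers unfamiliar with Swinburne's confirmation-theoretic jargon: a correct C-inductive argument is one whose evidence e raises the probability of the hypothesis h (relative to background knowledge k), while a correct P-inductive argument makes h more probable than not. In symbols:

```latex
% Swinburne's terminology from The Existence of God
\text{C-inductive:}\quad P(h \mid e \wedge k) > P(h \mid k)
\qquad\qquad
\text{P-inductive:}\quad P(h \mid e \wedge k) > \tfrac{1}{2}
```

Much of the technical dispute is over whether individual theistic arguments clear even the weaker C-inductive bar.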

Like Plantinga's argument, there were times when the exercise bordered on the absurd. Take the claim that God is the simplest thing there is because, mathematically, infinites are simpler than non-infinites. Philipse deals with this argument early, but as a justification it keeps coming up in Swinburne's inductive argument. One could simply point out that since there is no way of measuring God, there is no way of knowing how simple God is, but the joke goes beyond the pale when Swinburne insists that infinite things are simpler than finite things of the same kind. It takes a lot of complexity to have finite persons with finite knowledge, but an infinite person with infinite knowledge is simple?!

Is this book worth reading? It's a tough question to answer. There are many ways of addressing the truth questions of religion, and whether one feels it's worth digging into this book depends on whether natural theology is seen as the best way to assess the truth. This is in contrast to revealed theology (the specific doctrines of theistic religions) and in contrast to the idea that theology is a pseudodiscipline.

Philipse does his best to argue for the relevance of natural theology as the approach one ought to take, and he aims at the best natural theology has to offer. The end result is something quite technical, but still full of interesting approaches to particular problems. The arguments cover a wide range of philosophical topics, not only philosophy of religion but questions of language, epistemology, mathematics, and meaning. In that light, the case for natural theology is not as esoteric as it seems prima facie.

One of the strengths of the book is that it pushes the issue of theology in the scientific age, and is full of dilemmas facing believers at each potential turn. In that respect, the book is incredibly useful for the current debate about whether science and religion are compatible. Anyone with an interest in this question will find it invaluable.

However, this is not a book about how religion is practised, nor is it a book about revealed theology, and the arguments sometimes get bogged down in logical problems when empirical arguments would have been more to the point. And for those who see believing in God as an act of faith, there will be nothing in this book to change their minds. But for those who find the question interesting, and for those who seek a modern understanding of how to address the question, this book is well worth reading.

Saturday, 1 February 2014

Adopting The Rational Stance

As a programmer, it's not unusual to have to justify my position, or to have to think through a problem, as well as listen to others do the same. We argue about logic, about solutions, about frameworks, about coding practices - and all this in an environment where no two people completely agree on anything. Yet despite this, what doesn't happen (at least not in my experience) is one programmer dismissing a proposition by finding some emotional "reason" the proponent might hold it.

Are computer programmers under the illusion that humans are purely rational beings who have not understood the role of emotion in cognition? Perhaps, though it seems somewhat unlikely that programmers are divorced from the human tendency to see rationality in their own views and emotion in the views of others. What seems more likely, however, is that it's quite irrelevant to the task at hand. That is to say, even if programmers aren't fully rational, it's still right to adopt the rational stance.

Yet this only seems odd in light of online discussions, where experience has taught me the harsh lesson that rationality is at best a front. Unconstrained by a shared goal, folk psychology tends to dominate. It seems quite ironic that in a state of anonymity we get even more personal.

What I fail to see, however, is much of a distinction between the two activities. To be sure, programming might often benefit from being highly constrained compared to some of the more open questions that people tend to fuss over, yet the aim of the activity is fundamentally the same. It's not whether we can be fully rational, but whether we ought to adopt the ideal of trying to be rational. Failure to do this would be like playing chess for the purpose of flipping over the board.

Any debate over ideas is an invitation to adopt the rational stance - to treat a problem as an object of rational thought, and assess its relative merits as if it were put forward by a rational agent. The goal, normatively speaking, is not to worry about how an idea is held, but whether holding the idea is warranted. No easy task in practice, but as an aim it's evidently achievable. Programmers do it every day, and there's nothing special about programmers.

Tuesday, 21 January 2014

Book Review: Java EE 6 Pocket Guide by Arun Gupta

A good programming book should cover three things: what the technology is, how the technology is used, and the why behind the "what" and "how". As a pocket guide, Arun Gupta's Java EE 6 Pocket Guide could never have been more than a brief overview of what is a sizable and extensive framework, but the book does an admirable job of condensing down key features, explaining what they are, and demonstrating their basic use. Gupta writes with clarity and with understanding.

The lack of depth does start to show with the illustrations of examples. They are merely snapshots of the various components in action. Combined with the well-written explanations, this might constitute a sufficient overview for someone trying to make sense of unfamiliar code (we've all been there), but it would be hard to see the practicality of such examples beyond that.

To give an indication of the content, I'll summarise one section covering an API I'm quite familiar with (EJB). For Stateful Session Beans, it first gives a brief overview of what they are, then drops into a coding example of how to define them. Another paragraph then goes through the relevant points from the code, after which there's further highlighting of other relevant annotations, then how to access the beans from a client.
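To give a flavour of the shape such an example takes - this is my own illustrative bean, not the book's code, and since the javax.ejb API isn't available outside a container, the EE annotations appear only in comments - a stateful session bean is an ordinary class that the container marks with @Stateful, holding per-client conversational state, with an optional @Remove method to end the conversation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical shopping-cart bean in the style of the book's examples.
// In a Java EE 6 container, the class would be annotated with
// javax.ejb.Stateful, marking it as a stateful session bean: each
// client gets its own instance, and the fields below hold that
// client's conversational state between method calls.
public class CartBean {
    private final List<String> items = new ArrayList<>();

    public void addItem(String item) {
        items.add(item);
    }

    public List<String> getItems() {
        return items;
    }

    // In EJB this method would carry javax.ejb.Remove, telling the
    // container to discard the bean instance once the client is done.
    public void checkout() {
        items.clear();
    }
}
```

A client would look the bean up (or have it injected with @EJB) and call addItem across several requests before checkout; the point of the statefulness is that the items list survives between calls for that client.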

There are two areas where I could see this book being useful. The first is for people coming to a Java EE system without prior familiarity with the platform - Java developers making the professional crossover would fit into this category, as would people familiar with some aspects of the Java EE architecture who need to venture into unfamiliar territory. The other is as a Java EE cheat sheet for those not wanting to rely on Google for specific information on specific components.

This book will not teach you Java EE, but it will help those looking for a nice practical overview of unfamiliar features. And as a reference guide, it might be helpful for quick information about specific features written in an accessible and no-nonsense way.

This book was given freely as part of the O'Reilly Reader Review Program. The book can be purchased here.

Tuesday, 7 January 2014

Evolution and the God Debates

One of the most important aspects of any debate over a scientific issue is to separate out the science from the implications of the science. As non-experts, we aren't in a position to comment on the scientific validity of certain propositions, as we lack the relevant expertise that would allow us to adequately assess the science. Thus any debate where we as non-experts try to comment on scientific validity is going to be a distraction from the real issues associated with the debate.

As non-experts on evolutionary biology, most of us aren't really affected by the debate over how natural selection works, or whether genetic drift is influential in the divergences between related species. Likewise, whether stochastic factors drive speciation doesn't matter to how we perceive evolutionary theory. Those are issues for experts to fight over in the peer review literature, and even if we think that we know the answer, our musings are not going to make a bit of difference because we are not part of the conversation biologists and philosophers are having with regards to evolution.

What we are interested in, however, and what we can have a say on, is how these facts fit into particular conceptions of how the world works.

It's easy to conflate the truth of evolution with the perceived implications of evolution, but it would be wrong to do so. As far as any debate ought to be concerned, it's only the latter we should concern ourselves with. If those particular implications are unpalatable, then too damn bad. I personally don't like the implications of what Nazism says about the human condition, but that doesn't give me recourse to deny the holocaust! And if I were to then deny the holocaust, people should rightly point out that my prejudices are seeping into my assessment of the history. As disturbing as I find the notion of genocide, it's a fact I have to live with*.

In terms of the god debates, arguments over evolution have been part of the conversation. I think there are three main reasons for this. The first is sociological: there are many biologists who are also atheists taking part in the discussion, so there's the temptation to see the battle over evolution as being between atheists and theists, rather than as an issue of science. Supporting this is that evolution-accepting biologists who are also theists, such as Ken Miller and Francis Collins, are attacked by other theists as being atheists themselves. In terms of ID, Philip Johnson's Wedge Strategy is built around changing the debate over evolution into a debate over the existence of God, though I wouldn't presume to say whether the strategy is a product of that line of thinking or its cause.

The second issue would be a matter of theology, that evolution directly contradicts certain interpretations of creation accounts. It wouldn't matter in this case what the science says, because the supposition is that the biblical account of creation would be God's truth as opposed to the fallible truth of Man. There are a number of interpretations of the creation of man on this account, but what they all have in common is that they are based on scripture and have mankind as a special creation of God.

The final issue would be the philosophical inference to design, that without recourse to a particular theology, we would still have philosophical grounds for making an inference from the appearance of design in life to the cause of that design being the foresight and actions of a designer. We see a watch, we (reasonably) infer a watchmaker. We see something analogous to a watch in the natural world, so why wouldn't we then infer something analogous to a watchmaker as bringing about that order? Conversely, an atheist could argue that processes don't require a designer at all. Evolution by brute fact is one such process. So if evolution is true, then any designer would be superfluous.

With each of these three reasons, it's important to remember that such arguments are largely (if not wholly) tangential to the question of the science of evolution. In the first case, Philip Johnson and other ID proponents argue that science has been hijacked by philosophical naturalism, thus biasing science away from design explanations. That's not a question of the validity of the science of evolution, but a question about the scientific enterprise itself. There are problems with Johnson's argument, but I won't go into detail here. In the case of the theological issues, what the science says can be of no consequence to that view, since Genesis is not a scientific text. It's only with the philosophical analogy to design that there's some overlap, namely whether the science has a coherence to it. And there, again, it's the science that would drive the arguments rather than the arguments driving the science.

What we are after, in effect, is a means of understanding the science in light of propositions in the god debates. If one is after a defeater argument for theism or atheism, then the science will prove a disappointment. The science may be able to rule on specific propositions (such as the age of the earth), but it requires further arguments beyond the scope of the science to make any sort of forceful point. The science of evolution does not imply atheism, just as the overturning of evolutionary theory would not imply theism. For both propositions, further argument beyond the science is required. The science itself should be left to the scientists.

*Some people take exception to analogies between holocaust denial and evolution denial on the grounds that the holocaust is morally repugnant, and it's that moral repugnance the evolution-proponent is trying to capitalise on. But that misses the point being made. Holocaust denial is a view supported by only a tiny minority of scholars, and indeed there are more scholars in the historical community who deny there was a holocaust than there are biologists who deny evolution. In each case it's a tiny minority, and the point stands about non-experts trying to take a stance on an issue left to experts. What grounds would we have, beyond our own prejudices, for favouring the extreme minority view among scholars?