Monday, 5 December 2011

Skip Lists: A C++ STL-style implementation

Recently someone mentioned an interesting container type to me, the skip list. It piqued my interest and so, naturally, I wanted to play around with it. It's been a while since I last wrote an STL-style container, so I thought I'd attempt to write an STL-compatible skip list implementation. Fun times.

And so I present to you my latest code offering, the C++ STL-style skip_list container. Grab it from the GitHub project here. Or read on for further information...

Skipping the list

The skip list is an interesting data structure. You could (simplistically) consider it a hybrid of a std::list and a std::set; it's a list-like data structure that provides good insertion, removal and search performance. As ever, the trick to good search speed is to trade off some memory to improve traversal performance.

Traditionally the skip list is an extension of a standard forwards-only linked list. Wikipedia has a pretty good page on the structure. Check it out if you want more gory details.

Atop a standard linked list, it maintains a set of higher-order linked lists that act as indexes into the main structure below. These provide faster access to the middle of the list. This provides efficiency on a par with a balanced binary tree (i.e. what a std::set is usually implemented in terms of). Insertion, removal and search operations are typically O(log N). Remember: a standard linked list (which you'd have to manually keep in order) would have all those operations take O(N).

The particularly interesting detail about the skip list implementation is the algorithm used to determine the allocation of nodes to higher-order lists. Rather than use a fixed balancing scheme, or inspecting the data as it's added and comparing against the existing structure, we assign nodes to levels probabilistically - always adding them to the main list, and then (with decreasing probability) adding them to the higher-level lists, too.
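The level-assignment scheme can be sketched in a few lines. This is a minimal illustration of the classic coin-flip approach, not code from the actual container; the function name and the level cap are my own choices:

```cpp
#include <random>

// Sketch of probabilistic level assignment: every node goes in the bottom
// list (level 1); keep promoting it while a fair coin comes up heads.
// Roughly half of all nodes reach level 2, a quarter reach level 3, and so on.
int random_level(std::mt19937& rng, int max_level = 32) {
    std::bernoulli_distribution promote(0.5);
    int level = 1;
    while (level < max_level && promote(rng))
        ++level;
    return level;
}
```

The cap stops a pathologically lucky streak from creating an absurdly tall node; 32 levels comfortably index billions of elements.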

My implementation

I chose to implement a bi-directional skip list, so each node in my version retains a back-pointer to the previous node. This makes the list more useful in general, and ensures that it's a drop-in replacement for std::list.
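A node in such a bidirectional skip list might look like this. This is a sketch with hypothetical names, not the actual skip_list internals:

```cpp
#include <vector>

// Sketch of a bidirectional skip-list node: the vector of forward pointers
// provides the skip levels (next[0] is the bottom-level successor), while
// the single back pointer makes the bottom-level list doubly linked,
// supporting reverse iteration as std::list does.
struct node {
    int value;
    node* prev;               // back pointer along the bottom-level list
    std::vector<node*> next;  // next[i] is the successor at level i
};
```

Only one back pointer per node is needed: reverse iteration only ever walks the bottom-level list.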

Like std::set, my version takes a template Comparison functor (typically std::less) so you can tailor the ordering of data in your container. I also, naturally, support custom allocators, and provide all "the usual" STL container operations.
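Since skip_list mirrors std::set's template signature, the familiar std::set pattern shows what the Comparison parameter buys you; this example uses std::set itself rather than the skip_list container:

```cpp
#include <functional>
#include <set>

// skip_list takes Compare and Allocator template parameters just like
// std::set. Here std::greater reverses the ordering, so the container
// iterates from largest to smallest.
std::set<int, std::greater<int>> make_descending() {
    return {3, 1, 2};  // stored in descending order: 3, 2, 1
}
```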

I have tested the code on:
  • Mac OS using Xcode 4.2
  • Windows using Visual Studio 2008
  • Linux using gcc 4.4
I have benchmarked the performance of my skip_list container. Because of the probabilistic nature of the container, sometimes it will perform better than other times when given random test data.

The memory consumption is almost exactly the same as std::set in general, and it tends to allow faster forwards and reverse iteration. Depending on the way the wind is blowing, large node insertion/removal operations can be dramatically faster (taking as little as 25% of the time of std::set for the same data) or a bit slower (I've seen up to ~110%).

The source archive contains my benchmarking code, so feel free to try it yourself.

The skip_list project is hosted on GitHub.

Future plans

I have not yet provided C++11 "move" or std::initializer_list operations, so that would be an interesting addition.

I could extend the data structure to provide O(log N) random access (e.g. indexing and random access iteration), too, at the expense of one more integer value in each node. That would be an interesting extension to consider - probably as a parallel variant of the existing container.
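The usual way to get that extra integer's worth of random access is to store, alongside each forward pointer, the number of bottom-level nodes the link skips over; indexing then sums widths on the way down. A sketch of the lookup, with hypothetical names and layout, not taken from the existing container:

```cpp
#include <vector>

// Indexed skip-list lookup sketch: width[i] records how many bottom-level
// positions following next[i] advances by. Summing widths while descending
// finds the node at a given index in O(log N) hops.
struct Node {
    int value;
    std::vector<Node*> next;   // next[i]: successor at level i
    std::vector<int>   width;  // width[i]: positions skipped by next[i]
};

// Return the value at 0-based position `index`, given a head sentinel
// that sits logically before position 0.
int at(Node* head, int index) {
    int pos = -1;  // the head sentinel is "before" position 0
    Node* n = head;
    for (int level = int(head->next.size()) - 1; level >= 0; --level) {
        // Move right while the jump doesn't overshoot the target index.
        while (n->next[level] && pos + n->width[level] <= index) {
            pos += n->width[level];
            n = n->next[level];
        }
    }
    return n->value;
}
```

Insertion and removal would also have to maintain the widths along the update path, which is the real cost of the extension.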

Future writings

If there's enough interest, I might start a new blog series on writing an STL-like container based on this implementation. There was a lot of interest in my previous series describing an STL-style circular buffer. Since this is a meatier data structure, the case study would be more useful.

Let me know if you'd like this!

Thursday, 24 November 2011

PGMidi moved and updated

My popular PGMidi library for MIDI input/output on iOS has moved from Gitorious to a new location on GitHub. (This was requested many times, and who am I to disappoint?)

Please update your repos accordingly.

Over the next few days, I'll also be adding a few new tweaks and features to the project, so stay tuned.

Thanks for all the kind comments and feedback about the code - I'm really glad it's useful. Please do let me know if you've incorporated it in your own project.

Wednesday, 23 November 2011

Writing: How To Pick Your Programming Language

The November issue of ACCU's C Vu magazine is out now. It contains the latest instalment in my Becoming a Better Programmer column. This one's called How To Pick Your Programming Language.

It's a masterwork that uses ancient dark arts (and frivolous flow chart technology) to help you select the programming language that best suits you.

Best read with a pinch of salt!

I quite like the artwork I produced for this month's cover, and have finally remembered to align the {cvu} drop shadow correctly, something that's been bugging me for months. (I doubt anyone else would even have noticed it)

Wednesday, 16 November 2011

Xcode 4 keyboard/mouse shortcuts

(I'm posting this here mostly so I don't lose it, although I'm sure it'll be useful to other Xcode 4 users out there.)

There are a couple of really handy Xcode mouse click modifier key combinations that I can never remember when I want them (kind of like a Super Street Fighter key combo).

In particular, you can click on symbols in the code editor and have them open in this editor, in the alt editor, in a new window, or... even... with a (ugly looking) popup asking you where to open (e.g. in a new tab).

Here's the lowdown:

Xcode 4 editor symbol clicks

Modifiers   Click     What happens
⌘           single    Open in this editor
⌘           double    Open in a new editor window
⌘⎇^         single    Open in alt editor
⌘⎇⇧         single    Select where to open (with popup)
⎇           single    Show help popup
⎇           double    Show help in organiser

⌘⎇⇧ Make alt editor counterpart again (super useful)

Key (what the silly symbols mean)
⌘ Command (cmd)
⎇ Option (alt)
⇧ Shift
^ Control (ctrl)

Friday, 11 November 2011

It's the thought that accounts

Thinking well is wise; planning well, wiser; doing well, wisest and best of all.
– Persian Proverb

I run. Every week. It's my waistline, you see. Perhaps it's a guilt thing, but I do feel I need to do something to keep it under control.  

Now, let's be clear: I'm no masochist. Exercise is not my favourite thing in the world. Far from it. It definitely ranks above hot pokers being stuck in my eyes. Marginally. But there are plenty of things I'd rather do with my evenings. Many of them involve sitting down, preferably with a glass of wine.

But I know that I should run. It's good for me.

Is that fact alone enough to ensure I go regularly, every week, for the full distance? With no slacking or slowing of the pace?

It is not.

I dislike exercise and would gladly employ the weakest of excuses to get out of a run. “Oh no, my running shorts have a loose thread.” “Oh no, I have a runny nose.” “Oh no, I'm a bit tired.” “Oh no, my leg has fallen off.”

(Ok, some excuses are better than others.)

What unseen force coaxes me to continue running regularly when guilt alone can't drag me out the door? What magical power leads me on where willpower fails?


I run with a friend. That person knows when I'm slacking, and encourages me out of the house even when I don't fancy it. They turn up at the door, as we'd arranged before my lethargy set in. I perform the same kind of service back. I’ve lost count of the times that I wouldn't have run, or would have given up half-way round had I not had someone there, watching me and running alongside me.

And, as a by-product we enjoy the run more for the company and shared experience.

Sometimes we both don't feel like going on the run. Even if we admit it to each other, we won't let the other person off the hook. We encourage each other to push through the pain. And, once we've run, we're always glad we did it, even if it didn't feel like a great idea at the time.

Stretch the metaphor

Some metaphors are tenuous literary devices, written to entertain, or for use as contrived segues. Some are so oblique as to be distracting, or form such a bad parallel as to be downright misleading.
However, I believe this picture of accountability is directly relevant to the quality of our code.

For all the good it does technical writers, speakers, and code prophets like myself to talk about producing good, well-crafted code, and as much as the luminaries like Uncle Bob Martin extol the (genuine) virtues of “clean” code, and Fowler explains why we need well-factored code, it matters not one jot if, in the heat of the workplace, we can't put it into practice. If the harsh realities of the codeface cause us to shed our development morals and resort to hacking at code like uninformed idiots, what have we achieved?

We can complain about the poor state of our codebases, but who can we look at to blame?

We need to bake into our development regimen ways to avoid the temptation for shortcuts, bodges and quick-fixes. We need something to lure us out of the trap of thoughtless design, sloppy, easy solutions and half-baked practices. The kind of thing that costs us effort to do, but that in retrospect we're always glad we have done.

The spirit is willing, but when the deadline looms, all too often the flesh is weak.

How do you think we'll achieve this?

Accountability counts

I know that in my career to date, the single most important thing that has encouraged me to work to the best of my abilities has been accountability to a team of great programmers.

It's the other coders that make me look good. It's those other coders that have made me a better programmer.

Being accountable to other programmers for the quality of your work will dramatically improve the quality of your coding.

That is a single simple, but powerful idea.


To ensure you're crafting excellent code, you need people who are checking it every step of the way. People who will make sure you're working to the best of your ability, and are keeping up to the quality standard of the project/team you're working on.

This needn't be some bureaucratic big-brother process, or a regimented personal development plan that feeds back directly into your salary. In fact, it had better not be. A lightweight, low-ceremony system of accountability, involving no forms, lengthy reviewing sessions or formal reviews is far superior, and will yield much better results.

Most important is to simply recognise the need for such a thing; to realise that you must be accountable to other people for the quality of your code to encourage you to work at your best. To realise that actively putting yourself into that vulnerable position of accountability is not a sign of weakness, but a valuable way to gain feedback and improve your skills.

How accountable do you feel that you currently are for the quality of the code you produce? Is anyone challenging you to produce high quality work, to prevent you from slipping into bad, lazy practices?
Accountability is worth pursuing not only in the quality of our code output, but also in the way we learn, and how we plan our personal development. It's even beneficial in matters of character and personal life (but that's a whole other magazine's column).

Making it work

There are some simple ways to build accountability for the quality of code into your development process. In one development team we found it particularly useful when all the coders agreed on a simple rule: all code passed two sets of eyes before entering source control. With this as a peer-agreed rule, it was our choice to be accountable to one another, rather than some managerial diktat passed down from faceless suits on high. Grass-roots buy-in was key to the success of the scheme.

To satisfy the rule, we employed pair programming and/or a low-ceremony one-on-one code review, keeping each checked-in change small to make the scheme manageable. Knowing another person was going to scrutinise your work was enough to foster a resistance to sloppy practice and to improve the general quality of our code.

If you know that someone else will read and comment on your code, you're more likely to write good code.

This practice genuinely improved the quality of the team, too. We all learnt from one another, and shared our knowledge of the system around. It encouraged a greater responsibility for and understanding of the system.

We also ended up with closer collaboration as a result, enjoyed working with each other, and had more fun writing the code as a consequence of this scheme. The accountability led to a pleasant, more productive workflow.

Setting the standard

When building developer accountability into your daily routine it is worth spending a while considering the benchmark that you're aiming for. Ask yourself the following questions:

How is the quality of your work judged? How do people currently rate your performance? What is the yardstick they use to gauge its quality? How do you think they should rate it?
  • The software works, that's good enough.
  • It was written fast, and released on schedule (internal quality is not paramount).
  • It was well-written, and can be maintained easily in the future.
  • Some combination of the above.
Which is seen as most important?

Who currently judges your work? Who is the audience for your work? Is it only seen by yourself? Your peers? Your superiors? Your manager? Your customer? How are they qualified to judge the quality of your handiwork?

Who should be the arbiter of your work quality? Who really knows how well you’ve performed? How can you get them involved? Is it as simple as asking them? Does their opinion have any bearing on the company's current view of your work's quality?

Which aspects of your work should be placed under accountability?
  • The lines of code you produce?
  • The design?
  • The conduct and process you used to develop it?
  • The way you worked with others?
  • The clothes you wore when you did it?
Which aspect matters the most to you at the moment? Where do you need the most accountability and encouragement to keep improving?

The next steps

If you think that this is important, and something you should start adding to your work:
  • Agree that accountability is a good thing. Commit to it.
  • Find someone to become accountable to. Consider making it a reciprocal arrangement; perhaps involve the entire development team.
  • Consider implementing a simple scheme like the one described above in your team, where every line of code changed, added or removed must go past two sets of eyes.
  • Agree on how you will work out the accountability – small meetings, end of week reviews, design meetings, pair programming, code reviews, etc.
  • Commit to a certain quality of work, be prepared to be challenged on it. Don't be defensive.
  • If this happens team-wide, or project-wide then ensure you have everyone's buy-in. Draft a set of team standards or group code of conduct for quality of development.
Also, consider approaching this from the other side: can you help someone else out with feedback, encouragement, and accountability? Could you become another programmer's moral software compass?

Often this kind of accountability works better in pairs of peers, rather than in a subordinate relationship.


Accountability between programmers requires a degree of bravery; you have to be willing to accept criticism. And tactful enough to give it well. But the benefits can be marked and profound in the quality of code you create.


  • How are you accountable to others for the quality of your work?
  • What should you be held accountable for?
  • How do you ensure the work you do today is as good as previous work?
  • How is your current work teaching you and helping you to improve?
  • When have you been glad you kept quality up, even when you didn't feel like it?
  • Does accountability only work when you choose to enter into an accountability relationship, or can it effectively be something you are required to do?

Sunday, 9 October 2011

My latest iOS app: The Mahjong Score Book

My latest iOS app is out now (thanks to the iOS 5 SDK having finally gone GM). It's called the Mahjong Score Book and, as the name suggests, is designed to help Mahjong players keep tabs on their scoring in Mahjong games.

Of course, once you have your scores captured inside a wee computer, there's more exciting things we can do with them. So the app allows you to:

  • keep records of all the games you've ever played, with a very clear visual presentation, associated with the date played
  • draw pretty graphs of game progress
  • adopt different rule sets (limit hand value, maximum wins as east wind, etc)
  • export games to databases, as images, or text files
  • automatically roll dice to determine starting positions
  • associate notes with each game
  • and much more.

If you're interested, then check it out in iTunes here.

I appreciate that it's a niche app. It's simply something I wrote to fulfil a personal need. Of course, someone else out there is going to find it useful, too. And so I've polished it, honed it, and produced a really very slick app.

I wrote the Mahjong Score Book app back in the summer whilst on holiday, playing Mahjong with friends. It served its purpose well then and many times since, and I've learnt a few new iOS techniques whilst polishing off the application.

I'm sure that it won't make me millions, but I'm very proud of what I've produced in a relatively short time. If you know someone who plays Mahjong then please point them at it!

Wednesday, 21 September 2011

iOS: Using older SDKs with newer Xcode versions

When you update Xcode versions, the installer automatically removes any old SDKs you have lying around, and replaces them with the latest version.

This is fine behaviour, as the most recent SDKs remain backwards compatible. You can set your project to target older iOS versions. If you do this, the newer SDK features are disabled for you.

However, there are times when you need to use an older SDK.

For example, I am running the latest Xcode with a beta iOS 5 SDK installed. Since this was originally installed on a clean machine, I didn't set the beta install to use a parallel directory and leave the "release version" of the developer tools intact - they simply weren't installed. (Making a parallel install is, in general, the best practice when installing a beta Xcode/SDK set).

Fear not. You can still get your newer Xcode to build with an older SDK, without downgrading your Xcode or making a parallel install:

  • Close any running Xcode instance you have open.
  • Locate the install DMG for an old version of Xcode (e.g. Xcode_3.2.5_and_ios_sdk_4.2_final.dmg, they name them so well) and open it.
  • Do not run the installer!
  • Open the Packages directory in that disk image. It is a hidden directory. Try this terminal incantation: "open /Volumes/Xcode\ and\ iOS\ SDK/Packages"
  • Locate the iPhoneOS and iPhoneSimulator SDKs for the version you want. Run just those pkg files. (e.g. I ran the iPhoneSDK4_2.pkg and its matching iPhoneSimulatorSDK4_2.pkg)
  • Make sure you specify the /Developer directory as your install location. If you don't, the SDKs will be installed in your root directory, under the /Platforms directory; you'll have to manually copy them into /Developer/Platforms yourself.
  • Now, re-open Xcode. If the SDKs installed in the right locations, they will be selectable in your project now.

Tuesday, 20 September 2011

How to set up Jenkins CI on a Mac

In this post I will describe how to get a running Jenkins server set up on your Mac. Like most free software ("free" as in price and "free" as in freedom), Jenkins is very capable, very functional, and mostly documented. But it didn't quite work out of the box.

As with many such projects, you get far more than you pay for. But you can end up spending more than you expect.

There aren't enough step-by-step how-to guides. And there aren't many documents that help you out when things go wrong. There is a great community behind Jenkins, though, which does help. And plenty of people moaning and blogging. Now I'm adding to that noise.

It took me a few days to get the setup working properly. Hopefully this story-cum-howto will save you some of that effort.

The Prologue

All good developers know that a continuous integration (CI) server is a linchpin of the development effort. I joined a large software project without one, and made loud noises that we needed one. And so, it naturally fell to me to set up.

It's been some time since I set up a CI server. Previously, I've used ViewTier's Parabuild. It was more than adequate. But times have moved on. Although I still have a licence for it, the cool kids are hanging around at other parties these days.

Jenkins (the recent fork of Hudson) seems well-regarded, popular, and to have a good development and support community. It's also open source, so seemed the right way to go. Plenty of people have sung its praises to me in the past, and that kind of thing counts for a lot.

Our requirements for the builds were:
  • to build two products from the same codebase on Mac OSX
  • to build two products from the same codebase on Windows

Both of these need 32-bit and 64-bit versions.

That's already a reasonable configuration matrix, and highlighted why we needed CI in place. A developer would check in a tweak that they'd built on one configuration. All the other configurations could easily get broken without anyone noticing for a while.

So: Jenkins to the rescue.


I purchased a small Mac Mini to use as a build server. I downloaded a copy of Jenkins (free, whoop!), installed the Mac dev tools (free, whoop!), bought (and installed) Parallels, Windows 7, and the Visual Studio toolchain (not quite as free), and sat down for a small configuration session.

Getting your project ready for a CI build

Before setting up your build on an CI server, you should first create a simple script that builds everything from a clean checkout. Then check that script into the repository, to be versioned alongside the software itself.

For our project, I already had that in place. The script cleaned, built, versioned and packaged the software in one step.
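Such a one-step script can be very small. A sketch, assuming an Xcode project; the project name, configuration handling and packaging step here are hypothetical placeholders, not the post's actual script:

```shell
#!/bin/bash
# One-step clean/build/package script (a sketch; MyApp and the dmg
# packaging are hypothetical stand-ins for a real project's steps).
set -euo pipefail

BUILD_DIR="build"
CONFIGURATION="${1:-Release}"

clean() {
    rm -rf "$BUILD_DIR"
}

build() {
    xcodebuild -project MyApp.xcodeproj \
               -configuration "$CONFIGURATION" \
               SYMROOT="$BUILD_DIR" \
               clean build
}

package() {
    hdiutil create -ov -srcfolder "$BUILD_DIR/$CONFIGURATION" MyApp.dmg
}

# A release (or a CI job) runs exactly this sequence and nothing more:
#   clean; build; package
```

Because the whole recipe lives in one versioned file, the CI job's own configuration can shrink to "check out, run the script".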

Such scripts are clearly useful for deployment on a CI server, and also for making official software releases by hand, whether or not you release from the CI server builds. It's a record of the recipe needed to build a release.

With a fixed recipe like this in place, every software release can be guaranteed to be good and reproducible.

That's development 101.

Installing Jenkins on the Mac

The Jenkins website has a handy Mac installer that you can download. (In retrospect, I'm not sure if this was more hassle than it was worth, but this is the route I obviously sought to go down.)

STEP 1: Install Jenkins

Download the Mac installer and run it.

This installer creates a system launch daemon that fires up Jenkins when your machine boots. This runs in the background even if you haven't logged in, making a true stand-alone build server installation.

However, if you have a fresh Lion install you don't yet have Java.

STEP 2: Install Java

Try to run a Java app. Any app. The OS may fumble around for a while looking for Java. If you're lucky, it'll download and install it automatically. Otherwise, install it by hand.

And of course, now, Jenkins "just works".


Configuring Jenkins on the Mac

The Jenkins war (web application archive) is unpacked into /Users/Shared/Jenkins. The application runs from there. All configuration is stored there. The source code checkouts and builds go in there. It's the centre of your Jenkins universe.

A launch daemon plist is installed in /Library/LaunchDaemons. It runs a script in /Library/Application Support/ as the user "daemon" (a system-specific user that runs background processes - it is not shown on the login screen, nor does it have a home directory).

This installation has all the hallmarks of a runnable system. You can now point your browser to http://localhost:8080 and start configuring the Jenkins server. The lights are most definitely on. But no one's home yet. As we're about to see...

STEP 3: Set up Jenkins to build a Mac project

I was a good soldier with a simple shell script that built and packaged my application. If you don't have this, write one now. To build Mac projects, you'll need some cunning invocation of xcodebuild and probably packagemaker.

This single script is a critical step in configuring your build job. Whilst it is possible to place multiple build commands into the Jenkins task itself, it's far better to keep them under source control in a script checked in to your codebase (if you need to ask why then you probably need to go to a more basic tutorial!).

Configure Jenkins to check out your repository, to react to appropriate build triggers (e.g. manual build requests through the UI, automatic detection of the repository changing, or other triggers) and to run the appropriate scripts to kick off the build.

Then press "Build Now" to start your first build.

In all probability Jenkins will crash and burn. But don't tear your hair out just yet. You'll be needing it for later on.

Fix Jenkins so it works

Welcome to the nether-world of almost working builds.

STEP 4: Configure the Java heap size

My project is large. It includes lots of third party libraries, including the vastness that is Boost and many other comparable-size libraries. There are also several SDKs that are shipped as large binaries (with versions for each platform and 32/64 bit). That's a lot of data to shovel around.

Jenkins choked trying to check out this monster. It would collapse with Java heap exhaustion errors before it even got to triggering a build. Goodness only knows why a large heap is required to check files out of a subversion repository, but the solution can only be to increase the heap size to remove the bottleneck.

By default, Java allocates a very conservative heap size to running applications; I believe it's 256M or so on 32-bit Mac OS.

On the Mac, this can be changed using the "Java Preferences" application (found in the /Applications/Utilities folder). The trick is to adjust the Java launch command line (hit Options...) to include the command-line switch "-Xmx1024M" (or whatever heap size you want). However, this didn't seem to affect the Jenkins Java process launched through launchd.

To set the heap size in that context, you have to adjust the launch script itself. You can place the command line switch into the file directly. However, the file does have provision to load the heap size parameter from a configuration plist. This plist does not exist by default, but you can create/edit it with the following incantation:

sudo defaults write /Library/Preferences/org.jenkins-ci heapSize 1024M

This will write a file /Library/Preferences/org.jenkins-ci.plist (note that you must not specify the plist file extension to the defaults command).

To make the system use this new heap size, you can't just restart Jenkins (either gracefully within the web interface, or by "kill -9"-ing the process). You can't even use "sudo launchctl {stop,start} org.jenkins-ci".

You could reboot. Or, more cleanly, you can force launchd to reload the daemon's configuration using launchctl, by unloading and then reloading the daemon. It's the reloading that forces the new configuration to take hold. (It took me a while to figure that one out!)
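The unload/reload dance looks like this, assuming the plist's default install location:

```shell
# A plain restart won't pick up the new heap size; launchd must unload
# the daemon and then load it again so the configuration is re-read.
sudo launchctl unload /Library/LaunchDaemons/org.jenkins-ci.plist
sudo launchctl load /Library/LaunchDaemons/org.jenkins-ci.plist
```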

With an increased heap, Jenkins will fall over less. In my case, I got through a whole checkout.

But once Jenkins manages to check out the project and run the build script, you're still not quite done...

My script called xcodebuild to invoke Xcode from the command line to build the various configurations of the project. This script worked fine when run directly from the command line. However, when running within Jenkins it would bomb out with quite unfathomable errors - e.g. NSAssertions triggered from deep within the Xcode IDE codebase. Or it would just enter a hibernation state: locking up completely, performing no work, but generating no error.

The reason for the strangeness is that xcodebuild doesn't work when run as a user that has no home directory, like daemon. It throws its toys out of the pram in as baroque a manner as it can muster.

STEP 5: Create a "jenkins" user

So, to solve this we can either have Jenkins run as one of the existing users, or - more cleanly - create a new user specifically for jenkins.

To do this:

  • Create a user called "jenkins" from Control Panel. If you care particularly, you might want to create a "hidden" user; follow the instructions here:
  • Stop Jenkins again: sudo launchctl unload -w /Library/LaunchDaemons/org.jenkins-ci.plist
  • Edit /Library/LaunchDaemons/org.jenkins-ci.plist, changing the username entry from daemon to jenkins
  • Change the permissions on the existing Jenkins files: "sudo chown -R jenkins /Users/Shared/Jenkins" and "sudo chgrp -R staff /Users/Shared/Jenkins"
  • Restart Jenkins: sudo launchctl load -w /Library/LaunchDaemons/org.jenkins-ci.plist
Re-start the build. Sit and wait.

Job done

From memory, that was the set-up steps required to get builds to work on a Mac from a fresh Jenkins install. These things aren't really covered by the install guides. They're obvious once you know them. Hindsight is great like that.

Plenty of people followed my whining on Twitter with disbelief, saying that Jenkins "just works" for them. Others suggested moving over to Hudson instead, but I imagine I'd've had the same issues there.

Perhaps I'm unusual, and this stuff does just work for everyone else. If that's not the case, then I hope this rant proves useful.

As a postscript, I now have my Jenkins server working well. I have configured a Windows client, running under a Parallels virtual machine on the same computer. It's not the fastest build server when both run together, but it's passable.

There are definitely some rough edges and features lacking from Jenkins, but I can't complain at the price. And there are plenty of excellent plugins that really do make it a very capable build server.

Thursday, 15 September 2011

Writing: Smarter, Not Harder

The September issue of ACCU's C Vu magazine is out now. It contains the latest instalment in my Becoming a Better Programmer column. This one's called Smarter, Not Harder.

In it I investigate how high-quality developers use their skill and experience to be maximally productive. I describe useful tactics to help you solve problems more easily and get the job done in the most effective way. In short: how to pick your battles.

Tuesday, 6 September 2011


Confusion of goals and perfection of means seems, in my opinion, to characterise our age.
Albert Einstein

It's becoming an epidemic! They're springing up everywhere. We've got them coming out of our ears. It's as if you can't write a line of code, kick off a development process, or even think about the act of coding without signing up to one.

With all these manifestos for software development, our profession is in danger of becoming more about politics than the actual art, craft, science, and trade of software development.

Of course, a large and important part of professional software development is the people problem. And that necessarily involves politics, to some extent. But we're making even the foundational coding principles a political battle. Is this for the best? Or is it just a fashionable, sound-bite-sized way to get your point across, and to try to garner support for your pet hobby-horse?

These “development” manifestos are often too ambiguous for people to sign up to in any meaningful way. They're so general that they simply must be right. Akin to a development horoscope, if you will. Very few of them break new ground, or introduce anything genuinely radical. And, sadly, when a manifesto becomes popular we see factions form around it, leading to disputes about what the manifesto really stands for. Whole debates spring up around the exegesis of the particular manifesto items.
Software religion is alive and well.

Whether or not manifestos are a good idea, they seem to be springing up for any conceivable purpose. So, in order to stem the flow, and make it easier for future software activists who'd like to pen their own manifesto, here I present the one, the overarching, generic software development manifesto. A meta-manifesto, if you like.

A generic manifesto for software development

We, the undersigned, have an opinion about software development. We are concerned about the future of our profession, and our passion leads us to draw the following conclusions:
  • We believe in a fixed set of immutable ideals
over tailoring our approach to each specific situation.
  • We believe in concentrating on and discussing only the things that interest us
over the bigger problem.
  • We believe in our opinion
over the opinions and experiences of others.
  • We believe in arbitrary black-and-white mandates
over real-world scenarios with complex issues and delicate resolutions.
  • We believe that when our approach is hard to follow
then it only shows how much more important it is.
  • We believe in crafting an arbitrary set of commandments
over the realisation that it's just never that simple.
  • We believe in trying to establish a movement to promote our view
over something that will be genuinely useful.
  • We believe that we are better developers than those who don't agree with us
because they don't agree with us.

That is, we believe we're doing the right thing. And if you don't you're wrong. And if you don't do what we do, you're doing it wrong.


Alright. I'll admit it. I exaggerated for effect. And my tongue is in my cheek. Mostly. 



  1. What foundational development “principles” do you hold dear?
  2. Do you sign up to, or align yourself with, development streams like “agile”, “craftsmanship” and so on? How closely do you agree with each of the items in their manifesto?
  3. What do you think these manifestos do have to offer the development community?
  4. What kinds of harm might they really be able to do, if any?
  5. Or do you keep your head down and ignore this kind of thing? Should you actually follow these software fashions and fads to maintain personal development?

Friday, 12 August 2011

Pushing a git repository into an existing subversion repository

I worked on a project using git as the version control system. Eventually I needed to share it with other developers, who had an existing corporate Subversion repository. They didn't want to be bogged down in the minutiae of learning a new version control system.

So I had to push the code in my git repo up to the svn repo.

There are two options:
  1. Take a baseline of the code and just commit that to svn, losing all version history. It's simple; it's quick; it works. Ten points for getting the job done. No points for elegance. And minus several thousand points for losing revision history.
  2. Serialise the development history in git and use that to re-vivify the history within Subversion. Many, many, points for doing the right thing. Minus quite a few for the faff it takes to work out how to do it.
Clearly, (2) is the way to go.

For your delight, this is how I finally managed to do it. Hopefully it'll help you avoid similar head scratching and Googling.

How to push a git repository into svn

Obviously, we'll want to invoke the git-svn command. The trick is how to arrange your git repo so that it's tracking the subversion repository correctly.

Step 1: Create the landing point in the svn repo

svn mkdir svn://DEST/repo/projectname/{trunk,branches,tags}

It's worth noting that in this example, I was happy to just clone the mainline of development, and ignore any git branches.

As you can infer, the corporate svn repo has many top-level directories that are all themselves "mini-repos". This is why I had to push the git history into an existing svn repo, rather than just create a new svn repo.

Step 2: Create a git svn clone tracking svn

git svn clone svn://DEST/repo/projectname/trunk dest

Now we have a git repo that tracks the destination svn landing point for the import operation.

Step 3: Track the git repo we want to import

cd dest
git remote add -f source /path/to/git/source/repo

Now, if you inspect the git history (for this I used the excellent GitX (L) ), you'll see a whole series of commits from the original git repo and, disconnected from this, the master HEAD plus a git-svn HEAD pointing to the original (single) svn commit we cloned.

Step 4: Rebase the original git repo onto git-svn

Here's where the secret magic lies. It seems like there are many ways to go from here. This was the only one I found to work. Of course, I tried many ways that failed. Once I found one that worked, I stopped trying. So if there's a better way I'd love to hear it, but this seems to work well.

git rebase --onto remotes/git-svn --root source/master

At this point, I realised that my git history wasn't strictly linear; I had worked on a few machines, so the history of trunk wove around a bit.

This meant that what I had expected to be a straightforward operation (that's what you'd expect with a SVN hat on) required a few rebase fix-ups along the way:

gvim foo # fix merge conflict
git add foo
git rebase --continue
# ... rinse and repeat

These were required because the source repo contained branches from work on different machines, merged together to form the "source" trunk line of development; they didn't flatten into a rebase without a few tweaks.

In general, the conflicts were small and weren't hard to fix.

Step 5: Admire your work

git log

You should now see that the entire master development line of the source git repo has been replayed into the master of your working repo, stacked on top of the git-svn point.

Again, GitX (L) helped to visualise this.

Step 6: Push up to svn

Now that we've arranged everything above git-svn, it's a simple case of:

git svn dcommit

To push the changes up into svn.
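Put together, the whole procedure is only a handful of commands. Here it is as one sketch (the svn URL and source repo path are placeholders, as above; any rebase conflict fix-ups are omitted):

```shell
# Placeholders throughout: adjust the svn URL and source repo path to taste.
svn mkdir svn://DEST/repo/projectname/{trunk,branches,tags}   # 1: landing point

git svn clone svn://DEST/repo/projectname/trunk dest          # 2: tracking clone
cd dest

git remote add -f source /path/to/git/source/repo             # 3: fetch source history

git rebase --onto remotes/git-svn --root source/master        # 4: replay onto git-svn
                                                              #    (fix any conflicts,
                                                              #     git rebase --continue)
git log                                                       # 5: admire your work
git svn dcommit                                               # 6: push up to svn
```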

Friday, 22 July 2011

Xcode: Determining the current SDK version in a script

In an iOS Xcode project, I needed a shell script target (aka "External Build Tool" target) that invokes other scripts which in turn build stuff using Xcode. Curiously recursive.

Now, I needed to pass that script the current iOS SDK version being used so it could arrange its own build malarkey, and get all its SDK ducks in a row.

There is no environment variable for that defined by Xcode. Bah. The closest you get is the current deployment target version, which appears in $IPHONEOS_DEPLOYMENT_TARGET.

That seems like rather a large omission.

You do get given $SDKROOT which is a file path to the current SDK. But that's not quite the same thing as a simple version number.

Never fear, shell gibberish to the rescue. This is what I came up with:

SDK_VERSION=$(echo $SDKROOT | sed -e 's/.*iPhone\(OS\)*\(Simulator\)*\([0-9]*\.[0-9]*\)\.sdk/\3/')
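As a quick sanity check, you can run the pattern over representative $SDKROOT values (the paths below are illustrative, for a 4.3-era SDK):

```shell
for SDKROOT in \
    "/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS4.3.sdk" \
    "/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.3.sdk"
do
    # Strip everything but the version number from the SDK path
    echo $SDKROOT | sed -e 's/.*iPhone\(OS\)*\(Simulator\)*\([0-9]*\.[0-9]*\)\.sdk/\3/'
done
# Both iterations print: 4.3
```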

If you know a better way to do this, I'd love to know.

Tuesday, 19 July 2011

C++: Declaring a pointer to a template method

Busy writing some template gibberish, we needed to make a healthy trade in pointers to template-member-functions. Of template classes. (Where the template function parameters themselves were pointers to template methods on template classes, but let's not worry about that detail right now).

It took a little run-up to get the C++ syntax right, so I present it here for your viewing pleasure.

All code tested against g++ 4.2.1 only.

Case 1: Normal pointer to member

Let's just remind ourselves of the syntax for a simple pointer to (normal, non-template) member function:

class Target1
{
public:
    void Method(int a)
    {
        std::cout << "Target1(" << a << ")\n";
    }
};

void PointerToNormalMemberFunction()
{
    Target1 target;
    // This is how we construct a normal pointer to member function
    void (Target1::*oneParam)(int) = &Target1::Method;
}

Relatively simple.

Case 2: A pointer to template member function

Here's the first incursion of templates. If you're looking at a template method, this is how you'd declare your pointers to it:
class Target2
{
public:
    template <typename T>
    void Method(T a)
    {
        std::cout << "Target2(" << a << ")\n";
    }
};

void PointerToTemplateMemberFunction()
{
    Target2 target;

    // This is how we construct a pointer to a template member function.
    // See how the template type of the method is mentioned at the end of the method name.
    void (Target2::*oneParamTemplateInt)(int) = &Target2::Method<int>;
    void (Target2::*oneParamTemplateFloat)(float) = &Target2::Method<float>;

    // However, the compiler can deduce the template type of the method
    void (Target2::*shorterInt)(int) = &Target2::Method;
    void (Target2::*shorterFloat)(float) = &Target2::Method;
}

Note that you can choose whether or not to specify the template types of the method when you assign it to your member-function pointer. The compiler can deduce these for you.

Case 3: Pointer to template methods with more than one template parameter

This is not significantly different from the above, we just extend the types in the pointer-to-member.

class Target3
{
public:
    template <typename T1, typename T2>
    void Method(T1 a, T2 b)
    {
        std::cout << "Target3(" << a << "," << b << ")\n";
    }
};

void PointerToTemplateMemberFunctionWithTwoParameters()
{
    Target3 target;

    // This is how we construct a pointer to a template member function
    // with multiple template parameters. Just like above, really.
    void (Target3::*oneParamTemplateIntFloat)(int,float) = &Target3::Method<int,float>;
    void (Target3::*oneParamTemplateFloatInt)(float,int) = &Target3::Method<float,int>;

    // Again, the compiler can deduce the type of the methods
    void (Target3::*shorterIntFloat)(int,float) = &Target3::Method;
    void (Target3::*shorterFloatInt)(float,int) = &Target3::Method;
}

Again, note, the compiler can generally deduce the correct template method without you having to specify the template parameter types.

Case 4: Pointer to template methods in a template class.

Now it's getting sillier - a pointer to a template method in a template class. The syntax does still make sense; it just depends how far down the rabbit hole you want to go.

template <typename TYPE>
class Target4
{
public:
    Target4(const TYPE &value) : value(value) {}
    TYPE value;

    template <typename T>
    void OneParam(T a)
    {
        std::cout << "Target4::OneParam(" << value << "," << a << ")\n";
    }

    template <typename T1, typename T2>
    void TwoParam(T1 a, T2 b)
    {
        std::cout << "Target4::TwoParam(" << value << "," << a << "," << b << ")\n";
    }
};

void PointerToTemplateMemberInTemplateClass()
{
    Target4<char> target('c');

    void (Target4<char>::*oneParam)(float) = &Target4<char>::OneParam<float>;

    // Again, we can miss off the last template types
    void (Target4<char>::*shorter)(float) = &Target4<char>::OneParam;

    // Two parameters just extends the scheme
    void (Target4<char>::*twoParam)(float,int) = &Target4<char>::TwoParam;
}

Case 5: Using a pointer to a template method of a template class inside the template class itself

If you want to make use of a pointer to template method within a template class, you simply cannot specify the template method's parameter types. The compiler considers this a syntax error. So you have to rely on the compiler deducing the correct template method instantiation. (See edit below.)

In the case of this example, it copes fine. In more complex cases, it may hurt less if you give your template method overloads different names.

template <typename TYPE>
class Target5
{
public:
    Target5(const TYPE &value) : value(value) {}
    TYPE value;

    template <typename T>
    void OneParam(T a)
    {
        std::cout << "Target5::OneParam(" << value << "," << a << ")\n";

        typedef void (Target5<TYPE>::*MethodTypeToCall)(T);
        // Here, the compiler picks the right overload
        MethodTypeToCall toCall = &Target5<TYPE>::Private;
        // In this case, the compiler does not let us write the following line (parse error):
        //MethodTypeToCall toCall = &Target5<TYPE>::Private<T>;
        (this->*toCall)(a);
    }

    template <typename T1, typename T2>
    void TwoParam(T1 a, T2 b)
    {
        std::cout << "Target5::TwoParam(" << value << "," << a << "," << b << ")\n";

        typedef void (Target5<TYPE>::*MethodTypeToCall)(T1,T2);
        MethodTypeToCall toCall = &Target5<TYPE>::Private; // compiler picks the right overload
        // you can't add the method's template parameters to the end of that line
        (this->*toCall)(a, b);
    }

private:
    template <typename T>
    void Private(T a)
    { std::cout << "Target5::Private(" << value << "," << a << ")\n"; }
    template <typename T1, typename T2>
    void Private(T1 a, T2 b)
    { std::cout << "Target5::Private(" << value << "," << a << "," << b << ")\n"; }
};

void HoldingAPointerToTemplateMemberInTemplateClass()
{
    Target5<char> target('c');

    void (Target5<char>::*oneParam)(int) = &Target5<char>::OneParam;
    void (Target5<char>::*twoParam)(float,int) = &Target5<char>::TwoParam;
}
Edit: it's been pointed out to me that you can name a specific overloaded template method using the following syntax. Add this to your pipe and smoke the whole template shenanigans:

MethodTypeToCall toCall2 = &Target5::template Private<T>;

This kind of template gibberish is why you know you love C++.

Simples, innit?

The extra thing we added to all this syntactical joy was to have one of the template method's (template) parameter types itself be a pointer to a template method on a template class.

It was at this point our brains dribbled out of our ears, and we had to retrace our template syntax steps back up this rabbit hole.

Monday, 18 July 2011

Writing: It's The Thought That Accounts

The July issue of ACCU's C Vu magazine is out now. It contains the latest instalment in my Becoming a Better Programmer column. This one's called It's The Thought That Accounts.

In it I describe how you can become a better programmer, and how you will be encouraged to write better code, through accountability. Far from being a dirty word, or some kind of bureaucratic nightmare, developer accountability can be fun, stimulating, enriching, and valuable.

You'll also find out about my wine and running predilections.

This is a great issue of C Vu, with some really interesting articles. If you're a developer who is not already an ACCU member, then I strongly urge you to join. It's super-cheap and really worthwhile!

I'll hold you accountable to that...

Friday, 24 June 2011

Are we there yet? (Becoming a Better Programmer)

In the name of God, stop a moment, cease your work, look around you.
Leo Tolstoy

A program is made of a number of subsystems. Each of those subsystems is composed of smaller parts - components, modules, classes, functions, data types, and the like. Sometimes even boxes and lines. Or clever ideas.

The jobbing programmer moves from one assignment to the next; from one task to another. Their working day is composed of a series of construction and maintenance tasks on a series of these software components: composing new parts, stitching parts together, extending, enhancing or mending existing pieces of code.

So our job is simply a string of lots of smaller jobs. It's recursive. Programmers love that that kind of thing.

Are we there yet?

So there you are: getting the job done. (You think.)

Just like a small child travelling in the back of a car constantly brays “are we there yet?”, pretty soon you'll encounter the braying manager: “are you done yet?”

This is an important question. It's essential for a software developer to be able to answer that one simple request: to know what “done” looks like, and to have a realistic idea of how close you are to being “done”. And then to communicate it.

Many programmers fall short here; it's tempting to just keep hacking away until the task seems complete. They don't have a good grasp on whether they're nearly finished or not. They think: There could be any number of bugs to iron out, or unforeseen problems to trip me up. I can't possibly tell if I'm almost done.

But that's simply not good enough. Usually, avoiding the question is an excuse for lazy practice, a justification for “coding from the hip”, without forethought and planning. It's not methodical.

It's also likely to create problems for you. I often see people working far too hard:

  • They are doing more work than necessary, because they didn't know when to stop.

  • Without knowing when they'll be done, they don't actually complete the tasks they think are finished. This leads to having to pick things back up later on, to work out what's missing and how to stitch it in. Code construction is far slower and harder this way.

  • The wrong bits of code get polished, as the correct goal was never in sight. This is wasted work.

  • Developers working too hard are forced to put in extra hours. You'll not get enough sleep!

Let's see how to avoid this and to answer “are we there yet” effectively.

Developing backwards: decomposition

Different programming shops manage their day-to-day development efforts differently. Often this depends on the size and structure of the software team.

Some place a single developer in charge of a large swathe of functionality, give them a delivery date, and ask them for occasional progress reports. Others follow “agile” processes, and manage a backlog of more granular tasks (perhaps phrasing them as stories), divvying those out to programmers as they are able to move into a new task.

The first step towards defining “done” is to know exactly what you're working on. If it's a fiendishly large and complex problem, then it's going to be fiendishly complex to say when you'll be done.

It's a far simpler exercise to answer how far you are through a small, well-understood problem. Obvious, really.

So if you have been allotted a monster task, before you begin chipping away at it, break it down into smaller, understandable parts. Too many people rush headlong into code or design without taking a step back to consider how they will work through it.

Split large tasks up into a series of smaller, well-understood tasks. You will be able to judge progress through these more accurately.

Often this isn't as complex a task, at least for a top-level decomposition. (You may have to drill down a few times. Do so. But take note: this is an indication that you've been handed a task at far too high a granularity.)

Sometimes such a decomposition is hard to do, and is a significant task itself. Don't let that put you off. If you don't do it up-front for estimation purposes, you'll only end up doing it later on in less focussed ways as you battle to the finish line.

Make sure that at any point in time, you know the smallest unit you're working on; rather than just the big target for your project.

Define done

You've got an idea of the big picture; you know what you're ultimately trying to build. And you know the particular sub-task you're working on at the moment.

Now, make sure that for whatever task you are working on, you know when to stop.

To do this, you have to define what “done” is. You have to know what “success” means. What the “complete” software will look like.

Make sure you define “done”.

This is important. If you haven't determined when to stop, you'll keep working far past when you needed to. You'll be working harder and longer than you needed to. Or, you won't work hard enough – you'll not get everything done. (Not getting everything done sounds easier, doesn't it? But it's not... the half-done work will come back to bite you, and will make more work for you later down the line, whether that's bugs, rework, or an unstable product).

Don't start a piece of coding work until you know what success is. If you don't yet know, make your first task determining what “done” is. Only then, get going. With the certainty of knowing where you're headed, you'll be able to work in a focused, directed manner. You'll be able to make informed choices, and to discount unnecessary things that might side-track or delay you.

If you can't tell when it's done, then you shouldn't start it.

So how does this look in practice? How do you define “done”? Your “done” criteria needs to be:


It must be unambiguous and specific. A list of all the features to be implemented, the APIs added or extended, or the specific faults to be fixed.

If, as you get into the task, you discover things that might affect the completion criteria (e.g. you discover more bugs that need fixing, or uncover unforeseen problems) then you must make sure that you reflect this in your “done” criteria.

This criteria is usually directly traceable to some software requirements or a user story – if you have them. If this is the case, make sure that this connection is documented.


Make sure that the success criteria is seen by all important parties. This probably includes: your manager, your customers, the downstream teams using your code, or the testers who will validate your work.

Make sure everyone knows and agrees on this criteria. And make sure they'll have a way of telling – and agreeing – when you are “done”.

The nature of each task will clearly define what “done” means. However you should consider:

  • How much code must be completed. (Do you measure this in units of functionality, APIs implemented, user stories completed?)

  • How much is design done, and how it's captured.

  • Whether any documents or reports must be generated.

When it's a coding task, you can mostly clearly demonstrate “being done” by creating an unambiguous test set. Write tests that will show when you've fashioned the full suite of code required.

Use tests written in code to define when your code is complete and working.

There are some other questions that you may have to consider when you describe what “done” is:

  • Where is the code delivered to? (e.g. to version control)

  • Where is the code deployed to? (Is it “done” when it's live on a server - or do you deliver testable product ready for a deployment team to roll out?)

  • What are the economics of “done”? The exact numbers required that may lead to certain tradeoffs or measurements. For example: how well should your solution scale? It's not good enough if your software only manages 10 simultaneous users if 10,000 are required. The more precise your done criteria the better you understand these economics.

  • How will you signal that you're done? When you think you're done how will you let the customer/manager/QA department know? This probably looks different for each person. How will you garner agreement that you are indeed done – who signs-off on your work? Do you just check in, do you change a project reporting ticket, or do you raise an invoice?

Just do it

When you've defined “done”, you can work with focus. Work up to the “done” point. Don't do more than necessary.

Stop when your code is good enough – not necessarily perfect (there may be a real difference between the two states). If the code gets used or worked on an awful lot, it may eventually be refactored to be perfect – but don't polish it yet. This may just be wasted effort. (Beware: this is not an excuse to write bad code, just a warning against unnecessary over-polishing).

Don't do more work than necessary. Work until you're “done”. Then stop.

Having a single, specific goal in mind helps you to focus on a single task. Without this focus it's easy to hack at code randomly trying to achieve a number of things and not managing any of them successfully.


  1. Do you know when your current task will be “done”? What does “done” look like?

  2. Have you decomposed your current task into a single goal, or a series of simple goals?

  3. Do you decompose your work into achievable, measurable units?

Thursday, 9 June 2011

The Enigma Continues

Version 1.4 of my cool iOS puzzle game, The Enigma Squares, has been released. Check it out here. In this version, I present you more than 80 new levels: a mix of fun, fiendish, and speed-trial puzzles.

Check it out on iTunes now.

I've also released a new game. The Enigma Kids is a family-oriented puzzler. With child-friendly puzzles, great graphics and lots of fun, everyone will enjoy this game.

Check out The Enigma Kids on iTunes here.

Let me know what you think!

Wednesday, 25 May 2011

Optimising Parallels performance for development

I use the wonderful Parallels 6 to perform all my Windows development. (I find working on a Mac makes Windows development a lot more bearable - at least for this old Unix-head).

In general, Parallels works superbly (and is reported to perform a lot better than VMware, if you believe the hype). However, for operations like rebuilding a large, complex C++ source tree, the virtual machine performance lagged a long way behind a native machine. Large compiles can take more than twice the time they take on a real Windows PC. Clearly, this taxes the host machine hard: hammering CPU, memory, and disk at once.

This problem exhibits on my four core Mac Pro with as many resources as I could throw at the problem, as well as on my more humble MacBook Pro.

I performed a number of experiments to work out how to fix this:
  • Tweaking the allocated CPU cores, memory, and disk space
  • Tweaking the settings for the virtual hard drive
  • Network building from host mac file system
None of these made much difference. In fact, I was somewhat surprised that operating on a networked disk was fantastically slower. That was a shame, as it would be advantageous to share a single source tree with my Xcode development for cross-platform development. (As it happens, I manage this by having a Windows-side git clone of the git repo on the Mac side).

However, I've now managed to get my compile times right down. Here's how.

Do bear in mind that these tweaks are in addition to the obvious steps you should take with any machine:
  • Have as fast a CPU as possible.
  • Throw as many CPU cores as you can at the problem - builds can be parallelised very easily.
  • Throw as much memory as you have at your disposal at the problem.

1. Swap the swap

The factor slowing me down the most turned out to be the Parallels virtual disk that Windows was running on. First, the swap file was hosted on it. So I made a second virtual disk, and pushed the virtual memory off onto it. That made a small, but appreciable difference.

2. Only ever use fixed size disks

In your disk settings, do not use automatically expanding disks. That's just another overhead for the computer to manage whilst accessing disks. Create a fixed size blob of disk space and run with it.

3. Use a virtual SCSI disk

This is the big one. So pay attention at the back.

It seems that Parallels can emulate a SCSI disk far, far faster than it can emulate an IDE disk. However, it defaults to creating IDE disks. I believe that this is to make installation of Windows simpler, as you don't have to fettle with custom SCSI drivers during installation.

You can't just switch the main disk from IDE to SCSI, though. Windows has a minor eppy and refuses to run if you do that. I couldn't be bothered to wipe it all and try to reinstall on a new SCSI virtual disk, so I added yet another large SCSI virtual disk for my source trees and checked out onto that disk.

Remember: it's a fixed size, non-expanding SCSI disk.

With this configuration the build simply screams in comparison to building on the virtual IDE C: drive.

I hope you find this information useful!