Sunday, October 30, 2011

Jenkins: trend is destiny

Every project is driven by the strong demand for more features. Be it the evil manager (who juggles three mobile phones to extend unattainable deadlines), the evil boss (who signs your pay-check), or the evil customer (who ultimately funds the project), they all want more and different. Left to these forces alone, any project rapidly decays into contradictory layers of increasing complexity. Fortunately, as experienced developers we know better: we constantly capitalise on our progress with non-regression tests and fight entropy through code refactoring.
It is a good thing to know where to go; it is even better to know where we stand. And this post is just about that: how to quickly grasp and continuously monitor the global state of your software project. To do so, I will show you how to configure a job in Jenkins. Jenkins is an open-source continuous integration server. In more mundane terms, it allows you to:
  • periodically run tasks on the latest version of the code base,
  • access the results through a web interface. 
Here are some of the things Jenkins can do for you:
  • build the project,
  • run all non-regression test suites,
  • run static analysis tools,
  • present tool results,
  • plot metric trends (line count, test count, warning count...),
  • package the product.
So at all times you get a synthetic view of your project, its general health and the progress being made. Since Jenkins can even package your software into a product, you can almost ship it any time! As I hope to demonstrate in this tutorial, Jenkins has a low initial investment cost (which can moreover be gradually paid) and then runs all by itself. It is for the most part extremely intuitive.

I will be working with Ubuntu (Oneiric Ocelot). Since Jenkins is a Java application, most of the tutorial should still apply to other platforms. However, this post assumes you know how to install software packages for your distribution (with apt-get, aptitude or synaptic, for instance). The easiest way to install Jenkins is to get it from the Ubuntu Software Center. With older versions of Ubuntu, you may need to first add the Jenkins repository to your software sources and then install with aptitude:
wget -q -O - http://pkg.jenkins-ci.org/debian/jenkins-ci.org.key | sudo apt-key add -
sudo sh -c 'echo deb http://pkg.jenkins-ci.org/debian binary/ > /etc/apt/sources.list.d/jenkins.list'
sudo aptitude update
sudo aptitude install jenkins
This page explains the installation process in more detail. Now, you should be able to access the Jenkins web interface by following this link: http://localhost:8080/. If this does not work, then maybe you need to start the service first:
sudo service jenkins start
As a running example, I will be using Hime Parser Generator. As indicated by its name, Hime automatically generates C# parsers from context-free grammars (much like yacc). But really, the purpose of Hime is irrelevant to the discussion of setting up a job in Jenkins. Hime is written in C#; this will not cause any problem. In fact, Jenkins is language agnostic, even though many of its plugins target the Java world.

A first build
Let us create our first job. To do so, simply click on New Job from the Jenkins entrance page. Name the job Hime.Parser. The job data is going to be stored on disk in directory /var/lib/jenkins/jobs/Hime.Parser.
From experience, it is preferable to avoid spaces in job names; otherwise some Jenkins plugins may display awkward behaviours. Then select the 'Build a free-style software project' option and click OK. You will be brought to the configuration page of the newly created job. Let us ignore this page for the time being and simply scroll down to click Save. You are now at the main control page of the job. For now, the options listed on the left side are fairly limited: basically we can either Delete Project or Build Now. Let us launch our first build. The first build, build #1, should soon appear with its status (a blue ball) and build date in the 'Build History' box.

Checking out the code
For the second build, we are going to fetch the project source code from its repository. Hime's code is hosted on CodePlex, and it is tracked with Subversion (even though I would prefer it to be on Mercurial). Let us go back to our job's configuration page and choose Subversion in the 'Source Code Management' section. Then, we enter the following as its 'Repository URL':
Next, select 'Emulate clean checkout by first deleting unversioned/ignored files, then 'svn update'' as the 'Check-out Strategy'. This strategy is safer than a plain svn update, as it starts each build with a clean repository, but far more efficient than always checking out a fresh full copy. Then save and launch a new build. While it is running, you can click on the build and check the Console Output to follow what is going on. You should see something like:
Started by user anonymous
Checking out a fresh workspace because there's no workspace at /var/lib/jenkins/jobs/Hime.Parser/workspace
Cleaning workspace /var/lib/jenkins/jobs/Hime.Parser/workspace
Checking out
A Hime
A Hime/trunk
A Hime/trunk/Exe.HimeCC
A Hime/trunk/Tests/Tests.csproj
A Hime/trunk/TraceAndTestImpact.testsettings
A Hime/trunk/VersionInfo.cs
At revision 11108
Finished: SUCCESS
From the job main page, you can now browse the project source code by clicking on Workspace.

Tracking line count and installing plugins
Now we are going to use Jenkins to track a very simple yet important metric: line count. To do so, we will use the external program sloccount and the corresponding Jenkins plugin. Before going any further, let me first explain the Jenkins execution model. Each job execution is split into two major phases: 'Build' and 'Post-build'. During the 'Build' phase, external tools are called and may produce results as files (mostly .xml, but any format is possible). Once the 'Build' phase completes, the 'Post-build Actions' execute. Most of them read some result file produced during the previous phase and convert it into visual elements to display in the web interface. Jenkins has a great variety of plugins for build actions, for instance shell, python, ruby, ant (you name it...) scripts, and for post-build actions.
Plugins are managed in the 'Update Center' which can be accessed from Jenkins entrance page by clicking on Manage Jenkins, then on Manage Plugins and at last on the 'Available' tab. For the time being, select Jenkins SLOCCount Plug-in from the list of all available plugins. (If there seems to be no plugin available, then you may need to check for updates manually. This can be done in the 'Advanced' tab). Install and restart Jenkins.
If you now go to the configuration page of the Hime.Parser job, you may notice the option 'Publish SLOCCount analysis results', which has appeared among the 'Post-build Actions'. Tick it. In addition, we must obviously invoke sloccount. In the 'Build' section, add a build step of type 'Execute shell' with the following command:
sloccount --details --wide --duplicates VersionInfo.cs Lib.CentralDogma Exe.HimeDemo Lib.Redist Exe.HimeCC >
As you may well know, sloccount counts .xml files too. So it is a good idea to run it as one of the first build steps, while the workspace is still clear of any .xml result files that may be produced by other build steps.
Save the new configuration and launch a build. A new link, SLOCCount, should have appeared in the left menu. If you click it, you will get a sortable table which displays all file sizes. If you build a second time (and refresh the page), Jenkins plots the graph of total program size per build on the main page of the job. For now, it should evidently show a flat trend.

Compiling and building regularly
Again back to the configuration page, this time to set up a whole project compilation. We add a shell script in the 'Build' section which uses mdtool to compile all projects:
mdtool build -p:Lib.Redist -c:Release
mdtool build -p:Lib.CentralDogma -c:Release
mdtool build -p:Exe.HimeDemo -c:Release
mdtool build -p:Exe.HimeCC -c:Release
mdtool build -p:Tests -c:Release
Until now, we launched all our builds manually. It would be more convenient to have Jenkins start a build every time the code changes. So we turn to the 'Build Triggers' section: we select 'Poll SCM' and have Jenkins poll the Subversion repository every quarter of an hour, following the cron syntax:
*/15 * * * *
If you want more details about the syntax, just click on the question mark next to the field. Jenkins offers a very comprehensive help. Some people prefer to build at regular intervals of time, such as every night. This is done by opting for 'Build periodically'. Also notice the 'Build after other projects are built' which is useful when your software product is composed of several small projects which depend on each other.
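For reference, here are a few schedule expressions in the same cron syntax (the first is the quarter-hour polling used above; the other two are illustrative values, not part of the Hime job):

```
# poll every 15 minutes
*/15 * * * *
# build every night at 3 AM (for 'Build periodically')
0 3 * * *
# build at midnight and noon, on weekdays only
0 0,12 * * 1-5
```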
Save and launch a build. If you like, you may follow the mdtool compiler's progress in the Console Output. This time, click on trend next to the 'Build History'. This page displays a 'Timeline' with all builds and plots each build's duration. As you can check, the last build lasted much longer than the previous ones.
At this point, Jenkins' benefit is still modest. However, it will now warn you with a build failure every time a code modification breaks the compilation process.

Executing non-regression tests
This is probably the most important part of continuous builds: running tests frequently. Remember how we just compiled all projects in Hime. Among them was the project Tests, which contains all NUnit test suites. We will run them with nunit-console and then let Jenkins display the results. First, let us install the Jenkins NUnit plugin from the Update Center. Then go back to the configuration page of the job and add a new shell build step in the 'Build' section with the following content:
nunit-console -noshadow Tests/bin/Release/Tests.dll
And, fill the 'Publish NUnit test result report' in the 'Post-build Actions' with:
Now build twice. If you reload the page after the builds complete, a 'Test Result Trend' graph should appear. This graph plots the number of tests over time: successful tests are in blue, ignored ones in yellow and failed ones in red. If you click on the graph, you can browse down the test hierarchy and see the duration and status of each test.

Displaying static analysis reports
Back to the Update Center, let us install plugins to display results of static analysis tools. In order to get the list of compiler warnings, the list of tasks (TODO, FIXME, XXX... comments present in code) and the results of Gendarme, we need to install plugins Warnings Plug-in, Task Scanner Plugin, Jenkins Violations plugin and Static Code Analysis Plug-ins.
Then we set up each build step from, as usual, the configuration page of the Hime.Parser job.
For compiler warnings, select 'Scan for compiler warnings' among the 'Post-build Actions'. There is no ad hoc parser for mdtool output, so we choose 'MSBuild' to 'Scan console log'.
Then, for tasks, select 'Scan workspace for open tasks' and fill 'Files to scan' with a fileset representing all C# source files: **/*.cs
Also, fill 'Tasks tags' with TODO as a 'Normal priority'.
As for Gendarme, it is first run in a shell script with the following command:
gendarme --config gendarmeConfig.xml --xml gendarmeResults.xml Exe.HimeCC/bin/Release/himecc.exe Exe.HimeDemo/bin/Release/HimeDemo.exe Lib.Redist/bin/Release/Hime.Redist.dll Tests/bin/Release/Tests.dll Lib.CentralDogma/bin/Release/Hime.CentralDogma.dll || exit 0
Note that we are using rule filters stored in file gendarmeConfig.xml on the Hime repository. Also, the command needs to be combined with an exit 0 so as not to fail the build on the first Gendarme warning. In the 'Post-build Actions' section, tick 'Report Violations' and fill the gendarme 'XML filename pattern' with gendarmeResults.xml.
We are now ready to save, build (twice again, to get trends) and check the result. We get three new trend graphs, all clickable. Clicking on 'Open Tasks Trend' brings up a list of all open tasks in the project, down to the source file. Similarly for 'Compiler Warnings Trend' and 'gendarme'. Note that the MSBuild parser sometimes fails to extract the correct file name or warning description from the compilation output. This is because of the line breaks that mdtool inserts to keep column width under a certain limit. I would really like to know how to avoid this bug...

The last action of this tutorial will be to add a minimal packaging process to our job. It will simply create a zip file with all the content necessary for end-product delivery. Back to adding a new shell script in the configuration page:
mkdir himecc_latest
cp Exe.HimeCC/bin/Release/* himecc_latest
cp Lib.Redist/bin/Release/Hime.Redist.xml himecc_latest
zip -r himecc_latest.zip himecc_latest
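As a possible refinement (the names below are illustrative, not part of the Hime job), the archive could be stamped with the build number, which Jenkins exposes to every build step through the BUILD_NUMBER environment variable. This sketch stages a placeholder file and uses tar for portability; substitute zip to match the job above:

```shell
#!/bin/sh
# Illustrative packaging sketch: version the archive with Jenkins'
# BUILD_NUMBER variable (defaults to 0 when run outside Jenkins).
BUILD_NUMBER=${BUILD_NUMBER:-0}
STAGE=himecc_latest
mkdir -p "$STAGE"
# stand-in for the real build outputs copied in the job above
echo "placeholder" > "$STAGE/himecc.exe"
tar czf "himecc_${BUILD_NUMBER}.tar.gz" "$STAGE"
ls "himecc_${BUILD_NUMBER}.tar.gz"
```

A versioned name lets Jenkins keep one downloadable artifact per build instead of overwriting a single file.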
Then tell Jenkins to present the resulting archive on the main page of the job by ticking the post-build action 'Archive the artifacts'. Set himecc_latest.zip as the 'Files to archive'. Let us build one last time and check the downloadable artifact.
Going further
By now, you must have gotten a fairly good grip on Jenkins. Going further is pretty easy and can be done through gradual improvements. For instance, test coverage trends are pretty much necessary, and so are code duplication warnings.
Here are some plug-ins which may prove useful:

I have found that the two most important trends to follow in any project are its line count and its test count. The test count needs to increase while the line count should decrease relentlessly. Keeping these two trends on track is the best recipe to attain (almost) flawless software. Doing so, you may realise how much trend is, in fact, destiny.
Visualising important code metrics in Jenkins creates a positive feedback loop: the impact of any code modification can be sensed almost immediately, and the project stays the course.

Here are some more metrics that cannot, to the best of my knowledge, be tracked in Jenkins and that would be great to have:
  • More information from the bug-tracker, such as the time taken to complete each issue, the number of implemented and remaining features...
  • The time elapsed between the discovery of two consecutive bugs,
  • The ability to sort tasks (TODO comments) according to their age (i.e. the number of commits for which they have existed).
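The TODO-age idea can at least be approximated outside Jenkins. As a rough sketch (the directory and file below are made up for the demonstration), grep lists every open task with its location; feeding each hit to svn blame or hg annotate would then date it:

```shell
#!/bin/sh
# Create a throw-away source tree for the demonstration
mkdir -p /tmp/todo_demo
cat > /tmp/todo_demo/Sample.cs <<'EOF'
class Sample
{
    // TODO: handle the empty-input case
}
EOF
# List every TODO with its file name and line number;
# 'svn blame' or 'hg annotate' on each hit would reveal its age.
grep -rn "TODO" /tmp/todo_demo
```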
Here is my set of questions for you today:
  • Do you use a continuous integration server?
  • Which one do you prefer?
  • What about sonar?
  • If you are using Jenkins, which plug-ins do you find helpful?
In a later post, I would like to talk about the following topics:
  • How to order Jenkins with a script using its command-line interface,
  • How to write your own plug-in for Jenkins, in particular to follow tests coverage statistics as computed by opencover
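To give a small taste of the first topic: Jenkins ships a command-line client, jenkins-cli.jar, which can be downloaded from the running server itself. Assuming Jenkins runs at http://localhost:8080/, triggering our job from a script would look like this (a sketch, not run against a live server here):

```
# fetch the CLI client from the running Jenkins instance
wget http://localhost:8080/jnlpJars/jenkins-cli.jar
# trigger a build of the Hime.Parser job
java -jar jenkins-cli.jar -s http://localhost:8080/ build Hime.Parser
```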

Some metrics for project Gaia. Trend is destiny?

Thursday, June 2, 2011

Testing internal classes: quality and minimal interface at the same time

Today's entry is brief. I just found a very satisfactory answer to one of my everlasting questions about non-regression testing. Namely, as stated in this previous post: how is it possible to test internal classes while keeping all NUnit test code in a separate project?
Well, there is an attribute in System.Runtime.CompilerServices, InternalsVisibleTo, that lets you do just that. Let's say your test project is called TestSuite; then all you need to do is add the following line to AssemblyInfo.cs in the project under test:
[assembly: InternalsVisibleTo("TestSuite")]
Now all internal classes and methods are visible to the test methods in TestSuite. So you can do deep non-regression testing of an assembly and still present a minimal interface to your users. Pretty neat, isn't it?

If I am not mistaken, this problem does not arise in Java, where the notion equivalent to assembly-internal is package-private. So even though JUnit code is in a distinct project, it suffices to declare it as part of the same package as the project under test in order to gain full access to all package-private classes and methods.

As often, Stack Overflow has it all: this answer originates from there.

René Magritte, Decalcomania

Wednesday, March 9, 2011

Coding rules for clean and robust C# code

This post presents my coding standards for writing clean and consequently robust C# code.
As I stated in a previous post, more time is spent understanding, debugging and reading code than actually writing it. So it is often worthwhile to spend some effort writing clear code upfront.

These standards have thus two main objectives, namely to:
  • set a limit on code complexity, be it for its size or shape, so that all coding tasks remain always feasible,
  • establish a normalized way of writing code, so that anybody can quickly understand any part of the project.
In an ideal world, there would be only one best way of writing any piece of code, and the meaning of the code would be plain obvious at first read. Following the rules listed in this post is, in my opinion, a step towards this ideal.

I am not claiming this is the only valid set of coding rules, or even that it is complete. However, I have seen many ways to fail a software project and following these rules surely limits risks.
Even though some rules may seem arbitrary, they are all motivated by (sometimes unfortunate) past experiences. So, if you do not understand the rationale for some rule, please let me know and I will try to write further explanatory posts.
Lastly, I tried to order the rules by importance. Very often, coding standards are so long that nobody reads them, let alone applies them. So if you feel this is too much, try to follow the first few rules only. And maybe, in the future, some tool will be able to check all these rules automatically.

Driving principles
First, here are a few general and driving principles:
  • Keep it simple: try to implement the simplest solution that solves the requirements.
  • Do not over-design: in other words, do not code for yet-inexistent but potentially future requirements.
  • Do not optimize early: performance problems are solved after they are spotted, not when they are imagined.
  • Remove known bugs before coding new functionalities.
  • Write at least one automatic non-regression test before fixing any bug. This is really the minimum. Writing tests before coding (full-fledged TDD) is best.
  • Refactor and clean up code often.
The coding priorities are:
  • robustness: the code does not unexpectedly stop,
  • correctness: the code does what it is expected to do,
  • performance: the code is sufficiently efficient both in time and space.
Bojagi, or the minimality of patterns

Size limitations

The cost of all software development tasks grows non-linearly with the size of the code. So the most important rule is to limit the line count. Here are some limits I like to apply:
  • 1 class per file,
  • less than 20 000 lines of code per solution (if not, create libraries),
  • less than 1000 lines per class,
  • less than 100 lines per method,
  • less than 150 characters per line,
  • less than 10 fields per class,
  • less than 4 arguments per method.
Please keep in mind these are approximate values that give an idea of the optimal scale for each category.

Forbidden language constructs
Here are language constructs that are absolutely forbidden:
  • Static methods, except for Main.
  • Static fields (unless they are private readonly constants).
  • Nested classes. Why hide some code that could be reused? Also, it is better to have the whole complexity of your architecture apparent.
  • Goto statements.
  • Macros, in particular regions (#region, #endregion) and conditional code (#if DEBUG).
  • The ref method parameter.
  • The new method modifier.
  • Operator overloading.
  • Dead code.
  • Obsolete code that is commented out. After all, you should be using a powerful source control system such as mercurial (read this and that, if you don't know what I am talking about).
  • The ternary conditional expression c?e1:e2.

Strongly discouraged language constructs

Here are language constructs which should be used scarcely:
  • The null value can almost always be avoided. In particular, you should never pass the null value as an argument; it is better to create another variant of the method. Returning null can advantageously be replaced by throwing an exception. In other cases, you can use the null object pattern. When null is used to encode a special case, document it explicitly.
  • Casts are forbidden in general. They may be necessary to implement powerful functionalities, such as dynamic class loading. But then, they must be hidden in thoroughly tested libraries. Their use must be motivated and documented.
  • Generic classes should rarely be defined, even though their use is encouraged. Use generic classes for only one thing: when you need a collection of elements that all have the same type. This practice avoids unnecessary casts. Any other need should be carefully evaluated. In particular, powerful à la C++ tricks are banned.
  • Comments should be as few as possible. Special one-line comments, starting with TODO:, to signal less important tasks or open questions, are allowed. Nothing is worse than paraphrasing comments such as:
    // Creates an instance of MyClass
    this.myField = new MyClass();

Reducing and making dependencies explicit

  • Use the most restricting visibility for classes: internal then public.
  • Similarly, use the most restricting visibility for fields, properties and functions: private then internal protected then internal then protected then public.
  • Minimize using directives. Put them all at the start of the file rather than fully qualifying class names in the code.
  • Access fields through this. rather than prefixing them with m_ or _.
  • Try to abide by the Law of Demeter: avoid successive field accesses in the same statement.
  • Minimize and comment the use of delegates and events. They make the control flow harder to follow!

Structuring statements
  • Initialize fields in the constructors rather than next to their declaration.
  • Declare all objects that implement the IDisposable interface with a using statement. On rare occasions this may not be possible, in which case you should document how the object will be disposed. Classes implementing the IDisposable interface include, for instance, streams and Windows Forms.
  • Boolean conditions should not have any side-effects.
  • Reduce code nesting. In particular, prefer short conditions and early exits, so that the following code:
    if (x != null && y != null)
    is rewritten into:
    if (x == null) return;
    if (y == null) return;
  • Constants must be declared const whenever possible, static readonly otherwise.
  • Do not use error codes; always prefer exceptions.
  • Ensure that all exceptions get caught at some point. In the worst case, set a general exception handling mechanism at the root of the program. Note this may not be that straightforward because of threads and the message loop of graphical applications.
  • Adding a logger is a good idea before shipping code.

Process rules
Bug tracker and project management
  • A bug-tracker such as Redmine should be used.
  • Tickets must be solved in the order of priority.
  • Once a ticket is solved, it should be first sent back to its author, who is the only one authorized to validate and close it.
Rules for commit messages
  • Use a version control system such as Mercurial.
  • All commit messages are prefixed by one of the following:
    [add]: adding of a new feature,
    [fix]: fixing a bug (optimizations fall under this category),
    [clean]: cleaning up code,
    [doc]: for any comment,
    [spec]: adding a new test
  • Whenever possible, it is also advised to indicate a ticket number in the commit message.
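To make the prefix convention concrete, here is a minimal sketch of an automatic check (the function and its wiring are hypothetical; Mercurial could run such a script from a pretxncommit hook):

```shell
#!/bin/sh
# Return success only when the message carries one of the agreed prefixes.
check_prefix() {
  case "$1" in
    "[add]"*|"[fix]"*|"[clean]"*|"[doc]"*|"[spec]"*) return 0 ;;
    *) return 1 ;;
  esac
}
check_prefix "[fix] off-by-one in lexer (ticket #42)" && echo "prefix ok"
```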
Organisation of a solution
  • Every Mercurial repository stores one solution.
  • Every solution has a project that produces either a library (.dll) or an executable (.exe).
  • Every solution has a project PrePush that contains pre-push tests, and a project PostPush for lengthy tests.
  • The central repository compiles and executes all pre-push tests as a hook before accepting any push.
  • A continuous integration system such as Hudson executes both pre-push and post-push tests. It also triggers the check of downstream solutions.
  • Every solution has a version number, preferably generated from the source control system number.
  • For library projects, an automatically generated documentation, with a tool such as Doxygen, is advised.
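The pre-push gate mentioned above can be sketched in the central repository's .hg/hgrc (the script name is hypothetical; pretxnchangegroup is the real Mercurial hook able to refuse incoming changesets when its command exits with a non-zero status):

```
[hooks]
# run the PrePush test project; a non-zero exit status rejects the push
pretxnchangegroup.prepush = /usr/local/bin/run-prepush-tests.sh
```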
Non-regression tests
  • NUnit non-regression tests are mandatory.
  • The minimum consists in writing a test before fixing any bug. That way at least the same mistake is not made twice.
  • Better yet, apply TDD. That is, write tests before even starting to write code.

Syntactic conventions
A project should follow syntactic conventions, so that reading the code is easier for everybody. I believe that code should be light, succinct and go to the point. So I tried to choose conventions that remove any unnecessary hindrance to one's understanding.
Note that limiting complexity, compartmentalizing code and writing automatic tests are much more important than focusing on the small details of instruction syntax. So the rules stated in this section of the coding standards are really the least important. However, surprisingly, most coding standards seem to contain several (boring) pages of such rules. Note also that most of these rules can be automatically checked by tools of minimum intelligence.

Naming conventions
  • Use camel case; underscores "_" are forbidden except in test suites.
  • Project, namespace, class, interface, struct, method and properties names start with a capital letter.
  • Fields and variables start with a lower case.
  • Interfaces start with I.
    Good:
    struct Data;
    int bufferSize;
    class HTMLDocument;
    String MimeType();
    Bad:
    struct data;
    int buffer_size;
    class HtmlDocument;
    String mimeType();
  • Avoid abbreviations (except, maybe, for loop counters and X-Y coordinates).
    Good:
    int characterSize;
    int length;
    int index;
    Bad:
    int charSize;
    int len;
    int idx;
  • Getters should have the same name as the field they access, except their first letter is capital.
    Good:
    int Count { get { return this.count; } }
    Bad:
    int getCount { get { return this.count; } }
  • Prefix boolean variables by is, has...
    Good:
    bool isValid;
    Bad:
    bool valid;
  • Prefix event names by Event,
  • Prefix methods that handle an event with On,
  • Prefix delegate types by EventHandler,
  • Prefix methods and properties created for test purposes only with Test,
  • Prefix test methods that trigger an event with TestTrigger.
  • As much as possible, method names should contain a verb. Try using few different verbs. Avoid fancy verbs whose meaning is not clear to everybody. Try using the same verb for the same concept.
  • Prefix field accesses by this., rather than prefixing fields name with "_", or "m_".
Care should be taken in presenting statements. A code with irregular layout looks messy. It is an indication of the poor care that was put into the code. And, according to the broken windows theory, it encourages further degradations from other coders.

Here are some of my preferred layout conventions:
  • Each instruction, even short ones, has its own line.
    Bad:
    x++; y++;
  • Braces have their own line. This rule applies regardless of the element that the braces enclose: namespace, class, method, property, conditionals... Is this convention better than the java one where the opening brace is on the same line as the keyword? Frankly, it doesn't matter much, as long as you are coherent. But if you really ask me, having the braces on their own lines, makes it easier to visually identify block, plus an if statement can quickly be suppressed with a single one line comment.
    Good:
    void Execute()
    {
        ...
    }
    if (condition)
    {
        ...
    }
    Bad:
    void Execute() {
    if (condition) {
    else {
    if (condition) DoTrue(); else DoFalse();
    if (condition) DoTrue(); else {
  • For very short bodies, such as those of getters, everything can be put on the same line. In this case, spaces around the braces are mandatory.
    int Info { get { return this.info; } }
  • For if statements, if the body completely holds on one line and there is no else, then the braces are omitted.
    if (condition) return;
    if (condition) break;
    if (condition) x = 0;
    if (condition) continue;
  • Put a space after keywords if, while, switch and foreach.
    Good:
    if (condition)
    Bad:
    if(condition) {
  • Boolean expressions that do not hold on one line should have their operators aligned to the left.
    Good:
    if (b1
    || b2
    || b3)
    Bad:
    if (b1 ||
    b2 ||
    b3)
  • Avoid spaces around unary operators.
    Bad:
    i ++;
  • Put spaces around binary operators.
    Good:
    y = (m * x) + b;
    c = a | b;
    Bad:
    c = a|b;
  • Put no space before and one space after a comma, colon or semi-colon.
    Good:
    DoSomething(a, b);
    class A: I
    for (int i = 0; i < 10; i++)
    Bad:
    class A:I
    for (int i = 0 ; i < 10 ; i++)
  • Avoid spaces between the name of a function and its opening parenthesis or between a parenthesis and an argument.
    Good:
    DoIt(a, b);
    Bad:
    DoIt (a, b);
    DoIt( a, b );

Here are a few things I would like to ask you:
  • Does your project have coding standards?
  • Are they known and followed by the team members?
  • Do you think it is a good idea to ask for all non-negative integer values to have the type uint? I noticed that, in practice, the type uint seems to be very rarely used.
  • What about software costs estimation models such as COCOMO? Can these kind of models bring any value to the development process? How?

Sunday, March 6, 2011

Mercurial: sharing code in the team

In a previous post, I explained how a source control system such as Mercurial can be used to prevent the accidental loss of source code. More specifically, I showed how to:
  • create a new source code repository with hg init,
  • track a file in the repository with hg add,
  • save a modification of the code base with hg commit,
  • and restore any previously commited version with hg update.
This time, I would like to explore another, even more important, function of source control systems: allowing a team of developers to work at the same time on the same piece of software. Consider the simplest case of only two developers: Laurel and Hardy are preparing the script for their next movie scene. They are pressed for time. They know some large companies, such as International Big Movies, pair scriptwriters at opposite locations on Earth: one in India with one in the US, or one in Japan with one in Europe. That way, while one works, the other sleeps. However, this solution is clearly above their means. So they would rather work in parallel. Both would start with the same initial text, each make his own revisions, and still be able to converge towards a final and common version. Mercurial will let them do just that, safely and almost seamlessly. Here is how.

2001: A Space Odyssey, a movie by International Big Movies?

Suppose three computers sit in Laurel and Hardy's office: stan, ollie and hal-roach. Laurel obviously works on stan, Hardy uses ollie and the last machine hal-roach is a server for administrative tasks. The server hal-roach is accessible to both stan and ollie through the network.
They first set up a central repository, say script, on hal-roach. At all times, this repository contains the reference version of the script. For now, it is empty. Let me recall the commands to create a repository from an empty directory:
hal-roach> mkdir script
hal-roach> cd script
hal-roach> hg init
In order to exchange data with the central repository, they must first publish it. To do so, they have several options, including publishing through:
  • http with the built-in server via command hg serve,
  • http(s) with an existing web server, such as Apache,
  • ssh,
  • a shared disk system mechanism, such as Samba.
Most of these methods are described in this page of the Mercurial wiki. Complex administration tasks are not Laurel and Hardy's cup of tea. Since they are going to access the central repository through their trusted internal network, they choose the easiest and fastest method: the built-in server. First, they configure the repository so that anybody will be able to push his changes via http. To do so, they edit the configuration file hgrc in subdirectory .hg to contain the lines:
[web]
allow_push = *
push_ssl = false
Then they run the server with command:
hal-roach> hg serve
Using a web browser, they can now check the status of the repository simply by going to the address http://hal-roach:8000/.
Next, Laurel gets a copy of the whole repository on his machine:
stan> hg clone http://hal-roach:8000/ script
Hardy does the same on his own machine:
ollie> hg clone http://hal-roach:8000/ script
Laurel begins work.
stan> cd script
He creates a file dialogue.txt with the following content:
Stan: You know, Ollie, I been thinkin'!
He then commits the new file:
stan> hg add dialogue.txt
stan> hg commit -m "[add] First line"
In order to make his contribution available to Hardy, he now pushes the state of his local repository towards the central repository:
stan> hg push
Hardy retrieves the latest changes from the central repository by typing command:
ollie> hg pull -u
And adds his line to the dialogue:
Ollie: Waht about?
He then commits and pushes:
ollie> hg commit -m "[add] a question"
ollie> hg push
Now Laurel pulls the latest changes:
stan> hg pull -u
Adds the next two lines, commits and pushes:
Stan: Well, if we caught our own fish, then we wouldn't
have to pay for it and whoever we sold it to, it would
be clear profit.
Ollie: Tell me that again!
stan> hg commit -m "[add] my answer"
stan> hg push
Immediately after he pushes, Hardy notices his typographical error in writing 'Waht' instead of 'What'. So, while Laurel crafts his next lines, Hardy commits a correction:
Ollie: What about?
ollie> hg commit -m "[fix] small typo"
But when he wants to push, Mercurial aborts:
ollie> hg push
abort: push creates new remote heads on branch 'default'!
(you should pull and merge or use push -f to force)
So he pulls the last changes made by Laurel. He then has a look at the history of changes on the repository:
ollie> hg log
There are four commits so far. Commits 2 and 3 both originate from commit 1. There are thus two parallel versions of the script, in Mercurial jargon two heads. The heads on a repository can be obtained by typing command:
ollie> hg heads
Mercurial refused the push to avoid replicating this situation on the central repository. It is, in fact, generally preferable to have only one head on the reference repository. So, before going any further, Hardy must reconcile both versions. To do so, he types command:
ollie> hg merge
Mercurial automatically combines both changes. The dialogue now reads:
Stan: You know, Ollie, I been thinkin'!
Ollie: What about?
Stan: Well, if we caught our own fish, then we wouldn't
have to pay for it and whoever we sold it to, it would
be clear profit.
Ollie: Tell me that again!
Hardy commits the merge and can now push:
ollie> hg commit -m "[merge]"
ollie> hg push
Since the differences between both heads were on distinct parts of the text, Mercurial merged them correctly without any manual intervention. Sometimes, as we will see next, there are conflicts and choices must be made. Laurel pulls the version Hardy just merged, ends the dialogue and pushes:
stan> hg pull -u
Stan: Well, if we caught our own fish, then we wouldn't
have to pay for it and whoever we sold it to, it would
be clear profit.
Ollie: That's a pretty smart thought!
stan> hg commit -m "[add] finished dialogue"
stan> hg push
But Hardy has a different idea and completes the dialogue differently, and commits:
Stan: Well, if we caught our own fish, then the people
we sold it to wouldn't have to pay for it, the profit
would go to the fish...
ollie> hg commit -m "[add] antimetabole"
Before he is able to push, he has to pull and merge once more.
ollie> hg pull
ollie> hg merge
This time, Mercurial is obviously not able to automatically choose the right version of the fifth dialogue line. Hardy chooses his version, commits and pushes.

Laurel and Hardy both agree on the final version of the dialogue, which reads:
Stan: You know, Ollie, I been thinkin'!
Ollie: What about?
Stan: Well, if we caught our own fish, then we wouldn't
have to pay for it and whoever we sold it to, it would
be clear profit.
Ollie: Tell me that again!
Stan: Well, if we caught our own fish, then the people
we sold it to wouldn't have to pay for it, the profit
would go to the fish...
Ollie: That's a pretty smart thought!
Mercurial is pretty powerful, isn't it? However, like any other tool, proper usage will let you get the most out of it. Here are a few simple practices I have learned to follow.
First, I dislike long, painful, conflict-ridden merge sessions. So I prefer small and frequent commits. It also helps to make each developer responsible for distinct components. The standard sequence of Mercurial commands should be to:
  • pull the latest repository state before starting any task,
  • perform a (preferably small) task,
  • commit,
  • push,
  • pull, merge and push again if the previous push would have created an additional head.
Small commits are also easier to describe. My commit messages come in six different flavours, which I indicate with a keyword in square brackets:
  • [add] for any feature addition (usually goes with a corresponding test),
  • [fix] for any bug fix (usually goes with a corresponding test),
  • [clean] for any code refactoring,
  • [doc] for any comment: documentation, TODO, notes, remarks...
  • [spec] for any additional test,
  • [merge].
Such a taxonomy helps you quickly grasp the nature of the modifications performed by your co-workers when you come back from vacation.

I would like to end this post with the description of a procedure which I find crucial, even though, unfortunately, not widely followed. Did you notice that, in the running example, Hardy introduced a spelling error and pushed it to the central repository? Had he run a spell-checker, he would have caught his error before even corrupting the central repository. Mercurial provides a mechanism to run scripts automatically before every push. It is called a pre-push hook. Hardy could have set up a hook to run a spell-checker. That way, if the spell-checker finds an error, the push operation aborts.
For software projects, I like to set up a pre-push hook that compiles the code and runs a suite of non-regression tests. To do so, I add the following lines to the .hg/hgrc file of the central repository:
[hooks]
pretxnchangegroup.duplicate = hg push test-repository
pretxnchangegroup.check = cd test-repository && hg update && ./run_script_that_compiles_and_performs_tests
I am using two pretxnchangegroup hooks. The first hook propagates the changes brought by the push to a test repository. Then compilation and tests are run from the test repository. If any of the scripts fails, the changes are rolled back and the repository regains its previous state.
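The checking script itself can be a plain shell script that stops at the first failure, so that the hook rejects the push. Here is a hypothetical sketch, where the echo placeholders stand in for your real build and test commands:

```shell
#!/bin/sh
# run_script_that_compiles_and_performs_tests -- hypothetical sketch;
# replace the echo placeholders with your real build and test commands.
set -e                    # any failing step makes the hook reject the push
echo "compiling..."       # e.g. make
echo "running tests..."   # e.g. make test
result=ok                 # reached only if every step above succeeded
echo "$result"
```

Since the hook only looks at the script's exit status, set -e is what guarantees that a failing compilation or test aborts the push.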
In case you wonder: someone who tries to pull from the repository before the hooks complete does not, fortunately, get the latest changes. Note also that the machine which executes the hooks depends on the system used to serve the repository. With Samba, they run locally; with Apache, they run on the server as the restricted Apache user; with ssh, they run on the server under your identity. Also keep in mind that the Apache server may set a time limit on its connections.
Obviously, tests that run during a pre-push hook should not last too long. Otherwise pushing becomes quite cumbersome and coders will be reluctant to do it often. That is why I like to keep the duration of pre-push tests below 1 minute (even if up to 5 minutes may be bearable). To achieve this while still keeping good coverage, I try to keep my projects small. I structure large software developments into several small projects, each not exceeding 20 000 lines of code. If some tests take long, I group them in another suite, called the post-push suite. This additional suite is executed later by the continuous integration system. (But this is a story for a future post...)
Maybe some of you think that running all tests before accepting any modification is paranoid. However, the productivity gains achieved by finding bugs as early as possible can never be stressed enough. Bugs caught by the pre-push hook are relatively easy to fix, since the code modifications are still fresh and few. There is also no time wasted by the rest of the team. In practice, a large fraction of bugs are caught by the non-regression tests run in the pre-push hooks. Only integration and performance bugs are left for the continuous integration system or the testing team to catch. Pre-push hooks also prevent you from pushing incorrect code through inadvertence or overconfidence.

Do you do pre-push hooks? In particular, do you run tests at this point? If so what kind of hooks do you use?

Stan Laurel: You remember how dumb I used to be?
Oliver Hardy: Yeah?
Stan Laurel: Well, I’m better now.