Sunday, July 4, 2010

Beautiful code: a sense of direction

Whenever I hear someone boast about the huge number of lines he produced for his last project, I silently rejoice at not having to take over his code. To be honest, like most industry programmers, I was often asked to expand code I did not write in the first place. Maybe you were too. If so, you must know how often one can experience moments of profound despair, pondering over endless lines of incomprehensible code. For those of you lucky enough to be unfamiliar with this feeling, let me just say that, every single time, I dug up countless skeletons, far more than the ones hiding in the closet of your most crooked politician. Sometimes, I believe that walking a tightrope while spinning plates might even be easier. A single move and everything collapses. Invariants break, exceptions fly and your manager loses numerous days of life expectancy.

Working on someone else's code, harder than jultagi?

Let us now be honest and consider the reverse situation. I vividly remember when one of my coworkers confessed, slightly embarrassed, what he really thought about my code. He found it simply unintelligible. I was taken aback! Until then, I had no idea that what I took great pride in could be so unpleasant for someone else. You may laugh at me, but frankly, do you still easily understand code you wrote only six months ago?

In fact, writing a piece of code is an order of magnitude easier than reading it. In that respect, some languages fare worse than others: Perl, for instance, is renowned for being write-only. Moreover, beauty is said to lie in the eye of the beholder. So objective beauty, even for software, cannot exist! For a very long time, I thought so. And it was extremely frustrating. I was not able to rationally explain why some piece of code did not feel right. Neither could I show other programmers how to improve their code writing skills. My only alternatives were to lead by example or to resort to arguments from authority.

Now, I believe that code beauty rests on only 3 principles. They are, if you will, the golden ratios of software. Despite their relative simplicity, they are surprisingly rarely followed by programmers, even advanced ones. However, if applied systematically, they invariably lead you to beautifully crafted pieces of code. With them, I gained a sense of direction that I hope to convey in this post.

A golden ratio for software?

My three golden principles of code beauty are:
  • Less is more,
  • Local is manageable,
  • Innovation is risk.
Less is more, often referred to as the KISS rule (Keep it simple, stupid!), means you should always try to minimize the complexity of your programs by decreasing the number of lines, classes, fields, methods, parameters... Good-looking software is fit! The rationale for this principle is easy to understand: how much longer does it take you to read 1000 lines of code rather than 100? Probably 10 times longer, provided you never get tired. It gets worse if you have to understand, debug, rewrite, and cover with tests 1000 lines of code rather than just 100. There is another, less considered, aspect of this principle: smaller programs have less room for irregularities. Copy-pasted code, duplicated state and complex logic are less likely. So, mechanically, the same number of tests achieves higher coverage, corner cases are fewer and bugs cannot hide. Reducing code size is bug hunting! Finally, for those of you who, like me, are fond of arguments by passing to the limit: an empty piece of code simply cannot crash!

Local is manageable, also known as the separation of concerns principle, means you should structure your code in classes of limited size. At the same time, coupling between classes should be minimized. In C#, this implies that method access modifiers are chosen in this order: private first, then protected or internal, then protected internal, and public only as a last resort. Chinese junks were probably the first ships divided into watertight compartments. If the hull was damaged in one place, these subdivisions would circumscribe the flooding and help prevent the ship from sinking. In the same way, software compartmentalization limits the propagation of a bug's impact. Code modifications can be performed locally, and small parts replaced easily. In great software, richness of behaviour is achieved by combining multiple small and simple components rather than by the gradual stratification of code into a monolithic and complex ensemble.

Chinese junk: compartmentalization at work!

In this age of unlimited belief in the virtues of progress, you probably won't often hear the last principle: Innovation is risk. However, it should be common sense that every technology comes with its own risks. In that respect, software is no different. If I can implement all the required functionality with some integer fields, classes and methods, I am the happiest man. I do not enjoy worrying about when to release files, how floating-point numbers are rounded, which method is called through a delegate or whether my threads might deadlock... Most programmers are immediately aware of the benefits of advanced language constructs. But few, myself included, really understand how to avoid misusing them. Handle gadgets with care and maintain a sense of mistrust towards novelty!

I find these three principles particularly relevant because they fit well with some characteristics of the human mind, namely:
  • the inability to keep track of the state of more than a few items at the same time (due to the limits of our working memory),
  • the aptitude to consider a system at various scales,
  • the tendency to replicate similar behavior and to hold default positions.
To sum up, between several pieces of code that perform the exact same task, I prefer the smallest, most structured and most conventional one. So I try to write code that complies with these principles. Admittedly, it takes a bit longer than just writing something that works. But because I have learned how hard it can be to read code, I know that the gain in readability quickly pays for itself.

As you may know, writing crystal clear code right away is almost impossible. More importantly, software has a natural tendency to grow. So much so that coding can be considered a constant fight against entropy! To avoid code decay, regular refactoring is recommended. Refactoring techniques are code transformations which preserve the external behavior of programs. Together with non-regression tests, they are a fundamental tool to clean up code without breaking it. Almost every refactoring has an inverse transformation. So, without a clear objective in mind, it can be hard to know which one to choose. Fortunately, when guided by the three aforementioned principles, this dilemma disappears. Let me end this post with a list of a few refactoring techniques. If you want to explore this topic in more depth, SourceMaking is a good place to start.

As you will see, most refactoring techniques are straightforward. But, as I like to say, there are no small simplifications. Small steps have a low cost and can be easily made. Yet, they often open up opportunities for bigger transformations. Also consider the fact that any major code change can always be broken down into a succession of very small steps.

First, here are a few refactorings that reduce the level of access modifiers:
  • if a public method is never called outside of its assembly, then make it internal,
  • if an internal method is only ever called from its subclasses, then make it protected,
  • if an internal method is never called outside of its class, then make it private,
  • if a private method is never called, then remove it,
  • make all fields private and create setters/getters whenever necessary.
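To make the first and last rules concrete, here is a small sketch; the Counter class and all its members are invented for the example:

```csharp
// Hypothetical class: value was a public field and Reset a public
// method, although Reset is never called outside of the assembly.
class Counter
{
    // was: public int value;
    private int value;

    // was: public void Reset() -- never called outside of this assembly
    internal void Reset()
    {
        this.value = 0;
    }

    // getter introduced so that external code can still read the value
    public int Value
    {
        get { return this.value; }
    }
}
```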
Then, try to decrease the number of fields and variables by defining them at the appropriate scope. Here are some code transformations to do so:
  • if a field is used in only one method, then make it a local variable of the method,
  • if a local variable is assigned once and then immediately used, then remove it,
  • if the same value is passed around as a parameter of several methods, then introduce a field to hold it. This may be a good indication that the class could be split in two parts: one to handle all computations related to this value, and one for the rest.
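For instance, the first transformation could look like this; the Parser class is invented for the example:

```csharp
class Parser
{
    // was: private int depth; -- used only in CountBrackets
    public int CountBrackets(string text)
    {
        // now a local variable of the only method that used it
        int depth = 0;
        foreach (char c in text)
        {
            if (c == '(') depth++;
            if (c == ')') depth--;
        }
        return depth;
    }
}
```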
Reduce as much as possible the overall complexity of your classes, roughly evaluated by their number of lines of code. To do this, start by successively reviewing each field. For this task, an IDE able to automatically find all occurrences of a variable helps. Fields come in two flavours:
  • some are initialized in the class constructors and then never modified. This is in particular the case for all fields in "functional" objects. These fields should be made readonly,
  • others hold the state of the object and are constantly modified. Remove any such field when its value can be obtained by a computation from other fields of the class, and replace it by a getter. This simplification removes the difficulty of preserving complex invariants between different pieces of data.
    To make things more concrete, here is a first example:
    class Point
    {
        int x;
        int y;
        int distance;
    
        public void ShiftHorizontally(int deltaX)
        {
            this.x += deltaX;
            this.distance = this.x*this.x + this.y*this.y;
        }
    
        public void ShiftVertically(int deltaY)
        {
            this.y += deltaY;
            this.distance = this.x*this.x + this.y*this.y;
        }
    }
    which could be rewritten like this:
    class Point
    {
        int x;
        int y;
        int Distance
        {
            get
            {
                return this.x*this.x + this.y*this.y;
            }
        }
    
        public void ShiftHorizontally(int deltaX)
        {
            this.x += deltaX;
        }
    
        public void ShiftVertically(int deltaY)
        {
            this.y += deltaY;
        }
    }
    You can also apply this refactoring when the same value is reachable through different fields, like the field hasRichTaylor in the following class:
    class Person
    {
        int wealth;
        bool IsRich
        {
            get { return this.wealth > 1000000000; }
        }
        Person taylor;
        bool hasRichTaylor;
    
        public void ChangeTaylor(Person taylor)
        {
            this.taylor = taylor;
            this.hasRichTaylor = taylor.IsRich;
        }
    }
    Class Person is safer written this way:
    class Person
    {
        int wealth;
        bool IsRich
        {
            get { return this.wealth > 1000000000; }
        }
        Person taylor;
        bool HasRichTaylor
        {
            get { return this.taylor.IsRich; }
        }
    
        public void ChangeTaylor(Person taylor)
        {
            this.taylor = taylor;
        }
    }
  • Sometimes, you can decrease the number of occurrences of a field by replacing several method calls by a single call to a richer method (for instance, prefer one call to AddRange over several calls to Add).
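    For instance, with .NET's List<int>; the Sieve class and its field primes are invented for the example:

```csharp
using System.Collections.Generic;

class Sieve
{
    private List<int> primes = new List<int>();

    public void Initialize()
    {
        // Three occurrences of the field primes...
        // this.primes.Add(2);
        // this.primes.Add(3);
        // this.primes.Add(5);
        // ...replaced by a single one:
        this.primes.AddRange(new int[] { 2, 3, 5 });
    }
}
```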
  • Finally, I like to use auto-implemented properties rather than explicit fields whenever possible. I would write:
    class Tree
    {
        public int Height
        {
            private set;
            get;
        }
    }
    rather than:
    class Tree
    {
        private int height;
    
        public int Height
        {
            get { return this.height; }
        }
    }
    Other than the fact that in the second solution, the getter Height sometimes ends up far from the field height, this is admittedly a matter of taste...
Moving code between methods is another extremely effective way to reduce class complexity. For instance, if a method is called only once, then obviously inline it. Don't worry about the fact that you may later need this method elsewhere. For now, just reduce your code size. If needed, you can always apply the inverse transformation, which consists in creating a new method to hold a piece of code that is duplicated in two or more places. There are several variants of this refactoring:
  • if you access a field through a chain of fields, such as a.b.c, then define a direct getter C,
  • if you evaluate several times the same expression, such as this.LineHeight*this.lines, then define a getter this.Height,
  • if you have two loops with the same body but different ending conditions, then make a method with the loop boundary as parameter.
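A sketch of the last variant; the Grid class and all its methods are invented:

```csharp
class Grid
{
    private int rows;
    private int columns;

    private void Clear(int index)
    {
        // ...
    }

    // Before: two loops with the same body iterated up to
    // this.rows in one method and this.columns in another.
    // After: a single method with the loop boundary as parameter.
    private void ClearUpTo(int bound)
    {
        for (int i = 0; i < bound; i++)
        {
            this.Clear(i);
        }
    }

    public void ClearAll()
    {
        this.ClearUpTo(this.rows);
        this.ClearUpTo(this.columns);
    }
}
```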
Whenever you have several methods with the same name, but different parameters, you should try to make one method do all the work and all the other ones call it. This refactoring is particularly relevant for class constructors. Note that, if you can't do it easily, it may be an indication that the several versions of the method do not all perform a similar task and some of them should be renamed.
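In C#, constructor chaining with this(...) expresses this directly. A sketch, with an invented Interval class:

```csharp
class Interval
{
    private readonly int low;
    private readonly int high;

    // The most general constructor does all the work...
    public Interval(int low, int high)
    {
        this.low = low;
        this.high = high;
    }

    // ...and the other overloads simply delegate to it.
    public Interval(int high) : this(0, high)
    {
    }

    public Interval() : this(0, 0)
    {
    }
}
```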
Finally, moving code from several callers into the body of the called method is a very effective code reduction. There are many variants of this refactoring, but the idea is always the same:
  • if all calls to f are followed by the same block of instructions, then push this block down into the body of f; sometimes, you may have to add some additional parameters to f,
  • if every creation of a class A is followed by a call to some method of A, then call this method directly from the constructor of A,
  • if method g is always called just after method f, then merge both methods and their parameters together,
  • if the result of a method f is always used as the argument of a method g (in other words you have g(f(x))), then insert f inside g and change the signature of g,
  • if a method g is always called with the same constant parameter g(c), then remove the parameter and push the constant down inside g,
  • if a method g always takes a new object as argument, as in g(new MyObject(x)), this is an indication that the signature of g is not well chosen; the object should rather be created inside g.
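As an illustration of the constant-parameter variant; the Report class is invented for the example:

```csharp
class Report
{
    private string[] lines = new string[] { "first", "second" };

    // Before: every caller wrote report.Print(", "), always with
    // the same separator argument.
    // After: the parameter is removed and the constant pushed down.
    public void Print()
    {
        const string separator = ", "; // the only value ever passed
        System.Console.WriteLine(string.Join(separator, this.lines));
    }
}
```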
In good object-oriented style, conditionals are usually few. Here are some techniques to decrease the conditional complexity of your code, roughly measured by the maximum level of nesting:
  • try to avoid the null value. There are very few cases when you really need it. If you ensure your variables are always initialized, null testing can be removed altogether. In a later post, I will explain why I am generally not in favor of defensive programming,
  • rather than checking that a method argument is valid and then performing some computation, it is nicer to filter out incorrect argument values and return early. More concretely, the following code:
    int result = 0;
    List<int> content = this.CurrentData;
    if (content != null)
    {
        if (content.Count > 0)
        {
            result = content[0];
        }
    }
    return result;
    is rewritten into:
    if (this.CurrentData == null) return 0;
    if (this.CurrentData.Count <= 0) return 0; 
    return this.CurrentData[0];
  • obviously, when the true branch of an if-then-else ends with a breaking instruction, the else branch is not necessary:
    if (condition)
    {
        // do something
        continue; // or break; or return;
    }
    else
    {
        // do something else
    }
    becomes:
    if (condition)
    {
        // do something
        continue; // or break; or return;
    }
    // do something else
  • when two branches of an if then else have some code in common, try to move this code out, either before or after the conditional block. Example:
    if (condition)
    {
        f(a, b, e1);
    }
    else
    {
        f(a, b, e2);
    }
    becomes:
    int x;
    if (condition)
    {
        x = e1;
    }
    else
    {
        x = e2;
    }
    f(a, b, x);
  • if a condition inside a loop does not depend on the loop iteration, then try to put it before the loop,
  • when you have a sequence with a lot of if else if else if else, try to use a switch statement instead,
  • finally, always prefer an enum over an integer or a string: the risk of forgetting to handle a case is lower.
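The last two rules work well together. A small sketch, with an invented Season enum:

```csharp
enum Season { Spring, Summer, Autumn, Winter }

class Weather
{
    // Preferred over a chain of if/else on an integer or string code:
    // each case is handled explicitly, and forgetting one is harder.
    public static string Describe(Season season)
    {
        switch (season)
        {
            case Season.Spring: return "mild";
            case Season.Summer: return "hot";
            case Season.Autumn: return "windy";
            case Season.Winter: return "cold";
            default: throw new System.ArgumentOutOfRangeException("season");
        }
    }
}
```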
The presence of an enum field and several switch instructions is a good indication that a class could be broken down into a hierarchy of smaller classes with a common abstract ancestor.
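Sketched on an invented example, the transformation replaces the enum and its switches by polymorphism:

```csharp
// Before: a single Shape class held a ShapeKind enum field and
// switched on it in every method.
// After: each enum case becomes a small subclass.
abstract class Shape
{
    public abstract int CornerCount { get; }
}

class Circle : Shape
{
    public override int CornerCount { get { return 0; } }
}

class Square : Shape
{
    public override int CornerCount { get { return 4; } }
}
```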
Breaking down a large class undoubtedly improves the overall architecture of your program. However, determining which fields and methods to extract can be a hard task. Try to spot fields that tend to be used together in all methods.
Get rid, as much as possible, of static fields and methods. It is always possible to write a whole program with only static methods. But then, it is easy to lose sight of the purpose of each class. Since the class hierarchy is no longer driven by the data, it is harder to feel the meaning and responsibility of each class. The tendency to make larger classes increases. Data tends to be decoupled from its processing code. The number of parameters tends to grow. And static fields unnecessarily use up memory... So every time you see a static method, try to move it inside the class of one of its parameters.
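For instance (the classes are invented for the example):

```csharp
// Before: a static helper, decoupled from its data.
// static class GeometryHelper
// {
//     public static double Area(Rectangle r) { return r.Width * r.Height; }
// }

// After: the method lives inside the class of its parameter.
class Rectangle
{
    public double Width;
    public double Height;

    public double Area()
    {
        return this.Width * this.Height;
    }
}
```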

Applying these refactorings may not always be obvious. You will often need to first slightly bend the code to your will. Do not hesitate to reorder two instructions, remove a spurious test, or add some instructions. By doing this you will increase code regularity and often find corner-case bugs. Besides, please note that I am not advocating reducing code by shortening variable names or removing information which implicitly holds. On the contrary, I believe variable and method names are the best documentation and should be carefully chosen. Also, even though it is unnecessary, I prefer to explicitly state when fields or methods are private and when classes are internal, and to prefix references to class fields and methods with this.

In this post, I presented several refactoring techniques. More importantly, I presented my canons for code beauty in the form of three principles:
  • Less is more,
  • Local is manageable,
  • Innovation is risk.
These principles give you a sense of direction when simplifying code. Because I believe a piece of software is either simple and easy to understand, or wrong and bug-ridden, I perform refactoring regularly. It has become a pleasant habit and a soothing exercise. This practice keeps code complexity in check. It can also be helpful to continuously measure the evolution of code complexity, a point I will cover in a later post. However, you should not worry about finishing code clean-up quickly. Any code has quite some room for improvement. So it takes time, and a strict feature freeze, to achieve ultimate code beauty. As the famous verse by Boileau says:
Hasten slowly, and without losing heart,
Put your work twenty times upon the anvil.

Refactoring at work!

Here are my questions for today:
  • What principles lead your code developments?
  • Do you often perform refactoring, or would you rather patch, to avoid the risk of breaking code that works?
  • I know my list of refactoring techniques is by no means exhaustive; what other fundamental refactorings do you like to apply?
  • Do you know of any automatic tool that either performs simplifications, or lets you know of possible code simplifications?

Sunday, May 9, 2010

Non-regression tests: camming devices for humble programmers

More than once have I been surprised by the scarcity, or even total absence, of tests in both proprietary and open source software. After years of coding, I learned maybe only one thing: we inescapably make mistakes. For some reason, somehow, a certain number of bugs ends up in our code:
  • maybe the specification was fuzzy,
  • or some library did not behave as documented,
  • or a corner case could hardly be anticipated...
Mistakes are simply unavoidable. But isn't it just plain stupid to make the same mistake twice? Fortunately, non-regression tests ensure that any bug previously found and fixed stays permanently removed. Similarly to camming devices in climbing, they save your progress and sometimes... your life.


Without or with non-regression tests?

In this post, I explain on a concrete example how to write non-regression tests, and how and when to add them. All examples are written in C#, a language I am particularly fond of. Readers knowledgeable in any imperative language such as C, C++ or Java should hopefully have no major difficulty understanding the code. I am using SharpDevelop, an open source IDE for C#, and NUnit, an open source testing framework for any .NET language, including C#. NUnit is integrated by default in SharpDevelop. Alternatively, it can also run as a standalone program.

As a starting point, let us consider a simple, hypothetical coding situation. We are working on a word manipulation library. The library exports a unique namespace WordManipulations, which contains a class Sentence. This class wraps a string and provides some simple text manipulation operations. The first step consists in adding a new project, called Tests, dedicated to non-regression tests. It references both the dynamic linked library (DLL) nunit.framework and the project WordManipulations. We then create a class Suite00_Sentence in order to group all tests related to class Sentence. We place the attribute [TestFixture] to signal to NUnit that it is a test suite. Our first test method will be called Test000_CreationFromString and is similarly tagged by the NUnit attribute [Test]:
using NUnit.Framework;
using WordManipulations;

namespace Tests
{
    [TestFixture]
    public class Suite00_Sentence
    {
        [Test]
        public void Test000_CreationFromString()
        {
        }
    }
}
At this point, the organisation of the whole solution should look like this:
  • WordManipulations
    • Sentence
  • Tests
    • Suite00_Sentence
      • Test000_CreationFromString

Let us now check everything compiles nicely and our first test executes correctly. In SharpDevelop, simply click on the play button of the unit testing side-panel. If you like it better, you can also directly use the NUnit interface. To do so, first compile the project Tests. Then go to directory Tests/bin/Debug in the Windows file explorer and double-click on file Tests.dll. This should invoke NUnit. From there, simply click Run. In both cases, a green light should indicate the test suite ran successfully.

For the next step, let us fill in the body of Test000_CreationFromString. It specifies that an instance of class Sentence can be built from a string:
[Test]
public void Test000_CreationFromString()
{
    Sentence empty = new Sentence("");
}
For this test to succeed, we implement the corresponding constructor in class Sentence as follows:
public class Sentence
{
    private string content;

    public Sentence(string content)
    {
        this.content = content;
    }
}
Let us go a bit further. Say we want a method LastWord to return the index of the last word in a sentence. An additional test simulates the most simple use case of LastWord. Namely, when the input is "a b", then the expected output should be 2:
[Test]
public void Test001_LastWord()
{
    Sentence input = new Sentence("a b");
    int result = input.LastWord();
    Assert.AreEqual(2, result);
}
Class Assert from the NUnit framework provides several ways to check the expected output of tests.

An implementation of LastWord that validates this test could be for instance:
public int LastWord()
{
    int i = this.content.Length - 1;
    char currentChar = this.content[i];
    while (currentChar != ' ')
    {
        i--;
        currentChar = this.content[i];
    }
    return (i + 1);
}
However, soon somebody named Mary Poppins reports an array out of bounds on "Supercalifragilisticexpialidocious". We immediately write a non-regression test that reproduces the bug:
[Test]
public void Test002_LastWordShouldNotFailOnSingleWordSentence()
{
    Sentence input = new Sentence("Supercalifragilisticexpialidocious");
    input.LastWord();
}
Before starting a long debugging session, we simplify the test to the bone. It turns out that the current version of LastWord even fails on an empty string. So we modify our test accordingly:
[Test]
public void Test002_LastWordShouldNotFailOnEmptySentence()
{
    Sentence input = new Sentence("");
    input.LastWord();
}
To pass this test, we rewrite LastWord:
public int LastWord()
{
    int i = this.content.Length;
    do
    {
        i--;
    } while ((i > 0) && (this.content[i] != ' '));
    return (i + 1);
}
We then run all the tests written so far. They all succeed, so we can go back to the initial bug report. The method seems not to raise any exception anymore on a single word sentence. However, it does not seem to return the correct value either. So we add another test:
[Test]
public void Test003_LastWordOnSingleWordSentence()
{
    Sentence input = new Sentence("a");
    int result = input.LastWord();
    Assert.AreEqual(0, result);
}
We modify our code once more, and hopefully last, time:
public int LastWord()
{
    for (int i = this.content.Length - 1; i > 0; i--)
    {
        if (this.content[i] == ' ') return (i + 1);
    }
    return 0;
}
In the end, we produced four test cases and a much clearer source code than the one we started with. As you may have guessed, I am particularly fond of early returns. I actually believe there is still (at least) one bug. Can you find it?

Let me draw some general rules from this example and lay out the methodology of non-regression testing. A non-regression test is a piece of code that checks for the absence of a particular bug. It should fail in the presence of the bug and succeed in its absence. Tests have various purposes:
  • robustness tests check the program does not stop abruptly,
  • functional tests check the program computes the expected value,
  • performance tests check the program runs within its allocated time and memory budget.
A test is usually composed of three main phases:
  • a prelude sets up the context and/or prepares the input,
  • an action triggers the bug,
  • an assertion checks the expected behaviour.
In many cases, the prelude or assertion may not be necessary. In particular, robustness tests obviously do not need any assertion. Tests should be made as concise and simple as possible. A collection of small tests is generally preferable to one large test crowded with assertions. Tests should be independent from each other and could theoretically be run in any order. However, ordering tests by their date of introduction documents the software building process. Earlier tests tend to break less often as the software matures. In addition, ordering tests from simplest to most complex greatly speeds up later debugging. By the same logic, any test that fails should be duplicated and reduced to the smallest failing prefix before debugging. Since NUnit runs tests in alphabetical order, these principles lead to the following organization and naming conventions:
  • Test suites should be named after the class they target and prefixed by "SuiteXX_", where XX stands for a two-digit number,
  • Test methods should be named after the behaviour they check and prefixed by "TestXXX_", where XXX stands for a three-digit number.
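Putting the three phases and the naming conventions together, a functional test might look like this; here the class under test is .NET's generic Stack, chosen only for the sake of a self-contained example:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

namespace Tests
{
    [TestFixture]
    public class Suite01_Stack
    {
        [Test]
        public void Test000_PushThenPopReturnsLastElement()
        {
            // prelude: set up the context
            Stack<int> stack = new Stack<int>();
            stack.Push(42);
            // action: trigger the behaviour under test
            int result = stack.Pop();
            // assertion: check the expected behaviour
            Assert.AreEqual(42, result);
        }
    }
}
```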
There are various occasions to add tests. The most important rule is to add at least one test every time a new bug is found. Writing tests before coding is also strongly recommended. In a way, these tests work as executable specifications. Among other things, they help:
  • evaluate the quality of classes' external interfaces,
  • choose the role of each method,
  • explore scenarios previously identified during design.
In contrast to textual documentation, which may not be renewed as fast as the code changes, executable documentation of this kind always remains true. A few other occasions I can think of, off the top of my head, are:
  • code refactoring: if you fear breaking some invariant, first write some tests,
  • code learning/review: when you must work on a particularly obscure piece of software, write some tests for every bit of understanding you painfully acquire,
  • code quality ramp-up: use the report of a code coverage tool to decide where to write some tests.
That said, writing numerous redundant tests out of thin air in order to reach some test count target is simply useless. Coverage of real situations will remain low.

When following a non-regression methodology, every bug becomes the occasion to improve both specification and code quality... forever. Thus, after a while, one begins to appreciate bugs for their true worth. In addition to this profound psychological twist, non-regression testing has other impacts on the overall development process. Since at all times, the project runs without breaking its past behaviour, coding becomes a matter of incremental refinement. Project risk is managed almost to the point that the software may be wrapped and shipped any minute.


Centre Pompidou, and it is not a Fluid Catalytic Cracking Unit

On the other hand, non-regression testing also imprints its particular coding style. A bit like structural expressionist buildings, software produced with this methodology tends to expose its internal structure. In fact, to test the lower layers of a piece of software, it is often necessary to call methods, trigger events and check the values of fields that are private. In order to distinguish these probes from standard methods, I name them TestRunMethodName, TestTriggerEventName and TestGetFieldName respectively. However, the visibility of many classes, which should ideally be internal, needs to be elevated to public in order to become testable. This tends to burden a namespace unnecessarily. If you have an elegant solution to this problem, I would really like to hear about it!

To summarize this post:
  • either during feature addition or debugging, tests should be written before coding,
  • tests should be as simple and concise as possible,
  • as I like to repeat ad nauseam to my team members:
1 bug => 1 test

In some later post, I plan to talk about increasing test coverage with PartCover, explain how to systematically run non-regression tests before any change to a central Mercurial repository, and debunk the myth of untestable graphical interfaces.

My questions for you today are:
  • Do you do non-regression testing?
  • What are your practices like?
  • Which unit testing framework do you use?
  • Do you have any idea how to test classes without making them public?

Tuesday, April 27, 2010

Mercurial: a granary for software

Never since the invention of agriculture has a human activity revolved as much around the accumulation of labour as... code writing. Fortunately, in contrast to grain, software does not rot. Or does it?

Well, have you ever deleted some important files by mistake? I certainly have. And I still remember the sheer feeling of horror piercing through my heart when I realized my mistake. Inadvertent deletion is just one of the least refined ways of wasting hours of coding in the blink of an eye; here are some others:
  • you try to add yet another small feature to a complex program and suddenly nothing works anymore,
  • you factor two seemingly identical methods, and some test breaks,
  • you remove some obviously useless redrawing code and three days later some weird behaviour pops up...
In any of these cases, don't you wish to simply reverse time, go back just before it broke and forget all about it?

That is exactly what source control systems are for: they are "the granaries of software".

Granaries in Niger

Today, I will introduce my personal favorite source control system: Mercurial. Written in Python, Mercurial is available for both Windows and Unix systems. Under Windows, my advice is to download the graphical interface TortoiseHg instead. All the following examples were executed in a Windows shell (cmd.exe) with TortoiseHg installed. The Mercurial commands would be identical in any other configuration.

First, in order to check the correct installation of Mercurial, let us type:
hg --version
This should display a version number and copyright notice. Now make an empty directory and enter it:
mkdir hg-project
cd hg-project
To initialize a new repository simply type:
hg init
This creates a subdirectory .hg in which Mercurial keeps all information about the repository. Let us add a dummy file dummy.txt with some text:
echo "Hello Mercurial!" > dummy.txt
Now, if you type:
hg status
Mercurial displays the status of the files present in the repository. Right now, there is only one file, dummy.txt, and it is not tracked. This is indicated by a question mark next to the file name:
? dummy.txt
To track the file, first type:
hg add dummy.txt
The status (obtained by typing the command hg status) becomes:
A dummy.txt
which means dummy.txt is marked for addition in the next revision.
To create a new revision, simply type:
hg commit --user "James Hacker" --message "[add] First revision of the repository. Added a dummy file for explanatory purposes."
Note how a user name, here "James Hacker", was required. This lets Mercurial track the author of each modification of the code base. The message passed to the --message option describes the purpose of this particular revision. At this point, typing hg status displays nothing: all files in the current directory are tracked and synchronized with their committed version.
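Typing --user on every commit quickly becomes tedious. Mercurial can read a default user name from its configuration file instead (%USERPROFILE%\mercurial.ini under Windows, ~/.hgrc under Unix). A minimal sketch, where the name and e-mail address are of course placeholders to replace with your own:

```ini
[ui]
username = James Hacker <james.hacker@example.com>
```

With this in place, hg commit --message "..." suffices, and every revision is still attributed correctly.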

That is where the real fun begins! Inadvertent deletion of tracked files is not to be feared anymore:
del dummy.txt
The status of the repository indicates that dummy.txt is missing. This is displayed with an exclamation mark next to the file name:
! dummy.txt
In order to recover the lost file, just type:
hg revert --all

Another scenario is when you code for a few days, but end up disappointed with the result and want to go back to your initial state. To simulate this scenario, let us first replace the text in dummy.txt:
echo "By doing this, I am taking the wrong path" > dummy.txt
The status of the repository indicates, by an M in front of the file name, that dummy.txt was modified:
M dummy.txt
We commit this modification to the repository:
hg commit --user "James Hacker" --message "[add] Performed some experiment. This may turn out to be a mistake..."
On second thought, we are not happy with this revision. Fortunately, it is easy to recover any previous state. Mercurial assigns to each revision a distinct number starting from 0. To get an overview of all past revisions, we can type:
hg history
It should display a list of revisions, each with its number, author, date of creation and description. We want to go back to the first revision (numbered 0), so we type:
hg update --rev 0
And voila! If we check the content of dummy.txt, it is now back to its initial content:
more dummy.txt
From there, we could add a new file to the repository and perform another commit:
echo "Tracking a second file." > another-dummy.txt
hg add another-dummy.txt
hg commit --user "James Hacker" --message "[add] A second dummy file."
Mercurial warns us that another head was created in the repository. Indeed, if we check the history of the repository (with the command hg history), three revisions are displayed: even though we considered the second revision (revision 1) to be a dead end, it will never be deleted from the repository. Mercurial keeps it in case we change our mind and want to start again from there. In fact, we can safely travel to any previous revision. The command hg parent tells us our current position in the tree of all revisions. Let us briefly play with this feature:
hg parent
hg update --rev 1
hg parent

Rome was not built in a day, and neither is great software. A source control system such as Mercurial is the granary that protects your code from disasters during development.
This post presented the basics of Mercurial. In a later post, I will talk about the Mercurial graphical user interface TortoiseHg, explain how to change the default settings and share my personal practices for revision messages.

Some questions for today:
  • Do you use a source control system?
  • If not, why?
  • If so, what is your preferred source control system?

Sunday, April 18, 2010

Foreword

Experienced software developers know there is more to coding than coding itself. Yet, academic curricula still seem to focus almost exclusively on algorithmic theory. At least mine did! Good academic curricula sometimes include a course on complexity and parsing tools such as lex/yacc.

This blog presents my way of developing software. I matured this method slowly, through many errors and much experience in the coding trenches, and I now believe it allows me to produce quality code efficiently.
Rather than reminding you of the 37 different sorting algorithms, I will talk about rules, methodology and tools:
  • Rules direct work. They limit freedom yet show the general direction. Good programmers love discipline: for instance, they emphasize limiting code size, and do not fear syntactic restrictions put on the programming language they work with.
  • Methodology structures daily work. Even though it seems impossible to estimate the duration of any large coding task beforehand, following a methodology allows you to track progress. It also relieves the stress of deciding what to do next.
  • As for tools, there is a French saying that goes "à méchant ouvrier point de bons outils", which translates as "the bad workman always blames his tools". In contrast, not only do good developers have quality tools, but the tools actually improve their work by shaping their thinking and actions.
Dear reader, I hope you find the posts in this blog concrete, relevant and practical. While writing them, I am also eager to hear from you: your opinion, the improvements you see, or the way you would rather do things...
By the way, did you learn any of the following topics at school: test-driven development, non-regression testing, unit testing, source control software, refactoring, software design or coding standards?
If you are a computer science teacher, do you teach any of these topics? If not, why?

Above all, with these writings, I would like to share my passion for the craft of programming!