Deliberating On Software Development

March 7, 2010

IoC Containers and Transient Instances

Filed under: Uncategorized — Tags: — x97mdr @ 2:27 pm

As seems to be the case on this blog, I’m about to confess to something foolish that I’ve done, so as to better inform you, dear reader!

There are many IoC containers out there in the .Net-o-sphere that are quite excellent.  Personally, I use Ninject because I love its simple syntax and its powerful contextual binding.  When I first came across Ninject some time ago (shortly after 1.0 was released) I read everything I could about it and IoC in general and then said to myself “IoC … whatever! I don’t need that crap! … way too complex”.  A startling lack of vision on my part.  Then I started getting deep into my program’s re-design from C to C#.  We have a Settings object that contains the settings the user has entered from the command-line and/or a parameter file.  Lots of objects need the Settings but sometimes these objects are fairly deep in the object graph and to get the settings I used the Singleton instance of it.  As you develop in a more object-oriented way many other circumstances like this turn up.

As I started unit testing I discovered that Ninject was born and raised for just this sort of activity: injecting objects deep into the object graph when you want them and it gets rid of the ugly Singleton instances that are such a pain when you are unit testing.  It was like I was born-again.  Praise the Ninject!  I used Ninject to create everything … and I mean everything!  I was passing parameters to the kernel to help resolve bindings, using providers where I thought I needed more complicated logic, you name it.  If I could have used Ninject to inject my lunch into my bag every day I would have.

Then it came time to performance test my application.  When I ran the performance tests they were incredibly slow … so slow that I could have done the calculations faster without the computer.  I did some research on the net and found that there was a performance bug in Ninject 1.0 that was resolved in 1.5 and 2.0, but when I tried replacing 1.0 with 1.5 or 2.0 the performance barely improved and was still very slow.

I was perplexed, so of course I went to the Ninject user group and posted a question.  The answers sort of slapped me in the face a bit.  As one of my friends at work says: I was trying to be holier than the pope!  Basically, IoC containers are intended to resolve long-lived instances.  This is, in fact, why some containers set the default lifetime of objects to singleton rather than transient.  To resolve large numbers of transient objects you should use the factory pattern: have the IoC container resolve the factory as a singleton and inject any dependencies that the transient objects need into the factory.  I don’t have many types of transient objects that need to be created, only a few classes, so I created factories for them and my performance problems melted like an arctic ice pack.
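
To make that concrete, here is a minimal sketch of the shape this took. The ISimulation and SimulationFactory names are made up for illustration; only the pattern matters. The container resolves the factory once, the long-lived Settings dependency is injected into the factory, and the hot loop creates transient instances with a plain constructor call instead of going back to the kernel:

// Hypothetical transient type - in my case these were the short-lived calculation objects.
public interface ISimulation
{
    void Run();
}

public class Simulation : ISimulation
{
    private readonly Settings settings;

    public Simulation(Settings settings)
    {
        this.settings = settings;
    }

    public void Run()
    {
        // ... the actual number crunching ...
    }
}

// The factory is the only thing the container needs to resolve, and it is resolved only once.
public class SimulationFactory
{
    private readonly Settings settings;

    // Long-lived dependencies are injected here, not resolved again for every transient instance.
    public SimulationFactory(Settings settings)
    {
        this.settings = settings;
    }

    // Creating a transient instance is now just a constructor call - no kernel overhead.
    public ISimulation Create()
    {
        return new Simulation(settings);
    }
}

// In the Ninject module, bind the factory (and Settings) with a singleton lifetime,
// e.g. Bind<SimulationFactory>().ToSelf().InSingletonScope() in Ninject 2
// (the exact syntax differs between Ninject 1.x and 2.x).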

So how could I have avoided this problem?  I should have been more diligent about asking questions about best practices when I first started using Ninject (or really any other piece of technology).  Most worthwhile open source projects have active newsgroups or forums with informative users who are willing to lend a hand.  In my own defense, though, while doing research on IoC (and I read quite a few articles on this) I never once came across best practices for using these tools; documentation mostly focuses on the nuts and bolts of how to use them.  This particular problem never seemed to be discussed, though if I’m wrong maybe someone will point me to where it is.  That’s the main reason I wrote this article: to help anyone else who may find themselves in the same situation as me.

IoC FTW!

Your First Concordion.Net Project (Part 5)

Filed under: Uncategorized — x97mdr @ 9:46 am

Running Specs with Gallio

Part 1 | Part 2 | Part 3 | Part 4

We have our specifications developed so now we want to actually run this beasty! After trying to develop my own test runner for Concordion I decided to settle on writing a plugin for the Gallio framework instead.

Gallio is an open testing framework that runs all sorts of .Net test frameworks on a number of different platforms. You can find out more about its adaptability and rich reporting capabilities here.

So how do we get Gallio to run our specifications? We need to use a plugin. In the Concordion distributable file you will notice a directory called Gallio.ConcordionAdapter. This is the plugin specifically written to allow Concordion specifications to be run on Gallio.

What I like to do when running specifications is to start off by running them with Gallio.Echo. This is the command-line runner, and I find it the easiest one to debug problems with. There are a number of other runners for Gallio, like Icarus (the GUI) and runners for Visual Studio, Test-Driven.Net, etc.

I like to create a small command file for this in the root directory of the specification project, so let’s do that right now. Here’s the file in the Calculator.Spec project folder:

[Image: Calculator.Sample.11]

The file is called run-spec-with-echo.cmd … as you can tell I really like long and descriptive names!

Now let’s fill this up with a command line. We will need to tell Gallio where to find the Gallio.ConcordionAdapter plugin and where to find our specification assembly. The plugin folder has contents like this:

[Image: Calculator.Sample.12]

To point Gallio at this directory we add the command-line option:

/pd:c:\Concordion\Gallio.ConcordionAdapter

This tells Gallio to search this directory for a .plugin file that it can use to load the Concordion plugin. Now we need to tell Gallio where our specification assembly is:

bin\Debug\Calculator.Spec.dll

Since the batch file is in our project directory but our specification assembly is in the project output folder we need to add the relative path to the specification assembly. Unless otherwise specified, Gallio.Echo’s working directory is the current folder. Our command file should now resemble this

set GALLIO_PATH=C:\Dev\concordion-net\tools\Gallio-trunk\bin
%GALLIO_PATH%\Gallio.Echo.exe /pd:c:\Concordion\Gallio.ConcordionAdapter bin\debug\Calculator.Spec.dll
pause

So let’s try to run the file! If you do, you should see the following results on screen:

[Image: Calculator.Sample.13]

Uh oh … Houston we have a problem! Notice that there is a failure in our specification. If you remember back to the last post we intentionally made our ArithmeticTest fixture a dumb one so that we could see it fail.

Fixing the Fixture

Red-Green-Refactor is the name of the game here! OK, let’s fix up our test by filling out the Calculator class in our Calculator project and calling it from the specification fixture to perform the arithmetic operations. Our Calculator class should resemble this:

public class Calculator
{
    public long Add(long first, long second)
    {
        return first + second;
    }

    public long Subtract(long first, long second)
    {
        return first - second;
    }

    public long Multiply(long first, long second)
    {
        return first * second;
    }

    public long Divide(long first, long second)
    {
        return first / second;
    }
}

And we will want to modify our fixture like so

[ConcordionTest]
public class ArithmeticTest
{
    public long firstOperand;
    public long secondOperand;

    public long Addition(long first, long second)
    {
        return new Calculator().Add(first, second);
    }

    public long Subtraction(long first, long second)
    {
        return new Calculator().Subtract(first, second);
    }

    public long Multiplication(long first, long second)
    {
        return new Calculator().Multiply(first, second);
    }

    public long Division(long first, long second)
    {
        return new Calculator().Divide(first, second);
    }
}

Oops … we forgot something! Did you notice? For the above code to work we need to add a reference to the Calculator project in the Calculator.Spec project.

[Image: Calculator.Sample.14]

Good, now that we have our reference it should compile and run successfully. Let’s see how it goes.


Woot! All of our tests now pass! Let’s take a peek at the output now.


Notice the green surrounding the results now? That means our Calculator class successfully performs the arithmetic we want!

Epilogue

If you want to see the source code for this series of posts you can download it here.

This sample was pretty basic but it should be enough to get you up and running!

Thanks for trying out this tool, I hope you enjoy it. Stay tuned, there will be more to come on advanced features of Concordion!

Your First Concordion.Net Project (Part 4)

Filed under: Uncategorized — x97mdr @ 9:46 am

More Specifications!

Part 1 | Part 2 | Part 3 | Part 5

In the previous post we got our project set up and added the first specification.  The sad part was it didn’t really do much.  It only acted as a gateway into our other specifications.  Let’s add something a bit juicier!

In the gateway specification document (Calculator.html) we put a link to an Operations.html specification.  Let’s add that now.  Remember to add the .html and class file then set the properties on the .html file as per the gateway specification and set up the fixture just like the CalculatorTest class.

[Image: Calculator.Sample]

The content of Operations.html should resemble this when finished

[Image: Calculator.Sample.8]

Now we have a document describing the various operations that we can perform.  So let’s add two more specifications: one for the arithmetic and one for the trigonometric.  Your project should now look like this:

[Image: Calculator.Sample.9]

Did you remember to set the .html and .cs files up like the others? I hope so … otherwise you’re in for some heartache :-)

Arithmetic Operations

Let’s add some basic arithmetic operations to our specification: addition, subtraction, multiplication and division.  The specification file is much like our other specification files except that it now contains some special markup.  Here is an excerpt from the page’s Multiplication example:

<h3>Example – Multiplication</h3>

<p>
    The result of <b concordion:set="#firstOperand">2</b> * <b concordion:set="#secondOperand">2</b>

    will be:
    <b concordion:assertEquals="Multiplication(#firstOperand, #secondOperand)">4</b>
</p>

The rendered html file will resemble this:

[Image: Calculator.Sample.10]

You can see the full source in the included project, but the items to note are the concordion:set and concordion:assertEquals attributes.  concordion:set sets the value of a public field or property in the fixture class.  concordion:assertEquals checks the value in the element (the expected value) against the value returned by the method named in the attribute.  It is the concordion:assertEquals element that will actually turn red or green (maybe yellow, but hopefully not yellow … cross your fingers!).

Creating the fixture

Now that we have a specification, we need some code in the fixture to back it up!  So let’s add some.  Note that this code is going to fail.  True TDD practitioners follow the red-green-refactor methodology so we will try and follow suit.  Our fixture code will look like this:

[ConcordionTest]
public class ArithmeticTest
{
    public long firstOperand;
    public long secondOperand;

    public long Addition(long first, long second)
    {
        return -1;
    }

    public long Subtraction(long first, long second)
    {
        return -1;
    }

    public long Multiplication(long first, long second)
    {
        return -1;
    }

    public long Division(long first, long second)
    {
        return -1;
    }
}

We now have a complete fixture that should be able to run, albeit with some failures!

Next time we will work on how to run your project with Gallio and produce some results so that we can refactor the above code and turn it into something more meaningful!

Your First Concordion.Net Project (Part 3)

Filed under: Uncategorized — x97mdr @ 9:46 am

Adding Specifications

Part 1 | Part 2 | Part 4 | Part 5

As with any sort of Test-Driven Development or Behaviour-Driven Development we should start by writing our tests or specifications first.

First, let’s do some basic setup of our specification assembly, Calculator.Spec.

Style Sheet

I like to cheat a little bit and steal from David’s style sheet included with the original Concordion. It’s simple, clean and very easy to read so we’ll use it. You will want to drop it at the top level of your Calculator.Spec project. You should change the Build Action on this file to Embedded Resource and set the Copy To Output Folder action to Copy Always.

This will ensure that the style sheet file is always included with the output when you build. Here is what it looks like

[Image: Calculator.Sample.2]

Folder Structure

Now that we have a style sheet we need a place to put our documentation. I like to have a top-level folder that is used as an entry point to all of the other fixtures. So let’s make a folder at the top-level of the Calculator.Spec project and call it Calculator. In this folder we will place two files: Calculator.html and CalculatorTest.cs

Calculator.html will be our HTML specification and CalculatorTest.cs will be the fixture class for Calculator.html. Note the naming convention here: the fixture name is the specification name with the word ’Test’ appended. You should now see something like this:

[Image: Calculator.Sample.3]

Filling in the Specification

The next step is to fill in the specification HTML file. To do this I will do a little more cut-and-paste magic from the main Concordion.Net specifications. You will want to replace the DOCTYPE and html tags that Visual Studio inserts with the following statement:
<html xmlns:concordion="http://www.concordion.org/2007/concordion">

This statement declares the XML namespace for all of the concordion elements that we will be decorating our HTML with.

Note: while I intend for Calculator.html to be a specification that doesn’t actually check anything, I am still declaring the XML namespace because Concordion requires it.

Now we should add a link to our style sheet at the root of our folder structure like so

<link href="../concordion.css" rel="stylesheet" type="text/css" />

Now we can start adding text. I will add a little blurb about our calculator on this page and then add some “Further Details” reference links. In the end our Calculator.html page should resemble this

[Image: Calculator.Sample.4]

Last, but not least, you will need to set the Build Action to Embedded Resource and the Copy To Output Folder action to Copy Always

[Image: Calculator.Sample.6]

IMPORTANT: You should set those two properties on every HTML specification file you write! If you do not, the files will not be copied to the output folder and Concordion.Net will not be able to find them.

Writing the Specification

We haven’t actually written any code to support this fixture yet, so we should probably do that next! The discerning reader may even be wondering why we haven’t added a reference to Concordion.Net yet. We must add that reference to the project now. I will create a folder at the solution level called ‘lib’, place the Concordion.Net assembly there and then add a reference to it in the project.

[Image: Calculator.Sample.5]

Now we will need to open up CalculatorTest.cs and do some touch-up work to it so that Concordion.Net can find it properly:

  1. Make the class public. If you do not it won’t be exported and if it isn’t exported then the test runner will not find it. Then you will be sad and I will be too!
  2. Decorate the class with the [ConcordionTest] attribute. This tells the test runner that this class is intended to be a test. It also allows you to use other classes that support the tests without the test runner trying to find and run them.

Your final class should look like this

using Concordion.Integration;

namespace Calculator.Spec.Calculator
{
    [ConcordionTest]
    public class CalculatorTest
    {
    }
}

One last thing … Concordion Files and Namespaces

Concordion has to have some means of linking a specification with a fixture class. The way that Concordion.Net does this is based on the namespace of the class. Thus, if a class has a fully-qualified name of Calculator.Spec.Calculator.CalculatorTest (like above) then Concordion.Net will look in the path Calculator\Spec\Calculator for the Calculator.html file.

Since we are embedding our specifications and they will be copied directly to the output it is necessary to modify the namespace a bit so that Concordion.Net can link the fixture to the specification. We do this by trimming the namespace like so:

using Concordion.Integration;

namespace Calculator
{
    [ConcordionTest]
    public class CalculatorTest
    {
    }
}

Notice that the namespace has been reduced to just Calculator? Now Concordion.Net will look for Calculator\Calculator.html, which is exactly where the HTML file will be when the project is built.

Next we will look at how to add some real specifications … that actually run tests!

Your First Concordion.Net Project (Part 2)

Filed under: Uncategorized — x97mdr @ 9:45 am

Setting Up Visual Studio

Part 1 | Part 3 | Part 4 | Part 5

If you’re a diligent .NET developer then you have probably read Microsoft’s guidelines for setting up a Visual Studio project. If you have, then I would also point you towards this excellent posting.

Most Concordion.Net projects should follow the same basic structure of three assemblies:

  • <project-name> – the assembly containing your business logic
  • <project-name>.Test – an assembly for your unit tests
  • <project-name>.Spec – an assembly for your Concordion.Net specifications.

In our example we will be creating a project called Calculator. This will be a very simple project that contains only a calculator class API for others to use in their own code. Yes, I know this is a very simple example but it will have to do for now because I’m not feeling very imaginative! With Calculator we will have the following three projects:

  • Calculator
  • Calculator.Test
  • Calculator.Spec

[Image: Calculator.Sample.1]

This will be the basic structure of our solution. While I normally use xUnit for my open source projects, I won’t be covering the Calculator.Test project beyond saying that you should have one; it’s there for completeness.

We will talk about adding references later on as they’re required.

The ConcordionAssembly Attribute

One of the very first things you need to do on any Concordion.Net project is to mark your assembly with the ConcordionAssembly attribute. This lets Gallio (the thing that runs the specifications) know that the assembly contains Concordion specifications. If this attribute is not present then your tests will not be found. You can mark the assembly like this:

[assembly: ConcordionAssembly]

Place this declaration in the AssemblyInfo.cs file of the Calculator.Spec project.
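
For reference, a minimal AssemblyInfo.cs for the spec project might look something like the sketch below. I am assuming the attribute lives in the Concordion.Integration namespace, the same one the fixture attributes use; adjust the using directive if your version of Concordion.Net differs.

// Calculator.Spec\Properties\AssemblyInfo.cs
using System.Reflection;
using Concordion.Integration;   // assumed namespace - the same one used for [ConcordionTest]

[assembly: AssemblyTitle("Calculator.Spec")]

// Marks this assembly as containing Concordion specifications so the test runner will scan it.
[assembly: ConcordionAssembly]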

The next part in this series will talk a bit about how to add specifications to the Calculator.Spec project.

Your First Concordion.NET Project (Part 1)

Filed under: Uncategorized — x97mdr @ 9:45 am

What is Concordion.Net?

Part 2 | Part 3 | Part 4 | Part 5

So, you’ve stumbled across Concordion or Concordion.Net on the internet, maybe liked the style of specification writing Concordion promotes and now you want to do something with it! So where to begin … ?

  • Read about Behaviour Driven Development on Dan North’s classic blog post.  This post describes the basic ideas behind behaviour-driven development and is very informative.
  • Read about the Concordion technique on the main Concordion web page.  No matter what testing framework you choose, it is important to get the basics down before attempting to use it.
  • Read through the Concordion.Net wiki.  It is still growing but there is some useful information there on how to setup Concordion.Net, what the version compatibility is with the Java version, etc.

The next part in this series will detail how to setup Visual Studio to prepare for your Concordion.Net project.

Using Ninject 2 to resolve conditional bindings with a kernel

Filed under: Uncategorized — x97mdr @ 12:25 am

A while ago I saw that Nate Kohari was working on a beta version of Ninject, called Ninject 2.  I use Ninject 1.0 in my projects and I’m really happy with it.  It has a clean API and is very lightweight.  Since one of the projects I use Ninject in is a batch program that has a large number of user settings, I often use Ninject to configure the particular object I get based on these settings, which I keep in an object called, not surprisingly, Settings.  So here is an example of what I might do:

Bind<IEngineSession>()
	.To<DeriveEngineSession>()
	.OnlyIf(context => context.Kernel.Get<Settings>().Session == SessionType.Derive);

Bind<IEngineSession>()
	.To<DonorEngineSession>()
	.OnlyIf(context => context.Kernel.Get<Settings>().Session == SessionType.Donor);

Bind<IEngineSession>()
	.To<EditEngineSession>()
	.OnlyIf(context => context.Kernel.Get<Settings>().Session == SessionType.Edit);

Note that inside the conditional binding I resolve an instance of the Settings object and use it.  Well, when I tried to do this with Ninject 2 I ran into some big problems, because IContext is no longer part of the conditional binding syntax; it is replaced by IRequest, which does not have access to the Kernel.  I was quite perplexed and put off for a while, until I saw that Ninject 2 was officially released.  I thought perhaps the issue had been resolved, so I eagerly downloaded Ninject to try it out, only to be disappointed.  There was no IKernel as part of IRequest, and the ParentContext field of IRequest was null on the initial bind, so I couldn’t use that to get an IKernel object to resolve my objects.

Perplexed, I posted a question to the Ninject newsgroup asking if this was a bug.  One of the things I love about Ninject is that either Nate or Ian gets back to you with lightning speed and efficiency and always has an answer at the ready.  As Ian pointed out, the module defining the binding has a Kernel property that I can use, rather than trying to pull the kernel out of the request, like so:

Bind<IEngineSession>()
	.To<DeriveEngineSession>()
	.When(request => Kernel.Get<Settings>().Session == SessionType.Derive);

Bind<IEngineSession>()
	.To<DonorEngineSession>()
	.When(request => Kernel.Get<Settings>().Session == SessionType.Donor);

Bind<IEngineSession>()
	.To<EditEngineSession>()
	.When(request => Kernel.Get<Settings>().Session == SessionType.Edit);

I am as pleased as can be now, since this problem was the only thing preventing me from making the switch in all of my projects!  Thanks to Nate Kohari, Ian Davis, et al. for pouring so much love into this framework and making it a shining example.  I even gave my team a tour of the repository because, unlike some open source projects, it is very well laid out and easy to use … I may even have stolen some ideas (like embedding NAnt directly into the working folder and providing a script to build the project on demand) for my own projects!

March 6, 2010

Why should I continuously integrate? (or … how I learned to love claps and cheers)

One of the practices I consider essential for any development team is continuous integration.  There is a good discussion of continuous integration out there already, but in short it means that every time you commit to your version control repository a fully-fledged build of your project occurs.  These builds should have various features:

  • Have a meaningful version number.  For example major.minor.build.revision where revision is the revision of your version control repository.
  • Build a clean copy of the source code each time (delete + svn checkout as opposed to svn update)
  • Perform unit and/or integration tests to determine the quality of the build
  • Gather the various deployable pieces of your application into an archive (or set of archives) that anyone can retrieve with little trouble

While it is not necessary, some other nice things you can do as part of a continuous integration process are

  • Perform static analysis of your code and produce reports on this for every build
  • Perform code coverage analysis of your code and produce reports on every build
  • Run integration tests using tools like Cucumber, SpecFlow or Fitnesse.
  • Link build numbers back to your activity management tool (Bugzilla, Trac, Jira, TFS Work Items)
  • Track who breaks builds, etc.

Real Life Continuous Integration

On my team we have been doing full-fledged continuous integration on all projects for over a year now.  We use a combination of tools to support the trinity of activities involved in configuration management.

I have tried a few different continuous integration tools (VisualBuildPro, IBM Rational BuildForge and JetBrains’ TeamCity) and found Hudson to be the best among them.  There is a plethora of plug-ins available, an active user and developer support group, and it’s open source!  Does a tool get any better?  TeamCity comes a close second, but its lack of support for trend charts and plugins was a real downer.  It does have pre-flight builds, but this feature wasn’t enough to win me over.  IBM Rational Build Forge was ridiculously overpriced for the functionality, required a lot of manual configuration and had a confusing set of terminology (as is common with IBM Rational products).  Avoid it at all costs.  All that being said, in the end we chose Hudson, and every build in Hudson has some key ingredients:

  • We monitor the Subversion repositories for changes every 5 minutes.
  • When a change is detected Hudson deletes the old working copy and gets a fresh copy from Subversion (no svn update!!)
  • The actual build is done with an NAnt script
    • You could use MSBuild, Rake or PSake instead.
    • The NAnt script builds the solution with MSBuild, runs any unit tests, calls FxCop to produce a report for some static analysis and then packages the results of the build.  For some projects we use ILMerge to reduce the number of assemblies we need to deploy as I am a heavy user of open source tools.
    • Hudson creates environment variables BUILD_NUMBER and SVN_REVISION that we use in our build script to generate the version numbers.  These get put into the AssemblyInfo.cs files and cause the Version property of the assembly to reflect the build and revision numbers (see the sketch after this list).
  • NUnit plugin to read the NUnit results report and produce a trend chart on test results
  • Violations plugin to read the FxCop results and produce a trend chart on possible code violations
  • Warnings plugin to parse the MSBuild log file and produce a trend chart on compiler violations
  • Task Scanner plugin parses the class files (*.cs) looking for those //TODO and // HACK (or whatever else you tell it) comments and then makes a nice report on them
  • The Jira plugin is probably the nicest feature.  It parses the commit comments from Subversion for each build looking for tags in Jira format (e.g. PROJECT-19) and, if it finds one, puts a comment in the relevant Jira issue with a link back to the Hudson build.  This is an awesome way for our users to be notified when an issue that they created has been worked on!
  • A zip archive is produced with all of the materials for using the product (the executable, configuration files, etc.)
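
To make the versioning point above concrete, here is roughly what the generated fragment of AssemblyInfo.cs ends up looking like after the build script has substituted Hudson’s environment variables. The 2.1 major/minor numbers and the 57/1432 build/revision values are made-up examples:

// Generated at build time from BUILD_NUMBER=57 and SVN_REVISION=1432 (hypothetical values)
using System.Reflection;

[assembly: AssemblyVersion("2.1.57.1432")]      // major.minor.build.revision
[assembly: AssemblyFileVersion("2.1.57.1432")]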

When used in conjunction with “Commit early, commit often” Hudson becomes the heartbeat of our team.  Anytime someone performs a commit we get a pile of useful trend charts and reports indicating the health of the project.  With the Jira plugin we get connection to the activity management tool so users know what build their issue has been worked on and Hudson keeps track of the source code changes for every build … in other words complete traceability!  If a user encounters a problem we can tell immediately what the version is that they’re using and whether they are up to date or not.  We can tell what has changed since the last build to determine if merely upgrading will resolve their problem.  We can point a user to a Hudson build and they can get everything they need from a single page since everything is archived in one place.

One thing I have found is that it is important to have a way to build your project exactly like Hudson would, but locally.  This was the main reason I chose NAnt as the build script.  In my source control repository there is a .build file containing the NAnt script, a copy of the NAnt executable and a script that runs the build on demand.

Good for you, smarty-pants … but how long does it take to get all that set up if I know nothing about continuous integration?

To get my setup running took me a solid week (spread out over time).  I started off very small, with no plug-ins and Hudson simply building the solution file.  I would add a plugin, get it working and then expand.  I think this is a good way for a newbie to start, because even just building the project can catch a lot of errors.  One of the most common mistakes I find on my team is when someone adds a file to the solution in Visual Studio but forgets to add it to Subversion.  When they commit the change set, MSBuild barfs because it cannot find the file that is not in Subversion.  Continuous integration is an easy and effective way of ensuring that the source control tree is always in a buildable state.  I love this because one of my pet peeves is checking out a project only to discover it doesn’t actually build.  It’s little mistakes like these that upset the rhythm of a team and affect productivity.

In terms of tools for newcomers I would highly recommend Hudson because it has a simple point-and-click Web 2.0 interface and great help built in right where you need it.  Hudson is ridiculously easy to install and use, installing it as a Windows service is built right into the package now, and installing plug-ins, etc. is downright simple.

If you’re still unconvinced that you will see some payback from continuous integration (despite the evidence to the contrary), then look at the relatively minimal investment of a few days to a week as something that you can carry forward to every project in the future.

The Future

I would like to expand our setup to include code coverage metrics and possibly use a tool like StyleCop to measure how well the source code is formatted.  In the past I attempted to integrate trend charts for a Fitnesse wiki but gave up, because the requirement to have a wiki server up and running to perform the tests was tedious and error-prone.  We’re moving our specification tests to SpecFlow anyway, and that should be considerably easier to use since it’s just NUnit under the hood.

When should you commit source code?

Filed under: Uncategorized — Tags: — x97mdr @ 11:52 pm

Are you just starting out with source control?  Have you ever had a hard drive fry with two days worth of work on it? Have you ever tried to merge two branches and had to spend a day working out the kinks?  If so, then hopefully this post is for you!

Tale of woe

There is a tale of woe that forms the background to this story.  The version control environment I use is a combination of Subversion on the server with Bazaar locally.  This is a somewhat advanced setup, but I prefer it because it lets other members of my team who are more comfortable with Subversion stick with it, while letting me use the advanced features of a Distributed Version Control System (DVCS) like Bazaar.

Now, one day recently I was working away on a project that had to be done quickly.  I had been working on this project for about two days solid, so there were a lot of changes.  I realized at one point that I had modified a file I shouldn’t have, and I went to revert this file using Bazaar.  I used TortoiseBzr, right-clicked the file in Windows Explorer, selected only the single file and pressed OK.  I bet you can’t guess what happened next!  For reasons a Bazaar neophyte like myself was unaware of, the entire repository reverted.  Two days of work … gone!  … or so I thought.  After a full-on panic I finally got my head about me, filed a bug report and got a really fast and informative reply, was able to recover my lost data and move on.  Major kudos to the Bazaar development team here; they were helpful and knowledgeable.  However, this cost me nearly four hours of panic-ridden stress … so how could I have avoided this, faithful reader?

Commit Early, Commit Often

This mantra is repeated all over the internety-thing here, here and here.  The biggest mistake that programmers make when working with a version control system is waiting long periods of time before checking in their code.  This was the problem that caused me all that stress, and it was completely avoidable.  Had I committed at least a few times a day I would have lost at most a few hours of work rather than two days’ worth.  The next obvious question you’re going to ask me is: “What is the smallest chunk to commit?”

What do I commit?

When developers ask me this question I usually tell them something like this: if it relates to a single activity (bug, feature, task), it compiles and it passes unit tests, then commit it.  It takes some experience and experimenting to come up with your own formula, but I personally find that if I write a few classes with their accompanying tests then that’s a unit small enough for me to check in.  This could take 20 minutes or it could take 2 hours, but I try to commit at least two to three times per day at a minimum.  I really like the first answer to this stack overflow question, where the author states that he usually commits a “complete thought”.  It’s fairly abstract (there is no concrete rule, after all), but it’s another useful guideline.

Benefits

There are a lot of benefits to committing code in small rather than large chunks.  Most of these become apparent when you work with another developer on the same branch.  It is much easier to resolve merge conflicts with another developer in the same branch when the conflict involves just a few files rather than dozens of files.  Additionally, committing in small chunks keeps the scope of any problem you do have to resolve much smaller.  If you commit two dozen files after four days there could be several small conflicts that need to be worked out, all intermingling with each other.  Trust me, this is the stuff of nightmares!  If you commit in penny-packets your merge conflicts will be more coherent and less entangled.

If you use continuous integration to build your project, and you run unit and/or integration tests as part of that build, then you will get even more benefit out of using your version control tool and committing early and often.  The continuous integration system can use unit tests  to make an independent verification that every commit meets a certain set of criteria.  This knowledge can be very comforting to all involved in the project.

You may also find that you develop a rhythm while working this way.  You make a few changes, test them then commit them and watch the build pass.  While I have no data to say that this process makes you more productive, I can say anecdotally that it makes me more productive because I spend less time getting held up by common mistakes.
