Friday, 27 July 2012

If you treat your unit tests like garbage, they'll eventually become just that. And they will smell bad...

Some time ago, I was TDDing a piece of code. A coworker glanced over my shoulder and, seeing the name of the spec I was writing, which was something like:

ShouldReportAllErrorsSortedAlphabeticallyWhenItEncountersErrorsDuringSearch()
he said:

  • - What crappy names you've got for your unit tests! They're so unreadable!
  • - Why do you think they're crappy?
  • - These are too long! I can't read this!

I then suggested that we look at the names of the unit tests he and his team were writing. I stopped at a unit test named

TrySendingHttpRequest()
and the following dialog occurred:

  • - See this one?
  • - Yep.
  • - When will it pass or fail? And what are you trying to achieve in this test by sending the HTTP request?
  • - I don't know. I don't care. If it fails, I'll have to fix it, if not, I can forget it. That's all there is to it.
  • - Well then, if my unit test names are unreadable, yours are useless.

Then I said: "Let's now look at the output from CruiseControl". I navigated to the CruiseControl unit test output page of the project I was in. The test I was writing was listed there, but in a more human-readable form: "Should report all errors sorted alphabetically when it encounters errors during search". "See this?" - I said - "this is our living documentation." He dismissed the argument as "bringing unnecessary details" into the unit test suite.

The next day, I sent him an excerpt from the book Growing Object-Oriented Software, Guided by Tests, which recommends giving unit tests names such as

notifiesListenersThatServerIsUnavailableWhenCannotConnectToItsMonitoringPort()
It did not convince him, however. The answer I received was: "We do it the way we do because it's comfortable for us. Just because we do something different than you doesn't mean we're doing it wrong. And no matter what evidence you have that long unit test names make sense, I will just keep doing what I'm doing right now. Period."

The lesson

In case you didn't notice - this was not only a debate about long versus short unit test names, this was a debate about the place unit tests hold in a team's and project's ecosystem.

While I personally value unit tests as a design tool, a feedback tool and living documentation at the micro level, there are people who just don't care - for them, unit tests are some crappy code that exercises the production code to make sure nothing breaks. Those guys are not aware of the value unit tests, AKA unit-level specifications, can bring to the team and the project.

There are, however, downsides to this. If you treat your unit tests like garbage, they'll eventually become just that. For anyone who does not care how unit tests are named (I assume such a person would not even dare to use TDD - it's even more difficult than thinking of a descriptive name), here are a few questions to consider:

  1. how do you know what situation the test describes?
  2. how do you know whether the test describes a single situation, or a few of them at the same time?
  3. how do you know what the right assertion or set of assertions for the test is?
  4. how do you know whether the test should stay or be removed when you modify the functionality it covers?
  5. how do you know whether the test or the production code is at fault when you see a unit test failure?
  6. how do you know that you will not introduce a duplicate test for a behavior that's already covered, when adding to a test suite originally created by another team member?
  7. by looking at the unit testing tool's output, how do you know whether a fix will be easy or not?
  8. how do you plan to maintain the test suite in the long term?
  9. how would you answer a new developer on your team who asks you "what is this test for?"
  10. what's your plan for documenting the code you're writing?
  11. before writing the test, do you know what you will be testing? If so, how do you know it?
  12. when writing a new suite of unit tests, how do you keep track of the behaviors/scenarios already covered and the ones that still need to be covered?

I could keep adding to the list, but let's stop here for now. Feel free to comment on the questions above. If there's any demand, I'll post my answers from the perspective of a person doing TDD. What needs to be noted is that a person treating unit tests like garbage will probably have an answer to all of the above questions. The funny part is what these answers usually are.
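To make the contrast concrete, here's a minimal sketch of a spec whose name answers most of these questions by itself. Everything in the body - the Search and ErrorReport classes, the Report() method, the DataThatCausesErrors() helper - is hypothetical; the point is only the relationship between the name, the setup and the assertion:

[Test] public void
ShouldReportAllErrorsSortedAlphabeticallyWhenItEncountersErrorsDuringSearch()
{
  //GIVEN a search that will encounter two errors, in non-alphabetical order
  var errorReport = Substitute.For<ErrorReport>();
  var search = new Search(errorReport);

  //WHEN the search runs into those errors
  search.PerformOn(DataThatCausesErrors("zeta error", "alpha error"));

  //THEN all the errors end up in the report, sorted alphabetically
  errorReport.Received().Report("alpha error", "zeta error");
}

One glance at the name tells you when this spec passes, when it fails and which behavior it documents - something TrySendingHttpRequest() cannot do.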

Apart from naming, there are other aspects of a unit test that can make it stink. Those, however, I will leave for another time. See ya!

Thursday, 19 July 2012

TDD and Single Responsibility Principle violation - a simple example

As many have already stated, TDD encourages simple, elegant design. Today I'd like to show you a simple example of how you can smell a Single Responsibility Principle violation in your specs (AKA unit tests).

A session storage

Imagine we're building a web server which allows different clients to connect to it via HTTP requests. The server is so simple that it holds all the sessions for its clients in memory. One of the specs that led to implementing this in-memory session storage looks like this:

[Test] public void
ShouldKillAllContainedSessionsWhenItIsKilled()
{
  var sessions = new Sessions();
  var session1 = Substitute.For<Session>();
  var session2 = Substitute.For<Session>();

  sessions.Add(session1);
  sessions.Add(session2);

  sessions.Kill();

  session1.Received().Kill();
  session2.Received().Kill();
}

Now imagine that a company became interested in our web server and wants to turn it into a commercial product. The company's strategy is as follows: users will be offered a free version to try things out, and when they pay, they will receive an activation code.

There is only one difference between the trial and full version - the trial version has a limit of only one active session at a given moment.

This change of policy needs to be reflected in the session storage. So we add logic that checks the session limit and throws an exception as soon as the limit is exceeded. A specification of such behavior would look like this:

[Test] public void
ShouldThrowExceptionWhenTryingToAddSessionsBeyondDefaultLimitOfOne()
{
  var sessions = new Sessions();
  var session1 = Any.InstanceOf<Session>();
  var session2 = Any.InstanceOf<Session>();

  sessions.Add(session1);
  
  Assert.Throws<Exception>(() => sessions.Add(session2));
}
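The post doesn't show the production code, but for illustration, a minimal sketch of the limit-enforcing storage could look more or less like this. All details here are assumptions; the settable Limit field anticipates the spec shown a bit further down:

using System;
using System.Collections.Generic;

public class Sessions
{
  //the storage itself knows about the business-imposed limit
  public int Limit = 1;

  private readonly List<Session> _sessions = new List<Session>();

  public void Add(Session session)
  {
    //limit enforcement mixed into the storage's responsibilities
    if (_sessions.Count >= Limit)
    {
      throw new Exception("Session limit of " + Limit + " exceeded");
    }
    _sessions.Add(session);
  }

  public void Kill()
  {
    foreach (var session in _sessions)
    {
      session.Kill();
    }
  }
}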

This, however, leaves us with another issue: what about the first spec mentioned in this post? If the limit of sessions is 1 by default, it will fail, since we're adding two sessions there. So we'll need to somehow extend the limit for the purpose of this spec.

I assume that you're smart enough not to use Setup and Teardown methods in your unit-level specs (I'll have a post someday about why Setup and Teardown usually stink), so your "corrected" spec now looks like this:

[Test] public void
ShouldKillAllContainedSessionsWhenItIsKilled()
{
  var sessions = new Sessions();
  var session1 = Substitute.For<Session>();
  var session2 = Substitute.For<Session>();
  sessions.Limit = 2;

  sessions.Add(session1);
  sessions.Add(session2);

  sessions.Kill();

  session1.Received().Kill();
  session2.Received().Kill();
}

And so we added code extending the session limit to 2. Now, doesn't this bother you? Look at the name of the spec - does it say anything about a session limit? No, it doesn't. Let's think about the behavior as well - did it become invalid after we introduced the session limit? No, it didn't.

So how did it happen that the session limit concept leaked to an unrelated specification? Is this a problem with the spec?

No, it isn't. It's a problem with our production code design. In fact, this spec is just a symptom of what we did with the design:

We violated Single Responsibility Principle

That's right, folks, we really did it. Currently, the session storage is responsible not only for storing and accessing the sessions, it also knows EXACTLY what the business-imposed limit is and takes care of enforcing it. That's two responsibilities, not one.

Ok, so we screwed up. How do we fix it?

The easiest way is to isolate the limit-related logic in a separate class. Because we want this class to be a result of implementing specifications (remember, test-driven ;-)), we undo the addition of the limit-related code to the session storage class (and the existing unit tests) and start over. Here's an example spec that will drive an implementation that respects the Single Responsibility Principle. Implementing it will not impact the other specs in any way other than the need to supply a dummy constructor parameter.

[Test] public void
ShouldValidateNumberOfSessionsBeforeAddingANewOne()
{
  var sessionCountValidation 
    = Substitute.For<SessionCountValidation>();
  var sessions = new Sessions(sessionCountValidation);
  var sessionsCountBeforeAddingNewOne = sessions.Count;

  sessions.Add(Any.InstanceOf<Session>());

  sessionCountValidation.Received()
    .Perform(sessionsCountBeforeAddingNewOne);
}

This way we crafted the signature of the desired session count validation object. We can implement it (test-driven, of course) as soon as this spec starts to pass.
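For completeness, here's a hedged sketch of where this could lead on the production side. Only the members exercised by the specs come from the post; everything else, including the trial implementation of the validation, is an assumption:

using System;
using System.Collections.Generic;

public class SessionCountValidation
{
  //default implementation: no limit at all (full version)
  public virtual void Perform(int currentSessionCount)
  {
  }
}

public class TrialSessionCountValidation : SessionCountValidation
{
  //the limit rule now lives in its own class
  public override void Perform(int currentSessionCount)
  {
    if (currentSessionCount >= 1)
    {
      throw new Exception("The trial version allows only one active session");
    }
  }
}

public class Sessions
{
  private readonly List<Session> _sessions = new List<Session>();
  private readonly SessionCountValidation _sessionCountValidation;

  public Sessions(SessionCountValidation sessionCountValidation)
  {
    _sessionCountValidation = sessionCountValidation;
  }

  public int Count
  {
    get { return _sessions.Count; }
  }

  public void Add(Session session)
  {
    //the storage no longer knows what the limit is - it only asks for validation
    _sessionCountValidation.Perform(Count);
    _sessions.Add(session);
  }

  public void Kill()
  {
    foreach (var session in _sessions)
    {
      session.Kill();
    }
  }
}

The storage is back to a single responsibility, and the limit-of-one rule can now be specified and evolved in its own, separate specs.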

That's it. This is a simple lesson, but a very important one. Remember: always listen to your tests - the advice may be priceless.

See ya!

Saturday, 14 July 2012

You may have been practicing test-first without knowing it. Congratulations!

This post is about seeing TDD in a larger picture and is mostly inspired by some materials by Ken Pugh on Acceptance Test Driven Development (although it does not exactly echo Ken's opinions).

What is a requirement?

Imagine you've got technical requirements, one of which looks like this:

All incoming e-mail messages that are directed to a recipient called emergency@my.mail.com shall be broadcast to all accounts in the my.mail.com domain

Here, the technical requirement defines a special emergency e-mail address that may be used as a kind of emergency notification mechanism (like when an error occurs on your website). What's usually expected from the development team is to take this requirement and implement it, so that the requirement is fulfilled. In other words, this is a request to perform an amount of work.

But how do you know that this amount of work needs to be performed? How do you know that you need to state such a requirement and send it to the development team?

Let's stop for a minute and consider these questions. Thinking about it, we can arrive at the following hypothetical situations:

  1. A customer (be it a business representative or a real end-user) is familiar with another system, where such a notification mechanism worked and worked well. Coming to our system, he expects it to have similar functionality. However, in a situation where he expects an emergency to be communicated to all staff, nothing happens.
  2. A customer is familiar with our system and knows what it can and cannot do. He comes up with the idea that having such a feature would greatly increase the visibility of website errors, and recognizes that, up to now, the system does not implement it.
  3. A customer not familiar with our system reads the user manual and discovers that this kind of functionality, which he knows he'll need, is not described there, so it looks like it does not exist.
  4. A customer is about to buy our system and, as one of the features he wants the system to support, he mentions an emergency notification solution, but the sales representative tells him that the system does not support this kind of functionality.

Test-first

We can imagine that one of the situations described above may have led to this new requirement being specified. But what are these situations? Let's try to rephrase them a little, adding a small comment in parentheses after each one that should clarify things a bit:

  1. A customer leads to a situation where he expects everybody to be notified of the emergency situation, but they're not notified (FAIL).
  2. A customer searches his memory expecting to find the fact that our system implements the feature, but cannot find the fact because it's not true (FAIL).
  3. A customer expects the user manual to contain information that such a feature exists, but the manual says that it doesn't (FAIL).
  4. A customer expects the sales representative to tell him that the system supports the desired feature, but the representative tells him otherwise (FAIL).

Yes, you've guessed it - these are tests! Well, maybe not JUnit/NUnit/Rspec/whateverUnit tests, but they're tests anyway. What's more, they're tests that fail, which makes us "repair" this "failure" by providing a technical requirement and then implementing it.

What got lost?

Also, note that the tests say nothing about e-mail and the broadcast address - they tell us that our customer expects everybody to be notified of an emergency situation. The e-mail part is what was added in the requirement. At the same time, another piece of information was dropped when translating the test into a technical requirement - the user's intention.

That's because the requirement stated HOW the user's expectation should be satisfied.

The sad part is that, although these tests contain information that gets "lost in translation" to technical requirements, usually such tests are never even stored, not to mention automated (which, in some cases, would be possible).
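Just to show that automation is at least thinkable, here's a sketch of how the first situation could be captured as an executable test at the level of the customer's intention, before anyone decides on e-mail as the mechanism. The whole API below (SystemUnderTest, RegisterStaffMember, ReportEmergency, WasNotifiedAbout) is made up for illustration:

[Test] public void
ShouldNotifyAllStaffWhenEmergencySituationOccurs()
{
  //GIVEN a system with two staff members registered
  var system = new SystemUnderTest();
  var alice = system.RegisterStaffMember("Alice");
  var bob = system.RegisterStaffMember("Bob");

  //WHEN an emergency situation occurs
  system.ReportEmergency("the website is down");

  //THEN every staff member has been notified
  Assert.IsTrue(alice.WasNotifiedAbout("the website is down"));
  Assert.IsTrue(bob.WasNotifiedAbout("the website is down"));
}

Note that nothing in this sketch mentions a broadcast e-mail address - that's the HOW, which belongs in the technical requirement, not in the expectation.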

There are two important lessons to learn from this story:

  1. It's a pity that we don't value the tests that come before requirements
  2. It's a pity that we can't see that tests come first. If we could, it would let us understand better why unit-level Test Driven Development makes perfect sense, because it's the same process of specifying what we expect to have, seeing it not fulfilled and then providing it.

Ok, that's it for today. Have fun!

Saturday, 7 July 2012

Class mocks - maintaining constructors

In one of my previous posts I promised that I'd share with you some hints on reducing the cost of maintaining class mocks. Remember that I'm not recommending class mocks, just showing you what to do when you have little choice. That said, I'll try to highlight a way to deal with constructor maintenance.

My plan is as follows: first, I'll introduce the general pattern and describe its canonical use in unit testing. Then I'll show you how adding one teeny tiny method drives us to the solution to our mock constructor issue.

The builder to the rescue!

Before we jump into creating mocks, let's note that there are many unit tests where we create the specified object ourselves (e.g. in class X's specifications, we create a new object of class X in every spec/unit test - at least that's what I do, since I strongly believe that Setup/Teardown are last-resort mechanisms).

Now, just to give you a feeling of what I'm talking about, let's look at a simple example:

[Test]
public void ShouldCopyFromSourceToDestination()
{
  //GIVEN
  var source = new Mock<ISource>();
  var destination = new Mock<IDestination>();
  var anyLog = Any.InstanceOf<ILog>();
  var anyFile = Any.InstanceOf<IFile>();  
  var copyOperation = new CopyOperation(source.Object, destination.Object, anyLog);
  source.Setup(m => m.Get()).Returns(anyFile);

  //WHEN
  copyOperation.PerformOn(anyFile);
  
  //THEN
  destination.Verify(m => m.Write(anyFile));
}

Let's see - what is this logging object doing here? It's not really part of the behavior - it only makes the intent less visible! Also, when we add another parameter to the constructor, we'll have to update every spec that creates a CopyOperation object.

There are cases when having many constructor parameters is a sign that it's high time to introduce a Facade into the design. But sometimes it's not. In such cases we can use a builder that takes only the parameters that matter for the spec. There's no need for a logger when we specify logging-agnostic behaviors. For example:

[Test]
public void ShouldCopyFromSourceToDestination()
{
  //GIVEN
  var source = new Mock<ISource>();
  var destination = new Mock<IDestination>();
  var anyFile = Any.InstanceOf<IFile>();  
  var copyOperation = new CopyOperationBuilder()
    .Source(source.Object)
    .Destination(destination.Object)
    .Build();
  
  source.Setup(m => m.Get()).Returns(anyFile);

  //WHEN
  copyOperation.PerformOn(anyFile);
  
  //THEN
  destination.Verify(m => m.Write(anyFile));
}

The implementation of such a builder is dead simple:

public class CopyOperationBuilder
{
  private ISource _source 
    = new Mock<ISource>().Object;
  private IDestination _destination 
    = new Mock<IDestination>().Object;
  private ILog _log 
    = new Mock<ILog>().Object;

  public CopyOperationBuilder Source(
    ISource source)
  {
    _source = source;
    return this;
  }

  public CopyOperationBuilder Destination(
    IDestination destination)
  {
    _destination = destination;
    return this;
  }

  public CopyOperationBuilder Log(ILog log)
  {
    _log = log;
    return this;
  }

  public CopyOperation Build()
  {
    return new CopyOperation(
      _source, 
      _destination, 
      _log);
  }
}

Easy, isn't it? Not only did we make the specification more meaningful, we also made the spec less prone to constructor changes - when a new parameter is added that doesn't affect the spec, the builder is the only place we need to update.

OK, now that I've introduced the pattern and its usual usage in unit testing, let's see how we can benefit from it in the case of mocking.

Mock builder

Now, time for a trick. We can just add another method to the builder that will build a mock for us:

public class CopyOperationBuilder
{
  //all the methods described above...

  public Mock<CopyOperation> BuildMock()
  {
    return new Mock<CopyOperation>(
      _source, 
      _destination, 
      _log) { CallBase = true };
  }
}
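For example, inside a spec that already has a source mock, the builder could be used like this. Validate() is a made-up virtual member, included only to show that the constructor arguments stay hidden in the builder:

  //GIVEN a partial mock - constructor parameters are the builder's problem
  var copyOperation = new CopyOperationBuilder()
    .Source(source.Object)
    .BuildMock();

  //override a single virtual member; CallBase = true keeps the rest of the real behavior
  copyOperation.Setup(c => c.Validate(It.IsAny<IFile>())).Returns(false);

  //the rest of the spec uses copyOperation.Object just like a normal CopyOperation
  copyOperation.Object.PerformOn(Any.InstanceOf<IFile>());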

And that's it. A simple and sweet trick that may come in handy in those very rare cases where we need to create class mocks.

Good night everyone!

Thursday, 5 July 2012

Eclipse Juno has landed with unit testing support for C++

Hi, today I'd like to share some wonderful news with you - a C++ IDE, Eclipse CDT version 8.1 (the one included in Eclipse Juno, which landed just a few days ago), finally includes support for unit testing (which, by the way, JDT has had for ages, but never mind)!

EDIT: It looks like unit testing support is not in the default package of the Eclipse C++ IDE and you have to install it separately from the Eclipse update site. See comments.

The good news

CDT 8.1 unit testing supports the following frameworks:

  1. Boost Test Runner
  2. Google Test Runner
  3. Qt Tests Runner

The original post includes a couple of screenshots of this setup. First, you have to enable the C++ Unit Testing view. The main screen of the feature should then be shown. One of its buttons deserves a special mention: the "one click run" button, which builds the project and runs all the tests - something I've been missing for a long time in the C++ world.

Let's give it a try

Now, let's go through a quick tutorial on how to use the feature. I'm going to use Google Test as my unit testing framework and HippoMocks as my mocking framework, so if you don't know at least one of them, it's a good opportunity to learn a bit about them (by observation, since I'm not going to explain today how they work). Also, I'll use a C++11 compiler, mainly for the auto keyword. OK, let's go! For this example, I used:

  1. Ubuntu GNU/Linux
  2. gcc version 4.6 or later (on Ubuntu you can just install the build-essential package)
  3. the Google Test framework installed (note that on Ubuntu you have to rebuild it from source!)
  4. the latest version of the HippoMocks framework

Set up a new project

Just create a new empty C++ project and add a new source file. Add the following flag to the gcc compiler: -std=c++0x (it will let you use C++11 features) and the following libraries to the linker:

  1. gtest_main
  2. gtest
  3. pthread

Write the code

We'll use the classic example of copying a file from a source to a destination. Just paste the following code into the main file (note that there's no main() function - this is deliberate, as the linked gtest_main library will provide one for us. Also note that the code shown below isn't good C++ style in terms of header/source separation and using-directives - let's leave that for now, ok? ;-)).

#include <gtest/gtest.h>
#include "hippomocks.h"
using namespace std;

class File
{
};

class Source
{
public:
  virtual File* getData() = 0;
};

class Destination
{
public:
  virtual void write(File* data) = 0;
};

class CopyOperation
{
public:
  void Perform(
    Source* source,
    Destination* destination)
  {
    destination->write(source->getData());
  }
};

TEST(CopyOperationSpecification,
    ShouldCopyFileFromSourceToDestination)
{
  MockRepository mocks;
  auto source = mocks.InterfaceMock<Source>();
  auto destination = mocks.InterfaceMock<Destination>();
  auto file = mocks.Mock<File>();
  CopyOperation copy;

  mocks.ExpectCall(source, Source::getData).Return(file);
  mocks.ExpectCall(destination, Destination::write).With(file);

  copy.Perform(source, destination);
}

Create a new run configuration and execute it

Running this example in the CDT unit test runner requires you to create a new Run Configuration. Just right-click the project, select Run As -> Run Configurations and double-click C++ Unit. On the C/C++ Testing tab, select Google Test Runner from the dropdown list and click Run. If everything goes well, you should see the test pass in the C++ Unit Testing view.

Summary

The unit test runner is a feature I'd been waiting on for a long time and I'm glad to have it, with support for three popular unit testing frameworks. Good job, CDT developers!

That's it. In my next post, I'll be going back to C# and dealing with constructor parameter maintenance. Stay tuned!