Tuesday 25 September 2012

A kata challenge to try your TDD skills on - failure and what we can learn from it

As promised, today I'd like to show you a way to come up with a suboptimal solution to the kata described earlier.

I'll be using xUnit.NET as my unit test runner and NSubstitute as my mocking library of choice throughout this exercise. In general, I'll be following good practices most of the time and will indicate where I'm "accidentally" losing sight of them.

Specification 1: Creating new instances

We're starting outside-in, from the inputs. In my system, I'll model the most "outside" part of the module as a MessageProcessing class. We should be able to create instances of this class, so the first specification states just that:

[Fact]
public void ShouldAllowCreatingNewInstancesWithoutExceptions()
{
  Assert.DoesNotThrow(() => new MessageProcessing());
}

This specification creates a need to have a MessageProcessing class:

public class MessageProcessing
{
}

Ok, so far so good. This, by the way, is a simple example of a Creational Specification. Now let's take on a more complex behavior.

Specification 2: Submitting frame to validation

Ok, so now's the time to use the Given-When-Then analysis. We already have the MessageProcessing class - also, we've got the Frame class. How do they interact with each other?

After giving it a short thought, we can come up with this obvious behavior description:

GIVEN any frame
WHEN message processing is performed for it
THEN it should submit the frame to validation

Cool, let's get to implementing it. The first thing we want to do is to translate the behavior description above into code and see which abstractions and methods are missing - then supply them.

The first line:

GIVEN any frame

can be translated to code like this:

var anyFrame = Any.InstanceOf<Frame>();

Then the second line:

WHEN message processing is performed for it

can be written as:

var messageProcessing = new MessageProcessing();
messageProcessing.PerformFor(anyFrame);

Great! Now the last line:

THEN it should submit the frame to validation

and its translation into the code (note the Received() extension method, which is an NSubstitute construct):

validation.Received().PerformFor(anyFrame);

Ok, so we've made the translation, now let's summarize this and see what's missing to make this code compile:

[Fact] public void 
ShouldSubmitFrameToValidationWhenPerformedForThatFrame()
{
  //GIVEN
  var anyFrame = Any.InstanceOf<Frame>();

  //WHEN
  var messageProcessing = new MessageProcessing();
  messageProcessing.PerformFor(anyFrame);

  //THEN
  validation.Received().PerformFor(anyFrame);
}

What's missing? We have to somehow create the validation and pass it to MessageProcessing, since that's the entity responsible for performing the validation. And so, the corrected version looks like this:

[Fact] public void 
ShouldSubmitFrameToValidationWhenPerformedForThatFrame()
{
  //GIVEN
  var anyFrame = Any.InstanceOf<Frame>();
  var validation = Substitute.For<Validation>();  

  //WHEN
  var messageProcessing = new MessageProcessing(validation);
  messageProcessing.PerformFor(anyFrame);

  //THEN
  validation.Received().PerformFor(anyFrame);
}

As you can see, we've discovered one new abstraction - a validation - and two new methods: one for message processing and one for validation (accidentally, both named PerformFor()). Thus, we have created a need for all this new stuff to exist. Additionally, we made a mistake when dividing responsibilities between classes - we'll see why later.

By the way, this is what's called a Work-Flow Specification. Our workflow is very primitive, since it consists of forwarding a method call to another object. In real-world scenarios (and in the correct solution to this kata) workflows are more substantial.

After creating all the necessary types and methods, the implementation looks like this:

public class MessageProcessing
{
  public MessageProcessing(Validation validation)
  {
    throw new NotImplementedException();
  }

  public void PerformFor(Frame frame)
  {
    throw new NotImplementedException();
  }
}

//TODO implement
public interface Validation
{
  void PerformFor(Frame frame); 
}

Also, we've got to correct the first specification to cater for the new constructor argument:

[Fact]
public void ShouldAllowCreatingNewInstancesWithoutExceptions()
{
  Assert.DoesNotThrow(
    () => new MessageProcessing(Any.InstanceOf<Validation>())
  );
}

Note that I added a TODO next to the Validation interface - this is because I'll need to go back later and actually implement it. While I don't write much about the TODO list in this post, it's actually one of a TDDer's best friends (maybe I'll write a post about it someday).

Let's leave the validation for now. To make this specification I just wrote pass, I'll need to change the MessageProcessing class:

public class MessageProcessing
{
  private readonly Validation _validation;

  public MessageProcessing(Validation validation)
  {
    _validation = validation;
  }

  public void PerformFor(Frame frame)
  {
    _validation.PerformFor(frame);
  }
}

Now it's all green, so let's examine our TODO list. The only item on the list is the Validation interface implementation, so let's go ahead and create...

Specification 3: Creating new instances of concrete validation

So we need a concrete implementation of the Validation interface. Let's call it SanityValidation, since the logic in there is going to be really simple.

[Fact]
public void ShouldAllowCreatingNewInstancesWithoutExceptions()
{
  Assert.DoesNotThrow(
    () => new SanityValidation()
  );
}

and now we've created a need for a class named SanityValidation. The code for SanityValidation class looks like this now:

public class SanityValidation : Validation
{
  public void PerformFor(Frame frame)
  {
    throw new NotImplementedException();
  }
}

Some tools, like Resharper, put occurrences of NotImplementedException on the TODO list, which is a great feature, because replacing it with a concrete implementation actually IS something "to do".

Specification 4: Ignoring valid frames

Starting from this chapter, we're going to feel the pain of my bad design decision more and more. In the grand finale, I'll show you what mess we've got ourselves into. But first things first, let's write the first specification of validation, which says:

GIVEN a validation and a frame
AND the frame has all its fields valid
WHEN the frame is submitted to validation
THEN no exception should be thrown

which, translated into code, looks like this:

[Fact] public void 
ShouldNotThrowAnyExceptionWhenPerformedForFrameWithAllFieldsValid()
{
  //GIVEN
  var frame = new Frame()
  { 
    Speed = SanityValidation.MinValidInteger,
    Age = SanityValidation.MinValidInteger,
    Sender = Any.NonNullNonEmptyString()
  };

  var validation = new SanityValidation();

  //WHEN - THEN
  Assert.DoesNotThrow(
    () => validation.PerformFor(frame)
  );
}

(By the way, this is called a Functional Specification.)

Interesting... we had to know exactly what "all fields" means. I wonder what the effect of adding a new field to the frame will be in the future... We'll see more of that with the upcoming specs. For now, the implementation is simply to remove the throwing of NotImplementedException from the PerformFor() method. Oh, I almost forgot - we introduced a need for a constant named MinValidInteger, so we have to add it to the SanityValidation class:

public class SanityValidation : Validation
{
  public const int MinValidInteger = 12345; //TODO
  public void PerformFor(Frame frame)
  {
  }
}

Note that we're not assigning this new constant any meaningful value - 12345 is just a dummy number. Moreover, we're putting a TODO next to the constant, since we'll want to revisit it later and actually write a specification for what the value of the constant should be. But for now - done. Time for the next one.

Specification 5: throwing exception on invalid Speed

Let's describe the expected behavior:

GIVEN a validation and a frame
AND the frame has all its fields valid except for the Speed
WHEN the frame is submitted to validation
THEN an exception should be thrown

Note the "all fields" again... this is becoming very interesting. Let's see what comes up when we put the spec into code:

[Fact] public void 
ShouldThrowAnExceptionWhenPerformedForFrameWithInvalidSpeed()
{
  //GIVEN
  var frame = new Frame()
  { 
    Speed = SanityValidation.MinValidInteger - 1,
    Age = SanityValidation.MinValidInteger,
    Sender = Any.NonNullNonEmptyString()
  };

  var validation = new SanityValidation();

  //WHEN - THEN
  Assert.Throws<Exception>(
    () => validation.PerformFor(frame)
  );
}

Another Functional Specification. Note that we're repeating assignments to all of the frame fields. Also, we're repeating the knowledge on what it means to be "valid" for each field except the speed. Keep calm, the grand finale is getting nearer and nearer. For now, let's just pretend that nothing happened and add the necessary code to make this spec pass:

public class SanityValidation : Validation
{
  public const int MinValidInteger = 12345; //TODO
  public void PerformFor(Frame frame)
  {
    if(frame.Speed < MinValidInteger)
      throw new Exception();
  }
}

Specification 6: throwing exceptions on invalid Age

To be honest, nothing really interesting happens here, since it's almost the same as in the previous case. Let's just write the spec:

[Fact] public void 
ShouldThrowAnExceptionWhenPerformedForFrameWithInvalidAge()
{
  //GIVEN
  var frame = new Frame()
  { 
    Speed = SanityValidation.MinValidInteger,
    Age = SanityValidation.MinValidInteger - 1,
    Sender = Any.NonNullNonEmptyString()
  };

  var validation = new SanityValidation();

  //WHEN - THEN
  Assert.Throws<Exception>(
    () => validation.PerformFor(frame)
  );
}

and the code necessary to make it pass:

public class SanityValidation : Validation
{
  public const int MinValidInteger = 12345; //TODO
  public void PerformFor(Frame frame)
  {
    if(frame.Speed < MinValidInteger)
      throw new Exception();
    if(frame.Age < MinValidInteger)
      throw new Exception();
  }
}

By the way, is it only me, or is there a pattern recurring in both specs and production code? Oh, and the conclusions on bad design drawn during the previous specification are only reinforced here!

Specification 7: Sender cannot be null

Also, this one is analogous to the two previous ones, only this time we'll be checking for null instead of a numeric boundary:

[Fact] public void 
ShouldThrowAnExceptionWhenPerformedForFrameWithNullSender()
{
  //GIVEN
  var frame = new Frame()
  { 
    Speed = SanityValidation.MinValidInteger,
    Age = SanityValidation.MinValidInteger,
    Sender = null
  };

  var validation = new SanityValidation();

  //WHEN - THEN
  Assert.Throws<Exception>(
    () => validation.PerformFor(frame)
  );
}

and here's the code:

public class SanityValidation : Validation
{
  public const int MinValidInteger = 12345; //TODO
  public void PerformFor(Frame frame)
  {
    if(frame.Speed < MinValidInteger)
      throw new Exception();
    if(frame.Age < MinValidInteger)
      throw new Exception();
    if(frame.Sender == null)
      throw new Exception();
  }
}

...and the last validation:

Specification 8: Sender cannot be empty

Almost exactly the same as the previous one. Comments are not needed. Here's the spec:

[Fact] public void 
ShouldThrowAnExceptionWhenPerformedForFrameWithEmptySender()
{
  //GIVEN
  var frame = new Frame()
  { 
    Speed = SanityValidation.MinValidInteger,
    Age = SanityValidation.MinValidInteger,
    Sender = string.Empty
  };

  var validation = new SanityValidation();

  //WHEN - THEN
  Assert.Throws<Exception>(
    () => validation.PerformFor(frame)
  );
}

and production code that fulfills it:

public class SanityValidation : Validation
{
  public const int MinValidInteger = 12345; //TODO
  public void PerformFor(Frame frame)
  {
    if(frame.Speed < MinValidInteger)
      throw new Exception();
    if(frame.Age < MinValidInteger)
      throw new Exception();
    if(string.IsNullOrEmpty(frame.Sender))
      throw new Exception();
  }
}

and that's it for the validation logic - we've got all the behaviors nailed. The only thing left to do before I explain where (and why) I failed is a specification for the constant that pollutes our TODO list:

Specification 9: Minimum Valid Integer for validations should be 1

The spec is really simple:

[Fact] public void 
ShouldUse1AsMinimumValidInteger()
{
  Assert.Equal(1, SanityValidation.MinValidInteger);
}

This thing is called a Constant Specification. To make it pass (because now it's failing - remember that we put 12345 as the value for MinValidInteger), we'll just have to change the value of the constant to 1:

public class SanityValidation : Validation
{
  public const int MinValidInteger = 1;

  public void PerformFor(Frame frame)
  {
    if(frame.Speed < MinValidInteger)
      throw new Exception();
    if(frame.Age < MinValidInteger)
      throw new Exception();
    if(string.IsNullOrEmpty(frame.Sender))
      throw new Exception();
  }
}

Ok, all behaviors nailed down, all specifications written, now it's time for a short retrospective.

Where (and why) did I go wrong?

The first mistake I made was in Specification 2: I just passed the whole frame to validation. What's wrong with that? By doing this, I failed to encapsulate the knowledge about the structure of the frame, which eventually leads to coupling every object of the application to the Frame type. The frame is a third-party class (you can't control how it changes), plus it's a data class (and data classes are code smells) - never couple your application to something as horrible as this! Imagine the next version of the module introduces compression, and we handle this by introducing another class called Compression, then pass the whole frame to it. Now every change to the Frame would impact both the Validation and the Compression objects in the same way, introducing redundancy in dependencies and a maintainability pain - stronger with each new class that needs to operate on the frame. One of the goals of object-oriented design is to keep data together with the behaviors related to that data, and that's exactly what I failed to achieve.
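To make the coupling concrete, here's a sketch of that hypothetical next version (the Compression interface is made up purely for illustration; Frame and Validation are minimal versions of the types used above):

```csharp
// Hypothetical sketch: Compression is added the same way Validation was.
// Note how both collaborators now depend on the full Frame structure.
public class Frame
{
  public int Speed { get; set; }
  public int Age { get; set; }
  public string Sender { get; set; }
}

public interface Validation { void PerformFor(Frame frame); }
public interface Compression { void PerformOn(Frame frame); }

public class MessageProcessing
{
  private readonly Validation _validation;
  private readonly Compression _compression;

  public MessageProcessing(Validation validation, Compression compression)
  {
    _validation = validation;
    _compression = compression;
  }

  public void PerformFor(Frame frame)
  {
    _validation.PerformFor(frame);  // the whole frame goes here...
    _compression.PerformOn(frame);  // ...and here - the coupling spreads
  }
}
```

Adding a single field to Frame now potentially touches MessageProcessing, Validation and Compression at once.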

The other mistake was to further fail to encapsulate the frame structure once it got into the Validation object. If you look at specifications 4, 5, 6, 7 and 8, all of them contain both the knowledge of which fields to validate and what "valid" means for each field. This led to many repetitions in the specs (adding a new field to the Frame class would require changes in five specs plus adding a new one - a real horror!) and exposed smells in the production code itself: a violation of the Single Responsibility Principle (Validation decides both which fields to validate and what the validations mean, making it bear two responsibilities) and redundancy (note that Age and Speed are validated the same way - the specification in my previous post did not assign a validation rule to fields, but to types, and I failed to eliminate the duplication between validations of two fields of the same type). And this is only for three fields! To give you a feel of the maintenance horror I mentioned, let's imagine how the truth table would look if we had ten fields (V = the spec sets the field to a valid value, X = to an invalid one):

          Spec 1  Spec 2  Spec 3  Spec 4  Spec 5  Spec 6  Spec 7  Spec 8  Spec 9  Spec 10  Spec 11
Field 1   V       X       V       V       V       V       V       V       V       V        V
Field 2   V       V       X       V       V       V       V       V       V       V        V
Field 3   V       V       V       X       V       V       V       V       V       V        V
Field 4   V       V       V       V       X       V       V       V       V       V        V
Field 5   V       V       V       V       V       X       V       V       V       V        V
Field 6   V       V       V       V       V       V       X       V       V       V        V
Field 7   V       V       V       V       V       V       V       X       V       V        V
Field 8   V       V       V       V       V       V       V       V       X       V        V
Field 9   V       V       V       V       V       V       V       V       V       X        V
Field 10  V       V       V       V       V       V       V       V       V       V        X

You get the issue, right? This code should be refactored, but I'm not going to do it, because I want to show you the correct solution next time. In the meantime, you can try to refactor it yourself and see what you end up with. As always, comments are welcome.

But until then... see ya!

Saturday 22 September 2012

A kata challenge to try your TDD skills on

Although the idea of doing code katas to practice TDD is quite popular, I couldn't get on this train for a long time. The reason was that I was striving to find a kata that would help me practice or demonstrate mock-based TDD, while at the same time being simple and showing all the testing categories, as well as listening to tests in practice.

This was until I ran into a situation that, with a few simplifications, could be turned into a compelling kata to show off the power of mocks and Need-Driven Development.

I've decided to present the kata in a series of posts. This one is going to be about the challenge only.

The challenge

Imagine the following situation: you're creating a message processing module for a system that receives data in the form of frames from the network. The frame parsing is done using a proprietary third-party library and your module receives the frames as deserialized objects. As the class for the objects is stored in the third-party library, you cannot change it (e.g. add or remove methods or change any signature).

There is only one type of frame, described below:

public class Frame
{
  public int Speed { get; set; }
  public int Age { get; set; }
  public string Sender { get; set; }
}

Your task is to implement the message processing. So far, it consists of validation and nothing more. Each field of each incoming frame is validated. The following table describes the validation rules:

Type of field  Correctness criteria    When satisfied  When not satisfied
int            greater than 0          do nothing      throw exception
string         not null and not empty  do nothing      throw exception

Of course, you have to do this test-first.

Two more posts to come - one for failure, one for success

I believe that failure and success are both great opportunities to learn something valuable. So in the next post, I'll try to show you how to screw up :-) and what we can learn from the failure by listening to the tests.

In the meantime, you can try to solve the kata yourself (coming up with a solution should be trivial) and share your observations.

Good night, everyone!

Saturday 15 September 2012

Does TDD lead to explosion of unnecessary abstractions and bad readability?

Thanks go to Grzegorz Sławecki for inspiration.

Exemplary problem

Imagine we're in a different reality, where tests are written after the code. We have written the following code (by the way, it's not perfect as an example of using WCF, but that's out of the scope of this post):

public class SendingFromBuffer
{
  Buffer _buffer;

  public SendingFromBuffer(Buffer buffer)
  {
    _buffer = buffer;
  }

  public void Perform()
  {
    var destination = CreateDestination();
    while (_buffer.HasPackets())
    {
      destination.Send(_buffer.GetPacket());
    }
  }

  public Destination CreateDestination()
  {
    var binding = new BasicHttpBinding();
    var endpoint = new EndpointAddress(
      "http://localhost/MyService/Something");
    var channelFactory = new ChannelFactory<Destination>(
      binding, endpoint);
    return channelFactory.CreateChannel();
  }
}

And now we would like to write a unit test for it. In particular - for the sending logic.

Actually, we could do it easily if not for the result of the CreateDestination() method. It's a WCF channel, so sending anything through it without actually connecting to a server yields an exception. To execute the code as is, we would need to set up a server and use the operating system's networking capabilities - and (remember!) we want to be isolated from the operating system as much as we can during a unit test. Setting up a WCF server also impacts unit test performance, making tests slower.

As the code wasn't written test-first, it's already untestable, so we'll have to refactor it (wow, and imagine it was created just about an hour ago!) to isolate ourselves from the WCF channel. Thankfully, we have the channel creation encapsulated in its own method, so it's very easy to extract it to a separate class and use dependency injection to inject either a real object or a mock into the SendingFromBuffer class.

After the refactoring, the code looks like this:

public class SendingFromBuffer
{
  Buffer _buffer;
  DestinationFactory _destinationFactory;

  public SendingFromBuffer(
    Buffer buffer, DestinationFactory factory)
  {
    _buffer = buffer;
    _destinationFactory = factory;
  }

  public void Perform()
  {
    var destination = _destinationFactory.Create();
    while (_buffer.HasPackets())
    {
      destination.Send(_buffer.GetPacket());
    }
  }
}

public interface DestinationFactory
{
  Destination Create();
}
  
public class WcfDestinationFactory : DestinationFactory
{
  public Destination Create()
  {
    var binding = new BasicHttpBinding();
    var endpoint = new EndpointAddress(
      "http://localhost/MyService/Something");
    var channelFactory = new ChannelFactory<Destination>(
      binding, endpoint);
    return channelFactory.CreateChannel();
  }
}

Note that we have introduced a factory object with one method that has just a few lines. This raises the following two questions:

  1. Aren't we introducing an unnecessary abstraction that bloats the design?
  2. Aren't we hurting readability with this additional object?

Let's take those questions on one by one.

1. Aren't we introducing an unnecessary abstraction that bloats the design?

Ok, think with me - is this factory really necessary? I mean, it's just a few lines of code! Why the heck would we want to delegate that to a separate class?

Well, we do and I'll tell you why. It's because the reason for creating classes is not to move lines of code. You see, the goal of introducing all this object oriented stuff into programming languages in the first place was to enable us to do the following:

  1. encapsulate something
  2. loosen coupling
  3. attain better cohesion
  4. get rid of redundancy
  5. make the class easier for testing (ok, this one came later, but it's equally important)
  6. enhance readability
  7. provide a better chance for reuse

Do you see "lines of code" anywhere on this list? No? Good, because neither do I :-). Well, maybe in the readability part, but we'll deal with that later. Sadly, there's a common myth that many junior programmers repeat endlessly - that the number of lines of code is a criterion for something to be sealed inside a class or a method. The truth is, one can even introduce a class that doesn't have ANY executable code (many Null Objects are implemented this way).
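For instance, a minimal Null Object could look like this (the Logger names are made up for illustration):

```csharp
// Hypothetical Logger abstraction, made up for illustration.
public interface Logger
{
  void Write(string message);
}

// A Null Object: not a single line of executable logic, yet a
// perfectly legitimate class - callers can always call Write()
// without checking for null first.
public class NullLogger : Logger
{
  public void Write(string message)
  {
    // intentionally empty - "do nothing" IS the behavior
  }
}
```

Zero executable lines, yet the class earns its keep on design grounds: it removes null checks from every caller.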

I mean, if we really don't care about the things outlined above, we can just as well write everything in procedural style - who needs all these objects, classes, interfaces and stuff? No one, because they're a design tool, not an implementation tool. And as "lines of code" is not a design quality, it has nothing to do with design.

So, let's evaluate the decision to introduce a factory in the context of real design qualities described above, not those made up, like "lines of code":

Encapsulation

Introducing a factory encapsulates the knowledge of which library we're actually using for communication. By doing this, we can get rid of another "tiny bit of code" that impacts this class significantly:

using System.ServiceModel;

You see, not having this line inside our class file frees us from dragging a heavy dependency around every time we want to reuse the class. This is a great benefit, since getting rid of such heavy dependencies enhances reuse.

Coupling

By decoupling the class from WCF, we open it for extensions - we can put in another factory that creates different implementation using another remote procedure call mechanism. We gain reuse and extensibility in return.

Cohesion

By moving the knowledge about creating the communication channel away from the SendingFromBuffer class, we make it less prone to change, because any change in the way communication occurs does not impact this code. Cohesive classes make finding defects and issues very easy. E.g. when the app fails on a WCF connection, we've got a very limited area to search for problems. We don't have to wonder whether code fulfilling another responsibility in the same class is somehow affected by the WCF channel creation.

Why do I think I have the right to say that the WCF part is a "separate responsibility"? Because there's a strict boundary between this code and the rest of the class. Just look at the table below:

Question                                      WCF part  The rest of the class
Need to unit test?                            No *      Yes
Part of application domain logic?             No        Yes
Can be changed to allow easier unit testing?  No        Yes

* - well, I'd probably write a unit test to specify what parameters should be set on the channel after creation and that the created instance is not null, but I wouldn't try to send anything through it.

So as you see, these two aspects of the class are different and hence should be separated.

Redundancy

Let's assume that this isn't the only place where we're opening up a connection and sending a message through it. In such a case, without the factory, we'd have to introduce a method for creating the channel in every class that needs it. Every time the logic for creating a channel changed, we'd have to make corrections in all of these methods. This is asking for trouble.

Testability

Using dependency injection and a factory opens the class up for unit testing - which we wouldn't be able to do otherwise. Of course, we could make the creation method virtual and override it in a subclass with a stub implementation, but is that really more straightforward or less effort than the factory solution? By the way, TDD IS about design and analysis, so when a test tells you that you need an additional abstraction, then you do. That's what you write the tests for - to discover them.
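As a sketch of what such a unit test could look like (assuming Buffer and Destination are interfaces NSubstitute can substitute, and a hypothetical Packet type for the payload):

```csharp
[Fact] public void
ShouldSendEachBufferedPacketThroughCreatedDestination()
{
  //GIVEN
  var buffer = Substitute.For<Buffer>();
  var destination = Substitute.For<Destination>();
  var factory = Substitute.For<DestinationFactory>();
  factory.Create().Returns(destination);
  buffer.HasPackets().Returns(true, false); // one packet, then stop
  var packet = Substitute.For<Packet>();
  buffer.GetPacket().Returns(packet);

  //WHEN
  var sending = new SendingFromBuffer(buffer, factory);
  sending.Perform();

  //THEN
  destination.Received(1).Send(packet);
}
```

No server, no networking, no operating system involvement - the factory mock simply hands the loop a substitute destination we can verify against.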

Readability

We'll take care of it in a second.

Reuse

This is a result of all the other points. A class that is easy to reuse encapsulates its responsibility well, does not have accidental coupling, is very cohesive, does not include redundant aspects, is documented with a set of simple and well-written unit tests that describe all its behaviors, and is readable.

Ok, that pretty much represents my thinking on the issue. The only thing left is question number 2:

2. Aren't we hurting readability with this additional object?

How do you define readability? Many people mistakenly think that readability is about looking at the code and being able to tell everything that happens from it - such code would make everything it does explicit. If so, then it would be best not to partition the code in any way, but just use a single function or method that does all the stuff. Thinking this way leads us to the conclusion that every time we introduce a method or a variable, we hurt readability, because things become less explicit. E.g. this:

int a = pow(2,2);

is less explicit than:

var a = 2*2;

And yet, we'd rather use the first form than the second, because it's more intention-revealing (although the second does a better job of telling what actually happens).

So as you might have already guessed - for me, the main trait of readability is not explicitness. It's revealing intentions.

Let's take a look at the two following pieces of code:

public void Process(Frame frame)
{
  validation.PerformFor(frame);
  compression.PerformOn(frame);
  encryption.PerformOn(frame);
}
...and:
public void Process(Frame frame)
{
  if(frame.Destination == null)
  {
    throw new Exception();
  }
  
  for(int i = 0 ; i < frame.NumberOfWords ; ++i)
  {
    var word = frame.Words[i];
    frame.Words[i] = word.Trim();
  }
  
  for(int i = 0 ; i < frame.NumberOfWords ; ++i)
  {
    var word = frame.Words[i];
    frame.Words[i] = word.Reverse();
  }
}

Which one is more explicit? The second one, of course. Which is more readable? I'd argue that the first one - not only does it make clear what steps are performed, it also makes a point about their order. This is crucial, because encrypting a compressed content would give you a different result than compressing an encrypted content. If we're sending the frame to any other system, it can probably interpret only one of these two options. That's why the order is probably something that is enforced by requirements and it's great that it's reflected in the code.

By the way, one could argue, that we could just do this:

public void Process(Frame frame)
{
  validate(frame);
  compress(frame);
  encrypt(frame);
}

That's true, although, when reading this code, I'd most probably jump into each of these methods to see what they do - after all, it's the same class and I'm trying to grasp it. This forces me to get my head around more details than I need to, because every time I'm reading this class, there's always "that other code" that's doing some fancy things and may modify fields that may be used somewhere else. For example, how do I know that compression does not set a flag somewhere to trigger different encryption in the encrypt() method depending on the compression ratio?

My brain tends to work differently when I see other responsibilities pushed into separate objects. Then, I focus only on the behavior of the class I'm reading, since the objects I'm delegating to cannot affect anything more than what I pass to them. I can quickly grasp the behaviors my class expects from its collaborators and instantly tell whether the responsibility of the class is fulfilled well or not. For example, I don't care what validation is - the only thing I know is that I need to call it and it will just "handle stuff".

As a side note - there are some people who hate design patterns and say that they hurt readability. While I agree that "pattern religion" is bad, I have actually seen many, many, many situations where using design patterns enhanced readability, because they would just "fit in" to the solution. E.g. when I want to add tracing of every method call made against an object, a decorator is an obvious choice. Then, looking at the decorator, one can instantly see that the intention of the code is to provide tracing over method calls - it doesn't have to be dug for.
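A sketch of such a tracing decorator (reusing the Destination idea from earlier in this post; the Packet type and trace messages are made up for illustration):

```csharp
using System;

// Assumed minimal versions of the types from earlier in this post.
public interface Packet { }
public interface Destination { void Send(Packet packet); }

// The decorator implements the same interface as the object it wraps,
// adding tracing around each call - callers don't notice the difference.
public class TracedDestination : Destination
{
  private readonly Destination _wrapped;

  public TracedDestination(Destination wrapped)
  {
    _wrapped = wrapped;
  }

  public void Send(Packet packet)
  {
    Console.WriteLine("Sending a packet...");
    _wrapped.Send(packet);
    Console.WriteLine("Packet sent.");
  }
}
```

Wherever a Destination is expected, a TracedDestination can be dropped in, e.g. new TracedDestination(factory.Create()).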

Ok, that's everything I have for today. Feel free to comment with your thoughts!

Saturday 8 September 2012

What is the scope of a "unit test" in TDD?

I often get asked the question: "What is the scope of a 'test' in TDD? Is it method scope, class scope, feature scope?"

Let's try to answer the question by examining some TDD "unit tests":

Is it class scope?

Let's see the first example and try to answer this question:

[Test] public void
ShouldThrowValidationExceptionWithFatalErrorLevelWhenPassedStringIsEmpty()
{
  var emptyString = string.Empty;
  var validation = new Validation();

  var exceptionThrown = Assert.Throws<CustomException>(
    () => validation.Perform(emptyString) 
  );
  
  Assert.IsTrue(exceptionThrown.IsError);
}

Ok, so let's see... how many real classes take part in this spec? Three: a string, an exception and the validation. So the class scope is not the most accurate description.

Or a method scope?

So, maybe the scope covers a single method, meaning a spec always exercises one method of a specified object?

Let's consider the following example:

[Test] public void 
ShouldBeFulfilledWhenEventOccursThreeTimes()
{
  var rule = new FullQueueRule();
  rule.Queued();
  rule.Queued();
  rule.Queued();
  Assert.IsTrue(rule.IsFulfilled);
}

Count with me: how many methods are called? Depending on how we count, it's two (Queued() and IsFulfilled) or four (Queued(), Queued(), Queued(), IsFulfilled). In any case, not one. So it's not method scope either.

It's the scope of class behavior!

The proper answer is: behavior! Each TDD test specifies a single behavior. Amir Kolsky and Scott Bain even teach that each TDD test should "introduce a behavioral distinction not existing before".

It may seem that "behavior" scope is actually broader than method or class scope, since such a test can span multiple classes and multiple methods. This is only partially true. That's because, e.g., tests with method scope can span multiple behaviors. Let's take a look at an example:

[Test] public void 
ShouldReportItCanHandleStringWithLengthOf3ButNotOf4AndNotNullString()
{
  var bufferSizeRule = new BufferSizeRule();
  Assert.IsTrue(bufferSizeRule.CanHandle("aaa"));
  Assert.IsFalse(bufferSizeRule.CanHandle("aaaa"));
  Assert.IsFalse(bufferSizeRule.CanHandle(null));
}

Note that it specifies three (or two - depending on how you count) behaviors: acceptance of a string of allowed size, refusal to handle a string above the allowed size, and the special case of a null string. By the way, this is an antipattern, sometimes called a "check-it-all test". The issue with this kind of test is that it can fail for at least two reasons: when the allowed string size changes and when null handling is done in another way.

Let's see how this works out with class-scope test:

[Test] public void
ShouldReportItIsStartedAndItDoesNotYetTransmitVoiceWhenItStarts()
{
  var call = new DigitalCall();
  call.Start();
 
  Assert.IsTrue(call.IsStarted);

  Assert.Throws<Exception>(
    () => call.Transmit(Any.InstanceOf<Frame>()));
}

Again, there are two behaviors here: reporting the call status after start and not being able to transmit frames after start. That's why this test should be split into two.
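Split into one behavior per test, the example above might look like this (same DigitalCall class as in the original spec):

```csharp
[Test] public void
ShouldReportItIsStartedWhenItStarts()
{
  var call = new DigitalCall();
  call.Start();

  Assert.IsTrue(call.IsStarted);
}

[Test] public void
ShouldNotAllowTransmittingVoiceWhenItStarts()
{
  var call = new DigitalCall();
  call.Start();

  Assert.Throws<Exception>(
    () => call.Transmit(Any.InstanceOf<Frame>()));
}
```

Now each test can fail for exactly one reason, and each name describes a single behavioral distinction.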

How can you catch yourself writing a test that specifies more than one behavior? First, take a look at the test name - if it looks strange and contains words like "And" or "Or", it may (but does not have to) be about more than one behavior. Another way is to write the description of the behavior in a Given-When-Then form. If you have more than one item in the "When" section - that's also a signal.

So, it's all about behaviors! Of course, if you examine the styles of some famous TDD authorities, most of them understand "behavior" and "scope" a little bit differently - so it's always good to read some books or blogs to get a grasp of the big picture.

See ya and have a good weekend!