Saturday 14 July 2012

You may have been practicing test-first without knowing it. Congratulations!

This post is about seeing TDD in a larger picture. It was largely inspired by some materials by Ken Pugh on Acceptance Test Driven Development (although it does not exactly echo Ken's opinions).

What is a requirement?

Imagine you've got technical requirements, one of which looks like this:

All incoming e-mail messages directed to the recipient called emergency@my.mail.com shall be broadcast to all accounts in the my.mail.com domain

Here, the technical requirement defines a special emergency e-mail address that may be used as a kind of emergency notification mechanism (for example, when an error occurs on your website). What's usually expected from the development team is to take this requirement and implement it, so that the requirement is fulfilled. In other words, this is a request to perform a certain amount of work.
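Just to make this "amount of work" more tangible, here's a minimal sketch of what the implementation might end up looking like. All the names here (Message, Directory, Mailer, EmergencyBroadcastRule) are made up for the sake of this post, not taken from any real system:

```java
// Hypothetical sketch only - every name here is invented for illustration.
import java.util.List;

class Message {
    String recipient;
    // ... subject, body, etc. omitted

    Message(String recipient) { this.recipient = recipient; }
}

interface Directory {
    List<String> allAccountsIn(String domain); // every mailbox in a domain
}

interface Mailer {
    void deliver(Message message, String account);
}

class EmergencyBroadcastRule {
    private final Directory directory;
    private final Mailer mailer;

    EmergencyBroadcastRule(Directory directory, Mailer mailer) {
        this.directory = directory;
        this.mailer = mailer;
    }

    void apply(Message message) {
        // The technical requirement: anything sent to the special address
        // is delivered to every account in the my.mail.com domain.
        if ("emergency@my.mail.com".equals(message.recipient)) {
            for (String account : directory.allAccountsIn("my.mail.com")) {
                mailer.deliver(message, account);
            }
        }
    }
}
```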

But how do you know that this amount of work needs to be performed? How do you know that you need to state such a requirement and send it to the development team?

Let's stop for a minute and consider these questions. Thinking about it, we can arrive at the following hypothetical situations:

  1. A customer (be it a business representative or a real end-user) is familiar with another system where such a notification mechanism worked, and worked well. Coming to our system, he expects it to have similar functionality. However, in a situation where he expects an emergency to be communicated to all staff, nothing happens.
  2. A customer is familiar with our system and knows what it can and cannot do. He comes up with the idea that having such a feature would greatly increase the visibility of website errors, and recognizes that, as of now, the system does not implement it.
  3. A customer not familiar with our system reads the user manual and discovers that this kind of functionality, which he knows he'll need, is not described in there, so it looks like it does not exist.
  4. A customer is about to buy our system and, as one of the features he wants the system to support, he mentions an emergency notification solution, but the sales representative tells him that the system does not support this kind of functionality.

Test-first

We can imagine that one of the situations described above may have led to this new requirement being specified. But what are these situations, really? Let's try to rephrase them a little, adding a small comment in parentheses after each one that should clear the situation up a bit:

  1. A customer runs into a situation where he expects everybody to be notified of the emergency, but they're not notified (FAIL).
  2. A customer searches his memory expecting to find that our system implements the feature, but cannot, because it doesn't (FAIL).
  3. A customer expects the user manual to contain information that such a feature exists, but the manual does not mention it (FAIL).
  4. A customer expects the sales representative to tell him that their system supports the desired feature, but the representative tells him otherwise (FAIL).

Yes, you've guessed it - these are tests! Well, maybe not JUnit/NUnit/RSpec/whateverUnit tests, but they're tests anyway. What's more, they're tests that fail, which makes us "repair" this "failure" by providing a technical requirement and then implementing it.
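By the way, at least the first of these could, in some cases, even be written down as an automated test. Here's a minimal JUnit-style sketch of what that might look like - the EmergencySystem placeholder is entirely made up for illustration and deliberately does nothing yet, so the assertion fails, just like the customer's expectation:

```java
// Hypothetical acceptance-test sketch - EmergencySystem is an invented
// placeholder, not part of any real system.
import static org.junit.Assert.assertTrue;

import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

import org.junit.Test;

public class EmergencyNotificationAcceptanceTest {

    // Placeholder for the system under test - the feature does not exist yet,
    // so reporting an emergency notifies nobody and the assertion fails.
    static class EmergencySystem {
        private final List<String> staff = Arrays.asList("alice", "bob", "carol");
        private final Set<String> notified = new HashSet<>();

        List<String> staff() { return staff; }
        Set<String> notified() { return notified; }

        void reportEmergency(String description) {
            // not implemented yet - this is exactly the customer's FAIL
        }
    }

    @Test
    public void everyStaffMemberIsNotifiedWhenAnEmergencyIsReported() {
        EmergencySystem system = new EmergencySystem();

        system.reportEmergency("the website is down");

        for (String member : system.staff()) {
            assertTrue(member + " should have been notified",
                       system.notified().contains(member));
        }
    }
}
```

Notice that the sketch talks about staff being notified, not about e-mail addresses - we'll come back to that in a moment.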

What got lost?

Also, note that the tests tell nothing about e-mail and the broadcast address - they tell us that our customer expects everybody to be notified of an emergency situation. The e-mail part is what was added in the requirement. At the same time, another piece of information was dropped when translating the test into a technical requirement - the user's intention.

That's because this requirement describes HOW the user's expectation should be satisfied, not the expectation itself.

The sad part is that, although these tests contain some information that gets "lost in translation" to technical requirements, usually such tests are never even stored, not to mention automated (which, in some cases, could be possible).

There are two important lessons to learn from this story:

  1. It's a pity that we don't value the tests that come before requirements.
  2. It's a pity that we can't see that tests come first. If we could, it would let us understand better why unit-level Test Driven Development makes perfect sense - it's the same process of specifying what we expect to have, seeing it not fulfilled, and then providing it (see the small sketch after this list).
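To see that the shape really is the same at the unit level, here's what the cycle might look like for the hypothetical EmergencyBroadcastRule sketched earlier: the test below is written before the rule, watched fail, and only then is the rule filled in (Java 8 lambdas stand in as simple test doubles):

```java
// Hypothetical unit-test sketch, assuming the invented Message, Directory,
// Mailer and EmergencyBroadcastRule types from earlier in this post.
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class EmergencyBroadcastRuleTest {

    @Test
    public void broadcastsEmergencyMailToEveryAccountInTheDomain() {
        // Specify what we expect to have...
        List<String> delivered = new ArrayList<>();
        Directory directory = domain -> Arrays.asList("a@my.mail.com", "b@my.mail.com");
        Mailer mailer = (message, account) -> delivered.add(account);
        EmergencyBroadcastRule rule = new EmergencyBroadcastRule(directory, mailer);

        // ...exercise the behaviour...
        rule.apply(new Message("emergency@my.mail.com"));

        // ...and see it not fulfilled until the rule is actually provided.
        assertEquals(Arrays.asList("a@my.mail.com", "b@my.mail.com"), delivered);
    }
}
```

The expectation is written down first, the failure makes the missing behaviour visible, and only then is it provided - exactly what the customer was doing with his expectations, one level up.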

Ok, that's it for today. Have fun!
