Tuesday, 7 May 2013

When a 'test' becomes something else

(this post is adapted from my work-in-progress open source TDD tutorial)

Everyday life example

I studied in Łódź, a large city in the center of Poland. Like students in probably every other country, we had lectures, exercises and exams. The exams were pretty hard, especially considering that my computer science group was part of the faculty of electronic and electrical engineering, so we had to take a lot of classes that had nothing to do with programming, for instance electrotechnics, solid-state physics or electronic and electrical metrology.

Knowing that the exams were difficult and that it was hard to learn everything during preparation, the lecturers would give us sample exams from previous years. The questions were different from those on the actual exams, but the structure and types of questions asked (practice vs. theory etc.) were similar. We would usually get these sample questions before we started studying really hard (which was usually at the end of the semester). Guess what happened then? As you might suspect, we did not use the tests we received just to 'verify' or 'check' our knowledge after we finished learning. Those tests were actually the first thing we went to before we even started learning. Why was that so? What use were the tests when we knew we wouldn't know most of the answers?

I guess my lecturers would disagree with me, but I find it quite amusing that what we were really doing back then was 'Lean'. Lean is an approach where, among other things, there's a rigorous emphasis on eliminating waste. Every feature or product that's produced without being needed by anyone is considered waste. That's because if something's not needed, there's no reason to assume it will ever be needed. In such a case the entire feature or product is waste - it has no value. Even if it IS ever needed, it will require some rework anyway to fit the real customer needs. In that case, the initial work that went into the parts of the solution that had to be replaced by other parts anyway is waste - it never brought any money (I'm not talking about such things as customer demos - their value lies elsewhere).

So, in order to eliminate waste, there's huge pressure nowadays to "pull features from demand" instead of "pushing them" into the product "for the future". In other words, every feature is there to satisfy a concrete need. If not, the effort is considered wasted and the money thrown away.

Going back to the exams, why can the approach of looking through the sample tests first be considered 'lean'? Because when we treat passing an exam as our goal, everything that does not bring us closer to this goal is waste. Let's suppose the exam is theory only - why then practice the exercises? It would probably pay off a lot more to study the theoretical side of the topics. Such knowledge could be obtained from those sample tests. So the tests were a kind of specification of what was needed to pass the exam, letting us pull the value (i.e. our knowledge) from demand (the information obtained from realistic tests) rather than push it from implementation (i.e. learning everything in a course book chapter after chapter).

So the tests became something else. They proved very valuable before the 'implementation' (i.e. studying for the exam) because:

  1. they helped us focus on what was needed to reach our goal
  2. they drew our attention away from what was not needed to reach our goal

That was the value of a test before learning. Note that the tests we would usually receive were not exactly what we'd encounter at the time of the exam, so we still had to guess. Still, the role of a test as a specification of a need was already visible.

Taking It to the Land of Software Development

I chose this lengthy metaphor to show you that a 'test' is really another way of specifying a requirement or a need, and that this is not counter-intuitive - it occurs in our everyday lives. The same is true in software development. Let's take the following 'test' and see what kind of needs it specifies:

var reporting = new ReportingFeature();
var anyPowerUser = Any.Of(Users.Admin, Users.Auditor);
Assert.True(reporting.CanBePerformedBy(anyPowerUser));

(In this example, we used the Any.Of() method, which returns any one of the enumeration values from the specified list. Here, we say "give me a value that's either Users.Admin or Users.Auditor".)
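In case you're curious how such a helper could work, here's a minimal sketch of one possible Any.Of() implementation. It's not the code of any particular library - just one way of getting "any of the listed values":

using System;

// A simplified sketch of an Any.Of() helper - it just picks
// one of the supplied values at random.
public static class Any
{
    private static readonly Random _random = new Random();

    public static T Of<T>(params T[] values)
    {
        return values[_random.Next(values.Length)];
    }
}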

Let's look at those (only!) three lines of code, imagining that the production code that makes this 'test' pass does not exist yet. What can we learn from these three lines about what the code needs to supply? Count with me:

  1. We need a reporting feature
  2. We need to support a notion of users and privileges
  3. We need to support a domain concept of power user, who is either an administrator or an auditor
  4. Power users will need to be privileged to use the reporting feature (note that it does not specify which other users should or should not be able to use this feature - we'd need a separate 'test' for that).

Also, we are already past the phase of designing an API that will fulfill the need. Don't you think that's quite a lot of information about the application from just three lines of code?
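By the way, a production code skeleton satisfying these needs might look more or less like the sketch below. Treat it only as an illustration - the Users enumeration and the ReportingFeature class come straight from the 'test' above, the rest is just my guess at one possible shape:

// One possible shape of the production code specified by the three lines above
// (only a sketch - a real system would likely model users and privileges differently).
public enum Users
{
    Admin,
    Auditor
}

public class ReportingFeature
{
    public bool CanBePerformedBy(Users user)
    {
        // Power users (administrators and auditors) are privileged
        // to use the reporting feature.
        return user == Users.Admin || user == Users.Auditor;
    }
}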

A Specification Instead of a Test Suite

I hope you can see now that what we called 'a test' is really a kind of specification. This "discovery" is quite recent, so there isn't a settled terminology for it yet. Some like to call the process of using tests as specifications Specification By Example, to emphasize that the tests are really examples that help specify and clarify the behavior of the developed piece of functionality. The terminology is still not rock-solid, so you might encounter different names for different things. For example, a 'test' is often referred to as a 'spec', an 'example', a 'behavior description', a 'specification statement' or a 'fact about the developed system' (the xUnit.NET framework marks each 'test' with a [Fact] attribute, suggesting that by writing it, we're stating a single fact about the developed code. By the way, xUnit.NET also allows us to state 'theories' about our code, but let's leave that for now).
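To illustrate, this is roughly how the three lines from the previous section could look when stated as an xUnit.NET 'fact'. The class and method names here are made up for this example - the point is that they read like specification statements rather than 'test' names:

using Xunit;

public class ReportingFeatureSpecification
{
    [Fact] // a single fact we state about the reporting feature
    public void ShouldBePerformableByAnyPowerUser()
    {
        var reporting = new ReportingFeature();
        var anyPowerUser = Any.Of(Users.Admin, Users.Auditor);

        Assert.True(reporting.CanBePerformedBy(anyPowerUser));
    }
}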

From your experience, you may know paper or Word-document specifications, written in English or another natural language. Our specification differs from those in at least a few ways:

  1. it's written one statement at a time.
  2. it's executable - you can run it to see whether the code adheres to the specification or not.
  3. it's written in source code rather than in a natural language - which is both good (less room for misunderstanding - source code is the most formal and structured way of writing specifications) and bad (great care must be taken to keep such a specification readable).
