Today, I'd like to tell you two stories, draw a conclusion, and then finish with a third story that I want to leave you to think about.
Story 1: How unit test failures can look like compile-time failures
Let me offer this as an introduction: ever heard of Continuous Testing for Visual Studio 2010? It's a nice little Visual Studio extension that automatically runs all unit tests after each build. No big deal, you might say, since adding a post-build action to a Visual Studio project gives you the same thing. That, however, is not the feature that made me think (and neither did its nice heuristics). It does another thing that caught my attention.
It adds failed unit tests to the Visual Studio compilation errors dialog.
So, each time a unit test fails, its result is displayed where you would expect compile-time errors to appear. Only this time, instead of "Wrong syntax xyz", the message is "Assert.IsTrue(xyz) failed".
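Just to illustrate, here is roughly what such a test might look like in MSTest (the class and names here are made up for this example, they are not from the extension's docs):

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    // A made-up class under test, purely for illustration.
    public static class Calculator
    {
        public static int Add(int a, int b)
        {
            return a + b;
        }
    }

    [TestClass]
    public class CalculatorTests
    {
        [TestMethod]
        public void Add_TwoPlusTwo_IsFour()
        {
            // If this assertion fails, the extension reports
            // "Assert.IsTrue failed." among the build errors,
            // right next to any messages from the compiler.
            Assert.IsTrue(Calculator.Add(2, 2) == 4);
        }
    }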
Story 2: How compile-time checking becomes less important with unit tests around
Remember the post Bruce Eckel wrote back in 2003, called "Strong Typing vs. Strong Testing"? Bruce made a bold statement then, claiming that he doesn't need compile-time type checking when he has a comprehensive suite of unit tests to back him up. That way he could move from Java to Python without worrying too much about the loss of static type checking.
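Eckel's examples were in Java and Python, but the same idea can be sketched even in C# terms using the dynamic keyword (this is my illustration, not his): once the compiler stops checking member access, only a test will catch the mistake.

    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class DescribeTests
    {
        // With 'dynamic', the compiler no longer verifies that Name
        // and Price exist on the argument; misspell one of them
        // (say, item.Prize) and the build still succeeds.
        static string Describe(dynamic item)
        {
            return item.Name + ": " + item.Price;
        }

        [TestMethod]
        public void Describe_IncludesNameAndPrice()
        {
            // It is this test, not the type checker, that now
            // guards against such a mistake.
            Assert.AreEqual("Tea: 3", Describe(new { Name = "Tea", Price = 3 }));
        }
    }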
The conclusion
What's the conclusion? Let's paraphrase these two stories a little before coming up with the third one. The first story is about how unit test output was "added" to the compiler toolchain. The second one is about how running unit tests replaced some of the benefits of compilation.
This raises the question: is there really such a sharp boundary between what we know as "compilation" and what we know as "testing"?
I wrote a compiler once. If we look at the phases of compilation, there are different kinds of checks involved: syntactic analysis, semantic analysis and so on. What if we added another compilation step called "behavioral analysis", which consists of running your suite of executable specifications (AKA unit tests)? Sure, this kind of analysis is up to you more than any other, since you define the criteria for failure and success, but when it comes to running it, it's just like any other compilation phase. As I mentioned, there are even tools that report the results of this "behavioral analysis" in the same place as syntax errors (like a misplaced comma) or semantic errors (like trying to use a type that does not exist), so that the two are indistinguishable.

And, in my opinion, this is how running unit tests should be treated: as part of the build process. When unit tests are disabled, you disable one kind of analysis that is run on your code. Do you care? Well, when developing a program, if you could disable type checking in, let's say, C++, would you go for it? Even for a compilation speed-up?
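To make the "behavioral analysis as a build phase" idea concrete, here is a minimal sketch of how one could wire it up by hand in a .csproj file. It assumes the VS 2010-era MSTest.exe runner is on the PATH and that the tests are compiled into this project's output assembly:

    <!-- Sketch only: run the test suite as one more analysis phase
         of every build. -->
    <Target Name="BehavioralAnalysis" AfterTargets="Build">
      <!-- Exec fails the build on a non-zero exit code, so a failing
           test breaks the build just like a compile error would. -->
      <Exec Command="MSTest.exe /testcontainer:&quot;$(TargetPath)&quot;" />
    </Target>

Disable (or skip) this target and you have switched off the behavioral analysis, which is exactly the move the third story is about.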
Ok, then, now that I've drawn my conclusions, it's time for the third story.
Story 3
Someone comes over to your desk and tells you: "We couldn't get a successful build because of failing unit tests, so we disabled them."
Good night everyone and sweet dreams!