Notes from episode 2
Notes from episode 3/4/5
Notes from episode 6
I just finished watching episode 1 of Uncle Bob and Sandro Mancuso's London vs Chicago Comparative Design Case Study and enjoyed it. I really recommend this case study to anyone who wants to learn more about TDD.
Episode 1 was about Sandro starting to write a Twitter-like application and its first tests. Sandro claims to use the London style of TDD, which is heavily based on mocking and an outside-in approach. The school originated (AFAIK) from the Growing Object Oriented Software Guided By Tests book. Because of that, and because I prefer this approach to design as well, I thought it could be interesting to write down, even if only for the "future me", some notes on the differences I think I can see between what Sandro did, what GOOS would do, and what I would do. Of course, this is based only on episode 1, so some things may change in the next episodes.
Disclaimer
- This is a set of notes, not a full-blown article, so its structure might be a bit dense and unpolished. I welcome discussion and requests to clarify stuff. If you haven't seen the episode, there is a high chance that some of what I write will not make sense to you.
- This post is not a critique of Sandro's style. I am not claiming that I am better at TDD than Sandro or better at this particular style of TDD. While I may sometimes sound as if I find some ideas silly, it's just that I am showing this stuff from the perspective of my biases and default choices (which is why I sometimes describe my feelings, not only rational judgement). I am by no means suggesting that my choices are/would be better. I am sharing this because I think that sharing thoughts (and feelings) leads to interesting discussions. If you find me being disrespectful, please point it out so that I can correct it.
- While I point out some of the differences I think I see between GOOS and Sandro's TDD, it's not that I do EVERYthing like in GOOS. I am not including the differences between my TDD and GOOS TDD because I figured nobody would be interested in that :-).
- I am not an authority on what is and is not compliant with GOOS style. I just write my subjective opinion based on my observations that may be wrong. If you see me missing the point, please point it out so that I can correct it and learn from the exercise.
Differences between Sandro's TDD and GOOS:
- The GOOS worked example started with a single acceptance test, then some code was written (without unit tests) just to make this acceptance test pass, resulting in a walking skeleton. The next step would be to refactor the existing code, driven only by the acceptance test, and then introduce the first abstractions and places to insert mocks for future behaviors (a sketch of such a starting acceptance test follows this list). Sandro does two things differently:
- He has all the acceptance tests written up-front (Update: this was explained by Sandro as being an input condition for the whole exercise)
- He drives even the very first feature with unit tests already.
- GOOS advises writing the acceptance tests in a domain language. For that, it recommends building a testing DSL that isolates a test from implementation details such as the delivery mechanism (a sketch of such a DSL also follows this list). Sandro uses plain RestAssured library calls with (almost) no abstraction on top of them. This may be because Sandro's tests are not really end-to-end, but rather tests of the HTTP contract between the front-end and the back-end (Update: this was explained by Sandro as being an input condition for the whole exercise). Also, Sandro calls his tests "integration tests", a term that GOOS uses for a different kind of test, but that might not be an issue since, as I see it, "integration" describes the goal of a test rather than its level or kind.
- GOOS advises using a strict mocking framework (JMock), while Sandro uses a looser-by-default one (Mockito); the last sketch after this list contrasts the two. The use of a strict mocking framework in GOOS was not merely a choice dictated by the availability of tools at the time the book was written. On the GOOS mailing list, there is a comment by Steve Freeman saying that he considers strictness-by-default part of his style (I cannot find it now, though).
- GOOS advises mocking interfaces, while Sandro has so far only mocked classes (the same sketch after this list shows a role interface being mocked). As above, the choice to introduce and mock interfaces in GOOS was not dictated by the limitations of the tools of that time. It was a deliberate choice to think of interfaces as roles that objects play, and the GOOS authors say they are aggressive about refactoring interfaces to make them smaller and more abstract.
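A few sketches for the points above. First, a minimal sketch of the kind of single acceptance test a GOOS-style walking skeleton might start from. I am using RestAssured only because it appears in the episode; the endpoint, the JSON fields and the expected response are my assumptions, not something taken from the video.

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.notNullValue;

    import org.junit.Test;

    public class RegistrationAcceptanceTest {

        // The single end-to-end test a walking skeleton starts from. At this
        // point there are no unit tests yet - just enough production code,
        // wired end-to-end, to make this one test pass.
        @Test
        public void registersANewUser() {
            given()
                .body("{\"name\": \"anyName\", \"password\": \"anyPassword\", \"about\": \"anyAbout\"}")
            .when()
                .post("/users")
            .then()
                .statusCode(201)
                .body("id", notNullValue());
        }
    }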
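Second, a sketch of what a thin, domain-language layer on top of RestAssured could look like, so that the acceptance tests read in terms of the problem domain rather than the delivery mechanism. All the names here are hypothetical.

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.notNullValue;

    // A hypothetical driver hiding HTTP, JSON and RestAssured details from
    // the acceptance tests.
    public class ApplicationDriver {

        public void registerUser(String name, String password, String about) {
            given()
                .body(String.format(
                    "{\"name\": \"%s\", \"password\": \"%s\", \"about\": \"%s\"}",
                    name, password, about))
            .when()
                .post("/users")
            .then()
                .statusCode(201)
                .body("id", notNullValue());
        }
    }

A test written against such a driver then reads like application.registerUser("anyName", "anyPassword", "anyAbout"), with no mention of URLs, status codes or JSON.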
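Finally, a sketch contrasting JMock-style strict mocking of a role interface with Mockito's loose-by-default behaviour. The UserRegistration role and the shape of UserAPI below are my inventions, not names from the video; the point is only that in JMock any interaction that was not explicitly expected fails the test, and that JMock's default Mockery only imposterises interfaces, which nudges you towards the "interfaces as roles" style.

    import org.jmock.Expectations;
    import org.jmock.Mockery;
    import org.junit.Test;

    public class UserAPICollaborationTest {

        // A role expressed as a small, abstract interface (GOOS-style).
        public interface UserRegistration {
            void register(String name, String password, String about);
        }

        // A minimal object under test that talks to the role.
        public static class UserAPI {
            private final UserRegistration registration;

            public UserAPI(UserRegistration registration) {
                this.registration = registration;
            }

            public void register(String name, String password, String about) {
                registration.register(name, password, about);
            }
        }

        @Test
        public void delegatesRegistrationToTheRole() {
            Mockery context = new Mockery();
            final UserRegistration registration = context.mock(UserRegistration.class);

            // Strict by default: this is the ONLY interaction allowed on the
            // mock; any other call fails the test immediately.
            context.checking(new Expectations() {{
                oneOf(registration).register("anyName", "anyPassword", "anyAbout");
            }});

            new UserAPI(registration).register("anyName", "anyPassword", "anyAbout");

            context.assertIsSatisfied();
        }

        // The Mockito equivalent is loose by default: unstubbed calls return
        // default values and unexpected interactions pass silently unless you
        // add verify(...)/verifyNoMoreInteractions(...) yourself.
    }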
Now time for the differences between Sandro's TDD and my TDD:
- Sandro starts all his tests from a name and then an assertion. While this is one of my ways of starting a failing test, it's not the only one that I use. In my book, I describe 4 different approaches to starting with a failing test, starting from the assertion being one of them (a small illustration of that one follows this list).
- Sandro uses underscores in test names and lower camel case in production code method names (e.g. registers_a_new_user vs. registerNewUser). It's just a matter of preference, but I find it confusing; I find it better to stick to a single convention, which has the added value of simplifying the configuration of static analysis tools that enforce naming conventions and takes one dilemma away. Still, this is not a big deal, just a matter of style.
- Sandro tends to prefer realistic test data even where it is not needed or required. In the first test, where he passes data for user registration, he chooses a realistic name, a (semi-)realistic password, and an ID generated as a UUID and converted to a string. This is not needed at all, because the object under test does not handle any validation and, as we find out later, the only class that knows that the user ID is actually a UUID is the IdGenerator; everywhere else, any string would do. Personally, I use dummy data wherever validating it is not the responsibility of the object under test (see the controller test sketch after this list), because I find that this describes the contract between my class and its collaborators more precisely. Whenever I see specific/realistic data, I read it as if it were required by the tested object. Also, I am confused about one more thing: if I use another ID generator or change the implementation of the existing one, should I change the "realistic" data in the tests where it doesn't matter?
- Also, I wouldn't establish a relationship between test data fragments where one is clearly not required. For example, Sandro creates a registration object with "name", "password" and "about" fields and sets up a mock of a service object to return a User object with the same three values. Personally, I don't see why. The responsibility for establishing the relationship between input and output values is clearly pushed to another class that is mocked (a service class), so in my mind this leaks the responsibility of another class into the tests of a class that clearly does not care (the controller test sketch after this list shows the alternative). And as I understand the GOOS compositional approach to design and its principle of context independence, a class and its tests should not assume the context in which the class is used.
- Sandro moves parts of the code to a setup method and parts of the data to fields of the test class, arguing that this is cleaner. I wrote a blog post long ago explaining why I don't like this idea and why I find such tests harder to read and maintain with the "Testcase class per class" organization approach (the contrast is sketched after this list). In the video, the setup method is called "initialise", which I find a meaningless name and usually read as "a bag for everything I don't want to look at". Thus I consider it a smell, and I also feel it takes away a bit of the pressure on tests to improve the design. When I see some logic that I strongly feel could be moved to a setup method and fields, this leads me to search for a way to improve the design; setup methods hide these issues from me. I understand that, most probably, Sandro has a way of compensating for all the issues I described, and maybe even making them irrelevant.
- The method of the UserAPI class that Sandro test-drives clearly violates the principle of command-query separation (it is a command that also returns a value). This is often what frameworks require us to do, unfortunately. Sandro chooses to push this violation further into the application/domain layer (specifically, into the UserService class). Personally, I tend to cut this kind of violation off at the controller level using the Collecting Parameter pattern, and from then on I only deal with commands, which I find makes the rest of my tests easier (see the Collecting Parameter sketch after this list).
- In the controller test, Sandro includes parsing in the scope of the tested class. I would probably push the parsing responsibility to a collaborator (sketched after this list). I can see that Sandro can succeed with his approach for a simple, non-nested JSON with three fields, but I wonder whether his approach would stay the same if it were a bigger structure with several nesting levels, optional sections, etc.
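Below are a few sketches for the points above. First, a small illustration of the "start from the assertion" way of arriving at a failing test; the names are made up and the numbered comments mark the order in which the lines were written.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class IdGeneratorTest {

        @Test
        public void generatesIdInStringForm() {
            // 3. ...and finally whatever setup is needed to obtain that value.
            IdGenerator idGenerator = new IdGenerator();

            // 2. ...then the call that produces the asserted value...
            String id = idGenerator.nextId();

            // 1. The assertion is written first...
            assertEquals(36, id.length()); // a UUID rendered as a string

        }

        // Minimal production code, sketched only so that the example compiles.
        static class IdGenerator {
            String nextId() {
                return java.util.UUID.randomUUID().toString();
            }
        }
    }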
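Next, a sketch of how a controller test might look with dummy data and with no relationship between the registration passed in and the user returned by the stubbed service. UserAPI, UserService and the field names reflect my understanding of the episode, but the method signatures and the RegistrationData/User shapes are my assumptions.

    import static org.junit.Assert.assertSame;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class UserAPITest {

        // Hypothetical shapes - just enough to show the idea.
        static class RegistrationData {
            final String name, password, about;
            RegistrationData(String name, String password, String about) {
                this.name = name; this.password = password; this.about = about;
            }
        }

        static class User {
            final String id, name, about;
            User(String id, String name, String about) {
                this.id = id; this.name = name; this.about = about;
            }
        }

        interface UserService {
            User createUser(RegistrationData registration);
        }

        static class UserAPI {
            private final UserService userService;
            UserAPI(UserService userService) { this.userService = userService; }
            User register(RegistrationData registration) {
                return userService.createUser(registration);
            }
        }

        @Test
        public void returnsWhateverTheServiceCreates() {
            // Dummy values everywhere - nothing in UserAPI validates or
            // interprets them, so "anyName"/"anyPassword"/"anyAbout" describe
            // the contract precisely: any string will do.
            RegistrationData anyRegistration =
                new RegistrationData("anyName", "anyPassword", "anyAbout");
            // Deliberately unrelated to the registration above - producing a
            // user from the registration is the service's job, not this class's.
            User anyUser = new User("anyId", "unrelatedName", "unrelatedAbout");

            UserService userService = mock(UserService.class);
            when(userService.createUser(anyRegistration)).thenReturn(anyUser);

            User result = new UserAPI(userService).register(anyRegistration);

            assertSame(anyUser, result);
        }
    }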
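For the point about setup methods, a sketch of the contrast. The first shape is roughly what an initialise()/@Before-based test class looks like; the second is the shape I tend to prefer, where each test can be read top to bottom without jumping to fields. The minimal UserService and UserAPI declarations at the bottom are there only so the sketch stands on its own.

    import static org.mockito.Mockito.mock;

    import org.junit.Before;
    import org.junit.Test;

    // Shape 1: state pushed into fields and a setup method - each test
    // depends on context defined elsewhere in the class.
    class UserAPITestWithSetup {
        private UserService userService;
        private UserAPI api;

        @Before
        public void initialise() {
            userService = mock(UserService.class);
            api = new UserAPI(userService);
        }

        @Test
        public void delegatesRegistrationToService() {
            // ...uses 'api' and 'userService' defined far away from here.
        }
    }

    // Shape 2: each test creates exactly what it needs, where it needs it.
    class UserAPITestWithoutSetup {
        @Test
        public void delegatesRegistrationToService() {
            UserService userService = mock(UserService.class);
            UserAPI api = new UserAPI(userService);
            // ...the whole context of the test is visible in the test itself.
        }
    }

    // Minimal declarations so the sketch compiles on its own.
    interface UserService { }
    class UserAPI { UserAPI(UserService userService) { } }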
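The Collecting Parameter idea from the command-query separation point, sketched below. All of the names are illustrative and this is not code from the video; the shape is: the controller hands the use case an object to report its outcome into, the use case stays a pure command (it returns nothing), and only the controller converts the collected outcome into whatever the web framework requires.

    // The collecting parameter's role: the use case reports its outcome here.
    interface RegistrationResult {
        void userRegistered(String userId);
        void nameAlreadyTaken(String name);
    }

    // The use case as a pure command - it returns nothing.
    interface RegisterUser {
        void execute(String name, String password, String about, RegistrationResult result);
    }

    // A web-flavoured implementation of the collecting parameter, owned by the
    // controller side; it translates domain outcomes into HTTP concepts.
    class HttpRegistrationResult implements RegistrationResult {
        private int status = 500;
        private String body = "";

        @Override
        public void userRegistered(String userId) {
            status = 201;
            body = userId;
        }

        @Override
        public void nameAlreadyTaken(String name) {
            status = 409;
            body = name + " is already taken";
        }

        int status() { return status; }
        String body() { return body; }
    }

    // The controller is the only place that still has to return a value
    // (because the framework demands it); everything beneath it deals with
    // commands only.
    class UserController {
        private final RegisterUser registerUser;

        UserController(RegisterUser registerUser) {
            this.registerUser = registerUser;
        }

        String register(String name, String password, String about) {
            HttpRegistrationResult result = new HttpRegistrationResult();
            registerUser.execute(name, password, about, result); // a command - no return value
            return result.status() + " " + result.body();        // framework-facing conversion stays here
        }
    }

The tests of RegisterUser implementations then only need to verify which method of the collecting parameter was called, which keeps everything below the controller in command-only territory.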
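And for the parsing point, a sketch of pushing the parsing responsibility to a collaborator, so that the controller test can stub the parser and the shape of the JSON (nesting, optional sections and so on) becomes a separately tested concern. Again, the names and signatures are my assumptions.

    // A collaborator responsible only for turning the request body into a
    // domain value.
    interface RegistrationParser {
        RegistrationData parse(String requestBody);
    }

    class RegistrationData {
        final String name, password, about;

        RegistrationData(String name, String password, String about) {
            this.name = name;
            this.password = password;
            this.about = about;
        }
    }

    interface UserService {
        String createUser(RegistrationData registration);
    }

    // The controller no longer knows anything about JSON; its tests can stub
    // the parser with a dummy RegistrationData and never mention the wire format.
    class UserAPI {
        private final RegistrationParser parser;
        private final UserService userService;

        UserAPI(RegistrationParser parser, UserService userService) {
            this.parser = parser;
            this.userService = userService;
        }

        String register(String requestBody) {
            RegistrationData registration = parser.parse(requestBody);
            return userService.createUser(registration);
        }
    }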
2 comments:
Could you provide an example of "Personally, I tend to cut this kind of violation off at the controller level using the Collecting Parameter pattern, and from then on I only deal with commands, which I find makes the rest of my tests easier"? How would you use the Collecting Parameter pattern here? I believe he returns the created user in the case you are referring to, correct?
Hi, Anonymous.
You can see an example of me using a collecting parameter in the following file: https://github.com/grzesiek-galezowski/TrainingExamples/blob/master/CSharp/VotingSystem/Bootstrap/Controllers/UsersController.cs in the RegisterUser() method.
Yes, I think he is returning a created user.