Hard Lessons Learned About Test Driven Development (TDD)
Michael K. Campbell shares his experiences with Test Driven Development and explains best practices for approaching this development methodology.
July 11, 2013
There's something to be said for lessons learned the hard way. Although it's best to avoid learning things the hard way, those lessons tend to be the ones that stick with you the most. To that end, I'm sharing a few lessons that I've sadly learned over the past year relating to Test Driven Development (TDD).
Lessons Learned the Easy Way
I have to put things in perspective before I get started. Automated testing has always made sense to me, and I've been doing some form of it for years now. As such, the lessons I've learned the hard way about TDD really come down to how committed I was to TDD, or Red-Green-Refactor (RGR), as a religion. Consequently, I'm sharing what's more or less a conversion story -- or how I came to see the error of my ways and learned that shortcutting RGR really isn't worth it.
Stated differently, I initially took the approach of bypassing RGR and writing my tests after the fact, which I'm guessing is something many developers do. For me, this approach didn't require much change to how I coded and provided some immediate and excellent benefits. In particular, writing tests after my code let me lock in my code and made me aware of any breaking changes from then on. Although the benefits of this approach are tangible, I've recently come to see just how much I was selling myself short -- without realizing it.
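To illustrate, here's a minimal sketch of the kind of after-the-fact "lock-in" test I'm describing, written with NUnit (the InvoiceCalculator class and its numbers are hypothetical, invented purely for illustration):

using NUnit.Framework;

// Hypothetical class under test -- invented purely for illustration.
public class InvoiceCalculator
{
    public decimal Total(decimal[] lineItems)
    {
        decimal sum = 0m;
        foreach (decimal item in lineItems)
        {
            sum += item;
        }
        return sum;
    }
}

[TestFixture]
public class InvoiceCalculatorTests
{
    // Written after the fact: this doesn't drive the design, it simply
    // pins down today's observed behavior so breaking changes fail loudly.
    [Test]
    public void Total_WithKnownLineItems_LocksInCurrentBehavior()
    {
        var calculator = new InvoiceCalculator();
        Assert.AreEqual(14.50m, calculator.Total(new[] { 10.00m, 4.50m }));
    }
}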
Lessons Learned the Hard Way
For the past year I've been working on a complex, non-trivial solution that involves lots of distributed components operating across several different platforms, hosts, and systems. After fighting more than a few times with ugly bugs that kept cropping up in unexpected ways when all of these systems were interacting with each other, I decided that I needed a way to avoid the time-consuming, difficult-to-diagnose bugs that were surfacing during initial integration testing. In my mind, that meant it was time for me to finally step up to the plate and implement a lot more tests.
So I made another critical mistake and decided to start wrapping way more of my existing code in tests written after the fact. To put this into perspective, I went from having about 20 to 30 percent code coverage to spending a long time banging away on what I thought were unit tests, which got me up to 80 to 100 percent coverage of large swaths of existing code. In fact, I put integration testing completely on hold as I backtracked and spent a ton of effort explicitly trying to achieve close to 100 percent coverage across all of my existing code.
That's where I learned the following lessons -- the hard way:
100 Percent Code Coverage Doesn't Equate to Bug-Free Code. I've now come to terms with the fact that writing tests after the fact is patently dumb. Stated differently: Just because a third-party code coverage tool reports that your unit tests pass through every line of your code doesn't mean that every line of your code has been properly validated. In too many cases I found that my tests were executing complex, non-trivial interactions without actually validating the results -- meaning that obscene amounts of code were being covered but never truly tested.
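Here's a contrived sketch of exactly that trap (the DiscountService class and its bug are hypothetical): both branches execute, so a line-coverage tool happily reports 100 percent, yet the buggy branch is never actually validated:

using NUnit.Framework;

// Hypothetical service with a deliberate bug on the non-bulk path.
public class DiscountService
{
    public decimal Apply(decimal price, int quantity)
    {
        if (quantity >= 10)
        {
            return price * 0.90m; // bulk discount: correct
        }
        return price * 1.10m;     // bug: adds a surcharge instead of returning price
    }
}

[TestFixture]
public class DiscountServiceTests
{
    // Both branches run, so coverage reports 100 percent -- but the second
    // call's result is never asserted, so the bug slips through untested.
    [Test]
    public void Apply_CoversEveryLine_ButValidatesOnlyOneResult()
    {
        var service = new DiscountService();
        Assert.AreEqual(90.00m, service.Apply(100.00m, 10)); // validated
        service.Apply(100.00m, 1);                           // covered, never checked
    }
}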
Testing More Than One Feature per Test Is a Bad Idea. In my initial race to cover all sorts of code with tests, I went against advice that I knew I should have listened to: test only a single feature with each test. Again, my original, lazy thinking was that I'd seen great benefits from the haphazard testing I'd done previously, so I assumed I was still the exception to the rule and that cutting this corner couldn't hurt, right? Not only did violating this rule fail to solve my problems, but changing my code afterward became a full-on nightmare: minor changes here and there would cause my test runner to light up like a Christmas tree in all sorts of random locations, all because I had violated this hard-and-fast rule of TDD.
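To show the difference, here's a hypothetical sketch (the Order class is invented for this example): the first test exercises two features at once and breaks whenever either one changes, while the focused tests that follow each pin down exactly one behavior:

using NUnit.Framework;

// Hypothetical Order class, invented for this example.
public class Order
{
    private int _itemCount;

    public void Add(string sku, int quantity)
    {
        _itemCount += quantity;
    }

    public int ItemCount
    {
        get { return _itemCount; }
    }

    public bool QualifiesForFreeShipping
    {
        get { return _itemCount >= 5; }
    }
}

[TestFixture]
public class OrderTests
{
    // Brittle: exercises two features at once, so a change to either
    // item counting or free-shipping rules breaks this single test.
    [Test]
    public void Add_KitchenSink()
    {
        var order = new Order();
        order.Add("widget", 2);
        Assert.AreEqual(2, order.ItemCount);
        Assert.IsFalse(order.QualifiesForFreeShipping);
    }

    // Focused: each test pins down exactly one behavior.
    [Test]
    public void Add_IncreasesItemCount()
    {
        var order = new Order();
        order.Add("widget", 2);
        Assert.AreEqual(2, order.ItemCount);
    }

    [Test]
    public void OrdersUnderFiveItems_DoNotQualifyForFreeShipping()
    {
        var order = new Order();
        order.Add("widget", 2);
        Assert.IsFalse(order.QualifiesForFreeShipping);
    }
}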
RGR Yields Better Code
Another lesson I learned the hard way is that RGR really and truly does yield insanely better code. After I realized I had painted myself into a corner, I decided to start from scratch and adhere to as many best practices as I could possibly read about. (Roy Osherove's book The Art of Unit Testing is still a great resource that I've had for years, and it's amazing how much it helps when you actually implement the advice he offers instead of just reading through it and assuming you're special.) As I started over, I decided that I'd apply RGR across the board. In doing so, I found that I spent much more time upfront focusing on what I'd call the negative aspects of my code -- creating tests that dictated what should happen when my code wasn't being exercised by happy-path callers or consumers. To that end, I found that RGR really helped me focus more on interfaces and code usage -- making my resulting code insanely more stable and reliable.
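As a sketch of what I mean by writing negative-path tests first, here's a hypothetical example (the MessageRouter class is invented for illustration): the test comes first and fails (red), and only then is just enough implementation written to make it pass (green):

using System;
using NUnit.Framework;

[TestFixture]
public class MessageRouterTests
{
    // Red: written first, this test dictates the non-happy-path contract
    // before any implementation exists.
    [Test]
    public void Route_WithNullMessage_ThrowsArgumentNullException()
    {
        var router = new MessageRouter();
        Assert.Throws<ArgumentNullException>(() => router.Route(null));
    }
}

// Green: just enough implementation to make the test pass; the real
// routing logic gets grown through later red-green-refactor cycles.
public class MessageRouter
{
    public void Route(string message)
    {
        if (message == null)
        {
            throw new ArgumentNullException("message");
        }
        // routing logic to be added in subsequent cycles
    }
}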
Along the way, I also found that focusing on a single unit, or a tiny handful of lines of code, at a time via RGR made me much more reliant upon fakes. In turn, this meant that my version 2.0 code ended up adhering much more faithfully to the Single Responsibility Principle (SRP). And this in turn ended up being a huge win because it meant that I had better encapsulation, which yielded dramatically fewer side effects when making subsequent changes or modifications later on.
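Here's a rough sketch of that pattern (INotifier, OrderProcessor, and FakeNotifier are all hypothetical names): because the dependency hides behind an interface, the class under test keeps a single responsibility, and the test can swap in a hand-rolled fake instead of touching any real infrastructure:

using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical abstraction: isolating notification behind an interface
// keeps OrderProcessor focused on a single responsibility.
public interface INotifier
{
    void Send(string message);
}

public class OrderProcessor
{
    private readonly INotifier _notifier;

    public OrderProcessor(INotifier notifier)
    {
        _notifier = notifier;
    }

    public void Process(string orderId)
    {
        // order-handling logic would live here
        _notifier.Send("Processed order " + orderId);
    }
}

// Hand-rolled fake: records what would have been sent instead of sending it.
public class FakeNotifier : INotifier
{
    public readonly List<string> Sent = new List<string>();

    public void Send(string message)
    {
        Sent.Add(message);
    }
}

[TestFixture]
public class OrderProcessorTests
{
    [Test]
    public void Process_SendsExactlyOneNotification()
    {
        var fake = new FakeNotifier();
        var processor = new OrderProcessor(fake);

        processor.Process("A-100");

        Assert.AreEqual(1, fake.Sent.Count);
    }
}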
Unit Testing and My Muse
Another huge lesson I learned on my path to TDD enlightenment was that I can still unit test without giving up my muse -- what I like to call that point where something brilliant forms in my brain and needs to be translated into code immediately, before it's lost. Specifically, in cases where I don't know exactly what I'm doing and need to pound out some full-blown integration logic just to see whether I can get things working the way I need, I can simply create a new scratch project right inside my existing solution. Doing so lets me immediately leverage existing, validated code, so I can easily transcribe my muse or hammer out complex orchestrations and syntax to see if things work as desired. When I'm done with the scratch project, I can take a breather, backtrack a bit, and spin up real code from scratch via RGR to slowly and faithfully implement whatever I just validated -- then remove the scratch project once I'm finished with it. Every time I've done this, I've found lots of immediate ways to optimize that new, real code along the way, which is just one more reason I'm now sold on doing TDD the correct way, thanks to all the lessons I've sadly learned the hard way.