In his article "Are Bad Tests Worse Than No Tests At All?", olivstor writes:
Are the drawbacks to bad tests worse than having no coverage at all? I think the answer is that in the short term, even bad tests are useful. Trying to squeeze an extra life out of them beyond that, however, pays diminishing returns. Just like other software, your tests should be built for maintenance, but in a crunch, you can punch something in that works. It's better to have bad tests than to have untested code.
Tests are like any other code: They can go bad.
In my career, I've found that it's surprisingly hard to write good tests without experience in doing so. People who are new to testing tend to make their tests too complex and too long, give them too many dependencies, and let them take too long to run.
If you're in such a situation, you have to face the fact that you've programmed yourself into a corner and you must spend the effort to get out of there. Tests are no magic silver bullet. They are code and follow the usual rules of coding: when it hurts, something is broken, and it won't stop hurting until you fix it.
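To make that concrete, here is a minimal sketch of the kind of test I try to end up with after such a cleanup: small, focused on one behavior, free of external dependencies, and fast. The function and its name are purely illustrative, not taken from any particular project.

```python
import unittest

# Illustrative function under test; the name and behavior are made up for this example.
def order_total(prices, tax_rate=0.0):
    """Sum the item prices and apply a tax rate."""
    return round(sum(prices) * (1 + tax_rate), 2)

class OrderTotalTest(unittest.TestCase):
    # Each test checks exactly one behavior and runs in milliseconds,
    # with no database, network, or framework setup involved.
    def test_applies_tax_rate(self):
        self.assertEqual(order_total([10.00, 5.00], tax_rate=0.10), 16.50)

    def test_empty_order_is_zero(self):
        self.assertEqual(order_total([], tax_rate=0.10), 0.0)

if __name__ == "__main__":
    unittest.main()
```

When a test like this fails, it points straight at the behavior that broke, which is exactly the early warning I want from it.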
So in this sense, I agree that bad tests are better than no tests, because they tell you early that something needs fixing. That is their core purpose.
Management might argue that you're spending too much time on testing. I've never had a problem selling this to them. I usually figure that I spend 50% of my time (or more!) writing tests and 50% on actual coding - and I'm still much faster than those who spend 80% or more of their time writing code. What's more: when my code goes into production, it is rock solid, or at least easy to fix when something comes up. In 99% of the cases, the things I need to fix are the ones I didn't test.
This is a positive reinforcement loop that drives me to test more and more, because it stops the hurting. If your tests cost more than they seem to return, you need to fix them until you get that same positive feeling when you think about them.