Tests can make it harder to make changes to your Django site. Not just *missing* tests, but slow tests and bad tests which cause programmer pain, misdirected efforts, and false confidence. What do these four flavors of test look like and how can you correct for the troubled ones?
I wanted to just chat a little bit today about tests and how tests, or the lack thereof, can make it difficult to make changes to your existing Django site.
I've mentioned before that there are four flavors of tests: great tests, no tests, bad tests, and slow tests. Sometimes I describe them in a different order, but just having tests isn't always enough. Sometimes the tests you have can be worse than no tests at all.
So I want to dive into these reasons why it can be difficult to work on a site because of these tests and some of the strategies we can take to ameliorate these issues. To solve the problem if you will.
Great tests are tests that are fast, or as close to fast as they can be, that have extensive coverage, and that test your codebase and the logic in it to sufficient depth.
No tests, descriptively, is pretty obvious: it means there are no tests. It could also mean that you have a few or some really trivial tests. I've seen codebases where developers left in those default test stubs that Django adds when you create a new app with the `startapp` management command. Yes, that means you have tests, but they are trivial, and this falls into the no-tests category.
When you have no tests, or only these trivial tests, or really just a few good tests with no coverage at all, it is really difficult to make changes with confidence to other parts of your application, to your Django site. You're going to be testing in production. You'll hopefully have some sort of great exception-handling service, or logging, to see if there are errors, but that's the way you're going to be testing. And exceptions are only part of the picture: the code may simply be making the wrong decision without raising anything. So, no tests are bad.
We’re going to get to how you deal with this in a minute.
Slow tests, I want to describe these first. Slow tests are just that: they are dog slow. This is a relative term, but usually it means it takes a meaningful amount of time to run a test suite that maybe isn't that big. So, if you've got a test suite that is under 1,000 tests, and it's taking even a couple of minutes to run, that might seem like a long time. And it can be, especially if they are unit tests.
Generally, we're talking more than a few minutes, maybe 10 minutes, half an hour. Now we're getting into really long durations for a test suite to run, and it could be several hours or more. This is a challenge. It doesn't mean that the tests are necessarily bad, but it dampens the cadence of your development cycle. You're going to have to wait for every little change, or you can't make a little change and then run the test suite, certainly not the whole test suite. So, slow tests make it difficult to make changes to a site.
Now, bad tests is a broader category and it could encompass slow tests, but not necessarily. When I talk about bad tests, what I’m really talking about are tests that result in errors, that are failing when they shouldn’t be failing or that test something spuriously.
So, for example, you could have a test that calls a function with one value and checks the result against one expected value. That's nice, but what happens when a different type of value is passed in? This isn't a statically typed language. Are you testing for different types of exceptions? Are you testing for different ranges of values? If not, that's not a meaningful test. So, that's a bad test. Or not a great test, at the very least.
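To make that concrete, here's a minimal sketch using a hypothetical `discount_price` function (the name and behavior are invented for illustration): the first test checks one value against one expected result, while the second set also exercises wrong types, out-of-range inputs, and boundary values.

```python
import unittest


def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not isinstance(price, (int, float)) or isinstance(price, bool):
        raise TypeError("price must be a number")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class NarrowTest(unittest.TestCase):
    # A "bad" (or at least not great) test: one value in, one value out.
    def test_single_value(self):
        self.assertEqual(discount_price(100, 25), 75.0)


class BetterTests(unittest.TestCase):
    # Meaningful tests also cover types, ranges, and boundaries.
    def test_wrong_type_raises(self):
        with self.assertRaises(TypeError):
            discount_price("100", 25)

    def test_out_of_range_raises(self):
        with self.assertRaises(ValueError):
            discount_price(100, 150)

    def test_boundaries(self):
        self.assertEqual(discount_price(100, 0), 100.0)
        self.assertEqual(discount_price(100, 100), 0.0)
```

The point isn't the arithmetic; it's that the second test case would catch whole classes of regressions the first one never could.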
Any of these latter three categories that you face mean that it’s difficult to work with your existing app.
In the case of the test suite with errors, that's something you're typically going to encounter as a new developer: you're onboarded to a project, or a client has a project, and no one has run the test suite for a while. And that's the thing: of course, if you have a test suite with errors, it's not being run, because it has this low signal-to-noise ratio.
So, let’s talk about fixes. The way to fix a test suite that doesn’t have any tests is to add tests. For a large application this is non-trivial. So where do you start? The best advice that I’ve ever heard about this is to start with bugs.
You don't start writing tests for everything in the codebase, you start with bugs. You find a bug, you write a test case for it, you fix it, make sure the tests pass, and go on from there. As you touch new pieces of code, you write tests for those.
Eventually, you’re going to have pretty extensive test coverage. You go in this step-wise manner, starting with bugs, new features, anything you touch and that’s the way that you can do this in a manner that’s not overwhelming and doesn’t stop feature development.
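As a sketch of that workflow, imagine a bug report against a hypothetical `slugify_title` helper (the name and the bug are invented for illustration): it crashed on empty input. You write the failing regression test first, fix the code, and then the test stays in the suite forever so the bug can't silently return.

```python
import re
import unittest


def slugify_title(title):
    """Hypothetical helper: turn a title into a URL slug.

    The (invented) bug report said this crashed on empty or
    whitespace-only input; the fixed version returns "" instead.
    """
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)


class SlugifyRegressionTest(unittest.TestCase):
    # Written first to reproduce the reported bug, kept afterward
    # as a permanent guard against the bug coming back.
    def test_empty_title_returns_empty_slug(self):
        self.assertEqual(slugify_title(""), "")
        self.assertEqual(slugify_title("   "), "")

    def test_normal_title(self):
        self.assertEqual(slugify_title("Hello, Django World!"),
                         "hello-django-world")
```

Each bug you fix this way leaves behind one more test, which is exactly the step-wise coverage growth described above.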
As far as slow tests go, there are a number of things you can do, and it's going to depend on why the tests are slow. Tests could be slow because the underlying code is slow, and if you can speed that up, that does a great deal to speed up the tests. Or you're making too much use of the database in ways that you don't need to, for example by not using `select_related()` or `prefetch_related()` calls.
Another cause is not mocking services: you could have tests, maybe even unit tests, that are actually making calls to third-party systems, or hitting the database when they don't need to. There are all kinds of things you can do there to speed these tests up. Stop saving model instances if you don't need to. Use mocks! There are solutions, too, if you do need to use the database: make sure you're using a test runner that isn't dropping the database between runs (Django's `--keepdb` option), so you don't have to recreate the database every single time. Again, this is going to be scenario-specific as far as how you solve for slow tests.
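Here's one small sketch of the mocking idea using only the standard library's `unittest.mock`; the exchange-rate function and URL are invented for illustration, and the HTTP callable is injected as a parameter so the test never touches the network.

```python
import unittest
from unittest import mock


def fetch_exchange_rate(currency, http_get):
    """Hypothetical service wrapper: fetch a rate via an injected
    HTTP callable, so tests can substitute a stub for the real thing."""
    response = http_get(f"https://api.example.com/rates/{currency}")
    return response["rate"]


class ExchangeRateTest(unittest.TestCase):
    def test_rate_is_returned_without_network(self):
        # The Mock stands in for the real network call entirely,
        # so this test runs in microseconds and never flakes.
        fake_get = mock.Mock(return_value={"rate": 1.08})
        self.assertEqual(fetch_exchange_rate("EUR", fake_get), 1.08)
        fake_get.assert_called_once_with(
            "https://api.example.com/rates/EUR")
```

The same pattern applies to payment gateways, email APIs, or any third-party system your Django views call out to.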
Now, bad tests are a different ball game. Again, I mentioned this low signal-to-noise ratio. The point of tests is to provide you with information. The information that tests should provide you is some sort of indication whether something is broken or not. It’s going to provide you with a confidence level that the code is correct and your application works right.
So, if you have bad tests, if there are errors in the tests, that is pure noise and not signal. The solution here is to quiet these. Take every test that has an error and doesn't pass because of an error, and silence it. Don't delete it, just skip it. You can add the `skip` decorator if you're using unittest, or the equivalent from pytest. And come back to these. You want to make sure you have a test suite that is healthy and running, and then come back to these on a one-by-one basis and start to look at where the errors are.
You might find that the errors are in the codebase and that is significant, but if the errors are in the test suite, then you can slowly start going through and figuring out where the errors are in the test suite, how you can fix these, and how you can start making the rest of the test suite healthy again.
Again, it's an iterative process. The same thing goes for tests that are failing when they shouldn't be. Where does a test fail when it shouldn't, you ask? Great question. The way this happens is when developers make changes to features without keeping the tests up to date: nobody was running the tests while the feature set changed, and now there is a mismatch between the tests and the application code. The same thing goes for those tests: you want to silence them and come back to them on a one-by-one basis.
And with tests like that, you probably just want to, again, fix the issue and get it to the point where your entire test suite runs with all of the tests passing in a reasonable amount of time.