---
title: "How do you tie tests back to documentation? (This Old Pony #56)"
layout: newsletter
published: true
date: "2018-07-24T10:45:00.000Z"
---

This week’s edition is going to be a bit different (if this is your first issue, please take note!). I’d like to present a problem I’ve been thinking about for the past few weeks and my initial idea for solving it, and then I’d really like to hear back from you all about whether this is reinventing the wheel or whether you have a better way of doing things.

In short, I want to be able to tie described functionality in human-focused documentation directly to tests in the test suite.
 

What are we solving for?

I like project documentation. Not only is it helpful when working on a project to know where things are, and perhaps _why_, but it’s also a useful method for understanding requirements and finding holes.

Documentation driven development, as the name implies, is development that starts with documentation, whether just in a README or somewhere else. It’s not a discipline like test driven development (or faux discipline, as is often the case); it’s just a handy tool. And it needn’t represent waterfall-style development either: it can be done in small batches in an agile style.

At any rate, using human language to describe a feature first means not only are you forced to “rubber duck”[0] from the start, but you also end up with documentation when you’re done.

The _problem_ with documentation, though, is that it’s really hard to keep in sync with the software as the code changes, especially in a fast-moving web application. So what I thought would be helpful would be to directly associate software tests with specific features or feature descriptions in the documentation. That would let you identify which features were not yet tested, which tests covered a piece of functional documentation, and some kind of measure of “documentation coverage”, whatever that might be.
 

A first pass solution

My first idea for a solution looks like a combination of a Sphinx extension[1] and a pytest plugin[2]. First, we’d use a _directive_ in the documentation with either a coded unique identifier and/or a written identifier like a feature story name (“A user should be able to reset their password”).
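As a rough sketch of what that might look like in the docs - where the `feature` directive and its options are entirely hypothetical, something the Sphinx extension would have to provide:

```rst
.. feature:: password-reset
   :story: A user should be able to reset their password

   A user requests a reset link by email, follows it, and chooses
   a new password.
```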

Now in the tests - and I’m using pytest as the example here - you’d _mark_ the tests associated with this feature as described in the documentation, e.g.
 

```python
@pytest.mark.documented("A user should be able to reset their password")
def test_password_reset():
    # testing here
```

That is obviously rather verbose with the full feature description. But it would [somehow!] allow the pytest plugin to generate some output describing the tests and their documented features, in such a way that the documentation could then link to those tests.
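To make the bookkeeping concrete, here’s a minimal sketch in plain Python rather than against pytest’s actual hook API - the `documented` decorator, the registry, and the report format are all my invention here, standing in for what the plugin would assemble during test collection:

```python
import json

# Registry mapping each documented feature to the tests that cover it.
# A real pytest plugin would build this in a collection hook instead.
feature_registry = {}


def documented(feature):
    """Mark a test as covering a feature described in the docs."""
    def decorator(test_func):
        feature_registry.setdefault(feature, []).append(test_func.__name__)
        return test_func
    return decorator


@documented("A user should be able to reset their password")
def test_password_reset():
    pass  # real test assertions would go here


@documented("A user should be able to reset their password")
def test_password_reset_expired_token():
    pass


def coverage_report(documented_features):
    """For each feature named in the docs, list the tests covering it.

    An empty list is a documented feature with no tests - a coverage gap.
    """
    return {
        feature: feature_registry.get(feature, [])
        for feature in documented_features
    }


report = coverage_report([
    "A user should be able to reset their password",
    "A user should be able to change their email",
])
print(json.dumps(report, indent=2))
```

The documentation build could then consume this report to link each feature description to its tests, and to flag features with no tests at all.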
 

What you’d get

The end result would be a way of looking through your described features and seeing which tests cover them, and how successfully, while also having a _very clear_ understanding of why those tests are there. Yes, test names and docstrings are helpful, but this would be validation that those tests are still meaningful.

It’s sort of like creating BDD using tools you’re already using.

Given all of this, does this sound like a process you’ve encountered before? Is there some method or tooling I’m missing? And what’s your solution for linking tests to user-facing documentation?

Validly yours,
Ben

[0] Rubber duck: https://blog.codinghorror.com/rubber-duck-problem-solving/
[1] Sphinx extensions let you add functionality to your favorite documentation builder! http://www.sphinx-doc.org/en/stable/extdev/index.html#dev-extensions
[2] pytest has a robust hook system for plugin development: https://docs.pytest.org/en/latest/writing_plugins.html