Getting all your teammates to quit giving all their tests names like "testTheThing" is darn near impossible. It's socially painful to be the one constantly nagging people about names, but it really does take constant nagging to keep the quality high. As soon as the nagging stops, someone invariably starts cutting corners on the test names, and after that everyone who isn't a pedantic weenie about these things will start to follow suit.
Which is honestly the sensible, well-adjusted decision. I'm the pedantic weenie on my team, and even I have to agree that I'd rather my team have a frustrating test suite than frustrating social dynamics.
Personally - and this absolutely echoes the article's last point - I've been increasingly moving toward Donald Knuth's literate style of programming. It helps me organize my thoughts even better than TDD does, and it's earned me far more compliments about the readability of my code than a squeaky-clean test suite ever has. So much so that I'm beginning to hold out hope that, if you can build enough team mass around working that way, it might even develop into a stable equilibrium as people start to see how it really does make the job more enjoyable.
Nine times out of ten this is the only test, which is mostly there to ensure the code gets exercised in a sensible way and returns a thing, and ideally to document and enforce the contract of the function.
What I absolutely agree with you on is that being able to describe this contract alongside the function itself is far preferable. It’s not quite literate programming, but tools like Python’s doctest offer a close approximation to interleaving discourse with machine-readable implementation:
    def double(n: int) -> int:
        """Increase by 100%

        >>> double(7)
        14
        """
        return 2 * n
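Not much wiring is needed to make those examples actually run as tests; a minimal sketch of hooking doctest into the module itself:

```python
# Minimal sketch: doctest scans the module's docstrings and runs every
# ">>>" example, comparing actual output against the documented output.
import doctest

def double(n: int) -> int:
    """Increase by 100%

    >>> double(7)
    14
    """
    return 2 * n

if __name__ == "__main__":
    # testmod() returns a TestResults(failed, attempted) named tuple.
    results = doctest.testmod()
    print(f"{results.attempted} examples, {results.failed} failures")
```

The same examples can also be run without any wiring at all via `python -m doctest yourmodule.py`.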
What do test names have to do with quality? If you want to use it as some sort of name/key, just have a comment/annotation/parameter that succinctly defines that, along with any other metadata you want to add in readable English. Many testing frameworks support this. There's exactly zero benefit toTryToFitTheTestDescriptionIntoItsName.
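As a sketch of what that looks like in practice (the `double` function here is just an invented stand-in): Python's unittest puts the readable description in the docstring rather than the method name, and verbose runners print that docstring's first line as the test's description.

```python
import unittest

def double(n: int) -> int:
    # Toy function under test, invented for this example.
    return 2 * n

class DoubleTest(unittest.TestCase):
    def test_contract(self):
        """double() multiplies its argument by two, including negatives."""
        # subTest() labels each case in plain English instead of encoding
        # the scenario into a separate camelCase method name per case.
        for n, expected in [(0, 0), (7, 14), (-3, -6)]:
            with self.subTest(n=n):
                self.assertEqual(double(n), expected)
```

Running with `python -m unittest -v` then shows the docstring line, not just the method name.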
Readability of the code makes up a lot of its quality. Working code that is not maintainable will be refactored. Non-working code that is maintainable will be fixed.
Also, are you a fan of nesting test classes? Any opinions? E.g.:
    class FibrillatorTest {
        class HighVoltages {
            void tooMuchWillNoOp() {}
            void maxVoltage() {}
        }
    }

I don't know of any test tools that work like that, though.
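For what it's worth, some tools do support this: JUnit 5 has @Nested inner test classes, and pytest collects nested Test* classes. A rough Python sketch of the same grouping (the fibrillator logic is invented just to make it runnable):

```python
# Hypothetical domain logic, so the nested tests below have something to check.
MAX_VOLTAGE = 1000

def clamp_voltage(requested: int) -> int:
    """Clamp a requested voltage to the device's safe maximum."""
    return min(requested, MAX_VOLTAGE)

# pytest discovers the inner class and reports tests grouped under
# TestFibrillator::TestHighVoltages, mirroring the nesting above.
class TestFibrillator:
    class TestHighVoltages:
        def test_too_much_will_no_op(self):
            # Requests above the maximum are clamped rather than applied.
            assert clamp_voltage(9_999) == MAX_VOLTAGE

        def test_max_voltage_is_accepted(self):
            assert clamp_voltage(MAX_VOLTAGE) == MAX_VOLTAGE
```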
The quality of the tests.
If we go by the article, specifically their readability and quality as documentation.
It says nothing about the quality of the resulting software (though, presumably, this will also be indirectly affected).
People grab the first word they think of. And subconsciously they know if they obsess about the name it’ll have an opportunity cost - dropping one or more of the implementation details they’re juggling in their short term memory.
But if “slow” is the first word you think of, that’s not very good. And if you look at the synonyms and antonyms, you can solidify your understanding of the function’s purpose in your head. Maybe you meant thorough, or conservative. And maybe you meant to do one but actually did the other. So now you can not just choose a name but revisit the intent.
Plus you’re not polluting the namespace by recycling a jargon word that means something else in another part of the code, complicating refactoring and self discovery later on.
If anything, in this scenario, I wouldn't even bother printing the test names, and would just give them generated identifier names instead. Otherwise, isn't it a bit like expecting git hashes to be meaningful when there's a commit message right there?
I've been wishing for a long time that the industry would move toward this, but it is tough to get developers to write more than performative documentation that checks an agile sprint box, much less to get product owners to allocate time to test the documentation (have someone unfamiliar with the code do something small with it, armed only with its documentation, like coding another few necessary tests and documenting them, then correct the bumps in the consumption of the documentation). It's even tougher to move toward the kind of Knuth'ian, TeX'ish-quality and -sophistication documentation, which I consider necessary (though perhaps not sufficient) for taming increasing software complexity.
I hoped the kind of deep technical writing at large scales supported by Adobe FrameMaker would make its way into open source alternatives like Scribus, but instead we're stuck with Markdown and Mermaid, which have their place but are painful when maintaining content over a long time, across sprawling audience roles, and at broad scopes. Unfortunate, since LLMs could support a quite rich technical writing and editing experience sitting on top of a FrameMaker-feature'ish document processing system oriented toward supporting literate programming.