Hacker News
I did not agree with most of the advice. Here are some examples:

> Unit tests explain [expected] code behavior

Unit tests rarely evaluate performance, so they can't explain why something is O(n) vs. O(n^2), or whether it was supposed to be one or the other.

And of course the unit tests might not cover the full range of behaviors.
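To make the point concrete, here's a minimal sketch (hypothetical function names, not from the original program): two deduplication implementations with different complexities, and a correctness-only unit test that passes for both, so the test can't document which complexity was intended.

```python
import unittest

def dedupe_quadratic(items):
    """O(n^2): membership check is a linear scan of a list."""
    out = []
    for x in items:
        if x not in out:  # scans 'out' for every element
            out.append(x)
    return out

def dedupe_linear(items):
    """O(n): membership check uses a set."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

class TestDedupe(unittest.TestCase):
    def test_removes_duplicates_preserving_order(self):
        # Passes for both implementations: the test asserts behavior,
        # not complexity, so it "explains" nothing about O(n) vs O(n^2).
        for impl in (dedupe_quadratic, dedupe_linear):
            self.assertEqual(impl([1, 2, 1, 3, 2]), [1, 2, 3])
```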

> Unit tests are always in sync with the code

Until you find out that someone introduced a branch in the code, e.g. for performance (a classic refactoring step), and forgot to run coverage to ensure the unit tests exercised both branches.
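A hypothetical sketch of how that goes wrong: a "fast path" branch is added later, the existing test only exercises the original branch, and the new branch carries an unnoticed bug.

```python
def total(values):
    # Fast path added later "for performance" -- with a bug:
    # the empty case returns None where callers expect 0.
    if not values:
        return None
    return sum(values)

# The pre-existing unit test only exercises the original branch,
# so it still passes and the tests are no longer "in sync" with
# the code's full behavior.
assert total([1, 2, 3]) == 6
```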

> Unit tests cover edge cases

Note the No True Scotsman fallacy there? 'Good unit tests should also cover these cases' means that if a test suite didn't cover those cases, it wasn't good.

I've seen many unit tests which didn't cover all of the edge cases. My favorite example is a Java program which turned something like "filename.txt" into "filename_1.txt", where the "_1" was a sequence number to make it unique, and ".txt" was required.

Turns out, it accepted a user-defined filename from a web form, which could include a NUL character. "\x00.txt" put it into an infinite loop due to its incorrect error handling of "", which is how the Java string got interpreted as a filename.
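A rough reconstruction of that failure mode, sketched in Python rather than the original Java (function names and the exact error-handling path are assumptions): if the name-validity check mishandles an invalid name as "already exists", appending "_1", "_2", ... never makes the candidate valid, and the loop never converges.

```python
def exists(name, taken):
    # Assumed buggy error handling: an OS-level error for an invalid
    # name (embedded NUL) is swallowed and reported as "file exists".
    if "\x00" in name:
        return True
    return name in taken

def unique_name(filename, taken, max_tries=10):
    """Turn 'filename.txt' into 'filename_1.txt', 'filename_2.txt', ...
    until the name is unique. (max_tries guard added so this sketch
    terminates; the original had no such guard.)"""
    dot = filename.rfind(".")
    base, ext = filename[:dot], filename[dot:]
    candidate, n = filename, 0
    while exists(candidate, taken):
        n += 1
        if n > max_tries:
            raise RuntimeError("never converges: " + repr(candidate))
        candidate = f"{base}_{n}{ext}"
    return candidate
```

A unit test covering "report.txt" and even "report.txt with a collision" would pass; only a test that thought to feed it a NUL byte would catch the non-termination.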

> Descriptive test name

With some test systems, like Python's unittest, you have both the test name and the docstring. The latter can be more descriptive. The former might be less descriptive, but easier to type or select.
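For example (test class and docstring are illustrative), unittest reports the first line of a test method's docstring via `shortDescription()`, so the method name can stay short while the docstring carries the detail:

```python
import unittest

class TestSlug(unittest.TestCase):
    def test_unicode(self):
        """Non-ASCII input is transliterated rather than dropped."""
        self.assertEqual("cafe", "café".replace("é", "e"))

# In verbose mode, unittest prints this docstring line alongside
# the (terse, easy-to-select) method name.
case = TestSlug("test_unicode")
print(case.shortDescription())
```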

> Keep tests simple

That should be 'Keep tests understandable'. Also, 'too many' doesn't contribute information, since by definition it's already past the point of being reasonable.