You should spend time writing meaningful unit tests, not meaningless ones. In this post I'm not going to quote any specific papers, but please do look into whatever evidence there is. Good starting points are https://www.hillelwayne.com/talks/what-we-know-we-dont-know/, the talk "Software Engineering's Greatest Hits", https://neverworkintheory.org/, and the book Making Software.

What good do unit tests do?

I'm sure many think that unit tests help reduce bugs, but I'm afraid there is no clear evidence that they do. When you write some code, you think of all the cases it needs to handle, and maybe you try running them. If you're expected to write unit tests, you will write tests for some, or all, of the cases you thought of. How is that ever going to catch a bug? You already thought of the case when you wrote the code.

Someone may now object that if you do agile right, you're supposed to do TDD and write the tests first. Well, I think there are some benefits to TDD that I'll go into later, but as far as bugs go, studies show that you get just as many bugs when you write the tests first as when you write them after the code. TDD is definitely not about bugs.

If you want to reduce bugs, you need to get other people involved to think of the cases you missed. Code reviews are proven to reduce bugs. Pair programming seems to help for junior developers but perhaps not for seniors, where it might actually be slightly negative (although pair programming does help to transfer domain knowledge). A QA phase with skilled testers trying to break the code also reduces bugs. (Side note: if you really cared about bugs, you would start coding in Ada. You only get 4 bugs in Ada for every 50 bugs in an equivalent Java program, not because of unit tests but, among other things, because of a better type system and contracts that cover a larger space of possible inputs than you would think to try in a unit test.)

If unit tests do not reduce bugs, why would we want to write them?

Unit tests can provide a signal when they fail, so what do we want that signal to say? The usual answer is that it alerts a future maintainer when a change breaks the tested functionality. Good, I agree! That gives us a way to evaluate whether a test is meaningful or meaningless.

Consider a test that breaks (or needs changing) on every refactoring, even when the new code still provides the desired functionality. In that case it is probably only testing how the code was implemented, something you can already tell from looking at the code. That test is a waste of time for everybody who has to deal with it, so just don't write it!
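
As a sketch of the difference (in Java with JUnit 5, using a made-up DiscountService rather than anything from a real codebase), the first test below pins down how the result is computed and breaks under harmless refactoring, while the second states the requirement and survives it:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountServiceTest {

    /** Hypothetical production code, inlined so the example is self-contained. */
    interface PriceRepository { double findBasePrice(String sku); }

    static class DiscountService {
        private final PriceRepository prices;
        DiscountService(PriceRepository prices) { this.prices = prices; }
        double priceWithDiscount(String sku, int percent) {
            return prices.findBasePrice(sku) * (100 - percent) / 100.0;
        }
    }

    // Brittle: asserts HOW the service gets its data (exactly one repository
    // call per lookup) rather than WHAT the price should be. Refactoring the
    // collaboration, e.g. pre-fetching or batching prices, breaks this test
    // even though the computed discount is still correct.
    @Test
    void callsFindBasePriceExactlyOnce() {
        final int[] calls = {0};
        DiscountService service =
                new DiscountService(sku -> { calls[0]++; return 100.0; });

        service.priceWithDiscount("BOOK-1", 10);

        assertEquals(1, calls[0]); // an implementation detail, not a requirement
    }

    // Meaningful: states the requirement and survives any refactoring that
    // keeps the behaviour.
    @Test
    void tenPercentDiscountReducesPriceByTenPercent() {
        DiscountService service = new DiscountService(sku -> 100.0);

        assertEquals(90.0, service.priceWithDiscount("BOOK-1", 10), 0.001);
    }
}
```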

So, to be meaningful, a test needs to tell you something that is not evident from the code. Also, I think we only want it to fail if an actual requirement is broken. Sometimes tests are overspecified, which leads to confusion for the maintainer, so be sure to only assert on things that are truly required.
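
For instance (another hypothetical JUnit 5 sketch), the first test below overspecifies the exact wording and item order of a summary, while the second asserts only what the requirement actually demands:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.List;

import org.junit.jupiter.api.Test;

class OrderSummaryTest {

    /** Hypothetical production code, inlined so the example is self-contained. */
    static String summarize(List<String> items) {
        return "Order with " + items.size() + " item(s): " + String.join(", ", items);
    }

    // Overspecified: pins down the exact wording and item order, neither of
    // which is a requirement. Any harmless tweak to the message forces a
    // confusing test change.
    @Test
    void summaryMatchesTheExactString() {
        assertEquals("Order with 2 item(s): apple, pear",
                summarize(List.of("apple", "pear")));
    }

    // Asserts only the actual requirement: the summary must state how many
    // items the order contains.
    @Test
    void summaryReportsTheNumberOfItems() {
        assertTrue(summarize(List.of("apple", "pear")).contains("2 item"));
    }
}
```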

If we now also name the test to reflect the requirement and write it to be readable, the tests help document the design and requirements, making life easier for readers of the code. Remember that because test code serves a different purpose than implementation code, it should also be written differently.
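
Something like the following (a hypothetical JUnit 5 sketch; the three-attempt rule is invented for the example) reads as documentation of a requirement rather than as a mirror of the implementation:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

class LoginPolicyTest {

    /** Hypothetical production code, inlined so the example is self-contained. */
    static boolean mayAttemptLogin(int failedAttempts) {
        return failedAttempts < 3;
    }

    // The name (and the optional @DisplayName) states the requirement, and the
    // body reads as given/when/then, so the test documents WHY three matters,
    // which you cannot tell from the implementation alone.
    @Test
    @DisplayName("An account is locked after three failed login attempts")
    void accountIsLockedAfterThreeFailedLoginAttempts() {
        // given a user who has already failed to log in three times
        int failedAttempts = 3;

        // when/then: further attempts are rejected
        assertFalse(mayAttemptLogin(failedAttempts));
    }
}
```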

What, if anything, might be gained from writing tests first?

The research has very mixed results on whether it's useful to apply TDD and write tests first. Since we've established that unit tests are not about bugs, we can ignore all results that measure bug rates. Unfortunately, it doesn't generally seem to help you code faster either, although I suspect that if the studies went on long enough, a large set of unit tests would help projects avoid slowing down and grinding to a halt in the face of unmaintainable code. But that would of course apply equally to tests written last, if they have the same quality.

IIRC, one paper did show better focus and faster coding with TDD, but suggested that this could perhaps be attributed to the general agile mentality of breaking things down into smaller, more focused tasks. But the TDD process, strictly applied as:

  1. Write a test until it fails, and no longer.
  2. Write implementation code until it passes, and no longer.
  3. Refactor if needed and go back to 1.

will certainly reinforce the focus on smaller parts. One downside of breaking things into smaller tasks is that you can lose the higher-level view and risk punting on the hard stuff, which you should really deal with early to reduce risk. Be aware of that and take appropriate measures, but now we're going off track.
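
Here is a minimal sketch of one turn of that loop, again with JUnit 5 and a made-up example (a tiny Roman-numeral converter), with the steps marked in comments:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class RomanNumeralTest {

    // Step 1 (red): this test was written first and failed, because toRoman()
    // did not handle 2 yet. The failure is the signal that the behaviour is
    // genuinely missing.
    @Test
    void twoBecomesII() {
        assertEquals("II", toRoman(2));
    }

    @Test
    void oneBecomesI() {
        assertEquals("I", toRoman(1));
    }

    // Step 2 (green): just enough implementation to make the tests pass, and
    // no more. Step 3 would be to refactor (say, replace the ifs with a loop
    // over numeral values) while the tests stay green, then loop back to
    // step 1 with the next requirement.
    static String toRoman(int n) {
        if (n == 2) return "II";
        return "I";
    }
}
```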

When we think of a unit test as providing a signal, we can see that writing the test first puts that signal to immediate use. Since you need to write a failing test, the signal first proves your assumption that the code doesn't handle this case yet. Then the signal tells you when you're done coding.

I think that writing the test first also helps you focus on actual requirements rather than on how you implemented the code, so the test written first would more naturally become documentation. Another benefit that has been claimed is that your code will be better designed since you have to think about how to use the code before it's written (i.e. in the test).
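
As a small illustration of that design pressure (hypothetical names, JUnit 5 again), writing the call site first tends to push the API towards taking exactly what it needs and nothing more:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ShippingCostTest {

    // Writing this call first made it obvious that the calculation only needs
    // a weight and a destination zone, not the whole order object, a database
    // or a service locator, and that is the API that then got implemented.
    @Test
    void heavyParcelToZoneTwoCostsTwelve() {
        assertEquals(12.0, shippingCost(5.0, "ZONE-2"), 0.001);
    }

    /** Hypothetical production code that fell out of the usage above. */
    static double shippingCost(double weightKg, String zone) {
        double base = zone.equals("ZONE-2") ? 8.0 : 5.0;
        return weightKg > 2.0 ? base + 4.0 : base;
    }
}
```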

Another aspect is that writing tests first is fun, although perhaps it's not for everybody. I love the steady rhythm and the feeling of progress it creates. I don't always do it, especially if the code is mostly about shovelling data back and forth, but I probably should overcome my laziness and at least write some component test with acceptance criteria. When I do exploratory coding, I will usually comment out the code I finally developed and try to write TDD tests that force me to uncomment the lines again.

Whether you decide to TDD or not, I hope you've gotten some food for thought on how to improve the quality of your unit tests.