Useful test cases

January 27, 2009

So what makes a test useful, and how can we make our tests improve our code?

It is easy to fall foul of writing tests that don’t test what you actually want them to, or that don’t really test anything at all. My other observation is how easy it is to get lost in test paralysis. One of the most common reasons for this is people missing some of the key concepts behind TDD that make things a whole lot easier, which I’ve outlined below.

  • Commenting out tests is evil
  • Test dependencies are evil
  • Overuse of Mocks/Stubs/Wrappers
  • Tests should instruct implementation
  • 100% coverage != 100% complete
  • Test for the unexpected
  • Adding test cases not covered by specs
  • Maintain a list of test cases to write
  • YAGNI
  • KISS
  • Meaningful unit test names
  • One assertion at a time

Commenting out tests is evil
A commented-out test is a failing test you have chosen to hide, and it usually points to one of a few things:

  1. See Test dependencies are evil
  2. See YAGNI
  3. See KISS

Test dependencies are evil
Setting up objects & their dependencies within our test cases can easily introduce unexpected errors into our implementation code. A common example of this is setting up sessions in our setup functions, especially when using frameworks that take advantage of the MVC model. Either way, these kinds of things should be done behind the scenes.
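
As a rough sketch of keeping that setup behind the scenes (Basket and FakeSession are hypothetical names, and a PHPUnit 3-era bootstrap is assumed here, as in all the examples below), the dependency is a tiny in-memory stand-in wired up once in setUp() rather than a real session bootstrapped in every test:

    <?php
    // A minimal sketch; Basket and FakeSession are hypothetical names.
    require_once 'PHPUnit/Framework.php';

    // Tiny in-memory stand-in for a real session object.
    class FakeSession
    {
        private $data = array();

        public function set($key, $value) { $this->data[$key] = $value; }

        public function get($key)
        {
            return isset($this->data[$key]) ? $this->data[$key] : null;
        }
    }

    class Basket
    {
        private $session;

        public function __construct($session) { $this->session = $session; }

        public function addItem($sku)
        {
            $items = $this->session->get('items');
            if ($items === null) {
                $items = array();
            }
            $items[] = $sku;
            $this->session->set('items', $items);
        }

        public function itemCount()
        {
            $items = $this->session->get('items');
            return $items === null ? 0 : count($items);
        }
    }

    class BasketTest extends PHPUnit_Framework_TestCase
    {
        private $basket;

        protected function setUp()
        {
            // Behind the scenes: no real session, storage backend or
            // front controller is bootstrapped here.
            $this->basket = new Basket(new FakeSession());
        }

        public function testNewBasketIsEmpty()
        {
            $this->assertEquals(0, $this->basket->itemCount());
        }

        public function testAddingAnItemIncreasesTheCount()
        {
            $this->basket->addItem('sku-123');
            $this->assertEquals(1, $this->basket->itemCount());
        }
    }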

Overuse of Mocks/Stubs/Wrappers
It is very easy to overuse these techniques, which in turn can break the pass/fail relationship between our implementation code & our test cases. We should only use wrappers, mocks and stubs to emulate hard-to-test functionality (see Test for the unexpected), never just to test a method that can be tested in simpler ways (see KISS).
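
A minimal sketch of a justified mock (PaymentGateway and Checkout are hypothetical names): the external payment service is genuinely hard to test, so it is emulated; a simple value object never would be:

    <?php
    // A minimal sketch; PaymentGateway and Checkout are hypothetical
    // names, and the PHPUnit 3-era mock API is assumed.
    require_once 'PHPUnit/Framework.php';

    interface PaymentGateway
    {
        public function charge($amount);
    }

    class Checkout
    {
        private $gateway;

        public function __construct(PaymentGateway $gateway)
        {
            $this->gateway = $gateway;
        }

        public function pay($amount)
        {
            return $this->gateway->charge($amount) ? 'paid' : 'declined';
        }
    }

    class CheckoutTest extends PHPUnit_Framework_TestCase
    {
        public function testSuccessfulChargeMarksTheOrderPaid()
        {
            // The mock only emulates the hard-to-test external service.
            $gateway = $this->getMock('PaymentGateway');
            $gateway->expects($this->once())
                    ->method('charge')
                    ->with(9.99)
                    ->will($this->returnValue(true));

            $checkout = new Checkout($gateway);
            $this->assertEquals('paid', $checkout->pay(9.99));
        }
    }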

Tests should instruct implementation
Keeping this paradigm in mind will help you create code that is not only easy to test but reusable & flexible. The use and knowledge of dependency injection, mocking, stubbing & fixtures will all help here.
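
As a rough illustration of tests driving design (Slugifier is a hypothetical class), the test below is written first and the implementation only grows what the test demands:

    <?php
    // A minimal test-first sketch; Slugifier is a hypothetical class.
    require_once 'PHPUnit/Framework.php';

    class SlugifierTest extends PHPUnit_Framework_TestCase
    {
        // Written before Slugifier existed: the assertion dictates
        // the method name, signature and behaviour.
        public function testTitleIsLoweredAndHyphenated()
        {
            $slugifier = new Slugifier();
            $this->assertEquals('useful-test-cases',
                                $slugifier->slugify('Useful Test Cases'));
        }
    }

    // The implementation written to satisfy the test, and nothing more.
    class Slugifier
    {
        public function slugify($title)
        {
            return strtolower(str_replace(' ', '-', $title));
        }
    }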

100% coverage != 100% complete
Just because our test cases have 100% coverage does not mean that we have implemented code that is secure, robust or true to the specs. In the same vein, it is important to remember that just because the test cases all pass, it doesn’t mean our code follows the spec or is bug free in any way. So don’t be misled into thinking that your code will be bug free: your code is only as good as your tests.
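
A sketch of how this bites (applyDiscount is a hypothetical function): the single test below executes every line, so a coverage tool reports 100%, yet an obvious boundary bug survives:

    <?php
    // A minimal sketch; applyDiscount is a hypothetical function.
    require_once 'PHPUnit/Framework.php';

    function applyDiscount($price, $percent)
    {
        // Bug: $percent is never validated against the 0..100 range.
        return $price - ($price * $percent / 100);
    }

    class DiscountTest extends PHPUnit_Framework_TestCase
    {
        // This one test executes every line of applyDiscount, so a
        // coverage tool reports 100%, yet applyDiscount(50, 150)
        // would still happily return -25.
        public function testTenPercentOffAHundred()
        {
            $this->assertEquals(90, applyDiscount(100, 10));
        }
    }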

Test for the unexpected
It is all well and good simply following the specification, or thinking that if we just test for the things we expect then our code will be complete, but this way of thinking opens up gaps for bugs, security issues & other types of flaws (crashes if we’re lucky). Test boundaries and invalid input, and ask what happens if the DB isn’t available, the configuration file doesn’t exist or the settings are incorrect. Doing so will save a lot of headaches and debugging in the future.
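
A minimal sketch of probing an unhappy path (ConfigLoader is a hypothetical class), asserting that a missing configuration file fails loudly rather than silently:

    <?php
    // A minimal sketch; ConfigLoader is a hypothetical class.
    require_once 'PHPUnit/Framework.php';

    class ConfigLoader
    {
        public function load($path)
        {
            if (!file_exists($path)) {
                throw new RuntimeException("Config file not found: $path");
            }
            return parse_ini_file($path);
        }
    }

    class ConfigLoaderTest extends PHPUnit_Framework_TestCase
    {
        // The spec never mentions a missing file; test it anyway.
        public function testMissingConfigFileRaisesAnError()
        {
            $this->setExpectedException('RuntimeException');
            $loader = new ConfigLoader();
            $loader->load('/no/such/file.ini');
        }
    }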

Adding test cases not covered by specs
Often the specs will only go as far as giving you the bare minimum of what the system requires to be functional. Because of this, it is easy to leave oversights untested unless you add your own cases on top (see Test for the unexpected).

Maintain a list of test cases to write

From experience, the best thing to do is to look over the specifications beforehand and formulate a list of test cases, sorting them in order from the quickest to test to the longest (completing the longest usually means the session is finished). Once the list is complete, check it over and add any tests you may have missed (see Adding test cases not covered by specs). Write down any new tests that come to mind during the session (ordering them where needed); this will help you keep track of what needs to be tested next. Each time you start working on the test cases again, rewrite the list, as this will help you get back into the state of mind you were in last session. With a little practice this will become an indispensable tool in your toolkit.

YAGNI
Just because you may need something later is not a good enough reason for implementing it now. Remove it if ‘You Aren’t Going to Need It’.

KISS
Keep test and implementation code as simple as possible. Ask yourself whether this has already been done within our code base or by our toolkit (ZendFramework/Cake, what have you). This should save you time reinventing the wheel (nine times out of ten, pre-existing code will have been tested more).

Meaningful unit test names
To help decrease debugging time and the ‘WTF’ factor from other developers, it is always a good idea to give your unit tests meaningful names. If your unit test checks that your project object has a title, then the name of your unit test function should be something like projectHasATitleTest(); this will help you & other developers quickly see what the test is supposed to do, rather than marvelling over cryptic test names like mustHaveParams().
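
As a rough PHPUnit sketch of the naming advice (Project is a hypothetical class; note that PHPUnit itself discovers methods by a test prefix, so the same idea reads as testProjectHasATitle()):

    <?php
    // A minimal sketch; Project is a hypothetical class.
    require_once 'PHPUnit/Framework.php';

    class Project
    {
        private $title;

        public function __construct($title) { $this->title = $title; }

        public function getTitle() { return $this->title; }
    }

    class ProjectTest extends PHPUnit_Framework_TestCase
    {
        // Clear at a glance, unlike a cryptic mustHaveParams().
        public function testProjectHasATitle()
        {
            $project = new Project('Useful test cases');
            $this->assertEquals('Useful test cases', $project->getTitle());
        }
    }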

One assertion at a time
This is another helpful tip, brought to my attention by Dave Bishop; I had totally bypassed it as one of the crucial time savers. There are times when you think to yourself, ‘hey, I could just put this assertion in with this unit test’, and I say this as someone who has placed multiple assertions in a unit test. It is not a good idea: when a unit test with multiple assertions fails, it not only takes time to determine which assertion is failing, but you also can’t guarantee that it is the only failing one. To save yourself the headache, keep to one assertion per unit test.
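
A minimal sketch of the difference (Invoice is a hypothetical class): the combined test masks its second assertion whenever the first fails, while the split tests report each failure separately in a single run:

    <?php
    // A minimal sketch; Invoice is a hypothetical class.
    require_once 'PHPUnit/Framework.php';

    class Invoice
    {
        public function total()    { return 120; }
        public function currency() { return 'GBP'; }
    }

    class InvoiceTest extends PHPUnit_Framework_TestCase
    {
        // Avoid: if total() breaks, the currency assertion never runs,
        // so a second failure can hide behind the first.
        public function testInvoice()
        {
            $invoice = new Invoice();
            $this->assertEquals(120, $invoice->total());
            $this->assertEquals('GBP', $invoice->currency());
        }

        // Prefer: one assertion per test; each failure is reported
        // separately in a single run.
        public function testInvoiceTotalIsCalculated()
        {
            $invoice = new Invoice();
            $this->assertEquals(120, $invoice->total());
        }

        public function testInvoiceCurrencyIsGbp()
        {
            $invoice = new Invoice();
            $this->assertEquals('GBP', $invoice->currency());
        }
    }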

Credits
Thanks again to Dave Bishop for his comments on ‘Meaningful unit test names’ & ‘One assertion at a time’.