Posts Tagged ‘TDD’

Lighting up the tunnErl Pt.8 – Testing processes

March 13, 2009

Well, after months of trying to get my head around running unit tests against a process, I think I've finally got it sussed.

Simple unit tests
Well, I'm sure you have seen these types of Erlang tests before:

-module(test_foo).
-include_lib("eunit/include/eunit.hrl").

%% The function under test; it simply echoes its argument.
say(Word) -> Word.

foo_test_() ->
    [?_assertEqual("foo", say("foo")),
     ?_assertEqual("bar", say("bar"))].

The above code basically asserts that the function say/1 returns what we expect. Okay, so what if we want to set up a process, let's say with chatterl:start(), and test its functionality? How would we do that? Well, it took me a while to get my head around this part.

I have noticed that if any processes are left alive at the end of a unit test, they will still be active while the other tests run. So if for some reason you find previously passing test cases starting to fail, first check that you have killed all the necessary processes in every test.

Creating simple units was a walk in the park and nothing really different from the standard units I’ve created in various C clones. The doozy was creating units that focus on a specific process rather than a method or function.

Testing processes

chatterl_mid_man_basics_test_() ->
    [{setup,
      fun() -> chatterl:start() end,
      fun(_) -> chatterl:stop() end,
      [{timeout, 5000,
        fun() ->
            ?assertEqual(<<"Illegal content type!">>,
                         check_json(mochijson2:decode(chatterl_mid_man:user_list("text/json")))),
            ?assertEqual([], chatterl_serv:list_users()),
            ?assertEqual({struct, [{<<"clients">>, []}]},
                         check_json(mochijson2:decode(chatterl_mid_man:user_list(["text/json"]))))
        end}]}].

The above test is from chatterl's test cases. It starts chatterl in the setup fun, runs the units within the final fun/0, and once they have run the cleanup fun stops chatterl (which will drop all its connected processes). The {timeout, ...} tuple gives the units extra time to complete before EUnit aborts the test (worth noting: EUnit timeout values are in seconds, so {timeout, 5000, ...} actually allows 5000 seconds, not 5000 msecs). I've used a timeout because the processes I am working with seem to need some time to be dropped before the next test is started; without it I'd get all kinds of unrelated errors, or complaints that the process is still alive.

Though this approach seems cumbersome, especially as I'm used to a single setup/teardown method that does the same thing each time a unit is run, it handles our tests pretty well.

For the extra curious, you can find the test cases in chatterl/libs/chatter/src/chatterl_test.erl on GitHub.

You can also check out Kevin Smith's code, as he seems pretty up to scratch with his tests; membox is a good place to start.


TDD Patterns

January 28, 2009

Here are a few tips that I use to keep myself on track. Like the other TDD-related notes, I'll keep these updated as time goes by.

It is often easy to get overwhelmed by a task & lose sight of what you are supposed to be testing and how to go about it. Below are some pointers/patterns to help you run through the TDD process.

One Step Test
Each test should represent one step towards our overall goal.
If no existing test represents such a step, create one.

Starter Test
What test do we start with? Pick a test that will teach you something about the system & will be quick to get working.
The One Step Test plays a hand in this; after realising the starter test it becomes easier to realise other test cases.

Learning Test
Use tests to help yourself learn about a particular architecture. By testing a library/framework you can find yourself becoming quite accustomed to its uses, not to mention checking that the API works as you expected.
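A minimal sketch of what a learning test might look like in PHPUnit (the behaviour pinned down here, PHP's json_encode, is just an illustration):

<?php
require_once 'PHPUnit/Framework.php';

// A learning test changes none of our own code; it simply records our
// understanding of how a third-party API behaves.
class JsonEncodeLearningTest extends PHPUnit_Framework_TestCase
{
    public function testAssociativeArrayEncodesAsObject()
    {
        $this->assertEquals('{"name":"foo"}', json_encode(array('name' => 'foo')));
    }

    public function testNumericArrayEncodesAsList()
    {
        $this->assertEquals('[1,2,3]', json_encode(array(1, 2, 3)));
    }
}

If the library is later upgraded and one of these tests breaks, you know immediately that your assumptions about the API no longer hold.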

Another Test
If a new idea is realised, write a test for it. It's easy to get taken off track, so by writing down new tests we retain the ideas while keeping on with the task at hand.

Regression Test
When a defect is reported, the first thing we do is write the smallest possible test that fails; the defect is repaired once that test passes.
This gives the client a concrete way of explaining what is wrong and what they expect.
On a small scale, regression testing can help you to improve your testing.
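As a sketch, suppose a hypothetical defect report reads "usernames with surrounding spaces are rejected"; the User class and its isValidUsername() method below are made up for illustration:

<?php
require_once 'PHPUnit/Framework.php';

class UserRegressionTest extends PHPUnit_Framework_TestCase
{
    // The smallest test that reproduces the reported defect. It fails
    // until the bug is fixed, and stays in the suite afterwards so the
    // bug can never silently return.
    public function testUsernameWithSurroundingWhitespaceIsAccepted()
    {
        $user = new User();
        $this->assertTrue($user->isValidUsername('  dave  '));
    }
}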

Break
Having a problem realising a solution or implementing a test? Take a break: take a walk, get a drink, have a rest. Anything that allows you to momentarily detach yourself from the problem at hand normally alleviates the feeling of hitting a brick wall. Generally, the more fatigued you are, the worse your judgement calls become, spiralling into worse decisions and the issues that arise from them.


SimpleTest Vs PHPUnit

January 28, 2009

Ideology

We want to be creating tests for every piece of functionality being developed. This will help us to keep our project scalable, as well as alerting us to any state or behavioural errors/smells that may arise over the project's lifetime.

Tests are typically used to make a project as stable as possible, with the aim of spending as little time as possible on debugging and error finding.

As you may know from personal experience, it can be more than a pain to figure out where a particular error is coming from, not to mention what fired it off in the first place. Test cases are there to relieve this: firstly by testing each expected and unexpected response in the test case, and secondly by allowing us to create situations that would rarely occur otherwise, letting us deal with those issues before they come up rather than waiting for them to appear later in development or, worse yet, in production after a compromise.

Test suites are not just a safety net; they can give the developer a better understanding of the implementation of the system, as well as acting as documentation for other developers, describing what the developer does and doesn't expect from each case.

The idea is to incrementally create test cases and then accompany them with the actual implementation of the functionality in question. The PHPUnit tutorials explain this procedure pretty well, so I will not reiterate (see http://www.phpunit.de/pocket_guide/3.2/en/test-first-programming.html for more info). Developing this way not only helps find bugs as soon as they appear, it also helps to find them later down the line. TDD also helps to reveal oversights in the design and implementation of the system, allowing us to deal with them as soon as they appear.
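As a rough sketch of that test-first flow, here the Basket class (a made-up example) does not exist yet; the test below is written first, fails, and then just enough of Basket is implemented to make it pass:

<?php
require_once 'PHPUnit/Framework.php';

class BasketTest extends PHPUnit_Framework_TestCase
{
    // Written before Basket exists; running it first proves the test
    // can fail, then the implementation is grown to satisfy it.
    public function testTotalSumsItemPrices()
    {
        $basket = new Basket();
        $basket->addItem('book', 7.50);
        $basket->addItem('pen', 2.50);
        $this->assertEquals(10.00, $basket->total());
    }
}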

Ideals

  • Test incrementally, creating the test first, then the implementation.
  • Have a testing suite that allows us to run tests via both a web browser and the command line (this needs to be possible with no change to code).
  • Tests are integrated into Phing, so tests are run before the system is deployed or updated.
  • Tests are able to run separately, as a group & as a whole.
  • Able to customise the results front end so we can view pass & fail results (useful to ascertain that we actually have the data we expect).
  • Use the Reflection API to test a class's structure (properties, access types, etc.); a sketch of this follows the list.
  • Test for the unexpected as well as expected results & errors.
  • Test exceptions & exception handling.
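To illustrate the Reflection ideal above, here is a minimal sketch (the User class and its password property are assumptions for the example):

<?php
require_once 'PHPUnit/Framework.php';

class UserStructureTest extends PHPUnit_Framework_TestCase
{
    // Uses PHP's Reflection API to assert the class's structure rather
    // than its behaviour: the property exists and is not public.
    public function testPasswordPropertyIsPrivate()
    {
        $class = new ReflectionClass('User');
        $this->assertTrue($class->hasProperty('password'));
        $this->assertTrue($class->getProperty('password')->isPrivate());
    }
}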

Findings

I've been looking into both PHPUnit3 & SimpleTest to determine the best test suite for us to use. Both are pretty good suites at a glance, but there are a few fundamental differences to be noted.

PHPUnit3

It is the most widely used and the most popular to date, though it does present a few problems. Since version 3, mock objects have been introduced, but they still lack the power that SimpleTest possesses. It can also only be run via a command prompt, so viewability can be an issue, especially as the suite grows. This can be alleviated with the use of reports generated once a test run completes, allowing testers to view the results without needing to know the actual command to run the suite. As of ZFW 1.6, Zend_Test_PHPUnit is integrated, allowing us to test our Zend application explicitly with PHPUnit. This is an obvious attraction, as Zend_Test_PHPUnit has functionality specific to the framework, allowing us to spend time on the actual tests and not on creating the scaffolding for them.
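A sketch of what a Zend_Test_PHPUnit controller test can look like; the bootstrap wiring and the controller/action names are assumptions for illustration:

<?php
require_once 'Zend/Test/PHPUnit/ControllerTestCase.php';

class IndexControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
{
    public function setUp()
    {
        // Point the test case at a callback that boots the application.
        $this->bootstrap = array($this, 'appBootstrap');
        parent::setUp();
    }

    public function appBootstrap()
    {
        // Initialise the front controller, routes, etc. here.
    }

    public function testHomePageDispatchesToIndexAction()
    {
        $this->dispatch('/');
        $this->assertController('index');
        $this->assertAction('index');
    }
}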

pros

  • Widely used, part of ZFW.
  • Loads of examples online.
  • Extended by Zend_Test_PHPUnit as of 1.6RC1.
  • Able to test controllers with no further extending.
  • Can create various types of reports.
  • Customisable test results.

cons

  • Mock objects not as fluent as SimpleTest's.
  • Cannot be run directly via a web browser.
  • Less functional than SimpleTest.

SimpleTest

SimpleTest is not as widely used as the above but has some fundamental differences. It allows us to not just test an object's validity but also to exercise our application in varying ways (checking its state and behaviour). With SimpleTest we are able to not just test the back-end integrity, but also check that the front end deals with situations as we expect it to.

pros

  • Can be used alongside PHPUnit.
  • Customisable test output & results.
  • Can be run via both the command line and a browser.
  • Can test state, behaviour & front-end functionality.

cons

  • Not as well documented as PHPUnit.
  • Will need to be extended to use with ZFW.
  • Not natively part of ZFW.

Over the past few years I've used both suites quite extensively and found that SimpleTest is by far the most flexible. First off, we're able to customise the display of our results so we can properly determine whether a test has genuinely passed; I've found that a passing test can sometimes be a false positive. SimpleTest allows us to display not just the test result, but also the actual result data. Mock objects are also exceptionally powerful in SimpleTest. As mentioned before, mock objects allow us to create instances of an object and set its return values; once this is done, we can test that a method is only run a given number of times, as well as testing results, behaviour & state, plus property types. On top of all that, it lessens the dependency issues that can arise from having to use real objects to test other objects (see http://simpletest.sourceforge.net/en/mock_objects_documentation.html for more info).
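A small sketch of the SimpleTest mock API described above (Logger and Payment are hypothetical classes):

<?php
require_once 'simpletest/autorun.php';

// Generates a MockLogger class we can substitute for the real thing.
Mock::generate('Logger');

class PaymentTest extends UnitTestCase
{
    public function testSuccessfulPaymentIsLoggedExactlyOnce()
    {
        $logger = new MockLogger();
        $logger->setReturnValue('log', true);
        // Fails if log() is called more or fewer than once,
        // or with different arguments.
        $logger->expectOnce('log', array('payment ok'));

        $payment = new Payment($logger);
        $payment->process();
    }
}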

Conclusion

Both suites can be used with the Zend Framework (SimpleTest needing some extending), and both have an Eclipse plugin (PHPUnit ships with ZFE out of the box) with a feature allowing developers to run unit tests within the IDE. Both need to be downloaded and placed somewhere PHP can see them (include_path/webroot), and both frameworks will allow us to test a system's state as well as its behaviours.

After initially going for SimpleTest, ZFW released 1.6RC1 (19/07/08), which includes a testing framework that allows us to test our controllers more easily. This is a large factor in the decision making, as it means that by using SimpleTest we would have to create a similar wrapper to what is already implemented within ZFW using PHPUnit3. For this reason I prefer to work with PHPUnit, with ZFW 1.7 giving me the functionality needed to test ZF-based applications.

I’ve added a couple of links to better explain the concept of stub and mock objects.

Resource Links

http://martinfowler.com/articles/mocksArentStubs.html – An excellent article explaining the difference between stubs & mocks

http://simpletest.org/api/ – SimpleTest

http://phpunit.de/pocket_guide/3.3/en/ – PHPUnit3


Useful test cases

January 27, 2009

So what makes a test useful? How can we make our tests improve our code?

It is easy to fall foul of creating tests that don't test what you actually want them to, or that don't do anything at all. My other observation is how easy it is to get lost in test paralysis. One of the most common reasons for this is people missing some of the key concepts behind TDD that make things a whole lot easier, which I've outlined below.

  • Commenting out tests is evil.
  • Test dependencies are evil.
  • Overuse of Mocks/Stubs/Wrappers.
  • Tests should instruct implementation.
  • 100% coverage != 100% complete.
  • Test for the unexpected.
  • Adding test cases not covered by specs.
  • Maintain a list of test cases to write.
  • YAGNI
  • KISS
  • Meaningful unit test names
  • One assertion at a time

Commenting out tests is evil
This usually means one of a few things:

  1. See Test dependencies are evil
  2. See YAGNI
  3. See KISS

Test dependencies are evil
Setting up objects & their dependencies within our test cases can easily introduce unexpected errors within our implementation code. A common example of this is setting up sessions in our setup functions, especially when using frameworks that take advantage of the MVC model; in either case these types of things should be done behind the scenes.

Overuse of Mocks/Stubs/Wrappers
It is very easy to overuse these principles, which in turn can break the pass/fail relationship between our implementation code & our test cases. We should only use wrappers, mocks & stubs to emulate hard-to-test functionality (see Test for the unexpected); never use them just to test methods that can be tested in simpler ways (see KISS).

Tests should instruct implementation
Having this paradigm in mind will help you create code that is not only easy to test but reusable & flexible. Knowledge and use of dependency injection, mocking, stubbing & fixtures will help to improve this; a sketch follows.
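As a sketch of how the test-first mindset pushes a design towards dependency injection (all class names here are made up):

<?php

interface Transport
{
    public function send($to, $body);
}

// Because Mailer receives its Transport rather than constructing one
// internally, a test can hand it a mock or stub transport, while
// production code hands it a real one.
class Mailer
{
    private $transport;

    public function __construct(Transport $transport)
    {
        $this->transport = $transport;
    }

    public function welcome($address)
    {
        return $this->transport->send($address, 'Welcome aboard!');
    }
}

Had Mailer called new SmtpTransport() internally, every test of welcome() would hit a real mail server; the injected interface is what makes the class testable in isolation.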

100% coverage != 100% complete
Just because our test cases have 100% coverage does not mean that our implemented code is secure, robust or following the specs. Similarly, just because the test cases all pass doesn't mean that our code follows the spec or is bug-free in any way. So don't be misled into thinking that your code will be bug-free; your code is only as good as your tests.

Test for the unexpected
It is all well and good simply following the specification, or thinking that if we just test for the things we expect our code will be complete, but this way of thinking opens up gaps for bugs, security issues & other types of flaws (crashes if we're lucky). Test boundaries, invalid input, what happens if the DB isn't available, the configuration file doesn't exist or the settings are incorrect. Doing so will save a lot of headaches and debugging in the future.
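For instance, a couple of 'unexpected path' tests might look like this sketch (Config is a hypothetical class under test, and the file paths are made up):

<?php
require_once 'PHPUnit/Framework.php';

class ConfigTest extends PHPUnit_Framework_TestCase
{
    // What happens when the configuration file doesn't exist?
    public function testMissingConfigFileThrowsException()
    {
        $this->setExpectedException('RuntimeException');
        new Config('/path/that/does/not/exist.ini');
    }

    // What happens when a setting is present but nonsensical?
    public function testNegativePortNumberIsRejected()
    {
        $config = new Config('tests/fixtures/valid.ini');
        $this->assertFalse($config->setPort(-1));
    }
}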

Adding test cases not covered by specs
Often the specs will only go as far as giving you the bare minimum of what the system requires to be functional. This being said, it is easy to miss things the specs don't cover (see Test for the unexpected).

Maintain a list of test cases to write

From experience, the best thing to do beforehand is to look over the specifications and formulate a list of test cases, sorting them in order from the quickest to test to the longest (completing the list usually means the session is finished). Once complete, check the list, adding any tests that you may be missing (see Adding test cases not covered by specs). Write down any new tests that come to mind within the session (ordering where needed); this will help you keep track of what needs to be tested next. Each time you start working on the test cases again, rewrite the list; this will help you to get back into the state of mind you were in last session. With a little practice this will become an indispensable tool in your toolkit.

YAGNI
Just because you may need something later is not a good enough reason for implementing it now. Remove it if 'You Aren't Going to Need It'.

KISS
Keep tests and implementation code as simple as possible. Ask yourself whether this has already been done before within your code base or toolkit (Zend Framework/Cake, what have you). This should save you time reinventing the wheel (nine times out of ten, preexisting code will have been tested more).

Meaningful unit test names
To help decrease debugging time and the 'WTF' factor from other developers, it is always a good thing to give your unit tests meaningful names. If your unit test checks that your project object has a title, then the name of your unit test function should be something like projectHasATitleTest(); this will help you & others quickly see what the test is supposed to do, rather than marvelling over cryptic test names like mustHaveParams().

One assertion at a time
This is another helpful tip, brought to my attention by Dave Bishop; I'd totally bypassed it as one of the crucial time savers. There are times when you think to yourself, 'hey, I could just put this assertion in with this unit test'. As someone that has placed multiple assertions in a unit test, I can say this is not a good idea: when a unit test with multiple assertions fails, not only does it take time to determine which assertion is failing, but you also can't guarantee that it is the only failing assertion. To save yourself this headache, keep to one assertion per unit test.
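A quick sketch of the difference, using a made-up Project class; rather than one test bundling several assertions, each behaviour gets its own clearly named test:

<?php
require_once 'PHPUnit/Framework.php';

class ProjectTest extends PHPUnit_Framework_TestCase
{
    // If this fails, the name alone tells us the title behaviour broke.
    public function testProjectHasATitle()
    {
        $project = new Project('Chatterl');
        $this->assertEquals('Chatterl', $project->getTitle());
    }

    // And this failing cannot mask, or be masked by, the test above.
    public function testNewProjectHasNoTasks()
    {
        $project = new Project('Chatterl');
        $this->assertEquals(0, $project->taskCount());
    }
}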

Credits
Thanks again to Dave Bishop for his comments on ‘Meaningful unit test names’ & ‘One assertion at a time’.