Archive for the ‘TDD’ Category


TDD Patterns

January 28, 2009

Here are a few tips I use to keep myself on track. Like the other TDD-related notes, I'll keep these updated as time goes by.

It is often easy to get overwhelmed by a task and lose sight of what you are supposed to be testing and how to go about it. Below are some pointers/patterns to help you run through the TDD process.

One Step Test
Each test should represent one step towards our overall goal.
If a test cannot be found to do this, create one.

Starter Test
What test do we start with? Pick a test that will teach you something about the system & will be quick to get working.
The One Step Test plays a hand in this; after realising the starter test, it becomes easier to realise other test cases.

Learning Test
Use tests to help yourself learn about a particular architecture. By testing a library/framework you find yourself becoming quite accustomed to its uses, not to mention checking that the API works as you expected.
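As a rough sketch of the idea (using PHP's built-in DateTime purely as the example library), a learning test might look something like this:

<?php
require_once 'PHPUnit/Framework.php';

// A learning test: not testing our own code, just pinning down how PHP's
// built-in DateTime behaves so our assumptions about it are recorded.
class DateTimeLearningTest extends PHPUnit_Framework_TestCase
{
    public function testModifyAddsOneDay()
    {
        $date = new DateTime('2009-01-28');
        $date->modify('+1 day');
        $this->assertEquals('2009-01-29', $date->format('Y-m-d'));
    }

    public function testFormatUsesLeadingZeros()
    {
        $date = new DateTime('2009-02-05');
        $this->assertEquals('05/02/2009', $date->format('d/m/Y'));
    }
}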

Another Test
If a new idea is realised, write a test for it. It is easy to get taken off track, so by writing down new tests we retain the ideas but keep on with the present task at hand.

Regression Test
When a defect is reported, the first thing we do is write the smallest possible test that will fail, then repair the code until it passes.
This gives the client a concrete way of explaining what is wrong and what they expect.
On a small scale, regression testing can help you to improve your testing.
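A hypothetical example of the shape such a regression test might take (the Basket class and the reported defect are invented purely for illustration):

<?php
require_once 'PHPUnit/Framework.php';

// Hypothetical regression test: the client reports that an empty basket
// returns NULL for its total instead of 0.00. Write the smallest failing
// test first, then fix the code until it passes.
class BasketRegressionTest extends PHPUnit_Framework_TestCase
{
    public function testEmptyBasketTotalIsZero()
    {
        $basket = new Basket();                      // Basket is an invented example class
        $this->assertEquals(0.00, $basket->total()); // fails until the defect is repaired
    }
}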

Break
Having a problem realising a solution or implementing a test? Take a break, take a walk, get a drink, have a rest. Anything that lets you momentarily detach yourself from the problem at hand normally alleviates the feeling of hitting a brick wall. Generally, the more fatigued you are, the worse your judgement calls become, spiralling into worse decisions and the issues that arise because of them.


SimpleTest Vs PHPUnit

January 28, 2009

Ideology

We want to be creating tests for every piece of functionality being developed. This will help keep the project scalable, as well as alerting us to any state or behavioural errors/smells that may arise over the project's lifetime.

Tests are typically used as a way of making a project as stable as possible, with a view to spending as little time as possible on debugging and error finding.

As you may know from personal experience, it can be more than a pain to figure out where a particular error is coming from, not to mention what fired it off in the first place. Test cases relieve this, firstly by testing each expected and unexpected response of the code under test. Secondly, they allow us to create situations that would rarely occur otherwise, so we can deal with those issues before they come up, rather than waiting for them to appear later in development or, worse yet, in production after a compromise.

Test suites are not just a safety net; they can give the developer a better understanding of the implementation of the system, as well as acting as documentation for other developers, describing what the author does and doesn't expect from each case.

The idea is to incrementally create test cases and then accompany them with the actual implementation of the functionality in question. The PHPUnit tutorials explain this procedure pretty well, so I will not reiterate (see http://www.phpunit.de/pocket_guide/3.2/en/test-first-programming.html for more info). Developing this way not only helps find bugs as soon as they appear, it also helps to find them later down the line. TDD also helps to realise oversights in the design and implementation of the system, allowing us to deal with them as soon as they appear.
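As a rough, hypothetical illustration of that test-first rhythm (the Slug class and its behaviour are invented for the example):

<?php
require_once 'PHPUnit/Framework.php';

// Step 1: write the test for the next small piece of functionality.
class SlugTest extends PHPUnit_Framework_TestCase
{
    public function testTitleIsConvertedToUrlSlug()
    {
        $this->assertEquals('tdd-patterns', Slug::fromTitle('TDD Patterns'));
    }
}

// Step 2: write just enough implementation to make the test pass,
// then refactor with the test as a safety net.
class Slug
{
    public static function fromTitle($title)
    {
        return strtolower(str_replace(' ', '-', trim($title)));
    }
}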

Ideals

  • Test incrementally, creating the test first, then the implementation.
  • Have a testing suite that allows us to run tests via a web browser and the command line (needs to be possible with no change to code).
  • Tests are integrated into Phing, so tests are run before the system is deployed or updated.
  • Tests are able to run separately, as a group & as a whole.
  • Able to customise the results front end so we can view pass & fail results (useful to ascertain that we actually have the data we expect).
  • Use the Reflection API to test a class's structure (properties, access type, etc.) (see the sketch after this list).
  • Test for the unexpected as well as expected results & errors.
  • Test exceptions & exception handling.
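As a rough sketch of the Reflection API ideal above (the Project class and its members are invented for illustration):

<?php
require_once 'PHPUnit/Framework.php';

// Sketch of using PHP's Reflection API inside a test to check a class's
// structure. 'Project' and its members are invented examples.
class ProjectStructureTest extends PHPUnit_Framework_TestCase
{
    public function testProjectClassStructure()
    {
        $reflected = new ReflectionClass('Project');

        // The class should expose a public save() method...
        $this->assertTrue($reflected->hasMethod('save'));
        $this->assertTrue($reflected->getMethod('save')->isPublic());

        // ...and keep its title property out of public reach.
        $this->assertTrue($reflected->hasProperty('_title'));
        $this->assertFalse($reflected->getProperty('_title')->isPublic());
    }
}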

Findings

I’ve been looking into both PHPUnit3 & SimpleTest to determine the best test suite for us to use. Both are pretty good suites at a glance, but there are a few fundamental differences to be noted.

PHPUnit3

It is the most widely used and the most popular to date, though it does present a few problems. Since version 3, mock objects have been introduced, but they still lack the power that SimpleTest possesses. It can also only be run via a command prompt, so viewability can be an issue, especially when the suite grows. This can be alleviated with the use of reports, which can be generated once a test run finishes, allowing testers to view the results without needing to know the actual command to run the suite. As of ZFW 1.6, Zend_Test_PHPUnit is now integrated, allowing us to test our Zend application explicitly with PHPUnit. This is an obvious attraction, as Zend_Test_PHPUnit has functionality specific to the framework, allowing us to spend time on the actual tests and not on creating the functionality for them.

pros

  • Widely used, part of ZFW.
  • Loads of examples online.
  • Extended by Zend_Test_PHPUnit as of 1.6RC1.
  • Able to test controllers with no further extending.
  • Can create various types of reports.
  • Customisable test results.

cons

  • Mock objects not as fluent as SimpleTest’s.
  • Cannot run directly via a web browser.
  • Less functional than SimpleTest.

SimpleTest

SimpleTest is not as widely used as the above, but has some fundamental differences. It allows us to not just test an object’s validity but also to test our application in varying ways (checking its state and behaviour). With SimpleTest we are able to test not just the back end’s integrity; we can also test that the front end deals with situations as we expect it to.

pros

  • Can be used with PHPUnit.
  • Can customise output.
  • Can be run via both command line and browser.
  • Customisable test results.
  • Can test state, behaviour & front-end functionality.

cons

  • Not as well documented as PHPUnit.
  • Will need extending to use with ZFW.
  • Not naturally a part of ZFW.

Over the past few years I’ve used both suites quite extensively and found that SimpleTest is by far the more flexible. First off, we can customise the display of our results so we can properly determine whether a test has genuinely passed; I’ve found that, though a test passes, it can sometimes be a false positive. SimpleTest allows us to display not just the test result but also the actual result data. Mock objects are also exceptionally powerful in SimpleTest: as mentioned before, mock objects allow us to create instances of an object and set its return values. Once this is done, we can then test to make sure a method is only run ‘x’ amount of times, as well as being able to test for results, behaviour & states, as well as property types. On top of all that, it lessens the dependency issues that can arise from having to use real objects to test other objects (see http://simpletest.sourceforge.net/en/mock_objects_documentation.html for more info).
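As a rough sketch of what a SimpleTest mock looks like in practice (PaymentGateway and OrderProcessor are invented examples, and the include paths will depend on where SimpleTest is installed):

<?php
// Paths are assumptions; adjust to wherever SimpleTest lives on include_path.
require_once 'simpletest/autorun.php';
require_once 'simpletest/mock_objects.php';

// Assumes the real PaymentGateway and OrderProcessor classes are already
// defined/required elsewhere; they are invented for this illustration.
Mock::generate('PaymentGateway');   // creates a MockPaymentGateway class

class OrderProcessorTest extends UnitTestCase
{
    public function testChargeIsCalledExactlyOnce()
    {
        $gateway = new MockPaymentGateway();
        $gateway->setReturnValue('charge', true); // canned return value
        $gateway->expectOnce('charge');           // behavioural expectation

        $processor = new OrderProcessor($gateway);
        $processor->process();
    }
}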

Conclusion

Both suites can be used with Zend Framework (SimpleTest needing some extending), and both have an Eclipse plugin (PHPUnit with ZFE out of the box) with a feature allowing developers to run unit tests within the IDE. Both need to be downloaded and placed somewhere PHP can see them (include_path/webroot), and both frameworks will allow us to test a system’s state as well as its behaviours.

After initially going for SimpleTest, ZFW released 1.6RC1 (19/07/08), which now includes a testing component that allows us to test our controllers more easily. This is a large factor in the decision making, as it means that by using SimpleTest we would have to create a similar wrapper to the one already implemented within ZFW using PHPUnit3. For this reason I prefer to work with PHPUnit, with ZFW 1.7 giving me the functionality needed to test ZF-based applications.
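To give an idea of what Zend_Test_PHPUnit buys us, here is a rough sketch of a controller test (the bootstrap wiring and routes are assumptions for illustration):

<?php
require_once 'Zend/Test/PHPUnit/ControllerTestCase.php';

// Sketch of a Zend_Test_PHPUnit controller test (ZF 1.6+); the bootstrap
// details below are placeholders for a real application bootstrap.
class IndexControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
{
    public function setUp()
    {
        // Point the test case at our application bootstrap callback.
        $this->bootstrap = array($this, 'appBootstrap');
        parent::setUp();
    }

    public function appBootstrap()
    {
        // Normally this would initialise the front controller, db, etc.
    }

    public function testHomePageDispatchesToIndexAction()
    {
        $this->dispatch('/');
        $this->assertController('index');
        $this->assertAction('index');
    }
}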

I’ve added a couple of links to better explain the concept of stub and mock objects.

Resource Links

http://martinfowler.com/articles/mocksArentStubs.html – An excellent article explaining the difference between stubs & mocks

http://simpletest.org/api/ – SimpleTest

http://phpunit.de/pocket_guide/3.3/en/ – PHPUnit3


Useful test cases

January 27, 2009

So what makes a test useful, and how can we make our tests improve our code?

It is easy to fall foul of creating tests that are not testing what you actually want them to, or that don’t do anything at all. My other observation is how easy it is to get lost in test paralysis. One of the most common reasons for this is people missing some of the key concepts behind TDD that make things a whole lot easier, which I’ve outlined below.

  • Commenting out tests is evil.
  • Test dependencies are evil.
  • Overuse of Mocks/Stubs/Wrappers.
  • Tests should instruct implementation.
  • 100% coverage != 100% complete.
  • Test for the unexpected.
  • Adding test cases not covered by specs.
  • Maintain a list of test cases to write.
  • YAGNI
  • KISS
  • Meaningful unit test names
  • One assertion at a time

Commenting out tests is evil
This usually means one of a few things:

  1. See Test dependencies are evil
  2. See YAGNI
  3. See KISS

Test dependencies are evil
Setting up objects & their dependencies within our test cases can easily introduce unexpected errors within our implementation code. A common example would be setting up sessions in our setUp functions, especially when using frameworks that take advantage of the MVC model; in either case, these types of things should be done behind the scenes.

Overuse of Mocks/Stubs/Wrappers
It is very easy to overuse these principles, which in turn can break the pass/fail relationship we have between our implementation code & our test cases. We should only use wrappers, mocks and stubs to emulate hard-to-test functionality (see Test for the unexpected); never use them just to test methods that can be tested in simpler ways (see KISS).

Tests should instruct implementation
Having this paradigm in mind will help create code that is not only easy to test but also reusable & flexible. The use and knowledge of dependency injection, mocking, stubbing & fixtures will help to improve this, as the sketch below illustrates.
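A small, hypothetical sketch of what this looks like in practice; the Mailer/Transport names are invented, and the point is simply that the injected dependency is what keeps the class testable:

<?php
// Letting the tests drive the design pushes us towards dependency injection:
// Mailer receives its transport rather than creating it, so a test can hand
// in a stub instead of a real SMTP connection. All names are invented.

interface Transport
{
    public function send($to, $message);
}

class Mailer
{
    private $_transport;

    // The dependency is injected, not instantiated internally,
    // which is what makes this class easy to test in isolation.
    public function __construct(Transport $transport)
    {
        $this->_transport = $transport;
    }

    public function welcome($email)
    {
        return $this->_transport->send($email, 'Welcome aboard');
    }
}

// In a test we can pass a hand-rolled stub instead of a real transport.
class StubTransport implements Transport
{
    public $sent = array();

    public function send($to, $message)
    {
        $this->sent[] = array($to, $message);
        return true;
    }
}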

100% coverage != 100% complete
Just because our test cases have 100% coverage does not mean that we have implemented code that is secure, robust or follows the specs. In light of this, it is also important to remember that just because the test cases all pass, it doesn’t mean that our code follows the spec or is bug free in any way. So don’t be misled into thinking that your code will be bug free; your code is only as good as your tests.

Test for the unexpected
It is all well and good simply following the specification, or thinking that if you just test for the things you expect, your code will be complete, but this way of thinking opens up gaps for bugs, security issues & other types of flaws (crashes if we’re lucky). Test boundaries, invalid input, what happens if the DB isn’t available, the configuration file doesn’t exist or the settings are incorrect. Doing so will save a lot of headaches and debugging in the future.
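A quick, hypothetical sketch of testing the unexpected path (the Config class and its methods are invented for illustration):

<?php
require_once 'PHPUnit/Framework.php';

// Sketch of testing the unexpected: what happens when the config file is
// missing or a setting is blank? 'Config' is an invented example class.
class ConfigTest extends PHPUnit_Framework_TestCase
{
    public function testMissingConfigFileThrowsException()
    {
        $this->setExpectedException('RuntimeException');
        Config::load('/path/that/does/not/exist.ini');
    }

    public function testEmptyHostSettingIsRejected()
    {
        $this->setExpectedException('InvalidArgumentException');
        Config::fromArray(array('db.host' => ''));
    }
}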

Adding test cases not covered by specs
Often the specs will only go as far as giving you the bare minimum of what the system requires to be functional. This being the case, it is easy to miss possible oversights (see Test for the unexpected).

Maintain a list of test cases to write

From experience, the best thing to do is to look over the specifications beforehand and formulate a list of test cases, sorting them in order from the quickest to test to the longest (which, when complete, usually means the session is finished). Once complete, check the list, adding any tests that you may be missing (see Adding test cases not covered by specs). Write down any new tests that come to mind within the session (ordering where needed); this will help you keep track of what needs to be tested next. Each time you start working on the test cases again, rewrite the list; this will help you get back into the state of mind you were in last session. With a little practice this will be an indispensable tool in your toolkit.

YAGNI
Just because you may need something later is not a good enough reason for implementing it now. Remove it if ‘You Aren’t Going to Need It’.

KISS
Keep tests and implementation code as simple as possible. Think to yourself: has this already been done before within our code base or with our toolkit (ZendFramework/Cake, what have you)? This should save you time reinventing the wheel (9 times out of 10, pre-existing code will have been tested more).

Meaningful unit test names
To help decrease debugging time and the ‘WTF’ factor from other developers, it is always a good thing to give your unit tests meaningful names. If your unit test is to check that your project object has a title, then the name of your unit test function should be something like projectHasATitleTest(); this will help you & others quickly see what the test is supposed to do & not marvel over cryptic test names like mustHaveParams().

One assertion at a time
This is another helpful tip, brought to my attention by Dave Bishop; I totally bypassed this as one of the crucial time savers. There are times when you think to yourself, ‘hey, I could just put this assertion in with this unit test’, and I say this as someone that has placed multiple assertions in a unit test. This is not a good idea: when a unit test with multiple assertions fails, it not only takes time to determine which assertion is failing, but you also can’t guarantee that it is the only failing assertion. To save yourself this headache, keep to one assertion per unit test, as in the sketch below.
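A small sketch of what this looks like in practice (the Project class is invented for illustration):

<?php
require_once 'PHPUnit/Framework.php';

// Instead of piling assertions into one test, split them so a failure
// points at exactly one expectation. 'Project' is an invented example.
class ProjectTest extends PHPUnit_Framework_TestCase
{
    public function testProjectHasATitle()
    {
        $project = new Project('Fixtures library');
        $this->assertEquals('Fixtures library', $project->getTitle());
    }

    public function testNewProjectStartsWithNoTasks()
    {
        $project = new Project('Fixtures library');
        $this->assertEquals(0, count($project->getTasks()));
    }
}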

Credits
Thanks again to Dave Bishop for his comments on ‘Meaningful unit test names’ & ‘One assertion at a time’.


Zend_PHPUnit_Fixtures uploaded to GitHub

December 4, 2008

Well, I’ve been working on this project (Zend PHPUnit Fixtures) along with a few colleagues over the past few months. It was my intention to submit it to Zend at some point in the new year, but I thought I’d put it out in the wild for others to pick at.

I use TDD (PHPUnit is my friend) on a daily basis within ZendFramework and soon found it cumbersome to create test data (fixtures), ending up with huge test cases containing more test data setup than actual assertions. This was matched with the fact that I hate integrating the DB into my tests (mainly because the tables would still contain old test data & fail depending on whether the data was added in previous tests or not). So I decided to do some digging; the best solution seemed to be to create a fixture handling system that plugs into ZendFramework, leaving myself and others with more time actually testing and less time building up test data and removing fixtures from a database (which can be near impossible when it comes to automated testing).

With this realisation I decided to create my own, which I’ve reverse engineered from CakePHP. When I have the time (hopefully over the holidays) I’ll go into how the mini framework can be used. For now I’ll just describe the features of each class and what they are for:

PHPUnit_Fixtures

Basic fixture handler, used for creating test data that does not interact with a DB. With this object we are able to create basic fixtures that we can use as dummy data within our test cases. Each piece of test data can have an alias (‘ALIAS’) with the alias name as the value; doing so allows us to use PHPUnit_Fixtures::find($aliasName), which will retrieve the desired fixture.
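As a purely hypothetical sketch of the alias idea described above (the property name and child-class layout are assumptions on my part; the GitHub project is the authority on the real API):

<?php
// Hypothetical sketch only: everything other than the 'ALIAS' key and the
// find() call described above is an assumption; check the GitHub project
// for the actual property and method names.
class UserFixture extends PHPUnit_Fixtures
{
    protected $_fixtures = array(
        array(
            'ALIAS'    => 'admin',            // alias used for lookups
            'username' => 'site.admin',
            'email'    => 'admin@example.com',
        ),
        array(
            'ALIAS'    => 'basicUser',
            'username' => 'joe.bloggs',
            'email'    => 'joe@example.com',
        ),
    );
}

// In a test case we could then pull back the aliased row as dummy data:
// $admin = $fixture->find('admin');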

PHPUnit_Fixtures_DB

Has the same functionality as PHPUnit_Fixtures but is used specifically for DB-centric tests. DB test data is added to our ‘_test’ DB and cleaned up (truncated) on each test case, to make sure that we have the expected data.

PHPUnit_Fixtures_DynamicDB

Has the same functionality as PHPUnit_Fixtures_DB, with the added ability to create tables set up by MySQL Workbench. With a child object of this class we are able to retrieve either all schemas or a specific one (denoted by the schema table name).

DevelopmentHandler

Used to handle our development environments. There are times when we want to quickly place test data on our staging DB for functionality testing and the like; this class, along with one of our PHPUnit_Fixtures, will easily allow us to populate this environment with the data we have been using for our unit tests, making it quicker to migrate test data from one place to another.

You can find the project at GitHub. If anyone’s interested in adding to this project or has any comments/questions, drop me a line.


Why test anyway?

October 27, 2008

Before I start I’d like to point out that this is purely from my perspective and a pretty subjective point of view. There are so many articles out there from an objective standpoint that I thought I’d do this from my own, seeing as personal accounts are pretty much one-sided & usually only highlight the good points. Though I’m obviously a TDD advocate, I will try to address the issues I have come across whilst working with the methodology.

I know I’m not the only one who hates spending hours trying to track a bug & having phrases like “where did that bug come from?” and “I didn’t change that” swish around my brain.

Over the years I’ve learnt that there is no such thing as a perfect solution, but I have adopted TDD purely because I have always felt that software projects lacked the flexibility of car designers (who test their concepts before making a final decision). Being security conscious and taking pride in my work, I thought it would be of benefit to explore the practice first hand.

I’ve always hated pros & cons, especially as some people’s pros are other people’s cons, so I’ll just bullet each methodology’s attributes that I am aware of; make of it what you will.

No tests

  • Quick to initially code.
  • No need for test suite.
  • Can be hard to trace errors & debug.
  • Can lead to misconceptions in design.
  • Changes could break existing code without anyone knowing, and/or introduce new bugs.
  • Developers feel uneasy making changes to the system.
  • Debugging gets left until last, or even left out entirely, due to time constraints.
  • Functionality can get left out or duplicated.

Tests

  • Need a test suite at hand.
  • Able to test each module as its built.
  • Easy to create dependencies within test cases.
  • Ideas can be played with without actually implementing production code.
  • Need to adopt a different way of approaching problems.
  • Easy to track when the application is deliverable.
  • Errors/bugs can be picked up quicker.
  • There is always something to test.
  • Easy to recreate rare scenarios/exceptions.
  • Can take time to learn how to create useful tests.

With those points out of the way, I’ll explain my reasons for taking the testing route.

Though software development is maturing at a pace, we still have an urgent need to track change & check our assumptions. The quicker we can check that our assumptions are correct, the quicker we’ll find bugs in our software. I’m the first to admit I get bored quickly, and when my attention wanes so does the software I’m coding. Whilst testing, I find that I become more productive & my focus stays on the problem at hand rather than on tracking errors or debugging, as most of the errors are expected and handled quickly.

Below is a list of my loves & hates in regards to the testing process. Like I mentioned earlier, I don’t believe in perfect solutions, so I think it prudent not to gloss over the fact that just because something is tested doesn’t mean it does what it is supposed to, or that a project will end up better.

Loves

  • Sped up my understanding of the problem at hand.
  • Alerted me to errors I’ve overlooked.
  • Easy to replicate hard to detect errors & exceptions.
  • Allows me to test ideas before implementing them.
  • Very handy when doing exploratory development (not sure how something works).

Hates

  • Creating useful tests can take time to learn.
  • Just because a test passes doesn’t mean the code is bug free.
  • It is easy to create useless tests.
  • If not carefully planned, you can find yourself testing more than creating production code.

From personal experience I found that the more I grasped the concepts of testing (more so the TDD process, which I’ll go over in later blogs), the quicker I was able to implement functional code, whilst testing out any assumptions I may have had. This, along with a few other techniques (mainly design patterns & refactoring), has evolved my programming, allowing me to create flexible, reusable code which I end up using in a range of projects.


Up and coming blogs

October 20, 2008

Well, for a while I’ve been affected by the TDD bug, initially because I hate debugging; I also liked the notion of being able to catch errors before they arise.

I quickly noticed a lack of information on the practice within PHP (as that’s the main language I code with at work), so I decided a while ago to put my notes together. That soon turned into a couple of documents, which will now be transformed into articles focused primarily on the subject.

I’m far from an expert on anything but feel the need to share this information with all, as it matures. Hopefully after a few months there should be a nice little collection of articles concentrating on the subject.

I’m trying my best to be as objective as possible, but I also understand that I’m a pragmatist some of the time and a passionate hypocrite the rest of the time, so some of what I write may be biased (one always knows best). I’ll rectify anything I later find to be false or am enlightened to a better way about.