Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.

Category Archives: Automated Testing

Four Ways to Fake Time

Are your unit tests failing because the code-under-test is coupled to the system clock? In other words, does the method you are testing use the System.DateTime.Now property, and that dependency is making it hard to properly unit test the code?


This series of posts presents four ways to fake time as a means of improving testability. Specifically, we’ll look at these four techniques:

  1. The Optional ‘clock’ Parameter
  2. Brute Force Property Injection
  3. Inject The IClock Interface (sketched briefly below)
  4. Mock Isolation Framework
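
To give a feel for where the series is headed, technique 3 centers on hiding the system clock behind a small interface. Here is a rough sketch of what that abstraction might look like; the interface and class names below are illustrative only, not the code from the upcoming post.

public interface IClock
{
    DateTime Now { get; }
}

// Production implementation: delegate straight to the system clock.
public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}

// Test implementation: return whatever fixed moment the test arranges.
public class FakeClock : IClock
{
    private readonly DateTime _fixedNow;

    public FakeClock(DateTime fixedNow)
    {
        _fixedNow = fixedNow;
    }

    public DateTime Now
    {
        get { return _fixedNow; }
    }
}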

Time Is Bumming Me Out

Time is so rigid. It keeps on ticking, ticking … into the future. The code becomes dependent on the system clock. That dependency makes it hard to properly test the code. Worse, the test code cannot find a lurking bug … until *boom* … the bug blows the system up.

Perhaps an example would be helpful …

public bool Allowed()
{
    // Start date's month & day come from settings
    var startDate = new DateTime(
        DateTime.Now.Year,
        _settings.StartMonth,
        _settings.StartDay);

    // End date is 1 month after the start date
    var endDate = new DateTime(
        DateTime.Now.Year,
        _settings.StartMonth + 1, // This is the lurking bug!
        _settings.StartDay);

    if (DateTime.Now >= startDate &&
        DateTime.Now < endDate)
    {
        return true;
    }

    return false;
}

Notice the lurking bug in this code? Well, if the start month is December then a defect emerges at runtime. The error message you would see looks something like this:

System.ArgumentOutOfRangeException : Year, Month, and Day parameters describe an un-representable DateTime.

This defect is not too hard to notice if you envision a value of 12 in the _settings.StartMonth property. However, the problem with lurking bugs is that they go unnoticed until … bang! At some future date (all too often in production) the software hiccups.
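
You can see this failure in isolation with a tiny test. The test below is only a demonstration of the DateTime constructor’s behavior, not part of the post’s test suite: a month value of 13, which is what StartMonth + 1 becomes when StartMonth is 12, is rejected immediately.

[Test]
public void DateTimeConstructor_WithMonth13_ThrowsArgumentOutOfRangeException()
{
    // Month 13 is not a valid calendar month, so constructing the date throws.
    TestDelegate act = () => new DateTime(2011, 13, 1);

    Assert.Throws<ArgumentOutOfRangeException>(act);
}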

The following code sample shows a fairly typical unit testing approach, intended to test the Allowed method. With this test, the defect will only be found when the continuous integration server happens to run the tests in December.

[Test]
public void Allowed_WhenCurrentDateIsOutsideModificationWindow_ExpectFalse()
{
    // Arrange
    var startMonth = DateTime.Now.Month;

    var settings = new Mock<IModificationWindowSettings>();
    settings
        .SetupGet(e => e.StartMonth)
        .Returns(startMonth + 1);
    settings
        .SetupGet(e => e.StartDay)
        .Returns(1);

    var classUnderTest = new ModificationWindow(settings.Object);

    // Act
    var result = classUnderTest.Allowed();

    // Assert
    Assert.AreEqual(false, result);
}

You don’t want to go into work on 1-Dec and find that this test is suddenly failing. The test method fails in December because that’s when startMonth is 12 and the defect is revealed. You’ll have a sickening thought, “Wait a second … is it failing in production?”

The production code is not working as intended, and worse than that, the test code never found the issue before the defect went into production.

If you revise the test code to include a few test cases, like the ones shown below, then the boundary conditions are covered and the defect is found every time these test cases are run. However, this test method has another flaw. It cannot pass for all three cases because the code-under-test has a dependency on DateTime.Now.

[TestCase(1)]
[TestCase(5)]
[TestCase(12)]
public void Allowed_WhenCurrentDateIsOutsideModificationWindow_ExpectFalse(
    int startMonth)
{
    // Arrange
    var settings = new Mock<IModificationWindowSettings>();
    settings
        .SetupGet(e => e.StartMonth)
        .Returns(startMonth + 1);
    settings
        .SetupGet(e => e.StartDay)
        .Returns(1);

    var classUnderTest = new ModificationWindow(settings.Object);

    // Act
    var result = classUnderTest.Allowed();

    // Assert
    Assert.AreEqual(false, result);
}

Before we investigate any further, let’s go back and revise the code-under-test to fix the bug. Here is an improved implementation of the Allowed method.

public bool Allowed()
{
    // Start date's month & day come from settings
    var startDate = new DateTime(
        DateTime.Now.Year,
        _settings.StartMonth,
        _settings.StartDay);

    // End date is 1 month after the start date
    var endDate = startDate.AddMonths(1);

    if (DateTime.Now >= startDate &&
        DateTime.Now < endDate)
    {
        return true;
    }

    return false;
}

Imagine that today is Friday the 13th of January 2012, and consider the complementary test method, Allowed_WhenCurrentDateIsInsideModificationWindow_ExpectTrue, run with the same three start months. Only the first case passes: January 13th falls inside a window that starts on the 1st of January, but not inside one that starts in May or December. We want all three test cases to pass every time they are run, regardless of today’s date.

Fake Time 1: The Optional ‘Clock’ Parameter

Let’s change the code-under-test so that time is provided as an optional parameter. This approach is made possible by C# 4.0, which shipped with .NET Framework 4.0 and allows developers to define optional parameters.

public bool Allowed(DateTime? clock = null)
{
    var now = clock ?? DateTime.Now; // If no clock is provided then use Now.

    // Start date's month & day come from settings
    var startDate = new DateTime(
        now.Year,
        _settings.StartMonth,
        _settings.StartDay);

    // End date is 1 month after the start date
    var endDate = startDate.AddMonths(1);

    if (now >= startDate &&
        now < endDate)
    {
        return true;
    }

    return false;
}

Now that the code-under-test accepts this ‘clock’ parameter, the test method is modified, as follows. All three test cases now pass.

[TestCase(1)]
[TestCase(5)]
[TestCase(12)]
public void Allowed_WhenCurrentDateIsInsideModificationWindow_ExpectTrue(
    int startMonth)
{
    // Arrange
    var settings = new Mock<IModificationWindowSettings>();
    settings
        .SetupGet(e => e.StartMonth)
        .Returns(startMonth);
    settings
        .SetupGet(e => e.StartDay)
        .Returns(1);

    var currentTime = new DateTime(
        DateTime.Now.Year,
        startMonth,
        13);

    var classUnderTest = new ModificationWindow(settings.Object);

    // Act
    var result = classUnderTest.Allowed(currentTime);

    // Assert
    Assert.AreEqual(true, result);
}

Pros:

  1. The simplest thing that could possibly work; a KISS approach
  2. Minimal impact to method callers
  3. Isolated changes, lower risk
  4. Testability is greatly improved

Cons:

  1. Requires .NET Framework 4 (C# 4.0 optional parameters)
  2. Method callers are able to pass improper dates and times, invalidating the method’s expected behavior
  3. Improves testability only one method at a time
  4. Adds testing-use-only parameters to method signatures

I recommend this approach when working with legacy or Brownfield code that has been brought up to .NET 4, where a minimally invasive, very isolated technique is called for.

In the next part of this Fake Time series we’ll look at a brute force, property injection approach.
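
Until then, here is a rough sketch of the general shape a property-injection approach can take. The property name and the use of a Func<DateTime> are my own illustrative choices, not the code from the next post; the idea is simply that tests overwrite a settable member, while production code leaves the default, which reads the real clock.

public class ModificationWindow
{
    private readonly IModificationWindowSettings _settings;

    public ModificationWindow(IModificationWindowSettings settings)
    {
        _settings = settings;
        CurrentTimeProvider = () => DateTime.Now; // Default: the real system clock.
    }

    // Hypothetical property: a test sets this to a canned value, for example
    // classUnderTest.CurrentTimeProvider = () => new DateTime(2012, 5, 13);
    public Func<DateTime> CurrentTimeProvider { get; set; }

    public bool Allowed()
    {
        var now = CurrentTimeProvider();

        // Start date's month & day come from settings
        var startDate = new DateTime(
            now.Year,
            _settings.StartMonth,
            _settings.StartDay);

        // End date is 1 month after the start date
        var endDate = startDate.AddMonths(1);

        return now >= startDate && now < endDate;
    }
}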


The Prime Directive

When creating test cases, I find that using prime numbers helps avoid coincidental arithmetic issues and helps make debugging easier.

A common coincidental arithmetic problem occurs when a test uses the number 2. These three expressions: 2 + 2, 2 * 2, and System.Math.Pow(2, 2) are all equal to 4. When using the number 2 as a test value, there are many ways the test falsely passes. Arithmetic errors are less likely to yield an improper result when the test values are different prime numbers.
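
To make the coincidence concrete, here is a small illustrative test (not from the original post) showing how the number 2 collapses three different operations to the same answer, while distinct primes keep them apart:

[Test]
public void CoincidentalArithmetic_TwosCollideButPrimesDoNot()
{
    // With 2 as the test value, addition, multiplication, and exponentiation
    // all produce 4, so a test cannot tell them apart.
    Assert.AreEqual(4, 2 + 2);
    Assert.AreEqual(4, 2 * 2);
    Assert.AreEqual(4, (int)System.Math.Pow(2, 2));

    // With distinct primes, a wrong operator in the code-under-test produces
    // a visibly wrong answer and a failing assertion.
    Assert.AreEqual(10, 3 + 7);
    Assert.AreEqual(21, 3 * 7);
    Assert.AreEqual(2187, (int)System.Math.Pow(3, 7));
}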

Consider a loan that has a principal of $12,000.00, a term of 360 payments, an annual interest rate of 12%, and, of course, don’t forget that there are 12 months in a year. Because the coincidental factor is 12 in all these numbers, this data scenario is a very poor choice as a test case.

In this code listing, the data-driven test cases use prime numbers and prime-derived variations to create uniqueness.

[TestCase(7499, 1.79, 0, 72.16)]
[TestCase(7499, 1.79, -1, 72.16)]
[TestCase(7499, 1.79, -73, 72.16)]
[TestCase(7499, 1.79, int.MinValue, 72.16)]
[TestCase(7499, 1.79, 361, 72.16)]
[TestCase(7499, 1.79, 2039, 72.16)]
[TestCase(7499, 1.79, int.MaxValue, 72.16)]
public void ComputePayment_WithInvalidTermInMonths_ExpectArgumentOutOfRangeException(
    decimal principal,
    decimal annualPercentageRate,
    int termInMonths,
    decimal expectedPaymentPerPeriod)
{
    // Arrange
    var loan =
        new Loan
            {
                Principal = principal,
                AnnualPercentageRate = annualPercentageRate,
            };

    // Act
    TestDelegate act = () => loan.ComputePayment(termInMonths);

    // Assert
    Assert.Throws<ArgumentOutOfRangeException>(act);
}

Here’s a text file that lists the first 1000 prime numbers: http://primes.utm.edu/lists/small/1000.txt

Here’s a handy prime number next-lowest/next-highest calculator: http://easycalculation.com/prime-number.php

Also, I find that it’s often helpful to avoid using arbitrary, hardcoded strings. When the content of the string is unimportant, I use Guid.NewGuid().ToString(), or I write a test helper method like TestHelper.BuildString() to create random, unique strings. This helps avoid same-string coincidences.
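
A minimal sketch of such a helper might look like the following; the class and method names are just the convention I use, not a library API.

public static class TestHelper
{
    // Builds a random, unique string; an optional prefix makes the value
    // easier to recognize when a failing assertion prints it.
    public static string BuildString(string prefix = "")
    {
        return prefix + Guid.NewGuid().ToString("N");
    }
}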

Problem Prevention; Early Detection

On a recent project, a developer added a single line of code intended to fix a reported issue. It was checked in with a nice, clear comment. It was an innocuous change. If you had seen the change yourself, you’d probably have said it seemed reasonable and well considered. It certainly wasn’t a careless change made in haste. But it was a ticking time bomb; it was a devilish lurking bug (DLB).

Before we started our code-quality initiative the DLB nightmare would have gone something like this:

  • The change would have been checked in, built and then deployed to the QA/testing team’s system-test environment. There were no automated tests, and so, the DLB would have had cover-of-night.
  • QA/testing would have experienced this one code change commingled with a lot of other new features and fixes. The DLB would have had camouflage.
  • When the DLB was encountered the IIS service-host would have crashed. This would have crashed any other QA/testing going on at the same time. The DLB would have had plenty of patsies.
  • Ironically, the DLB would have only been hit after a seldom-used feature had succeeded. This would have created a lot of confusion. There would have been many questions and conversations about how to reproduce the tricky DLB, for there would have been many red herrings.
  • Since the DLB would seem to involve completely unrelated functionality, none of the developers, testers, or the PM would ever be clear on the root cause. No one would be sure of its potential impact or how to reproduce it; many heated debates would ensue. The DLB would have lots of political cover.
  • Also likely, the person who added the one line of code would not be involved in fixing the problem because time had since passed and misdirection led to someone else working on the fix. The DLB would never become a lesson learned.

With our continuous integration (CI) and automated testing in place, here is what actually happened:

  • First thing that morning the developer checked in the change and pushed it into the Mercurial integration repository.
  • The TeamCity Build configuration detected the code push, got the latest, and started the MSBuild script that rebuilt the software.
  • Once that rebuild was successful, the Automated Testing configuration was triggered and ran the automated test suite with the NUnit runner. Soon there were automated tests failing, which caused the build to fail and TeamCity notified the developer.
  • A few minutes later that developer was investigating the build failure.
  • He couldn’t see how his one line of code was causing the build to fail. I was brought in to consult with him on the problem. I couldn’t see any problem. We were perplexed.
  • Together the developer and I used the test code to guide our debugging. We easily reproduced the issue. Of course we did; the failing tests had painted a bulls-eye on the back of Mr. DLB. We quickly identified a fix for the DLB.
  • Less than an hour and a half after checking in the change that created the DLB, the same developer who had added that one line made the proper fix to resolve the originally reported issue without adding the DLB.
  • Shortly thereafter, our TeamCity build server re-made the build and all automated tests passed successfully.
  • The new build, with the proper fix, was deployed to the QA/testers and they began testing, having completely avoided an encounter with the DLB.

Mr. DLB involved a method calling itself recursively, a death-spiral, a very pernicious bug. Within just that one line of added code a seed was planted that sent the system into a stack overflow that then caused the service-host to crash.
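
The actual line of code isn’t reproduced here, but a classic way a single line plants that kind of seed is a member that accidentally calls itself instead of delegating to something else. Purely as a hypothetical illustration:

public class DiscountCalculator
{
    // Hypothetical illustration of a one-line death-spiral: this overload was
    // meant to delegate to the two-argument overload, but it calls itself.
    // Every call recurses until the process dies with a StackOverflowException,
    // which cannot be caught and takes the service host down with it.
    public decimal Apply(decimal amount)
    {
        return Apply(amount); // Should have been: return Apply(amount, 0.05m);
    }

    public decimal Apply(decimal amount, decimal rate)
    {
        return amount - (amount * rate);
    }
}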

Just think about it. Having our CI server in place and running automated tests is of immense value to the project:

  • Greatly reduces the time to find and fix issues
  • Detection of issues is very proximate in time to when the precipitating change is made
  • The developer who causes the issue [note: in this case it wasn’t carelessness] is usually responsible for fixing the issue
  • QA/testing doesn’t waste time being diverted and distracted trying to isolate, describe, reproduce, track or otherwise deal with the issue
  • Of the highest value, the issue doesn’t cause contention, confrontation or communication-breakdown between the developers and testers; QA never sees the issue

CI and automated testing are always running and always vigilant. They work to ensure that the system deployed to the system-test environment actually works the way the developers intend it to work.

If there’s a difference between the way the system works and the way the developer wants it to work then the team’s going to burn and waste a lot of time, energy and goodwill coping with a totally unintended and unavoidable conflict. If you want problem prevention then you have to focus on early detection.