Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.

Category Archives: General

Boundary Analysis

For every method-under-test there is a set of valid preconditions and arguments. It is the domain of all possible values that allows the method to work properly. That domain defines the method’s boundaries. Boundary testing requires analysis to determine the valid preconditions and the valid arguments. Once these are established, you can develop tests to verify that the method guards against invalid preconditions and arguments.

Boundary-value analysis is about finding the limits of acceptable values, which includes looking at the following:

  • All invalid values
  • Maximum values
  • Minimum values
  • Values just on a boundary
  • Values just within a boundary
  • Values just outside a boundary
  • Values that behave uniquely, such as zero or one

An example of a situational case for dates is a deadline or time window. You could imagine that for a student loan origination system, a loan disbursement must occur no earlier than 30 days before or no later than 60 days after the first day of the semester.

Another situational case might be a restriction on age, dollar amount, or interest rate. There are also rounding-behavior limits, like two digits for dollar amounts and six digits for interest rates, and physical limits to things like weight, height, and age. Both zero and one behave uniquely in certain mathematical expressions. Time zone, language and culture, and other test conditions could be relevant. Analyzing all these limits helps to identify the boundaries used in test code.

Note: Dealing with date arithmetic can be tricky. Boundary analysis and good test code make sure that the date and time logic is correct.
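
To make the disbursement-window example above concrete, here is a minimal sketch of boundary tests for the 30-days-before/60-days-after rule. The DisbursementWindow class and its IsAllowed method are hypothetical names, invented for this illustration:

[TestCase(-31, false)] // just outside the early boundary
[TestCase(-30, true)]  // just on the early boundary
[TestCase(0, true)]    // the first day of the semester
[TestCase(60, true)]   // just on the late boundary
[TestCase(61, false)]  // just outside the late boundary
public void IsAllowed_WithOffsetFromSemesterStart_ExpectBoundaryBehavior(
  int offsetInDays,
  bool expected)
{
  // Arrange
  var semesterStart = new DateTime(2012, 1, 17);
  var disbursementDate = semesterStart.AddDays(offsetInDays);

  // Act
  var actual = DisbursementWindow.IsAllowed(semesterStart, disbursementDate);

  // Assert
  Assert.AreEqual(expected, actual);
}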

Invalid Arguments

When the test code calls a method-under-test with an invalid argument, the method should throw an argument exception. This is the intended behavior, but to verify it requires a negative test. A negative test is test code that passes if the method-under-test responds negatively; in this case, throwing an argument exception.

The test code shown here fails because ComputePayment is provided an invalid termInMonths of zero; the test code is not expecting the exception that is thrown.


[TestCase(7499, 1.79, 0, 72.16)]
public void ComputePayment_WithProvidedLoanData_ExpectProperMonthlyPayment(
  decimal principal,
  decimal annualPercentageRate,
  int termInMonths,
  decimal expectedPaymentPerPeriod)
{
  // Arrange
  var loan =
    new Loan
    {
      Principal = principal,
      AnnualPercentageRate = annualPercentageRate,
    };

  // Act
  var actual = loan.ComputePayment(termInMonths);

  // Assert
  Assert.AreEqual(expectedPaymentPerPeriod, actual);
}

The result of the failing test is shown below; the test run fails with an unexpected exception.


LoanTests.ComputePayment_WithProvidedLoanData_ExpectInvalidArgumentException : Failed
System.ArgumentOutOfRangeException : Specified argument was out of the range of valid values.
Parameter name: termInPeriods
at Tests.Unit.Lender.Slos.Model.LoanTests.ComputePayment_WithProvidedLoanData_ExpectInvalidArgumentException(Decimal principal, Decimal annualPercentageRate, Int32 termInMonths, Decimal expectedPaymentPerPeriod) in LoanTests.cs: line 25

The challenge is to pass the test when the exception is thrown. Also, the test code should verify that the exception type is ArgumentOutOfRangeException. This requires the test to somehow catch the exception, evaluate it, and determine whether it is the expected one.

In NUnit this can be accomplished using either an attribute or a test delegate. In the case of a test delegate, the test method can use a lambda expression to define the action step to perform. The lambda is assigned to a TestDelegate variable within the Act section. In the Assert section, an assertion statement verifies that the proper exception is thrown when the test delegate is invoked.
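
For reference, here is a minimal sketch of the attribute approach; NUnit 2.x supports the ExpectedException attribute, while later NUnit versions favor the Assert.Throws form shown below:

[TestCase(7499, 1.79, 0, 72.16)]
[ExpectedException(typeof(ArgumentOutOfRangeException))]
public void ComputePayment_WithZeroTermInMonths_ExpectArgumentOutOfRangeException(
  decimal principal,
  decimal annualPercentageRate,
  int termInMonths,
  decimal expectedPaymentPerPeriod)
{
  // Arrange
  var loan =
    new Loan
    {
      Principal = principal,
      AnnualPercentageRate = annualPercentageRate,
    };

  // Act (the attribute passes the test only if this throws the declared exception)
  loan.ComputePayment(termInMonths);
}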

The invalid values for the termInMonths argument are found by inspecting the ComputePayment method’s code, reviewing the requirements, and performing boundary analysis. The following invalid values are discovered:

  • A term of zero months
  • Any negative term in months
  • Any term greater than 360 months (30 years)

Below, a new test is written to verify that the ComputePayment method throws an ArgumentOutOfRangeException whenever an invalid term is passed as an argument. These are negative tests, with expected exceptions.


[TestCase(7499, 1.79, 0, 72.16)]
[TestCase(7499, 1.79, -1, 72.16)]
[TestCase(7499, 1.79, -2, 72.16)]
[TestCase(7499, 1.79, int.MinValue, 72.16)]
[TestCase(7499, 1.79, 361, 72.16)]
[TestCase(7499, 1.79, int.MaxValue, 72.16)]
public void ComputePayment_WithInvalidTermInMonths_ExpectArgumentOutOfRangeException(
  decimal principal,
  decimal annualPercentageRate,
  int termInMonths,
  decimal expectedPaymentPerPeriod)
{
  // Arrange
  var loan =
    new Loan
    {
      Principal = principal,
      AnnualPercentageRate = annualPercentageRate,
    };

  // Act
  TestDelegate act = () => loan.ComputePayment(termInMonths);

  // Assert
  Assert.Throws<ArgumentOutOfRangeException>(act);
}

Invalid Preconditions

Every object is in some arranged state at the time a method of that object is invoked. The state may be valid or it may be invalid. Whether explicit or implicit, all methods have expected preconditions. When the preconditions are not spelled out, one goal of good test code is to test those assumptions, revealing the implicit expectations and turning them into explicit preconditions.

For example, before calculating a payment amount, let’s say the principal must be at least $1,000 and less than $185,000. Without knowing the code, these limits are hidden preconditions of the ComputePayment method. Test code can make them explicit by arranging the classUnderTest with unacceptable values and calling the ComputePayment method. The test code asserts that an expected exception is thrown when the method’s preconditions are violated. If the exception is not thrown, the test fails.

This code sample is testing invalid preconditions.


[TestCase(0, 1.79, 360, 72.16)]
[TestCase(997, 1.79, 360, 72.16)]
[TestCase(999.99, 1.79, 360, 72.16)]
[TestCase(185000, 1.79, 360, 72.16)]
[TestCase(185021, 1.79, 360, 72.16)]
public void ComputePayment_WithInvalidPrincipal_ExpectInvalidOperationException(
  decimal principal,
  decimal annualPercentageRate,
  int termInMonths,
  decimal expectedPaymentPerPeriod)
{
  // Arrange
  var classUnderTest =
    new Application(null, null, null)
    {
      Principal = principal,
      AnnualPercentageRate = annualPercentageRate,
    };

  // Act
  TestDelegate act = () => classUnderTest.ComputePayment(termInMonths);

  // Assert
  Assert.Throws<InvalidOperationException>(act);
}

Implicit preconditions should be tested and defined by a combination of exploratory testing and inspection of the code-under-test, whenever possible. Test the boundaries by arranging the class-under-test in improbable scenarios, such as negative principal amounts or interest rates.

Tip: Testing preconditions and invalid arguments prompts a lot of questions. What is the principal limit? Is it $18,500 or $185,000? Does it change from year to year?

More on boundary-value analysis can be found at Wikipedia: https://en.wikipedia.org/wiki/Boundary-value_analysis


Better Value, Sooner, Safer, Happier

Jonathan Smart says agility across the organization is about delivering Better Value, Sooner, Safer, Happier. I like that catchphrase, and I’m looking forward to reading his new book, Sooner Safer Happier: Antipatterns and Patterns for Business Agility.

But what does this phrase mean to you? Do you think other people buy it?

Better Value – The key word here is value. Value means many things to many people: managers, developers, end-users, and customers.

In general, executive and senior managers are interested in hearing about financial rewards. These are important to discuss as potential benefits of a sustained, continuous improvement process. They come from long-term investment. Here is a list of some of the bottom-line and top-line financial rewards these managers want to hear about:

  • Lower development costs
  • Cheaper to maintain, support and enhance
  • Additional products and services
  • Attract and retain customers
  • New markets and opportunities

Project managers are usually the managers closest to the activities of developers. Functional managers, such as a Director of Development, are also concerned with the day-to-day work of developers. For these managers, important values spring from general management principles, and they seek improvements in the following areas:

  • Visibility and reporting
  • Control and correction
  • Efficiency and speed
  • Planning and predictability
  • Customer satisfaction

End-users and customers are generally interested in deliverables. When it comes to quantifying value they want to know how better practices produce better results for them. To articulate the benefits to end-users and customers, start by focusing on specific topics they value, such as:

  • More functionality
  • Easier to use
  • Fewer defects
  • Faster performance
  • Better support

Developers and team leads are generally interested in individual and team effectiveness. Quantifying the value of better practices to developers is a lot easier if the emphasis is on increasing effectiveness. The common sense argument is that by following better practices the team will be more effective. In the same way, by avoiding bad practices the team will be more effective. Developers are looking for things to run more smoothly. The list of benefits developers want to hear about includes the following:

  • Personal productivity
  • Reduced pressure
  • Greater trust
  • Fewer meetings and intrusions
  • Less conflict and confusion

Quantifying value is about knowing what others value and making arguments that make rational sense to them. For many, hard data is crucial. Since there is such variability from situation to situation, the only credible numbers are the ones your organization collects and tracks. It is a good practice to start collecting and tracking some of those numbers and relating them to the development practices that are showing improvements.

Beyond the numbers, there are many observations and descriptions that support the new and different practices. It is time to start describing the success stories and collecting testimonials. Communicate how each improvement is showing results in many positive ways.

Fakes, Stubs and Mocks

I’m frequently asked about the difference between automated testing terms like fakes, stubs and mocks.

Fake is a general term for an object that stands in for another object; both stubs and mocks are types of fakes. The purpose of a fake is to create an object that allows the method-under-test to be tested in isolation from its dependencies, meeting one of two objectives:

1. Stub — Prevent the dependency from obstructing the code-under-test and to respond in a way that helps it proceed through its logical steps.

2. Mock — Allow the test code to verify that the code-under-test’s interaction with the dependency is proper, valid, and expected.

Since a fake is any object that stands in for the dependency, it is how the fake is used that determines if it is a stub or mock. Mocks are used only for interaction testing. If the expectation of the test method is not about verifying interaction with the fake then the fake must be a stub.
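
To make the distinction concrete, here is a minimal sketch using Moq. The ICreditBureau dependency and the LoanApproval class are hypothetical, invented for this illustration:

// Hypothetical types, for illustration:
//   public interface ICreditBureau { int GetScore(string applicantId); }
//   public class LoanApproval
//   {
//       public LoanApproval(ICreditBureau bureau) { ... }
//       public bool Approve(string applicantId) { ... }
//   }

// Used as a stub: the fake responds so the code-under-test can proceed,
// and the assertion is about the code-under-test.
var stub = new Mock<ICreditBureau>();
stub.Setup(b => b.GetScore("A-1001")).Returns(720);

var classUnderTest = new LoanApproval(stub.Object);
Assert.AreEqual(true, classUnderTest.Approve("A-1001"));

// Used as a mock: the assertion is about the interaction with the fake.
var mock = new Mock<ICreditBureau>();
mock.Setup(b => b.GetScore("A-1001")).Returns(720);

new LoanApproval(mock.Object).Approve("A-1001");
mock.Verify(b => b.GetScore("A-1001"), Times.Once());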

BTW, these terms are explained very well in The Art of Unit Testing by Roy Osherove.

There’s more to learn on this topic at Stack Overflow: https://stackoverflow.com/questions/24413184/difference-between-mock-stub-spy-in-spock-test-framework

First, the mental creation

With physical things, like buildings and devices, people seem to be generally okay with generating strong specifications, such as blueprints and CAD drawings. These specifications are often about trying to perfect the mental creation before the physical creation gets started. In these cases, the physical thing you’re making is not at all abstract, and it could be very expensive to make. It’s hard to iterate when you’re building a bridge that’s going to be part of an interstate roadway.

[Image: missed-it-by-that-much]

From what I’ve seen in the world of software, the physical creation seems abstract, and engineers writing software appear inexpensive compared to things like steel and concrete. Many people seem to want to skip the mental creation step, and they ask that the engineers jump right into coding the thing up.

If the increment your team is about to write software for is ready to be worked on, and you have a roadmap, then an iterative and incremental approach will probably work. In fact, the idea that complex problems require iterative solutions underpins the values and principles described in the Manifesto for Agile Software Development. The Scrum process framework describes this as product increments and iterations.

However, all too often it’s a random walk guided by ambiguous language (either written or spoken) that leads to software that lacks clarity, consistency, and correctness.

Sadly and all too often, the quality assurance folks are given little time to discover and describe the issues, let alone the time to verify that the issues are resolved. During these review sessions, engineers pull out their evidence that the software is working as intended. They have a photo of a whiteboard showing some hand-wavy financial formulas. They demonstrate that they’re getting the right answer with the math-could-not-be-easier example: a loan that has a principal of $12,000.00, a term of 360 payments, an annual interest rate of 12%, and, of course, don’t forget that there are 12 months in a year. And through anomalous coincidence, the software works correctly using that example. Hilarity ensues!

[Image: banks-care-about-rounding]

Unfortunately, QA is often squeezed. They are over-challenged and under-supported. They are given an absurd goal, and they persevere despite grave doubts about whether a quality release is achievable. Their hopelessness comes from knowing that the team is about to inflict this software on real people, good people, and the QA team doesn’t have the time or the power to stop the release.

[Image: the-dirty-remains]

What are the key things to take away?

  • When the physical creation of software seems to be cheap, comparable to the mental creation of writing down and agreeing to lightweight requirements, then the temptation exists to hire nothing but software engineers and to maximize the number of people who “just write code”. In the long run, however, the rework is often extremely expensive.
  • In the end, the end-users are going to find any defects.
  • Better to have QA independently verify and validate that the software works as intended; of course, what is intended needs to be written down: clearly, consistently, and correctly.
  • QA ought to find issues that can be resolved before the users get the software — issue reports that show clear gaps in required behavior.
  • QA conversations with engineers ought never degenerate into a difference of opinion, which happens when there are no facts about the required behavior. Very often these discussions (rhymes with concussion) escalate into a difference of values — “you don’t understand modern software development”.

Interestingly, the software’s users are going to find the most important defects. The users are the ultimate independent validators and verifiers. End-users, their bosses, and the buyer can be ruthless reporters of defects. In fact, they might very well reject the product.

You got to know your limitations

Engineers must only be limited by their intellect and available time, subject to a sustainable pace. Maturity and experience are important, too.

This quote is a paraphrase of something a boss from long ago said to me. Here are the things I did to change my professional life based on this revelation; I tried to:

  • Develop my intellect: Stretch my brain. Learn new skills. Acquire new knowledge. Play brain games. Start taking notes instead of always relying on my recall. Read a lot.
  • Increase my available time and set a sustainable pace: Debug my personal software process (see PSP). Read some relevant books: Personal Kanban. First Things First. Learn how to say no, especially to things that are not important. Define what’s important.
  • Raise my maturity level: Read some key books: Raising Your Emotional Intelligence, Working With Emotional Intelligence, and I’m OK — You’re OK. Go to cognitive behavioral therapy (CBT) for my professional and personal challenges, especially to better cope with difficult people.
  • Have many and varied experiences: Don’t get the same 1 year of experience for 20 years. Switch jobs. Work with people who challenge me. Don’t make the same mistake once (learn from other people’s prior mistakes). Read ahead in life by learning about other people through their biographies and memoirs, especially their failures.

In the end, I continue to do all of these things that I tried, and I find them to be very helpful. I also find that I am able to be helpful to others if I’m working to be my best.

MSDN: .NET Framework Best Practices

For years now, Microsoft Developer Network (MSDN) has provided free online documentation to .NET developers. There are a lot of individual .NET best-practices topics, which are described at a high level at this MSDN link:
MSDN: .NET Framework Best Practices

This is a great MSDN article to read and bookmark if you’re interested in .NET best practices.

Best Practices for Strings

Just take a look at all the information within the MSDN topic of Best Practices for Using Strings in the .NET Framework. I am not going to be able to duplicate all of that. If you are developing an application that has to deal with culture, globalization, and localization issues then you need to know much of this material.

Before I go any further, let me introduce you to Jon Skeet. He wrote an awesome book, C# In Depth. I think you might enjoy reading his online article on .NET Strings: http://csharpindepth.com/Articles/General/strings.aspx

Okay, let’s get back to the MSDN article. Below I have highlighted a few of the Strings best practices that I’d like to discuss.

1. Use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.

In the .NET Framework, ToUpperInvariant is the standard way to normalize case. In fact, Visual Studio Code Analysis has rule CA1308 in the Globalization category to monitor this.

This is a really easy practice to follow once you know it.

Here is the key point I picked up from rule CA1308:

It is safe to suppress [this] warning message [CA1308] when you are not making security decision based on the result (for example, when you are displaying it in the UI).

In other words, take care to uppercase strings when the code is making a security decision based on normalized string comparison.

2. Use an overload of the String.Equals method to test whether two strings are equal.

Some of these overloads require a parameter that specifies the culture, case, and sort rules that are to be used in the comparison method. This just makes the string comparison you are using explicit.

3. Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal.

In the MSDN documentation for comparing Strings the guidance is quite clear:

The Compare method is primarily intended for use when ordering or sorting strings.
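
To put these three practices together, here is a small sketch; the values are made up for illustration:

using System;
using System.Collections.Generic;

public static class StringPracticesExample
{
    public static void Main()
    {
        string left = "Checking";
        string right = "CHECKING";

        // Practice 1: normalize with ToUpperInvariant, not ToLowerInvariant.
        string normalized = left.ToUpperInvariant();

        // Practice 2: use an Equals overload that makes culture and case explicit.
        bool areEqual = string.Equals(left, right, StringComparison.OrdinalIgnoreCase);

        // Practice 3: reserve Compare for ordering or sorting, not equality tests.
        var codes = new List<string> { "b", "A", "c" };
        codes.Sort((x, y) => string.Compare(x, y, StringComparison.OrdinalIgnoreCase));

        Console.WriteLine("{0} {1} {2}", normalized, areEqual, string.Join(",", codes));
    }
}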

All-In-One Code Framework

If you have not had a chance to take a look at the All-In-One Code Framework then please take a few minutes to look it over.

The Microsoft All-In-One Code Framework is a free, centralized code sample library driven by developers’ needs.

It is Microsoft Public License (Ms-PL), which is the least restrictive of the Microsoft open source licenses.

What’s relevant to this article is the All-In-One Code Framework Coding Standards document. You can find the download link at the top of this page: http://1code.codeplex.com/documentation

In that document, there is a very relevant and useful list of String best practices.

  • Do not use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate many strings. Instead, you should use StringBuilder for concatenation. However, do use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate small numbers of strings (see the sketch after this list).
  • Do use overloads that explicitly specify the string comparison rules for string operations. Typically, this involves calling a method overload that has a parameter of type StringComparison.
  • Do use StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase for comparisons as your safe default for culture-agnostic string matching, and for better performance.
  • Do use string operations that are based on StringComparison.CurrentCulture when you display output to the user.
  • Do use the non-linguistic StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase values instead of string operations based on CultureInfo.InvariantCulture when the comparison is linguistically irrelevant (symbolic, for example). Do not use string operations based on StringComparison.InvariantCulture in most cases. One of the few exceptions is when you are persisting linguistically meaningful but culturally agnostic data.
  • Do use an overload of the String.Equals method to test whether two strings are equal.
  • Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal. They are used to sort strings, not to check for equality.
  • Do use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.
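
To illustrate the first guideline above, here is a minimal sketch contrasting the ‘+’ operator with StringBuilder; the names and loop count are arbitrary examples:

using System.Text;

public static class ConcatenationExample
{
    public static string BuildCsv()
    {
        // A small number of strings: the '+' operator is fine and reads well.
        string firstName = "Ada";
        string lastName = "Lovelace";
        string fullName = firstName + " " + lastName;

        // Many strings in a loop: prefer StringBuilder, which avoids
        // allocating a new intermediate string on every iteration.
        var builder = new StringBuilder(fullName);
        for (int i = 0; i < 1000; i++)
        {
            builder.Append(',').Append(i);
        }

        return builder.ToString();
    }
}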

This post is part of my Compendium .NET Best-Practices series.

Where’s CAT.NET 2.0?

If you go to the Microsoft Security Development Lifecycle implementation page, you read about performing static analysis with CAT.NET. If you follow one of the download links it takes you to CAT.NET v1 CTP.

About a year ago the Beta version of CAT.NET 2.0 was out from the Microsoft Security Tools team. It looked very promising. Today, I am having trouble finding the download for CAT.NET 2.0. The link on the team’s CAT.NET 2.0 – Beta blog post is broken.

There is very little information on the Information Security Tools team’s Connect site.

Does Microsoft have an update on the Security Development Lifecycle tools?

Four Ways to Fake Time, Part 2

In Part 1 of this four-part series you learned how an implicit dependency on the system clock can make software difficult to test. The first post presented a very simple solution: pass the clock in as a method parameter. It is effective; however, adding a new parameter to every method of a class isn’t always the best solution.

Fake Time 2: Brute Force Property Injection

Here is a second way to fake time. It is brute force in the sense that it is rudimentary. Using full-blown dependency injection with an IoC container is left as an exercise for the reader. The goal of this post is to illustrate the principle and provide you with a technique you can use today.

Perhaps an example would be helpful …

using System;
using Lender.Slos.Utilities.Configuration;

namespace Lender.Slos.Financial
{
    public class ModificationWindow
    {
        private readonly IModificationWindowSettings _settings;

        public ModificationWindow(
            IModificationWindowSettings settings)
        {
            _settings = settings;
        }

        // This property is for testing use only
        private DateTime? _now;
        public DateTime Now
        {
            get { return _now ?? DateTime.Now; }
            internal set { _now = value; }
        }

        public bool Allowed()
        {
            var now = this.Now;

            // Start date's month & day come from settings
            var startDate = new DateTime(
                now.Year,
                _settings.StartMonth,
                _settings.StartDay);

            // End date is 1 month after the start date
            var endDate = startDate.AddMonths(1);

            if (now >= startDate &&
                now < endDate)
            {
                return true;
            }

            return false;
        }
    }
}

In this example code, the Allowed method changed very little from how it was written at the end of the first post. The primary difference is that there is no optional clock argument. The value of the now variable comes from the new class property named Now.

Let’s take a closer look at the Now property. First, it has a backing variable named _now, which is declared as a nullable DateTime. Second, since _now defaults to null, the Now property getter returns System.DateTime.Now if the property is never set. In other words, an unset Now property behaves like a call to System.DateTime.Now.

Note that the null coalescing operator (??) expression in the getter can be rewritten as follows:

get
{
    return _now == null ? DateTime.Now : _now.Value;
}

And so, if our test code sets the Now property to a specific DateTime value then that property returns that DateTime value, instead of System.DateTime.Now. This allows the test code to “freeze the clock” before calling the method-under-test.

The following is the revised test method. It sets the Now property to the currentTime value at the end of the arrangement section. This, in effect, fakes the clock that the Allowed method reads, establishing a known value for the current time.

[TestCase(1)]
[TestCase(5)]
[TestCase(12)]
public void Allowed_WhenCurrentDateIsInsideModificationWindow_ExpectTrue(
    int startMonth)
{
    // Arrange
    var settings = new Mock<IModificationWindowSettings>();
    settings
        .SetupGet(e => e.StartMonth)
        .Returns(startMonth);
    settings
        .SetupGet(e => e.StartDay)
        .Returns(1);

    var currentTime = new DateTime(
        DateTime.Now.Year,
        startMonth,
        13);

    var classUnderTest = new ModificationWindow(settings.Object);

    classUnderTest.Now = currentTime; // Set the value of Now; freeze the clock

    // Act
    var result = classUnderTest.Allowed();

    // Assert
    Assert.AreEqual(true, result);
}

There is one more subtlety to mention. The test method cannot set the class-under-test’s Now property without being allowed access. This is accomplished by adding the following line to the end of the AssemblyInfo.cs file in the Lender.Slos.Financial project, which declares the class-under-test.

[assembly: InternalsVisibleTo("Tests.Unit.Lender.Slos.Financial")]

The use of InternalsVisibleTo establishes a friend assembly relationship.

Pros:

  1. A straightforward, KISS approach
  2. Can work with .NET Framework 2.0
  3. No impact on class-users and method-callers
  4. Isolated change, minimal risk
  5. Testability is greatly improved

Cons:

  1. Improves testability only one class at a time
  2. Adds a testing-use-only property to the class

I use this approach when working with legacy or Brownfield code. It is a minimally invasive technique.

In the next part of this Fake Time series we’ll look at the IClock interface and a constructor injection approach.

Crossderry Interview

Earlier in the month, Crossderry interviewed me about my book Pro .NET Best Practices. Below is the entire four-part interview. Reprinted with the permission of @crossderry.

Project Mgmt and Software Dev Best Practice

Q: Your book’s title notwithstanding, you’re keen to move people away from the term “best practices.” What is wrong with “best practices”?

A: My technical reviewer, Paul Apostolescu, asked me the same question. Paul often prompted me to really think things through.

I routinely avoid superlatives, like “best”, when dealing with programmers, engineers, and other left-brain dominant people. Far too often, a word like that becomes a huge diversion with heated discussions centering on the topic of what is the singularly best practice. It’s like that old saying, the enemy of the good is the best. Too much time is wasted searching for the best practice when there is clearly a better practice right in front of you.

A “ruthlessly helpful” practice is my pragmatist’s way of saying, let’s pick a new or different practice today because we know it pays dividends. Over time, iteratively and incrementally, the incumbent practice can be replaced by a better one; until then, the team and organization reap the rewards.

As for the title of the book, I originally named it “Ruthlessly Helpful .NET”. The book became part of an Apress professional series, and the title “Pro .NET Best Practices” fits in with prospective readers’ and booksellers’ expectations for books in that series.

Why PM Matters to Developers

Here we focus on why he spent so much time on PM-relevant topics:

Q: One of the pleasant surprises in the book was the early attention you paid to strategy, value, scope, deliverables and other project management touchstones. Why so much PM?

A: I find that adopting a new and different practice — in the hope that it’ll be a ruthlessly helpful one — is an initiative, kinda like a micro-project. This can happen at so many levels … an individual developer, a technical leader, the project manager, the organization.

For the PM and for the organization, they’re usually aware that adopting a set of better practices is a project to be managed. For the individual or group, that awareness is often missing and the PM fundamentals are not applied to the task. I felt that my book needed to bring in the relevant first principles of project management to raise some awareness and guide readers toward the concepts that make these initiatives more successful.

Ruthlessly Helpful Project Management

We turn to the project manager’s role:

Q: Can you give an example or three of how project managers can be “ruthlessly helpful” to their development teams?

A: Here are a few:

1) Insist that programmers, engineers and other technical folks go to the whiteboard. Have them draw out and diagram their thinking. “Can you draw it up for everyone to see?” Force them to share their mental image and understanding. You will find that others were making bad assumptions and inferences. Never assume that your development team is on the same page without literally forcing them to be on the same page.

2) Verify that every member of your development team is 100% confident that their component or module works as they’ve intended it to work. I call this: “Never trust an engineer who hesitates to cross his own bridge.” Many developers are building bridges they never intend to cross. I worked on fixed-asset accounting software, but I was never an accountant. The ruthlessly helpful PM asks the developer to demonstrate their work by asking things like “… let me see it in action, give it a quick spin, show me how you’re doing on this feature …”. These are all friendly ways to ask a developer to show you that they’re willing to cross their own bridge.

3) Don’t be surprised to find that your technical people are holding back on you. They’re waiting until there are no defects in their work. Perfectionists wish that their blind spots, omissions, and hidden weaknesses didn’t exist. Here’s the dilemma: they have no means to find the defects that are hidden to them. The cure they pick for this dilemma is to keep stalling until they can add every imaginable new feature and uncover any defect. The ruthlessly helpful PM knows how to find effective ways to provide the developers with dispassionate, timely, and non-judgmental feedback so they can achieve the desired results.

Common Obstacles PMs Introduce

This question — about problems project managers impose on their projects — wraps up my interview with Stephen Ritchie.

Q: What are common obstacles that project managers introduce into projects?

A: Haste. I like to say, “schedule pressure is the enemy of good design.” During project retrospectives, all too often, I find the primary technical design driver was haste. Not maintainability, not extensibility, not correctness, not performance … haste. This common obstacle is a silent killer. It is the Sword of Damocles that … when push comes to shove … drives so many important design objectives underground or out the window.

Ironically, the haste is driven by an imagined or arbitrary deadline. I like to remind project managers and developers that for quick and dirty solutions … the dirty remains long after the quick is forgotten. At critical moments, haste is important. But haste is an obstacle when it manifests itself as technical debt, incurred carelessly and having no useful purpose.

Other obstacles include compartmentalization, isolation, competitiveness, and demotivation. Here’s the thing. Most project managers need to get their team members to bring creativity, persistence, imagination, dedication, and collaboration to their projects if the project is going to be successful. These are the very things team members *voluntarily* bring to the project.

Look around the project; anything that doesn’t help and motivate individuals to interact effectively is an obstacle. Project managers must avoid introducing these obstacles and focus on clearing them.

[HT @crossderry Thank you for the interview and permission to reprint it on my blog.]

NuGet Kickstart Package

I want to use NuGet to retrieve a set of content files that are needed for the build. For example, the TeamCity build configuration runs a runner.msbuild script; however, that script needs to import a Targets file, like this:

<Import Condition="Exists('$(BuildPath)\ImportTargets\MSBuild.Lender.Common.Targets')"
        Project="$(BuildPath)\ImportTargets\MSBuild.Lender.Common.Targets"
        />

The plan is to create a local NuGet feed that has all the prerequisite files for the build script. Using the local NuGet feed, install the “global build” package as the first build task. After that, the primary build script can find the import file and proceed normally. Here is the basic solution strategy that I came up with.

To see an example, follow these steps:

1. Create a local NuGet feed. Read more information here: http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds

2. Write a NuGet spec file and name it Lender.Build.nuspec. This is simply an XML file. The schema is described here: http://docs.nuget.org/docs/reference/nuspec-reference

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>_globalBuild</id>
    <version>1.0.0</version>
    <authors>Lender Development</authors>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Lender Build</description>
  </metadata>
  <files>
    <file src="ImportTargets\**" target="ImportTargets" />
  </files>
</package>

Notice the “file” element. It specifies the source files, which include the MSBuild.Lender.Common.Targets file within the ImportTargets folder.

3. Using the NuGet Package Explorer, I opened the Lender.Build.nuspec file and saved the package in the LocalNuGetFeed folder. Here’s how that looks:

[Screenshot: NuGet Package Explorer]

4. Save the package to the local NuGet feeds folder. In this case, it is the C:\LocalNuGetFeeds folder.
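
As an alternative to the NuGet Package Explorer steps above, the nuget.exe command line can pack the spec file straight into the feeds folder. This is a sketch that assumes nuget.exe is on the PATH:

nuget pack Lender.Build.nuspec -OutputDirectory C:\LocalNuGetFeeds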

5. Now let’s move on over to where this “_globalBuild” dependency is going to be used. For example, the C:\projects\Lender.Slos folder. In that folder, create a packages.config file and add it to version control. That config file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="_globalBuild" version="1.0.0" />
</packages>

This references the package with the id of “_globalBuild”, which is found in the LocalNuGetFeeds package source. That source is available because it was added through Visual Studio, under Tools >> Library Package Manager >> Package Manager Settings.

[Screenshot: Library Package Manager settings]

6. From MSBuild, the CI server calls the “Kickstart” target before running the default script target. The Kickstart target uses the NuGet.exe command line to install the global build package. Here is the MSBuild script:

<Project DefaultTargets="Default"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         ToolsVersion="4.0"
         >
  <PropertyGroup>
    <RootPath>.</RootPath>
    <BuildPath>$(RootPath)\_globalBuild.1.0.0\ImportTargets</BuildPath>
    <CommonImportFile>$(BuildPath)\MSBuild.Lender.Common.Targets</CommonImportFile>
  </PropertyGroup>

  <Import Condition="Exists('$(CommonImportFile)')"
          Project="$(CommonImportFile)"
          />

  <Target Name="Kickstart" >
    <PropertyGroup>
      <PackagesConfigFile>packages.config</PackagesConfigFile>
      <ReferencesPath>.</ReferencesPath>
    </PropertyGroup>
    <Exec Command="$(NuGetRoot)\nuget.exe i $(PackagesConfigFile) -o $(ReferencesPath)" />
  </Target>

  <!-- The Rebuild or other targets belong here -->
  <Target Name="Default" >
    <PropertyGroup>
      <ProjectFullName Condition="$(ProjectFullName)==''">(undefined)</ProjectFullName>
    </PropertyGroup>

    <Message Text="Project name: '$(ProjectFullName)'"
             Importance="High"
             />
  </Target>

</Project>

7. In this way, the MSBuild script uses NuGet to bring down the ImportTargets files and places them under the _globalBuild.1.0.0 folder. This can happen on the CI server with multiple build steps. For the sake of simplicity, here are the lines in a batch file that simulate these steps:

%MSBuildRoot%\msbuild.exe "runner.msbuild" /t:Kickstart
%MSBuildRoot%\msbuild.exe "runner.msbuild"

With the kickstart bringing down the prerequisite files, the rest of the build script performs the automated build with the common Targets file properly imported.