Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.


First, the mental creation

With physical things, like buildings and devices, people seem to be generally okay with generating strong specifications, such as blueprints and CAD drawings. These specifications are often about trying to perfect the mental creation before the physical creation gets started. In these cases, the physical thing you’re making is not at all abstract, and it could be very expensive to make. It’s hard to iterate when you’re building a bridge that’s going to be part of an interstate roadway.


From what I’ve seen in the world of software, the physical creation seems abstract, and engineers writing software appear inexpensive compared to things like steel and concrete. Many people seem to want to skip the mental creation step, and they ask the engineers to jump right into coding the thing up.

If the increment of the thing that your team is to write software for is ready to be worked on, and you have a roadmap, then an iterative and incremental approach will probably work. In fact, the idea that complex problems require iterative solutions underpins the values and principles described in the Manifesto for Agile Software Development. The Scrum process framework describes this as product increments and iterations.

However, all too often it’s a random walk guided by ambiguous language (either written or spoken) that leads to software that lacks clarity, consistency, and correctness.

Sadly and all too often, the quality assurance folks are given little time to discover and describe the issues, let alone the time to verify that the issues are resolved. During these review sessions, engineers pull out their evidence that the software is working as intended. They have a photo of a whiteboard showing some hand-wavy financial formulas. They demonstrate that they’re getting the right answer with the math-could-not-be-easier example: a loan that has a principal of $12,000.00, a term of 360 payments, an annual interest rate of 12%, and, of course, don’t forget that there are 12 months in a year. And through anomalous coincidence, the software works correctly with that example. Hilarity ensues!
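To make this concrete, here is a minimal sketch of my own (not code from any real team) showing how a hypothetical early-rounding defect in a monthly payment calculation stays invisible with the easy example, yet shows up as soon as the interest rate is less convenient:

using System;

public static class PaymentSketch
{
    // Reference calculation: periodic rate = annual rate / 12, rounded only at the end.
    public static decimal Payment(decimal principal, decimal annualRate, int termInMonths)
    {
        double rate = (double)(annualRate / 12m);
        double factor = 1 - Math.Pow(1 + rate, -termInMonths);
        return Math.Round((decimal)((double)principal * rate / factor), 2);
    }

    // Hypothetical defect: the periodic rate is rounded to four decimal places too early.
    public static decimal BuggyPayment(decimal principal, decimal annualRate, int termInMonths)
    {
        double rate = (double)Math.Round(annualRate / 12m, 4);
        double factor = 1 - Math.Pow(1 + rate, -termInMonths);
        return Math.Round((decimal)((double)principal * rate / factor), 2);
    }

    public static void Main()
    {
        // The math-could-not-be-easier example: 12% over 12 months gives a periodic rate
        // of exactly 0.01, so both methods agree (about $123.43) and the defect slips through.
        Console.WriteLine(Payment(12000m, 0.12m, 360));
        Console.WriteLine(BuggyPayment(12000m, 0.12m, 360));

        // A less convenient rate: 7.25% / 12 is a repeating decimal, and the two results
        // now differ by roughly forty cents on each of the 360 payments.
        Console.WriteLine(Payment(12000m, 0.0725m, 360));
        Console.WriteLine(BuggyPayment(12000m, 0.0725m, 360));
    }
}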


Unfortunately, QA is often squeezed. They are over-challenged and under-supported. They are given an absurd goal, and they persevere despite grave doubts about whether a quality release is achievable. Their hopelessness comes from knowing that the team is about to inflict this software on real people, good people, and the QA team doesn’t have the time or the power to stop the release.


Here are the key things to take away:

  • When the physical creation of software seems to be cheap, comparable to the mental creation of writing down and agreeing to lightweight requirements, then the temptation exists to hire nothing but software engineers and to maximize the number of people who “just write code”. In the long run, however, the rework costs are often extremely expensive.
  • In the end, the end-users are going to find any defects.
  • Better to have QA independently verify and validate that the software works as intended. Of course, what is intended needs to be written down: clearly, consistently, and correctly.
  • QA ought to find issues that can be resolved before the users get the software — issue reports that show clear gaps in required behavior.
  • QA conversations with engineers ought never degenerate into a difference of opinion, which happens when there are no facts about the required behavior. Very often these discussions (rhymes with concussion) escalate into a difference of values — “you don’t understand modern software development”.

Interestingly, the software’s users are going to find the most important defects. The users are the ultimate independent validators and verifiers. End-users, their bosses, and the buyer can be ruthless reporters of defects. In fact, they might very well reject the product.

You got to know your limitations

Engineers must only be limited by their intellect and available time, subject to a sustainable pace. Maturity and experience are important, too.

This quote is a paraphrase of something a boss from long ago said to me. Here are the things I did to change my professional life based on this revelation. I tried to:

  • Develop my intellect: Stretch my brain. Learn new skills. Acquire new knowledge. Play brain games. Start taking notes instead of always relying on my recall. Read a lot.
  • Increase my available time and set a sustainable pace: Debug my personal software process (see PSP). Read some relevant books: Personal Kanban and First Things First. Learn how to say no, especially to things that are not important. Define what’s important.
  • Raise my maturity level: Read some key books: Raising Your Emotional Intelligence, Working With Emotional Intelligence, and I’m OK — You’re OK. Go to cognitive behavioral therapy (CBT) for my professional and personal challenges, especially to better cope with difficult people.
  • Have many and varied experiences: Don’t get the same 1 year of experience for 20 years. Switch jobs. Work with people who challenge me. Don’t make the same mistake once (learn from other people’s prior mistakes). Read ahead in life by learning about other people through their biographies and memoirs, especially their failures.

In the end, all of these things that I tried to do I continue to do. I find them to be very helpful. I also find that I am able to be helpful to others, if I’m working to be my best.

MSDN: .NET Framework Best Practices

For years now, Microsoft Developer Network (MSDN) has provided free online documentation to .NET developers. There are a lot of individual .NET best-practices topics, which are described at a high level at this MSDN link:
MSDN: .NET Framework Best Practices

This is a great MSDN article to read and bookmark if you’re interested in .NET best practices.

Best Practices for Strings

Just take a look at all the information within the MSDN topic of Best Practices for Using Strings in the .NET Framework. I am not going to be able to duplicate all of that. If you are developing an application that has to deal with culture, globalization, and localization issues then you need to know much of this material.

Before I go any further, let me introduce you to Jon Skeet. He wrote an awesome book, C# In Depth. I think you might enjoy reading his online article on .NET Strings: http://csharpindepth.com/Articles/General/strings.aspx

Okay, let’s get back to the MSDN article. Below I have highlighted a few of the Strings best practices that I’d like to discuss.

1. Use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.

In the .NET Framework, ToUpperInvariant is the standard way to normalize case. In fact, Visual Studio Code Analysis has rule CA1308 in the Globalization category that can monitor this.

This is a really easy practice to follow once you know it.

Here is the key point I picked up from rule CA1308:

It is safe to suppress [this] warning message [CA1308] when you are not making security decision based on the result (for example, when you are displaying it in the UI).

In other words, take care to uppercase strings when the code is making a security decision based on normalized string comparison.
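For illustration, here is a small sketch of my own (the variable names are hypothetical) that normalizes with ToUpperInvariant when the comparison drives a decision and reserves culture-aware casing for display:

// Normalize with ToUpperInvariant when the result drives a decision.
string requestedRole = "admin";
string normalizedRole = requestedRole.ToUpperInvariant();

bool isAdministrator = normalizedRole == "ADMIN"; // ordinal comparison of normalized values

// Culture-aware lowercasing is fine when the string is only displayed in the UI.
string displayText = requestedRole.ToLower(System.Globalization.CultureInfo.CurrentCulture);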

2. Use an overload of the String.Equals method to test whether two strings are equal.

Some of these overloads require a parameter that specifies the culture, case, and sort rules that are to be used in the comparison method. This just makes the string comparison you are using explicit.
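A quick sketch of what that looks like (the values are made up for illustration):

string left = "encyclopedia";
string right = "ENCYCLOPEDIA";

// The StringComparison argument makes the comparison rules explicit.
bool equalIgnoringCase = string.Equals(left, right, StringComparison.OrdinalIgnoreCase); // true
bool equalExact = string.Equals(left, right, StringComparison.Ordinal);                  // false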

3. Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal.

In the MSDN documentation for comparing Strings the guidance is quite clear:

The Compare method is primarily intended for use when ordering or sorting strings.
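In other words, reserve Compare for ordering and use Equals for equality. A brief sketch of my own, with made-up data:

var names = new[] { "delta", "alpha", "Charlie" };

// Compare is meant for ordering or sorting.
Array.Sort(names, (x, y) => string.Compare(x, y, StringComparison.OrdinalIgnoreCase));

// Testing Compare against zero for equality works, but Equals states the intent directly.
bool equal = string.Equals("alpha", "Alpha", StringComparison.OrdinalIgnoreCase);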

All-In-One Code Framework

If you have not had a chance to take a look at the All-In-One Code Framework then please take a few minutes to look it over.

The Microsoft All-In-One Code Framework is a free, centralized code sample library driven by developers’ needs.

It is licensed under the Microsoft Public License (Ms-PL), which is the least restrictive of the Microsoft open source licenses.

What’s relevant to this article is the All-In-One Code Framework Coding Standards document. You can find the download link at the top of this page: http://1code.codeplex.com/documentation

In that document, they list a very relevant and useful list of String best practices.

  • Do not use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate many strings. Instead, you should use StringBuilder for concatenation. However, do use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate small numbers of strings (see the sketch after this list).
  • Do use overloads that explicitly specify the string comparison rules for string operations. Typically, this involves calling a method overload that has a parameter of type StringComparison.
  • Do use StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase for comparisons as your safe default for culture-agnostic string matching, and for better performance.
  • Do use string operations that are based on StringComparison.CurrentCulture when you display output to the user.
  • Do use the non-linguistic StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase values instead of string operations based on CultureInfo.InvariantCulture when the comparison is linguistically irrelevant (symbolic, for example). Do not use string operations based on StringComparison.InvariantCulture in most cases. One of the few exceptions is when you are persisting linguistically meaningful but culturally agnostic data.
  • Do use an overload of the String.Equals method to test whether two strings are equal.
  • Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal. They are used to sort strings, not to check for equality.
  • Do use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.
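Here is the sketch mentioned in the first bullet above. The variables are hypothetical and only illustrate the small-versus-many distinction:

// A handful of strings: the '+' operator is perfectly fine.
string fullName = firstName + " " + lastName;

// Many strings (for example, inside a loop): use StringBuilder to avoid repeated allocations.
var builder = new System.Text.StringBuilder();
foreach (var line in reportLines)
{
    builder.AppendLine(line);
}
string report = builder.ToString();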

This post is part of my Compendium .NET Best-Practices series.

Where’s CAT.NET 2.0?

If you go to the Microsoft Security Development Lifecycle implementation page, you read about performing static analysis with CAT.NET. If you follow one of the download links it takes you to CAT.NET v1 CTP.

About a year ago, the Microsoft Security Tools team released the Beta version of CAT.NET 2.0. It looked very promising. Today, I am having trouble finding the download for CAT.NET 2.0. The link on the team’s CAT.NET 2.0 – Beta blog post is broken.

There is very little information on the Information Security Tools team’s Connect site.

Does Microsoft have an update on the Security Development Lifecycle tools?

Four Ways to Fake Time, Part 2

In Part 1 of this four-part series you learned how a code’s implicit dependency on the system clock can make the software difficult to test. The first post presented a very simple solution: pass the clock in as a method parameter. It is effective; however, adding a new parameter to every method of a class isn’t always the best solution.

Fake Time 2: Brute Force Property Injection

Here is a second way to fake time. It is brute force in the sense that it is rudimentary. Using full-blown dependency injection with an IoC container is left as an exercise for the reader. The goal of this post is to illustrate the principle and provide you with a technique you can use today.

Perhaps an example would be helpful …

using System;
using Lender.Slos.Utilities.Configuration;

namespace Lender.Slos.Financial
{
    public class ModificationWindow
    {
        private readonly IModificationWindowSettings _settings;

        public ModificationWindow(
            IModificationWindowSettings settings)
        {
            _settings = settings;
        }

        // This property is for testing use only
        private DateTime? _now;
        public DateTime Now
        {
            get { return _now ?? DateTime.Now; }
            internal set { _now = value; }
        }

        public bool Allowed()
        {
            var now = this.Now;

            // Start date's month & day come from settings
            var startDate = new DateTime(
                now.Year,
                _settings.StartMonth,
                _settings.StartDay);

            // End date is 1 month after the start date
            var endDate = startDate.AddMonths(1);

            if (now >= startDate &&
                now < endDate)
            {
                return true;
            }

            return false;
        }
    }
}

In this example code, the Allowed method changed very little from how it was written at the end of the first post. The primary difference is that there isn’t an optional clock argument. The value of the now variable comes from the new class property named Now.

Let’s take a closer look at the Now property. First, it has a backing variable named _now, which is declared as a nullable DateTime. Second, since _now defaults to null, this means that the Now property getter will return System.DateTime.Now if the property is never set. In other words, if the Now property is never set then that property behaves like a call to System.DateTime.Now.

Note that the null coalescing operator (??) expression in the getter can be rewritten as follows:

get
{
    return _now == null ? DateTime.Now : _now.Value;
}

And so, if our test code sets the Now property to a specific DateTime value then that property returns that DateTime value, instead of System.DateTime.Now. This allows the test code to “freeze the clock” before calling the method-under-test.

The following is the revised test method. It sets the Now property to the currentTime value at the end of the arrangement section. This, in effect, fakes the clock that the Allowed method uses and establishes a known value for the current time.

[TestCase(1)]
[TestCase(5)]
[TestCase(12)]
public void Allowed_WhenCurrentDateIsInsideModificationWindow_ExpectTrue(
    int startMonth)
{
    // Arrange
    var settings = new Mock<IModificationWindowSettings>();
    settings
        .SetupGet(e => e.StartMonth)
        .Returns(startMonth);
    settings
        .SetupGet(e => e.StartDay)
        .Returns(1);

    var currentTime = new DateTime(
        DateTime.Now.Year,
        startMonth,
        13);

    var classUnderTest = new ModificationWindow(settings.Object);

    classUnderTest.Now = currentTime; // Set the value of Now; freeze the clock

    // Act
    var result = classUnderTest.Allowed();

    // Assert
    Assert.AreEqual(true, result);
}

There is one more subtlety to mention. The test method cannot set the class-under-test’s Now property without being allowed access. This is accomplished by adding the following line to the end of the AssemblyInfo.cs file in the Lender.Slos.Financial project, which declares the class-under-test.

[assembly: InternalsVisibleTo("Tests.Unit.Lender.Slos.Financial")]

The use of InternalsVisibleTo establishes a friend assembly relationship.

Pros:

  1. A straightforward, KISS approach
  2. Can work with .NET Framework 2.0
  3. No impact on class-users and method-callers
  4. Isolated change, minimal risk
  5. Testability is greatly improved

Cons:

  1. Improves testability only one class at a time
  2. Adds a testing-use-only property to the class

I use this approach when working with legacy or Brownfield code. It is a minimally invasive technique.

In the next part of this Fake Time series we’ll look at the IClock interface and a constructor injection approach.
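As a preview, and purely as my own assumption of the shape it will take, an injectable clock abstraction might look something like this:

// A minimal sketch of an IClock abstraction; Part 3 will cover the real approach.
public interface IClock
{
    DateTime Now { get; }
}

public class SystemClock : IClock
{
    public DateTime Now
    {
        get { return DateTime.Now; }
    }
}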

Crossderry Interview

Earlier in the month, Crossderry interviewed me about my book Pro .NET Best Practices. Below is the entire four-part interview. Reprinted with the permission of @crossderry.

Project Mgmt and Software Dev Best Practice

Q: Your book’s title notwithstanding, you’re keen to move people away from the term “best practices.” What is wrong with “best practices”?

A: My technical reviewer, Paul Apostolescu, asked me the same question. Paul often prompted me to really think things through.

I routinely avoid superlatives, like “best”, when dealing with programmers, engineers, and other left-brain-dominant people. Far too often, a word like that becomes a huge diversion, with heated discussions centering on the topic of what is the singularly best practice. It’s like that old saying: the best is the enemy of the good. Too much time is wasted searching for the best practice when there is clearly a better practice right in front of you.

A “ruthlessly helpful” practice is my pragmatist’s way of saying, let’s pick a new or different practice today because we know it pays dividends. Over time, iteratively and incrementally, that incumbent practice can be replaced by an even better practice; until then, the team and organization reap the rewards.

As for the title of the book, I originally named it “Ruthlessly Helpful .NET”. The book became part of an Apress professional series, and the title “Pro .NET Best Practices” fits in with prospective readers’ and booksellers’ expectations for books in that series.

Why PM Matters to Developers

Here we focus on why he spent so much time on PM-relevant topics:

Q: One of the pleasant surprises in the book was the early attention you paid to strategy, value, scope, deliverables and other project management touchstones. Why so much PM?

A: I find that adopting a new and different practice — in the hope that it’ll be a ruthlessly helpful one — is an initiative, kinda like a micro-project. This can happen at so many levels … an individual developer, a technical leader, the project manager, the organization.

For the PM and for the organization, they’re usually aware that adopting a set of better practices is a project to be managed. For the individual or group, that awareness is often missing and the PM fundamentals are not applied to the task. I felt that my book needed to bring in the relevant first-principles of project management to raise some awareness and guide readers toward the concepts that make these initiatives more successful.

Ruthlessly Helpful Project Management

We turn to the project manager’s role:

Q: Can you give an example or three of how project managers can be “ruthlessly helpful” to their development teams?

A: Here are a few:

1) Insist that programmers, engineers and other technical folks go to the whiteboard. Have them draw out and diagram their thinking. “Can you draw it up for everyone to see?” Force them to share their mental image and understanding. You will find that others were making bad assumptions and inferences. Never assume that your development team is on the same page without literally forcing them to be on the same page.

2) Verify that every member of your development team is 100% confident that their component or module works as they’ve intended it to work. I call this: “Never trust an engineer who hesitates to cross his own bridge.” Many developers are building bridges they never intend to cross. I worked on fixed-asset accounting software, but I was never an accountant. The ruthlessly helpful PM asks the developer to demonstrate their work by asking things like “… let me see it in action, give it a quick spin, show me how you’re doing on this feature …”. These are all friendly ways to ask a developer to show you that they’re willing to cross their own bridge.

3) Don’t be surprised to find that your technical people are holding back on you. They’re waiting until there are no defects in their work. Perfectionists wish that their blind spots, omissions, and hidden weaknesses didn’t exist. Here’s the dilemma: they have no means to find the defects that are hidden to them. The cure they pick for this dilemma is to keep stalling until they can add every imaginable new feature and uncover any defect. The ruthlessly helpful PM knows how to find effective ways to provide the developers with dispassionate, timely, and non-judgmental feedback so they can achieve the desired results.

Common Obstacles PMs Introduce

This question — about problems project managers impose on their projects — wraps up my interview with Stephen Ritchie.

Q: What are common obstacles that project managers introduce into projects?

A: Haste. I like to say, “schedule pressure is the enemy of good design.” During project retrospectives, all too often, I find the primary technical design driver was haste. Not maintainability, not extensibility, not correctness, not performance … haste. This common obstacle is a silent killer. It is the Sword of Damocles that … when push comes to shove … drives so many important design objectives underground or out the window.

Ironically, the haste is driven by an imagined or arbitrary deadline. I like to remind project managers and developers that for quick and dirty solutions … the dirty remains long after the quick is forgotten. At critical moments, haste is important. But haste is an obstacle when it manifests itself as technical debt, incurred carelessly and having no useful purpose.

Other obstacles include compartmentalization, isolation, competitiveness, and demotivation. Here’s the thing. Most project managers need to get their team members to bring creativity, persistence, imagination, dedication, and collaboration to their projects if the project is going to be successful. These are the very things team members *voluntarily* bring to the project.

Look around the project; anything that doesn’t help and motivate individuals to interact effectively is an obstacle. Project managers must avoid introducing these obstacles and focus on clearing them.

[HT @crossderry Thank you for the interview and permission to reprint it on my blog.]

NuGet Kickstart Package

I want to use NuGet to retrieve a set of content files that are needed for the build. For example, the TeamCity build configuration runs a runner.msbuild script; however, that script needs to import a Targets file, like this:

<Import Condition="Exists('$(BuildPath)\ImportTargets\MSBuild.Lender.Common.Targets')"
        Project="$(BuildPath)\ImportTargets\MSBuild.Lender.Common.Targets"
        />

The plan is to create a local NuGet feed that has all the prerequisite files for the build script. Using the local NuGet feed, install the “global build” package as the first build task. After that, the primary build script can find the import file and proceed normally. Here is the basic solution strategy that I came up with.

To see an example, follow these steps:

1. Create a local NuGet feed. Read more information here: http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds

2. Write a NuGet spec file and name it Lender.Build.nuspec. This is simply an XML file. The schema is described here: http://docs.nuget.org/docs/reference/nuspec-reference

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>_globalBuild</id>
    <version>1.0.0</version>
    <authors>Lender Development</authors>
    <requireLicenseAcceptance>false</requireLicenseAcceptance>
    <description>Lender Build</description>
  </metadata>
  <files>
    <file src="ImportTargets\**" target="ImportTargets" />
  </files>
</package>

Notice the “file” element. It specifies the source files; the wildcard pulls in the MSBuild.Lender.Common.Targets file because the entire ImportTargets folder is added to the package.

3. Using the NuGet Package Explorer, I opened the Lender.Build.nuspec file and saved the package in the LocalNuGetFeed folder. Here’s how that looks:

[Screenshot: NuGet Package Explorer]

4. Save the package to the local NuGet feeds folder. In this case, it is the C:\LocalNuGetFeeds folder.

5. Now let’s move on over to where this “_globalBuild” dependency is going to be used. For example, the C:\projects\Lender.Slos folder.  In that folder, create a packages.config file and add it to version control. That config file looks like this:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="_globalBuild" version="1.0.0" />
</packages>

This references the package with the id of “_globalBuild”, which is found in the LocalNuGetFeeds package source. That source is available because it was added through Visual Studio, under Tools >> Library Package Manager >> Package Manager Settings.

[Screenshot: Library Package Manager settings]

6. From MSBuild, the CI server calls the “Kickstart” target before running the default script target. The Kickstart target uses the NuGet.exe command line to install the global build package. Here is the MSBuild script:

<Project DefaultTargets="Default"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         ToolsVersion="4.0"
         >
  <PropertyGroup>
    <RootPath>.</RootPath>
    <BuildPath>$(RootPath)\_globalBuild.1.0.0\ImportTargets</BuildPath>
    <CommonImportFile>$(BuildPath)\MSBuild.Lender.Common.Targets</CommonImportFile>
  </PropertyGroup>

  <Import Condition="Exists('$(CommonImportFile)')"
          Project="$(CommonImportFile)"
          />

  <Target Name="Kickstart" >
    <PropertyGroup>
      <PackagesConfigFile>packages.config</PackagesConfigFile>
      <ReferencesPath>.</ReferencesPath>
    </PropertyGroup>
    <Exec Command="$(NuGetRoot)\nuget.exe i $(PackagesConfigFile) -o $(ReferencesPath)" />
  </Target>

  <!-- The Rebuild or other targets belong here -->
  <Target Name="Default" >
    <PropertyGroup>
      <ProjectFullName Condition="$(ProjectFullName)==''">(undefined)</ProjectFullName>
    </PropertyGroup>

    <Message Text="Project name: '$(ProjectFullName)'"
             Importance="High"
             />
  </Target>

</Project>

7. In this way, the MSBuild script uses NuGet to bring down the ImportTargets files and places them under the _globalBuild.1.0.0 folder. This can happen on the CI server with multiple build steps. For the sake of simplicity, here are the lines in a batch file that simulate these steps:

%MSBuildRoot%\msbuild.exe "runner.msbuild" /t:Kickstart
%MSBuildRoot%\msbuild.exe "runner.msbuild"

With the kickstart bringing down the prerequisite files, the rest of the build script performs the automated build with the common Targets file properly imported.

Why a Book on .NET Best Practices?

I am a Microsoft .NET software developer. That explains why the book is about .NET best practices. That’s in my wheelhouse.

The more relevant question is, why a book about best practices?

When it comes right down to it, many best practices are the application of common sense approaches. However, there is something that blocks us from making the relatively simple changes in work habits that produce significant, positive results. I wanted to further explore that quandary. Unfortunately, common sense is not always common practice.

There is a gap between the reality that projects live with and the vision that the team members have for their processes and practices. They envision new and different practices that would likely yield better outcomes for their project. Yet, the project reality is slow to move or simply never moves toward the vision.

Tension between reality and vision

Many developers are discouraged by the simple fact that far too many projects compromise the vision instead of changing the reality. These two concepts are usually in tension. That tension is a source of great frustration and cynicism. I wanted to let people know that their project reality is not an immovable object, and the team members can be an irresistible force.

Part of moving your reality toward your vision is getting a handle on the barriers and objections and working to overcome them. Some of them are external to the team while others are internal to the team. I wanted to relate organizational behavior to following .NET best practices and to software development.

Knowledge

The team must know what to do. They need to know about the systematic approaches that help the individual and the team achieve the desired results. There are key practice areas that yield many benefits:

  • Automated builds
  • Automated testing
  • Continuous integration and delivery
  • Code analysis
  • Automated deployment

Of course, there is a lot of overlap in these areas. The management scientist might call that synergy. A common theme to these practice areas is the principle of automation. By acquiring knowledge in these practice areas you find ways to:

  • Reduce human error
  • Increase reliability and predictability
  • Raise productivity and efficiency

Know-how in these practice areas also raises awareness and understanding, creates an early warning system, and provides various stakeholders with a new level of visibility into the project’s progress. I wanted the reader to appreciate the significance and inter-relatedness of these key practice areas and the benefits each offers.

Skill

The team needs to know how to do it. Every new and different practice has a learning curve. Climbing that curve takes time and practice. The journey from newbie to expert has to be nurtured. There are no shortcuts that can sidestep the crawl-walk-run progression. Becoming skilled requires experience. Prototyping and building an archetype are two great ways to develop a skill. Code samples and structured walkthroughs are other ways to develop a skill. I wanted the book to offer an eclectic assortment of case studies, walkthroughs, and code samples.

Attitude

Team members must want to adopt better practices. Managers need to know why the changes are for the better, in terms managers can appreciate. The bottom line: it is important to be able to quantify the benefits of following new and different practices. It is also important to understand what motivates and what doesn’t. It helps to understand human biases. Appreciate the underlying principle that software projects are materially impacted by how well individuals interact. I wanted to highlight and communicate the best practices that relate to the human factors, including persuasion, motivation, and commitment.

Pro .NET Best Practices

Here are the links to Pro .NET Best Practices:

Apress: http://www.apress.com/9781430240235

Amazon: http://www.amazon.com/NET-Best-Practices-Stephen-Ritchie/dp/1430240237

Barnes and Noble: http://www.barnesandnoble.com/w/pro-net-best-practices-stephen-d-ritchie/1104143991
