Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.

Category Archives: Best Practices


Platform Engineering: The Strategic Practice Your Team Actually Needs

In previous posts, I discussed the concept of “ruthlessly helpful” practices: practices that are practicable, valuable, generally accepted, and backed by clear archetypes to follow. Today, I want to apply this framework to one of the most significant developments in software engineering since I wrote “Pro .NET Best Practices”: platform engineering.

If you’re unfamiliar with the term, platform engineering is the practice of building and maintaining internal developer platforms that provide self-service capabilities and reduce cognitive load for development teams. Think of it as the evolution of DevOps, focused specifically on developer experience and productivity.

But before you dismiss this as another industry buzzword, let me walk through why platform engineering might be the most ruthlessly helpful practice your organization can adopt in 2025.

The Problem: Death by a Thousand Configurations

Modern software development requires an overwhelming array of tools, services, and configurations. Check out Curtis Collicutt’s The Numerous Pains of Programming.

A typical application today might need:

  • Container orchestration (Kubernetes, Docker Swarm)
  • CI/CD pipelines (GitHub Actions, Jenkins, Azure DevOps)
  • Monitoring and observability (Prometheus, Grafana, New Relic)
  • Security scanning (Snyk, SonarQube, OWASP tools)
  • Database management (multiple engines, backup strategies, migrations)
  • Infrastructure as code (Terraform, ARM templates, CloudFormation)
  • Service mesh configuration (Istio, Linkerd)
  • Certificate management, secrets handling, networking policies…

The list goes on. Each tool solves important problems, but the cognitive load of managing them all is crushing development teams. I regularly encounter developers who spend more time configuring tools than writing business logic.

This isn’t a tools problem. This is a systems problem. Individual developers shouldn’t need to become experts in Kubernetes networking or Terraform state management to deploy a web application. That’s where platform engineering comes in.

What Platform Engineering Actually Means

Platform engineering creates an abstraction layer between developers and the underlying infrastructure complexity. Instead of each team figuring out how to deploy applications, manage databases, or set up monitoring, the platform team provides standardized, self-service capabilities.

A well-designed internal developer platform (IDP) might offer:

  • Golden paths: Opinionated, well-supported ways to accomplish common tasks
  • Self-service provisioning: Developers can create environments, databases, and services without tickets or waiting (sketched below)
  • Standardized tooling: Consistent CI/CD, monitoring, and security across all applications
  • Documentation and examples: Clear guidance for common scenarios and troubleshooting

The goal isn’t to eliminate flexibility—it’s to make the common cases easy and the complex cases possible.
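To make “self-service provisioning” a little more concrete, here is a minimal C# sketch of the thin API a platform team might put in front of its automation, using an ASP.NET Core minimal API. Everything specific in it (the /environments endpoint, the EnvironmentRequest shape, the template names) is hypothetical; the point is that developers call a small, supported surface while the platform team owns the Terraform, Kubernetes, and pipeline details behind it.

// Hypothetical self-service endpoint: a developer asks for an environment
// built from a supported template; the platform team owns everything behind it.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapPost("/environments", (EnvironmentRequest request) =>
{
    // Validate against the golden paths the platform team supports.
    string[] supportedTemplates = { "web-api", "worker", "static-site" };
    if (!supportedTemplates.Contains(request.Template))
        return Results.BadRequest($"Unknown template '{request.Template}'.");

    // A real platform would queue infrastructure, pipeline, and monitoring
    // provisioning here; this sketch simply acknowledges the request.
    return Results.Accepted($"/environments/{request.Team}-{request.Template}");
});

app.Run();

// The request shape is an assumption for illustration only.
public record EnvironmentRequest(string Team, string Template);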

Evaluating Platform Engineering as a Ruthlessly Helpful Practice

Let’s apply our four criteria to see if platform engineering deserves your team’s attention and investment.

1. Practicable: Is This Realistic for Your Organization?

Platform engineering is most practicable for organizations with:

  • Multiple development teams (usually 3+ teams) facing similar infrastructure challenges
  • Repeated patterns in application deployment, monitoring, or data management
  • Engineering leadership support for investing in developer productivity
  • Sufficient technical expertise to build and maintain platform capabilities

If you’re a small team with simple deployment needs, platform engineering might be overkill. But if you have multiple teams repeatedly solving the same infrastructure problems, it becomes highly practical.

Implementation Reality Check: Start small. You don’t need a comprehensive platform on day one. Begin with the most painful, repetitive task your teams face (often deployment or environment management) and build from there.

2. Generally Accepted: Is This a Proven Practice?

Platform engineering has moved well beyond the experimentation phase. Major organizations like Netflix, Spotify, Google, and Microsoft have demonstrated significant value from platform investments. The practice has enough adoption that:

  • Dedicated conferences and community events focus on platform engineering
  • A vendor ecosystem has emerged with tools specifically for building IDPs
  • The job market shows increasing demand for platform engineers
  • Academic research provides frameworks and measurement approaches

More importantly, platform engineering builds on established practices, particularly those aligned with the principles of automation, standardization, and the elimination of manual, error-prone processes.

3. Valuable: Does This Solve Real Problems?

Platform engineering addresses several measurable problems:

Developer Productivity: Teams report 20-40% improvements in deployment frequency and reduced time to productivity for new developers.

Cognitive Load Reduction: Developers can focus on business logic rather than infrastructure complexity.

Consistency and Reliability: Standardized platforms reduce environment-specific issues and improve overall system reliability.

Security and Compliance: Centralized platforms make it easier to implement and maintain security policies and compliance requirements.

Cost Optimization: Shared infrastructure and automated resource management typically reduce cloud costs.

The key is measuring these benefits in your specific context. Platform engineering is valuable when the cost of building and maintaining the platform is less than the productivity gains across all development teams.

4. Archetypal: Are There Clear Examples to Follow?

Yes, with important caveats. The platform engineering community has produced excellent resources:

  • Reference architectures for common platform components
  • Open source tools like Backstage and Crossplane, and commercial products like Port, for building IDPs
  • Case studies from organizations at different scales and maturity levels
  • Measurement frameworks for tracking platform adoption and effectiveness

However, every organization’s platform needs are somewhat unique. The examples provide direction, but you’ll need to adapt them to your specific technology stack, team structure, and business requirements.

Implementation Strategy: Learning from Continuous Integration

Platform engineering reminds me of continuous integration adoption patterns from the early 2000s. Teams that succeeded with CI followed a predictable pattern:

  1. Started with immediate pain points (broken builds, integration problems)
  2. Built basic automation before attempting sophisticated workflows
  3. Proved value incrementally rather than trying to solve everything at once
  4. Invested in team education alongside technical implementation

Platform engineering follows similar patterns. Here’s a practical approach:

Phase 1: Identify and Automate the Biggest Pain Point

Survey your development teams to identify their most time-consuming, repetitive infrastructure tasks. Common starting points:

  • Application deployment and environment management
  • Database provisioning and management
  • CI/CD pipeline setup and maintenance
  • Monitoring and alerting configuration

Choose one area and build a simple, self-service solution. Success here creates momentum for broader platform adoption.

Phase 2: Standardize and Document

Once you have a working solution for one problem:

  • Document the standard approach with examples
  • Create templates or automation for common scenarios
  • Train teams on the new capabilities
  • Measure adoption and gather feedback

Phase 3: Expand Based on Demonstrated Value

Use the success and lessons from Phase 1 to justify investment in additional platform capabilities. Prioritize based on team feedback and measurable impact.

Common Implementation Obstacles

Obstacle 1: “We Don’t Have Platform Engineers”

Platform engineering doesn’t require hiring a specialized team immediately. Start with existing engineers who understand your infrastructure challenges. Many successful platforms begin as side projects that demonstrate value before becoming formal initiatives.

Obstacle 2: “Our Teams Want Different Tools”

This is actually an argument for platform engineering, not against it. Provide standardized capabilities while allowing teams to use their preferred development tools on top of the platform.

Obstacle 3: “We Can’t Afford to Build This”

Calculate the total cost of infrastructure complexity across all your teams. Include developer time spent on deployment issues, environment setup, and tool maintenance. Most organizations discover they’re already paying for platform engineering, but they’re just doing it inefficiently across multiple teams.

Quantifying Platform Engineering Success

Measure platform engineering impact across three dimensions:

Developer Experience Metrics

  • Time from project creation to first deployment
  • Number of infrastructure-related support tickets
  • Developer satisfaction surveys (quarterly)
  • Time new developers need to become productive

Operational Metrics

  • Deployment frequency and lead time
  • Mean time to recovery from incidents
  • Infrastructure costs per application or team
  • Security policy compliance rates

Business Impact Metrics

  • Feature delivery velocity
  • Developer retention rates
  • Engineering team scaling efficiency
  • Customer-facing service reliability

Next Steps: Starting Your Platform Engineering Journey

If platform engineering seems like a ruthlessly helpful practice for your organization:

  1. Survey your teams to identify the most painful infrastructure challenges
  2. Calculate the current cost of infrastructure complexity across all teams
  3. Start small with one high-impact, well-defined problem
  4. Build momentum by demonstrating clear value before expanding scope
  5. Invest in education to help teams understand and adopt platform capabilities

Remember, the goal isn’t to build the most sophisticated platform. The goal is to build the platform that most effectively serves your teams and the business’s needs.


Commentary

When I wrote about build automation in “Pro .NET Best Practices,” I focused on eliminating manual, error-prone processes that slowed teams down. Platform engineering is the natural evolution of that thinking, applied to the entire development workflow rather than just the build process.

What’s fascinating to me is how platform engineering validates many of the strategic principles from my book. The most successful platform teams think like product teams: they understand their customers (developers), measure satisfaction and adoption, and iterate based on feedback. They focus on removing friction and enabling teams to be more effective, rather than just implementing the latest technology.

The biggest lesson I’ve learned watching organizations adopt platform engineering is that culture matters as much as technology. Platforms succeed when they’re built with developers, not for developers. The most effective platform teams spend significant time understanding developer workflows, pain points, and preferences before building solutions.

This mirrors what I observed with continuous integration adoption: technical excellence without organizational buy-in leads to unused capabilities and wasted investment. The teams that succeed with platform engineering treat it as both a technical and organizational transformation.

Looking ahead, I believe platform engineering is (or will become) as fundamental to software development as version control or automated testing. The organizations that master it early will have a significant competitive advantage in attracting talent and delivering software efficiently. Those that ignore it will find themselves increasingly hampered by infrastructure complexity as the software landscape continues to evolve.


Is your organization considering platform engineering? What infrastructure challenges are slowing down your development teams? Share your experiences and questions in the comments below.

First, the mental creation

With physical things, like buildings and devices, people seem generally okay with generating strong specifications, such as blueprints and CAD drawings. These specifications are often about trying to perfect the mental creation before the physical creation gets started. In these cases, the physical thing you’re making is not at all abstract, and it could be very expensive to make. It’s hard to iterate when you’re building a bridge that’s going to be part of an interstate roadway.


In the world of software, the physical creation seems abstract, and engineers writing software appear inexpensive compared to things like steel and concrete. Many people seem to want to skip the mental creation step, and they ask the engineers to jump right into coding the thing up.

If the increment your team is about to build is ready to be worked on, and you have a roadmap, then an iterative and incremental approach will probably work. In fact, the idea that complex problems require iterative solutions underpins the values and principles described in the Manifesto for Agile Software Development. The Scrum process framework describes this as product increments and iterations.

However, all too often it’s a random walk guided by ambiguous language (either written or spoken) that leads to software that lacks clarity, consistency, and correctness.

Sadly and all too often, the quality assurance folks are given little time to discover and describe the issues, let alone the time to verify that the issues are resolved. During review sessions, engineers pull out their evidence that the software is working as intended. They have a photo of a whiteboard showing some hand-wavy financial formulas. They demonstrate that they’re getting the right answer with the math-could-not-be-easier example: a loan with a principal of $12,000.00, a term of 360 payments, an annual interest rate of 12%, and, of course, don’t forget that there are 12 months in a year. And through anomalous coincidence, the software works correctly for that example. Hilarity ensues!
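To see how that coincidence hides defects, consider this small C# sketch of the standard amortized-payment formula. The numbers come from the example above; the buggy variant in the comment is hypothetical. Because the whiteboard loan happens to have 12 payments per year, a hard-coded divide-by-12 and the correct divide-by-payments-per-year give the same answer, so the easy example proves nothing.

using System;

// M = P * r / (1 - (1 + r)^-n), where r is the periodic rate and n the number of payments.
static decimal Payment(decimal principal, decimal annualRate, int payments, int paymentsPerYear)
{
    decimal r = annualRate / paymentsPerYear;        // correct periodic rate
    // decimal r = annualRate / 12;                  // hypothetical bug: hard-codes monthly payments
    double factor = 1 - Math.Pow(1 + (double)r, -payments);
    return Math.Round(principal * r / (decimal)factor, 2);
}

// The math-could-not-be-easier example: both variants above print 123.43.
// A biweekly loan (26 payments per year) would expose the bug, and neither
// variant says anything yet about the rounding rules a bank cares about.
Console.WriteLine(Payment(12000m, 0.12m, 360, 12));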


Unfortunately, QA is often squeezed. They are over-challenged and under-supported. They are given an absurd goal, and they persevere despite grave doubts about whether a quality release is achievable. Their hopelessness comes from knowing that the team is about to inflict this software on real people, good people, and the QA team doesn’t have the time or the power to stop the release.


Here are the key things to take away:

  • When the physical creation of software seems cheap, comparable to the mental creation of writing down and agreeing to lightweight requirements, the temptation is to hire nothing but software engineers and to maximize the number of people who “just write code”. In the long run, however, the rework costs are often extremely high.
  • In the end, the end-users are going to find any defects.
  • Better to have QA independently verify and validate that the software works as intended; of course, what is intended needs to be written down clearly, consistently, and correctly.
  • QA ought to find issues that can be resolved before the users get the software: issue reports that show clear gaps in required behavior.
  • QA conversations with engineers ought never to degenerate into a difference of opinion, which happens when there are no facts about the required behavior. Very often these discussions (rhymes with concussion) escalate into a difference of values: “you don’t understand modern software development”.

Interestingly, the software’s users are going to find the most important defects. The users are the ultimate independent validators and verifiers. End-users, their bosses, and the buyer can be ruthless reporters of defects. In fact, they might very well reject the product.

Thank You DC .NET Users Group 2013.2

A big thank you to the DC .NET Users Group for hosting my presentation on Continuous Integration at their February meeting last night. I hope everyone enjoyed the presentation. The questions and conversations were very good.

Code Samples

Although most of the examples used TeamCity, here are the code samples, available through GitHub.
https://github.com/ruthlesshelp/Presentations

Slides

Here are the slides, available through SlideShare.

Compendium of .NET Best Practices

So you’re getting ready to start a .NET Best Practices initiative at your organization and you’re looking to find a lot of specific best practices tips. You want to know: What are the .NET Framework best practices?

You can be assured that I’ve been down this road. In fact, a few readers of my book, Pro .NET Best Practices, expressed some disappointment that the book is not a collection of specific .NET best practices. And this is exactly why I decided to address this subject in today’s post.

For those that want to dig right in, follow this link to part 1, MSDN: .NET Framework Best Practices.

If you want some background, let me start with the question: Who wants to follow best practices, anyway?

Adoption

The adoption of new and different practices is a central theme of Pro .NET Best Practices. I work with enough individuals, teams, and organizations to understand the issues involved with adopting best practices. Consider the four levels at which best practices are embraced:

  • Individual: You or any individual adopts better practices to increase personal and professional productivity, quality, thoroughness, and overall effectiveness.
  • Group: The team adopts better practices to be more industrious together, avoid problems, achieve desired results, improve interpersonal relationships, and work more effectively with other teams.
  • Organization: Your company, agency, or firm adopts better practices to bring more positive outcomes to the organization, attract and retain employees, satisfy end-user expectations, and make stakeholders happy.
  • Profession: Better practices are widely adopted and become generally-accepted standards, patterns, and principles that bring alignment to software development and benefit to all that follow them.

In an ideal world, best practices are quickly adopted at all four levels. However, in the real world, they can be slowly adopted by the group, resisted by the organization, embraced by one individual, not by another, or ignored altogether by everyone but you. It can be a mixed bag.

The Reader

There are two key readers of this blog post that I want to identify with and help:

  1. Developers – As a developer, you have personal practices that make you an effective software developer. The compendium should list new and different practices that help make you a more effective developer.
  2. Team Leaders – As a team leader, you see the team develop software through their current practices. The compendium should list practices that help the team perform better and achieve better outcomes.

These readers are adopting at either the individual or group level.

If you are a reader who wants to bring best practices to the organization or the software development profession then I assert that you are probably not interested in the content of this compendium. Yes, you might refer a developer or team leader to the compendium, but I doubt you will find it directly relevant.

So, given this introduction, let’s look at how a collection (I like the term compendium) of specific .NET best practices might be organized.

Tags for the Compendium

Since this is a blog, tags can help others find and navigate the content. Here is a quick list of tags that come to mind:

  • Coding Best Practices. For example, C#, VB.NET, T-SQL
  • Toolset Best Practices. For example, Visual Studio, ReSharper, Typemock
  • Platform Best Practices. For example, ASP.NET, SQL Server, SharePoint
  • Architecture Best Practices. For example, Client-Server, n-Tier, CQRS
  • Windows 8 Best Practices
  • Engineering Fundamentals Best Practices
  • Cloud Best Practices
  • Phone Best Practices
  • ALM (Application Lifecycle Management) Best Practices

Clearly, there are a lot of ways to slice and dice the topic of best practices; however, I will try to bring things back to the topic of the Microsoft .NET Framework.

You can find the entire Best Practices category here: https://ruthlesslyhelpful.net/category/development/best-practices/

The Power of Free

I mostly wrote Pro .NET Best Practices based on my professional experience trying to get teams and organizations to adopt .NET Framework best practices. Over the years, I have read many books, I experimented, I tried and persevered with one approach, and I tried totally new approaches. Many times I learned from others. Many times I learned from my mistakes.

Over the years and as I researched my book, I found many free, on-line sources of .NET best practices. Many are professionally written and easy to follow. In my book I was reluctant to paraphrase or repeat material, but I should have done a better job of showing people how to access the material. (The one thing I really kick myself over is that I did not use Bitly.)

So, let me start the Compendium of .NET Best Practices with some great material already available on the Internet.

Part 1: MSDN: .NET Framework Best Practices

MSDN: .NET Framework Best Practices

For years now, the Microsoft Developer Network (MSDN) has provided free online documentation to .NET developers. There are a lot of individual .NET best practices topics, which are described at a high level at this MSDN link:
MSDN: .NET Framework Best Practices

This is a great MSDN article to read and bookmark if you’re interested in .NET best practices.

Best Practices for Strings

Just take a look at all the information within the MSDN topic of Best Practices for Using Strings in the .NET Framework. I am not going to be able to duplicate all of that. If you are developing an application that has to deal with culture, globalization, and localization issues then you need to know much of this material.

Before I go any further, let me introduce you to Jon Skeet. He wrote an awesome book, C# In Depth. I think you might enjoy reading his online article on .NET Strings: http://csharpindepth.com/Articles/General/strings.aspx

Okay, let’s get back to the MSDN article. Below I have highlighted a few of the Strings best practices that I’d like to discuss.

1. Use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.

In the .NET Framework, ToUpperInvariant is the standard way to normalize case. In fact, Visual Studio Code Analysis has rule CA1308 in the Globalization category that can monitor this.

This is a really easy practice to follow once you know it.

Here is the key point I picked up from rule CA1308:

It is safe to suppress [this] warning message [CA1308] when you are not making security decision based on the result (for example, when you are displaying it in the UI).

In other words, take care to uppercase strings when the code is making a security decision based on normalized string comparison.
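Here is a minimal sketch of the practice; the role names are made up for illustration.

// Normalize with ToUpperInvariant when the comparison feeds a security decision (CA1308).
static bool IsAllowedRole(string role)
{
    string normalized = role.ToUpperInvariant();     // not ToLowerInvariant
    return normalized == "ADMIN" || normalized == "AUDITOR";
}

Console.WriteLine(IsAllowedRole("admin"));           // True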

2. Use an overload of the String.Equals method to test whether two strings are equal.

Some of these overloads require a parameter that specifies the culture, case, and sort rules that are to be used in the comparison method. This just makes the string comparison you are using explicit.
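For example (the strings here are arbitrary):

string a = "encyclopedia";
string b = "ENCYCLOPEDIA";

// The overload states the rules: ordinal, ignoring case.
bool same = string.Equals(a, b, StringComparison.OrdinalIgnoreCase);   // true

// The plain overload is ordinal and case-sensitive, but nothing in the call says so.
bool sameDefault = string.Equals(a, b);                                // false

Console.WriteLine($"{same}, {sameDefault}");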

3. Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal.

In the MSDN documentation for comparing strings, the guidance is quite clear:

The Compare method is primarily intended for use when ordering or sorting strings.
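In other words, reserve Compare for ordering and Equals for equality. A small sketch:

var names = new List<string> { "delta", "Alpha", "charlie" };

// Comparison is for sorting...
names.Sort(StringComparer.OrdinalIgnoreCase);

// ...equality is a separate question; testing Compare(...) == 0 conflates the two.
bool hasAlpha = names.Any(n => string.Equals(n, "ALPHA", StringComparison.OrdinalIgnoreCase));

Console.WriteLine(string.Join(", ", names) + " | " + hasAlpha);   // Alpha, charlie, delta | True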

All-In-One Code Framework

If you have not had a chance to take a look at the All-In-One Code Framework then please take a few minutes to look it over.

The Microsoft All-In-One Code Framework is a free, centralized code sample library driven by developers’ needs.

It is Microsoft Public License (Ms-PL), which is the least restrictive of the Microsoft open source licenses.

What’s relevant to this article is the All-In-One Code Framework Coding Standards document. You can find the download link at the top of this page: http://1code.codeplex.com/documentation

In that document, they provide a very relevant and useful list of String best practices.

  • Do not use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate many strings. Instead, you should use StringBuilder for concatenation. However, do use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate small numbers of strings (see the sketch after this list).
  • Do use overloads that explicitly specify the string comparison rules for string operations. Typically, this involves calling a method overload that has a parameter of type StringComparison.
  • Do use StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase for comparisons as your safe default for culture-agnostic string matching, and for better performance.
  • Do use string operations that are based on StringComparison.CurrentCulture when you display output to the user.
  • Do use the non-linguistic StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase values instead of string operations based on CultureInfo.InvariantCulture when the comparison is linguistically irrelevant (symbolic, for example). Do not use string operations based on StringComparison.InvariantCulture in most cases. One of the few exceptions is when you are persisting linguistically meaningful but culturally agnostic data.
  • Do use an overload of the String.Equals method to test whether two strings are equal.
  • Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal. They are used to sort strings, not to check for equality.
  • Do use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.
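Here is a quick sketch of that first concatenation guideline; the loop and the strings are arbitrary.

using System.Text;

string[] parts = { "alpha", "bravo", "charlie", "delta" };

// A small, fixed number of strings: the '+' operator is fine.
string header = parts[0] + ", " + parts[1];

// Many strings in a loop: StringBuilder avoids allocating a new string on every pass.
var sb = new StringBuilder();
for (int i = 0; i < 10_000; i++)
{
    sb.Append(parts[i % parts.Length]).Append(';');
}

Console.WriteLine($"{header} | {sb.Length} characters");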

This post is part of my Compendium .NET Best-Practices series.

Thank You Upstate New York Users Groups

In November I traveled to Upstate New York to present at four .NET user groups. Here’s the overview:

  1. The first stop was in Albany on Monday, Nov. 12th to present at the Tech Valley Users Group (TVUG) meeting.
  2. On Tuesday night I was in Syracuse presenting at the Central New York .NET Developer Group meeting.
  3. On Wednesday night I was in Rochester presenting at the Visual Developers of Upstate New York meeting.
  4. Finally, on Thursday night I was in Buffalo presenting at the Microsoft Developers in Western New York meeting.

 

Many Belated Thank Yous

I realize it is belated, but I’d like to extend a very big and heartfelt thank you to the organizers of these users groups for putting together a great series of meetings.

Thank you to Stephanie Carino from Apress for connecting me with the organizers. I really appreciate all the help with all the public relations, the swag, the promotion codes, the raffle copies of my book, and for the tweets and re-tweets.

Slides and Code Samples

My presentations are available on SlideShare under my RuthlessHelp account, but if you are looking for something specific then here are the four presentations:

  1. An Overview of .NET Best Practices
  2. Overcoming the Obstacles, Pitfalls, and Dangers of Unit Testing
  3. Advanced Code Analysis with .NET
  4. An Overview of .NET Best Practices

All the code samples can be found on GitHub under my RuthlessHelp account: https://github.com/ruthlesshelp/Presentations

Please Rate Me

If you attended one of these presentations, please rate me at SpeakerRate:

  1. Rate: An Overview of .NET Best Practices (Albany, 12-Nov)
  2. Rate: Overcoming the Obstacles, Pitfalls, and Dangers of Unit Testing
  3. Rate: Advanced Code Analysis with .NET
  4. Rate: An Overview of .NET Best Practices (Buffalo, 15-Nov)

You can also rate me at INETA: http://ineta.org/Speakers/SearchCommunitySpeakers.aspx?SpeakerId=b7b92f6b-ac28-413f-9baf-9764ff95be79

Thank You DC Alt.Net 2012.7

Another great showing for the DC Alt.Net meetup last night. I hope everyone enjoyed my presentation on code analysis in .NET. There were a lot of great questions and good conversation. I really appreciate the audience participation.

Code Samples

Here are the code samples, available through GitHub.
https://github.com/ruthlesshelp/Presentations

Slides

Here are the slides, available through SlideShare.

The SDL Static Analysis Story

With the two day Microsoft Security Development Conference starting tomorrow in DC, I am curious to hear about one thing: what is the static code analysis story in the Security Development Lifecycle?

Microsoft explains their vision of the Security Development Lifecycle and provides SDL Practice #10: Perform Static Analysis. On that page, under the heading of Tools specific to this practice, CAT.NET is recommended and download links are provided. However, the links are to CAT.NET version 1.0. What happened to CAT.NET 2.0?

On the MSDN blog a post from the SDL folks implies that security-oriented code analysis is going to be part of Visual Studio 11. I believe there is a lot of value in having a separate tool, like FxCop, to perform static code analysis across VS projects and solutions and on 3rd-party assemblies.

I would love to hear more about the tools specific to SDL Practice #10: Perform Static Analysis, and I am hopeful that this will be described in detail in one or more sessions at some future SDC.

Keep Your Privates Private

Often I am asked variations on this question: Should I unit test private methods?

The Visual Studio Team Test blog describes the Publicize testing technique in Visual Studio as one way to unit test private methods. There are other techniques as well.

As a rule of thumb: Do not unit test private methods.

Encapsulation

The concept of encapsulation means that a class’s internal state and behavior should remain “unpublished”. Any instance of that class is only manipulated through the exposed properties and methods.

The class “publishes” properties and methods by using the C# keywords: public, protected, and internal.

The one keyword that says “keep out” is private. Only the class itself needs to know about this property or method. Since any unit test ensures that the code works as intended, the idea of some outside code testing a private method is unconventional. A private method is not intended to be externally visible, even to test code.

However, the question goes deeper than unconventional. Is it unwise to unit test private methods?

Yes. It is unwise to unit test private methods.

Brittle Unit Tests

When you refactor the code-under-test and the private methods change significantly, the test code that tests those private methods must also be refactored. This inhibits the refactoring of the class-under-test.

It should be straightforward to refactor a class when no public properties or methods are impacted. Private properties and methods, because they are not intended to be directly called, should be allowed to freely and easily change. A lot of test code that directly calls private members causes headaches.

Avoid testing the internal semantics of a class. It is the published semantics that you want to test.
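A small, hypothetical example makes the point: the discount rounding below is private, and the test exercises it only through the published method, so the helper can be renamed, inlined, or split without touching the test. (MSTest is shown; any test framework works.)

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class InvoiceCalculator
{
    public decimal TotalWithDiscount(decimal subtotal, decimal discountRate)
    {
        return subtotal - RoundedDiscount(subtotal, discountRate);
    }

    // Private helper: free to change or disappear during refactoring.
    private static decimal RoundedDiscount(decimal subtotal, decimal rate)
    {
        return Math.Round(subtotal * rate, 2, MidpointRounding.AwayFromZero);
    }
}

[TestClass]
public class InvoiceCalculatorTests
{
    [TestMethod]
    public void TotalWithDiscount_AppliesRoundedDiscount()
    {
        var calculator = new InvoiceCalculator();

        // Tests the published semantics only; no knowledge of the private helper.
        Assert.AreEqual(89.99m, calculator.TotalWithDiscount(99.99m, 0.10m));
    }
}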

Zombie Code

Some dead code is only kept alive by the test methods that call it.

If only the public interface is tested, private methods are only called through public-method test coverage. Any private method, or any branch within a private method, that cannot be reached through test coverage is dead code. Private-method testing short-circuits this analysis.

Yes, these are my views on what might be a hot topic to some. There are other arguments, pro and con, many of which are covered in this article: http://www.codeproject.com/Articles/9715/How-to-Test-Private-and-Protected-methods-in-NET

Rules for Commenting Code

Unreadable code with comments is inadequate code with comments you cannot trust. Code that is well written rarely needs comments. Only comments that provide additional, necessary information are useful.

Yesterday a colleague of mine told me that he lost 10 points on a university assignment because he did not comment his code. Today I saw a photo with a list of rules for commenting attributed to Tim Ottinger.

Ottinger’s three rules make a lot of sense. These rules are straightforward. In my experience, they are correct and proper. Here are Ottinger’s Comment Rules:

1. Primary Rule

Comments are for things that cannot be expressed in code.

This is common sense. But, sadly, it is not common practice. Software is written in a programming language. A reader fluent in the programming language must understand the code. The code must be readable. It must clearly express what the code does.

Only add comments when some important thing must be communicated to the reader, and that thing cannot be communicated by making the code any more readable. For example, a comment with a link to the MACRS depreciation method could be important because it helps explain the source of the algorithm.

2. Redundancy Rule

Comments which restate code must be deleted.

Any restatement of the code is unlikely to be maintained over time. If the comment is maintained, then it just adds to the cost. More importantly, when comments are not maintained they either end up substantially misrepresenting the code or end up being ignored. Reading comments that misrepresent code is a waste of time, at best. At worst, they cause confusion or introduce bugs. Remove any comments that restate the code.

3. Single Truth Rule

If the comment says what the code could say, then the code must change to make the comment redundant.

Writing readable code is all about making sure that the compiler properly implements what the developer intended and making sure any competent developer can quickly and effectively understand the code. The code needs to do both: completely, correctly, and consistently. For example, a comment explaining that the variable x represents the principal amount of a loan violates the single truth rule. The variable ought to be named loanPrincipal. In this way the compiler uses the same variable to represent the same single true meaning that the human reader understands.
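A small before-and-after sketch (the loan variables are invented for illustration):

// Before: cryptic names force comments, and the comments break the rules above.
decimal r = 0.01m;
// x is the principal amount of the loan        <-- violates the Single Truth Rule
decimal x = 250_000m;
// multiply the principal by the rate           <-- restates the code; delete it (Redundancy Rule)
decimal i = x * r;

// After: the names say it, so the comments become unnecessary.
decimal monthlyRate = 0.01m;
decimal loanPrincipal = 250_000m;
decimal monthlyInterest = loanPrincipal * monthlyRate;

// A comment that earns its keep says what the code cannot, such as where an
// algorithm comes from, e.g.: Depreciation schedule follows MACRS (IRS Publication 946).
Console.WriteLine($"{i} == {monthlyInterest}");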

Tim Ottinger and Jeff Langr present more pragmatic guidance on when to write (and not write) comments: http://agileinaflash.blogspot.com/2009/04/rules-for-commenting.html