Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.

Crucial Skills That Make Engineers Successful

The other day I was speaking with an engineer and they asked me to describe the crucial skills that make engineers successful. I think this is an important topic.

In a world driven by technological innovation, the role of an engineer is more crucial than ever. Yet, what separates good engineers from successful ones isn’t just technical know-how; it involves a mastery of various practical and soft skills. Let’s explore these skills.

Cultivate Core Technical Skills

    Problem Solving — Every engineer’s primary role involves solving problems to build things or fix things. However, successful engineers distinguish themselves by tackling novel challenges that aren’t typically addressed in conventional education. Refine your ability to devise innovative solutions.

    Learn and practice techniques such as:

    • actively engaging with new and unfamiliar material (e.g., frameworks, languages, other tech)
    • linking knowledge to existing experiences
    • prioritizing understanding over memorization

    Creativity — John Cleese once said, “Creativity is not a talent … it is a way of operating.” Creativity in engineering isn’t about artistic ability; it’s about thinking differently and being open to new ideas.

    Foster creativity by:

    • creating distraction-free environments
    • allowing uninterrupted time for thought
    • maintaining a playful, open-minded attitude toward problem-solving

    Critical Thinking — This involves a methodical analysis and evaluation of information to form a judgment. This skill is vital for making informed decisions and avoiding costly mistakes in complex projects.

    Successful engineers often excel at:

    • formulating hypotheses
    • gathering information (e.g., researching, experimenting, reading, and learning)
    • exploring multiple viewpoints to reach logical conclusions

    Domain Expertise — Understanding the specific needs and processes of the business, market, or industry you are working with can greatly enhance the relevance and impact of your engineering solutions. Domain expertise allows engineers to deliver more targeted and effective solutions.

    Learn the domain by:

    • mastering business-, market-, and industry-specific processes
    • familiarizing yourself with the client’s needs, wants, and “delighters”

    Enhance Your Soft Skills

      The importance of emotional intelligence (EQ) in engineering cannot be overstated. As engineers advance in their careers, their technical responsibilities often broaden to include leadership roles. These skills help in nurturing a positive work environment and team effectiveness. Moreover, as many experts suggest, EQ tends to increase with age, which provides a valuable opportunity for personal development over time.

      Broaden your skills to include more soft skills:

      • recognizing and regulating emotions
      • understanding team dynamics
      • effective communication

      Debug the Development Process

        Personal Process — Engineering is as much about personal growth as it is about technical know-how. Successful engineers maintain a disciplined personal development process that helps them continuously improve their performance.

        Hone your ability and habit of:

        • estimating and planning your work
        • making and keeping commitments
        • quantifying the value of your work
        • reducing defects and enhancing quality

        Team Process — In collaborative environments, the ability to facilitate, influence, and negotiate becomes crucial. Successful engineers need to articulate and share their vision, adapt their roles to the team’s needs, and contribute to building efficient, inclusive teams. This involves balancing speed and quality in engineering tasks and fostering an environment where new and better practices are embraced.

        Continually Learn and Adapt

          The landscape of engineering is constantly evolving, driven by advancements in technology and changes in market demands. Remaining successful as an engineer requires a commitment to lifelong learning—actively seeking out new knowledge and skills to stay ahead of the curve.

          In summary, to adapt and thrive, you must take charge of your own skill development.

          Recommended Resources

            If you are looking to deepen your understanding of these concepts, many resources are available. Here are some recommendations that provide insights and tools to enhance your skills.

            Problem Solving

            • Book: “The Ideal Problem Solver” by John D. Bransford and Barry S. Stein
              Amazon Link
            • Video: Tom Wujec: “Got a wicked problem? First, tell me how you make toast”
              TED Talk Link

            Creativity

            • Book: “Creativity: A Short and Cheerful Guide” by John Cleese
              Amazon Link
            • Video: John Cleese on Creativity
              YouTube Link

            Critical Thinking

            • Book: “Thinking, Fast and Slow” by Daniel Kahneman
              Amazon Link
            • Video: “5 tips to improve your critical thinking – Samantha Agoos”
              YouTube Link

            Domain Expertise

            • Book: “Domain-Driven Design Distilled” by Vaughn Vernon
              Amazon Link
            • Video: Introduction to Domain-Driven Design
              YouTube Link

            Emotional Intelligence

            • Book: “Working with Emotional Intelligence” by Daniel Goleman
              Amazon Link
            • Video: Daniel Goleman introduces Emotional Intelligence
              YouTube Link

            Development Process

            Personal and Team Development

            Boundary Analysis

            For every method-under-test there is a set of valid preconditions and arguments. It is the domain of all possible values that allows the method to work properly. That domain defines the method’s boundaries. Boundary testing requires analysis to determine the valid preconditions and the valid arguments. Once these are established, you can develop tests to verify that the method guards against invalid preconditions and arguments.

            Boundary-value analysis is about finding the limits of acceptable values, which includes looking at the following:

            • All invalid values
            • Maximum values
            • Minimum values
            • Values just on a boundary
            • Values just within a boundary
            • Values just outside a boundary
            • Values that behave uniquely, such as zero or one
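In production code, these boundaries typically show up as guard clauses. Here is a minimal sketch; the class name and message are illustrative, not from a real system, and the 1-to-360-month range matches the loan-term limits discussed later in this post.

```csharp
using System;

// Illustrative guard clause (hypothetical class and message) for the
// loan-term boundaries used in this post: valid terms are 1..360 months.
public static class LoanGuards
{
    public static void GuardTermInMonths(int termInMonths)
    {
        // Values on or outside the boundary (0, negatives, 361 and up) are
        // rejected; values just within the boundary (1, 360) are accepted.
        if (termInMonths <= 0 || termInMonths > 360)
        {
            throw new ArgumentOutOfRangeException(
                nameof(termInMonths),
                "Term must be between 1 and 360 months.");
        }
    }
}
```

Boundary tests for a guard like this would probe 0, 1, 360, and 361: the values just outside, on, and just within the boundary.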

            An example of a situational case for dates is a deadline or time window. You could imagine that for a student loan origination system, a loan disbursement must occur no earlier than 30 days before or no later than 60 days after the first day of the semester.

            Another situational case might be a restriction on age, dollar amount, or interest rate. There are also rounding-behavior limits, like two-digits for dollar amounts and six-digits for interest rates. There are also physical limits to things like weight and height and age. Both zero and one behave uniquely in certain mathematical expressions. Time zone, language and culture, and other test conditions could be relevant. Analyzing all these limits helps to identify boundaries used in test code.
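Those rounding limits can be made concrete with a small sketch. The method names are hypothetical, and the choice of MidpointRounding.ToEven (banker's rounding) is an assumption; a real lending system would specify its rounding rule explicitly.

```csharp
using System;

// Hypothetical sketch of the rounding limits mentioned above:
// dollar amounts carry two decimal places, interest rates six.
// MidpointRounding.ToEven (banker's rounding) is an assumption here.
public static class RoundingRules
{
    public static decimal RoundDollars(decimal amount) =>
        Math.Round(amount, 2, MidpointRounding.ToEven);

    public static decimal RoundRate(decimal rate) =>
        Math.Round(rate, 6, MidpointRounding.ToEven);
}
```

Rounding behavior is itself a boundary worth testing, since midpoint values such as 72.165 behave differently under ToEven than under AwayFromZero.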

            Note: Dealing with date arithmetic can be tricky. Boundary analysis and good test code make sure that the date and time logic is correct.
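The disbursement-window rule above can be sketched as a simple check. The type and method names are hypothetical; the 30-days-before and 60-days-after limits come straight from the student loan example.

```csharp
using System;

// Hypothetical sketch of the disbursement-window rule: a disbursement
// must occur no earlier than 30 days before, and no later than 60 days
// after, the first day of the semester.
public static class DisbursementRules
{
    public static bool IsWithinWindow(DateTime disbursement, DateTime semesterStart)
    {
        DateTime earliest = semesterStart.AddDays(-30);
        DateTime latest = semesterStart.AddDays(60);
        return disbursement >= earliest && disbursement <= latest;
    }
}
```

Boundary tests here would probe the day before the earliest date, the earliest date itself, the latest date itself, and the day after the latest date.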

            Invalid Arguments

            When the test code calls a method-under-test with an invalid argument, the method should throw an argument exception. This is the intended behavior, but to verify it requires a negative test. A negative test is test code that passes if the method-under-test responds negatively; in this case, throwing an argument exception.

            The test code shown here fails because ComputePayment is called with an invalid termInMonths of zero; the test code is not expecting an exception.


            [TestCase(7499, 1.79, 0, 72.16)]
            public void ComputePayment_WithProvidedLoanData_ExpectProperMonthlyPayment(
              decimal principal,
              decimal annualPercentageRate,
              int termInMonths,
              decimal expectedPaymentPerPeriod)
            {
              // Arrange
              var loan =
                new Loan
                {
                  Principal = principal,
                  AnnualPercentageRate = annualPercentageRate,
                };
            
              // Act
              var actual = loan.ComputePayment(termInMonths);
            
              // Assert
              Assert.AreEqual(expectedPaymentPerPeriod, actual);
            }
            

            The result of the failing test is shown below; the test fails with an unexpected exception.


            LoanTests.ComputePayment_WithProvidedLoanData_ExpectProperMonthlyPayment : Failed
            System.ArgumentOutOfRangeException : Specified argument was out of the range of valid
            values.
            
            Parameter name: termInPeriods
            at
            Tests.Unit.Lender.Slos.Model.LoanTests.ComputePayment_WithProvidedLoanData_ExpectProperMo
            nthlyPayment(Decimal principal, Decimal annualPercentageRate, Int32 termInMonths, Decimal
            expectedPaymentPerPeriod) in LoanTests.cs: line 25

            The challenge is to pass the test when the exception is thrown. Also, the test code should verify that the exception type is ArgumentOutOfRangeException. This requires the test method to somehow catch the exception, evaluate it, and determine whether it is the expected exception.

            In NUnit this can be accomplished using either an attribute or a test delegate. In the case of a test delegate, the test method can use a lambda expression to define the action step to perform. The lambda is assigned to a TestDelegate variable within the Act section. In the Assert section, an assertion statement verifies that the proper exception is thrown when the test delegate is invoked.

            The invalid values for the termInMonths argument are found by inspecting the ComputePayment method’s code, reviewing the requirements, and performing boundary analysis. The following invalid values are discovered:

            • A term of zero months
            • Any negative term in months
            • Any term greater than 360 months (30 years)

            Below, a new test is written to verify that the ComputePayment method throws an ArgumentOutOfRangeException whenever an invalid term is passed as an argument. These are negative tests, with expected exceptions.


            [TestCase(7499, 1.79, 0, 72.16)]
            [TestCase(7499, 1.79, -1, 72.16)]
            [TestCase(7499, 1.79, -2, 72.16)]
            [TestCase(7499, 1.79, int.MinValue, 72.16)]
            [TestCase(7499, 1.79, 361, 72.16)]
            [TestCase(7499, 1.79, int.MaxValue, 72.16)]
            public void ComputePayment_WithInvalidTermInMonths_ExpectArgumentOutOfRangeException(
              decimal principal,
              decimal annualPercentageRate,
              int termInMonths,
              decimal expectedPaymentPerPeriod)
            {
              // Arrange
              var loan =
                new Loan
                {
                  Principal = principal,
                  AnnualPercentageRate = annualPercentageRate,
                };
            
              // Act
              TestDelegate act = () => loan.ComputePayment(termInMonths);
            
              // Assert
              Assert.Throws<ArgumentOutOfRangeException>(act);
            }
            

            Invalid Preconditions

            Every object is in some arranged state at the time a method of that object is invoked. The state may be valid or it may be invalid. Whether explicit or implicit, all methods have expected preconditions. When a method's preconditions are not spelled out, one goal of good test code is to test those assumptions as a way of revealing the implicit expectations and turning them into explicit preconditions.

            For example, before calculating a payment amount, let’s say the principal must be at least $1,000 and less than $185,000. Without knowing the code, these limits are hidden preconditions of the ComputePayment method. Test code can make them explicit by arranging the classUnderTest with unacceptable values and calling the ComputePayment method. The test code asserts that an expected exception is thrown when the method’s preconditions are violated. If the exception is not thrown, the test fails.

            This code sample is testing invalid preconditions.


            [TestCase(0, 1.79, 360, 72.16)]
            [TestCase(997, 1.79, 360, 72.16)]
            [TestCase(999.99, 1.79, 360, 72.16)]
            [TestCase(185000, 1.79, 360, 72.16)]
            [TestCase(185021, 1.79, 360, 72.16)]
            public void ComputePayment_WithInvalidPrincipal_ExpectInvalidOperationException(
              decimal principal,
              decimal annualPercentageRate,
              int termInMonths,
              decimal expectedPaymentPerPeriod)
            {
              // Arrange
              var classUnderTest =
                new Application(null, null, null)
                {
                  Principal = principal,
                  AnnualPercentageRate = annualPercentageRate,
                };
            
              // Act
              TestDelegate act = () => classUnderTest.ComputePayment(termInMonths);
            
              // Assert
              Assert.Throws<InvalidOperationException>(act);
            }
            

            Implicit preconditions should be tested and defined by a combination of exploratory testing and inspection of the code-under-test, whenever possible. Test the boundaries by arranging the class-under-test in improbable scenarios, such as negative principal amounts or interest rates.

            Tip: Testing preconditions and invalid arguments prompts a lot of questions. What is the principal limit? Is it $18,500 or $185,000? Does it change from year to year?

            More on boundary-value analysis can be found at Wikipedia: https://en.wikipedia.org/wiki/Boundary-value_analysis

            Better Value, Sooner, Safer, Happier

            Jonathan Smart says agility across the organization is about delivering Better Value, Sooner, Safer, Happier. I like that catchphrase, and I’m looking forward to reading his new book, Sooner Safer Happier: Antipatterns and Patterns for Business Agility.

            But what does this phrase mean to you? Do you think other people buy it?

            Better Value – The key word here is value. Value means many things to many people: managers, developers, end-users, and customers.

            In general, executive and senior managers are interested in hearing about financial rewards. These are important to discuss as potential benefits of a sustained, continuous improvement process. They come from long-term investment. Here is a list of some of the bottom-line and top-line financial rewards these managers want to hear about:

            • Lower development costs
            • Cheaper to maintain, support and enhance
            • Additional products and services
            • Attract and retain customers
            • New markets and opportunities

            Project managers are usually the managers closest to the activities of developers. Functional managers, such as a Director of Development, are also concerned with the day-to-day work of developers. For these managers, important values spring from general management principles, and they seek improvements in the following areas:

            • Visibility and reporting
            • Control and correction
            • Efficiency and speed
            • Planning and predictability
            • Customer satisfaction

            End-users and customers are generally interested in deliverables. When it comes to quantifying value they want to know how better practices produce better results for them. To articulate the benefits to end-users and customers, start by focusing on specific topics they value, such as:

            • More functionality
            • Easier to use
            • Fewer defects
            • Faster performance
            • Better support

            Developers and team leads are generally interested in individual and team effectiveness. Quantifying the value of better practices to developers is a lot easier if the emphasis is on increasing effectiveness. The common sense argument is that by following better practices the team will be more effective. In the same way, by avoiding bad practices the team will be more effective. Developers are looking for things to run more smoothly. The list of benefits developers want to hear about includes the following:

            • Personal productivity
            • Reduced pressure
            • Greater trust
            • Fewer meetings and intrusions
            • Less conflict and confusion

            Quantifying value is about knowing what others value and making arguments that make rational sense to them. For many, hard data is crucial. Since there is such variability from situation to situation, the only credible numbers are the ones your organization collects and tracks. It is a good practice to start collecting and tracking some of those numbers and relating them to the development practices that are showing improvements.

            Beyond the numbers, there are many observations and descriptions that support the new and different practices. It is time to start describing the success stories and collecting testimonials. Communicate how each improvement is showing results in many positive ways.

            Fakes, Stubs and Mocks

            I’m frequently asked about the difference between automated testing terms like fakes, stubs and mocks.

            The term fake is a general term for an object that stands in for another object; both stubs and mocks are types of fakes. The purpose of a fake is to create an object that allows the method-under-test to be tested in isolation from its dependencies, meeting one of two objectives:

            1. Stub — Prevent the dependency from obstructing the code-under-test and to respond in a way that helps it proceed through its logical steps.

            2. Mock — Allow the test code to verify that the code-under-test’s interaction with the dependency is proper, valid, and expected.

            Since a fake is any object that stands in for the dependency, it is how the fake is used that determines whether it is a stub or a mock. Mocks are used only for interaction testing. If the expectation of the test method is not about verifying interaction with the fake, then the fake must be a stub.
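The distinction can be sketched with hand-rolled fakes. The interface and class names here are hypothetical, not from a real codebase; in practice a mocking framework such as Moq or NSubstitute generates these objects for you.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical dependency that the code-under-test calls.
public interface IEmailGateway
{
    bool Send(string to, string body);
}

// Stub: keeps the dependency from obstructing the code-under-test
// by responding with a canned answer so the logic can proceed.
public class EmailGatewayStub : IEmailGateway
{
    public bool Send(string to, string body) => true;
}

// Mock: records interactions so the test can verify that the
// code-under-test called the dependency as expected.
public class EmailGatewayMock : IEmailGateway
{
    public List<string> Recipients { get; } = new List<string>();

    public bool Send(string to, string body)
    {
        Recipients.Add(to);
        return true;
    }
}
```

A test that asserts on Recipients afterward is interaction testing, so the fake is a mock; a test that only needs Send to succeed so the method-under-test can continue is using the fake as a stub.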

            BTW, these terms are explained very well in The Art of Unit Testing by Roy Osherove.

            There’s more to learn on this topic at Stack Overflow: https://stackoverflow.com/questions/24413184/difference-between-mock-stub-spy-in-spock-test-framework

            First, the mental creation

            With physical things, like buildings and devices, people seem to be generally okay with generating strong specifications, such as blueprints and CAD drawings. These specifications are often about trying to perfect the mental creation before the physical creation gets started. In these cases, the physical thing you’re making is not at all abstract, and it could be very expensive to make. It’s hard to iterate when you’re building a bridge that’s going to be part of an interstate roadway.

            From what I’ve seen in the world of software, the physical creation seems abstract, and engineers writing software appear inexpensive when compared to things like steel and concrete. Many people seem to want to skip the mental creation step, and they ask the engineers to jump right into coding the thing up.

            If the increment of the thing that your team is to write software for is ready to be worked on, and you have a roadmap, then an iterative and incremental approach will probably work. In fact, the idea that complex problems require iterative solutions underpins the values and principles described in the Manifesto for Agile Software Development. The Scrum process framework describes this as product increments and iterations.

            However, all too often it’s a random walk guided by ambiguous language (either written or spoken) that leads to software that lacks clarity, consistency, and correctness.

            Sadly and all too often, the quality assurance folks are given little time to discover and describe the issues, let alone the time to verify that the issues are resolved. During review sessions, engineers pull out their evidence that the software is working as intended. They have a photo of a whiteboard showing some hand-wavy financial formulas. They demonstrate that they’re getting the right answer with the math-could-not-be-easier example: a loan that has a principal of $12,000.00, a term of 360 payments, an annual interest rate of 12%, and, of course, don’t forget that there are 12 months in a year. And through anomalous coincidence, the software works correctly using that example. Hilarity ensues!
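For reference, the math behind that too-easy example is the standard annuity (amortized loan) payment formula: payment = P * r / (1 - (1 + r)^-n), where r is the periodic rate and n is the number of payments. The sketch below is illustrative only; it is not the project's Loan.ComputePayment, and it assumes the annual rate is given as a fraction (0.12 for 12%).

```csharp
using System;

// Illustrative sketch of the standard annuity payment formula:
// payment = P * r / (1 - (1 + r)^-n), with r the monthly rate and
// n the number of monthly payments. Assumes annualRate is a fraction.
public static class PaymentMath
{
    public static decimal ComputePayment(decimal principal, decimal annualRate, int termInMonths)
    {
        double r = (double)annualRate / 12.0;   // periodic (monthly) rate
        double p = (double)principal;
        double payment = p * r / (1.0 - Math.Pow(1.0 + r, -termInMonths));
        return Math.Round((decimal)payment, 2); // round to whole cents
    }
}
```

With the whiteboard-friendly inputs ($12,000.00, 12%, 360 payments) every intermediate value divides neatly, which is exactly why that one example can pass while rounding and rate-conversion bugs hide in every other case.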

            Unfortunately, QA is often squeezed. They are over-challenged and under-supported. They are given an absurd goal, and they persevere despite grave doubts about whether a quality release is achievable. Their hopelessness comes from knowing that the team is about to inflict this software on real people, good people, and the QA team doesn’t have the time or the power to stop the release.

            Here are the key things to take away:

            • When the physical creation of software seems to be cheap, comparable to the mental creation of writing down and agreeing to lightweight requirements, the temptation exists to hire nothing but software engineers and to maximize the number of people who “just write code”. In the long run, however, the rework costs are often extremely expensive.
            • In the end, the end-users are going to find any defects.
            • Better to have QA independently verify and validate that the software works as intended; of course, what is intended needs to be written down clearly, consistently, and correctly.
            • QA ought to find issues that can be resolved before the users get the software — issue reports that show clear gaps in required behavior.
            • QA conversations with engineers ought never degenerate into a difference of opinion, which happens when there are no facts about the required behavior. Very often these discussions (rhymes with concussion) escalate into a difference of values — “you don’t understand modern software development”.

            Interestingly, the software’s users are going to find the most important defects. The users are the ultimate independent validators and verifiers. End-users, their bosses, and the buyer can be ruthless reporters of defects. In fact, they might very well reject the product.

            We are the Pakleds; You *are* smart!

            As one of the weary consultants on a multi-year software development project for a major lending institution, I observed what became known as the “Pakled-Customer Syndrome”. The Pakleds are a race of dimwitted aliens from Star Trek: The Next Generation (TNG), first seen in the “Samaritan Snare” episode. They co-opt the technology of other spaceships through manipulation, brainless praise, and hostage taking. Responding in good faith to the Pakled distress call, the Enterprise is ensnared by the Pakleds’ intractable problems and pig-headed attitude.

            In the TNG episode, the Pakled are characterized by their repetition of a few simple phrases. When the crew attempts to engage them in a dialog about their distress (e.g., “Is your ship damaged?”), the response is always “Uh-hunh.” Diagnostic inquiry and problem-solving are met with “It is broken…. Can you make our ship go?”, “Make our ship go”, “Will our ship go now?” Any perceived progress is met with oddly enthusiastic and sycophantic praise: “He is smart”, “You are brilliant”; but this turns out to be unproductive support and belies their hidden agendas.

            On many projects the customer can be entrenched in their own Pakled mindset, and the project is soon mired in the resulting quicksand. Here is what characterizes the “Pakled-Customer Syndrome” as an organizational behavioral dysfunction:

            • Snare one: The customer takes no responsibility for understanding the problems they face: “it is broken”. Ultimately, project scope cannot be managed when there is no meeting of the minds on the larger tasks at hand.
            • Snare two: The customer cannot participate in their own treatment. Due to many surreal disconnects, too many project resources are spent teaching the customer their own business; too few project objectives are being accomplished.
            • Snare three: The customer provides fake praise and misleading recognition. The project believes it is accomplishing worthy goals and that customer value is being created; it is not.
            • Snare four: The customer manipulates and ties up valuable resources. The more the project starts to unravel the more the Pakled-customer latches on to your team, drawing them deeper into their dysfunction.

            At the heart of the “Pakled-Customer Syndrome” are the classic differences in expectations, especially around roles and responsibilities, that lead to conflict. In addition, there are implicit and hidden agendas which must be made explicit before meaningful project progress can be made.

            You got to know your limitations

            Engineers must only be limited by their intellect and available time, subject to a sustainable pace. Maturity and experience are important, too.

            This quote is a paraphrase of something a boss from long ago said to me. Based on this revelation, here are the things I did to change my professional life. I tried to:

            • Develop my intellect: Stretch my brain. Learn new skills. Acquire new knowledge. Play brain games. Start taking notes instead of always relying on my recall. Read a lot.
            • Increase my available time and set a sustainable pace: Debug my personal software process (see PSP). Read some relevant books: Personal Kanban. First Things First. Learn how to say no, especially to things that are not important. Define what’s important.
            • Raise my maturity level: Read some key books: Raising Your Emotional Intelligence, Working With Emotional Intelligence, and I’m OK — You’re OK. Go to cognitive behavioral therapy (CBT) for my professional and personal challenges, especially to better cope with difficult people.
            • Have many and varied experiences: Don’t get the same 1 year of experience for 20 years. Switch jobs. Work with people who challenge me. Don’t make the same mistake once (learn from other people’s prior mistakes). Read ahead in life by learning about other people through their biographies and memoirs, especially their failures.

            In the end, I continue to do all of the things I set out to try, and I find them to be very helpful. I also find that I am able to be helpful to others when I’m working to be my best.

            Thank You DC .NET Users Group 2013.2

            A big thank you to the DC .NET Users Group for hosting my presentation on Continuous Integration at their February meeting last night. I hope everyone enjoyed it. The questions and conversations were very good.

            Code Samples

            Although most of the examples used TeamCity, here are the code samples, available through GitHub.
            https://github.com/ruthlesshelp/Presentations

            Slides

            Here are the slides, available through SlideShare.

            Compendium of .NET Best Practices

            So you’re getting ready to start a .NET Best Practices initiative at your organization and you’re looking to find a lot of specific best practices tips. You want to know: What are the .NET Framework best practices?

            You can be assured that I’ve been down this road. In fact, a few readers of my book, Pro .NET Best Practices, expressed some disappointment that the book is not a collection of specific .NET best practices. And this is exactly why I decided to address this subject in today’s post.

            For those that want to dig right in, follow this link to part 1, MSDN: .NET Framework Best Practices.

            If you want some background, let me start with the question: Who wants to follow best practices, anyway?

            Adoption

            The adoption of new and different practices is a central theme of Pro .NET Best Practices. I work with enough individuals, teams, and organizations to understand the issues involved with adopting best practices. Consider the four levels at which best practices are embraced:

            • Individual: You or any individual adopts better practices to increase personal and professional productivity, quality, thoroughness, and overall effectiveness.
            • Group: The team adopts better practices to be more industrious together, avoid problems, achieve desired results, improve interpersonal relationships, and work more effectively with other teams.
            • Organization: Your company, agency, or firm adopts better practices to bring more positive outcomes to the organization, attract and retain employees, satisfy end-user expectations, and make stakeholders happy.
            • Profession: Better practices are widely adopted and become generally-accepted standards, patterns, and principles that bring alignment to software development and benefit to all that follow them.

            In an ideal world, best practices are quickly adopted at all four levels. However, in the real world, they can be slowly adopted by the group, resisted by the organization, embraced by one individual, not by another, or ignored altogether by everyone but you. It can be a mixed bag.

            The Reader

            There are two key readers of this blog post that I want to identify with and help:

            1. Developers – As a developer, you have personal practices that make you an effective software developer. The compendium should list new and different practices that help make you a more effective developer.
            2. Team Leaders – As a team leader, you see the team develop software through their current practices. The compendium should list practices that help the team perform better and achieve better outcomes.

            These readers are adopting at either the individual or group level.

            If you are a reader who wants to bring best practices to the organization or the software development profession then I assert that you are probably not interested in the content of this compendium. Yes, you might refer a developer or team leader to the compendium, but I doubt you will find it directly relevant.

            So, given this introduction, let’s look at how a collection (I like the term compendium) of specific .NET best practices might be organized.

            Tags for the Compendium

            Since this is a blog, tags can help others find and navigate the content. Here is a quick list of tags that come to mind:

            • Coding Best Practices. For example, C#, VB.NET, T-SQL
            • Toolset Best Practices. For example, Visual Studio, ReSharper, Typemock
            • Platform Best Practices. For example, ASP.NET, SQL Server, SharePoint
            • Architecture Best Practices. For example, Client-Server, n-Tier, CQRS
            • Windows 8 Best Practices
            • Engineering Fundamentals Best Practices
            • Cloud Best Practices
            • Phone Best Practices
            • ALM (Application Lifecycle Management) Best Practices

            Clearly, there are a lot of ways to slice and dice the topic of best practices; however, I will try to bring things back to the topic of the Microsoft .NET Framework.

            You can find the entire Best Practices category here: https://ruthlesslyhelpful.net/category/development/best-practices/

            The Power of Free

            I mostly wrote Pro .NET Best Practices based on my professional experience trying to get teams and organizations to adopt .NET Framework best practices. Over the years, I have read many books, I experimented, I tried and persevered with one approach, and I tried totally new approaches. Many times I learned from others. Many times I learned from my mistakes.

            Over the years, and as I researched my book, I found many free, online sources of .NET best practices. Many are professionally written and easy to follow. In my book I was reluctant to paraphrase or repeat material, but I should have done a better job of showing people how to access it. (The one thing I really kick myself over is that I did not use Bitly.)

            So, let me start the Compendium of .NET Best Practices with some great material already available on the Internet.

            Part 1: MSDN: .NET Framework Best Practices


            For years now, the Microsoft Developer Network (MSDN) has provided free online documentation to .NET developers. There are many individual .NET best-practices topics, which are described at a high level at this MSDN link:
            MSDN: .NET Framework Best Practices

            This is a great MSDN article to read and bookmark if you’re interested in .NET best practices.

            Best Practices for Strings

            Just take a look at all the information within the MSDN topic of Best Practices for Using Strings in the .NET Framework. I am not going to be able to duplicate all of that. If you are developing an application that has to deal with culture, globalization, and localization issues, then you need to know much of this material.

            Before I go any further, let me introduce you to Jon Skeet. He wrote an awesome book, C# In Depth. I think you might enjoy reading his online article on .NET Strings: http://csharpindepth.com/Articles/General/strings.aspx

            Okay, let’s get back to the MSDN article. Below I have highlighted a few of the Strings best practices that I’d like to discuss.

            1. Use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.

            In the .NET Framework, ToUpperInvariant is the standard way to normalize case. In fact, Visual Studio Code Analysis has rule CA1308, in the Globalization category, which can monitor this.

            This is a really easy practice to follow once you know it.

            Here is the key point I picked up from rule CA1308:

            It is safe to suppress [this] warning message [CA1308] when you are not making [a] security decision based on the result (for example, when you are displaying it in the UI).

            In other words, take care to uppercase strings when the code is making a security decision based on normalized string comparison.
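As a sketch of this rule, here is a hypothetical role check (the `role` variable and `"ADMIN"` literal are my own illustration, not from the MSDN article). Uppercase normalization is the safe choice because some lowercase mappings are culture-sensitive; the Turkish dotless 'i' is the classic example.

```csharp
using System;

class CaseNormalization
{
    static void Main()
    {
        // Hypothetical security-relevant value to normalize for comparison.
        string role = "admin";

        // Normalize with ToUpperInvariant, not ToLowerInvariant, before
        // comparing, per rule CA1308's guidance for security decisions.
        bool isAdmin = role.ToUpperInvariant() == "ADMIN";

        Console.WriteLine(isAdmin); // prints True
    }
}
```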

            2. Use an overload of the String.Equals method to test whether two strings are equal.

            Some of these overloads require a parameter that specifies the culture, case, and sort rules that are to be used in the comparison method. This just makes the string comparison you are using explicit.
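A minimal sketch of what "explicit" means here: passing a StringComparison value states the rules in the code itself, rather than relying on a default.

```csharp
using System;

class EqualsOverload
{
    static void Main()
    {
        string a = "encyclopedia";
        string b = "ENCYCLOPEDIA";

        // The StringComparison argument makes the comparison rules explicit.
        bool ordinal = a.Equals(b, StringComparison.Ordinal);
        bool ordinalIgnoreCase = a.Equals(b, StringComparison.OrdinalIgnoreCase);

        Console.WriteLine(ordinal);           // False: case differs
        Console.WriteLine(ordinalIgnoreCase); // True: case is ignored
    }
}
```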

            3. Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal.

            In the MSDN documentation for comparing strings, the guidance is quite clear:

            The Compare method is primarily intended for use when ordering or sorting strings.
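To illustrate the distinction, this sketch contrasts the two methods: Compare answers "which sorts first?", while Equals answers "are these the same?".

```csharp
using System;

class CompareVersusEquals
{
    static void Main()
    {
        string x = "apple";
        string y = "banana";

        // String.Compare is for ordering; a negative result means
        // x sorts before y under the given comparison rules.
        int order = String.Compare(x, y, StringComparison.Ordinal);
        Console.WriteLine(order < 0); // True: "apple" sorts before "banana"

        // For equality, say so directly with an explicit Equals overload.
        bool equal = String.Equals(x, y, StringComparison.Ordinal);
        Console.WriteLine(equal); // False
    }
}
```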

            All-In-One Code Framework

            If you have not had a chance to take a look at the All-In-One Code Framework then please take a few minutes to look it over.

            The Microsoft All-In-One Code Framework is a free, centralized code sample library driven by developers’ needs.

            It is licensed under the Microsoft Public License (Ms-PL), which is the least restrictive of the Microsoft open source licenses.

            What’s relevant to this article is the All-In-One Code Framework Coding Standards document. You can find the download link at the top of this page: http://1code.codeplex.com/documentation

            In that document, they present a very relevant and useful list of string best practices.

            • Do not use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate many strings. Instead, you should use StringBuilder for concatenation. However, do use the ‘+’ operator (or ‘&’ in VB.NET) to concatenate small numbers of strings.
            • Do use overloads that explicitly specify the string comparison rules for string operations. Typically, this involves calling a method overload that has a parameter of type StringComparison.
            • Do use StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase for comparisons as your safe default for culture-agnostic string matching, and for better performance.
            • Do use string operations that are based on StringComparison.CurrentCulture when you display output to the user.
            • Do use the non-linguistic StringComparison.Ordinal or StringComparison.OrdinalIgnoreCase values instead of string operations based on CultureInfo.InvariantCulture when the comparison is linguistically irrelevant (symbolic, for example). Do not use string operations based on StringComparison.InvariantCulture in most cases. One of the few exceptions is when you are persisting linguistically meaningful but culturally agnostic data.
            • Do use an overload of the String.Equals method to test whether two strings are equal.
            • Do not use an overload of the String.Compare or CompareTo method and test for a return value of zero to determine whether two strings are equal. They are used to sort strings, not to check for equality.
            • Do use the String.ToUpperInvariant method instead of the String.ToLowerInvariant method when you normalize strings for comparison.
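The first bullet, on concatenation, can be sketched as follows (the loop and its item strings are my own illustration): a handful of strings can use '+', while repeated concatenation in a loop should use StringBuilder, since each '+' allocates a new string.

```csharp
using System;
using System.Text;

class Concatenation
{
    static void Main()
    {
        // A small number of strings: the '+' operator is fine and readable.
        string greeting = "Hello" + ", " + "world";

        // Many strings in a loop: build the result incrementally with
        // StringBuilder to avoid allocating a new string on each pass.
        var builder = new StringBuilder();
        for (int i = 0; i < 5; i++)
        {
            builder.Append("item").Append(i).Append(';');
        }
        string list = builder.ToString();

        Console.WriteLine(greeting); // Hello, world
        Console.WriteLine(list);     // item0;item1;item2;item3;item4;
    }
}
```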

            This post is part of my Compendium .NET Best-Practices series.