Ruthlessly Helpful
Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.
NuGet Kickstart Package
Posted by Stephen Ritchie on December 21, 2011
I want to use NuGet to retrieve a set of content files that are needed for the build. For example, the TeamCity build configuration runs a runner.msbuild script; however, that script needs to import a Targets file, like this:
<Import Condition="Exists('$(BuildPath)\ImportTargets\MSBuild.Lender.Common.Targets')"
        Project="$(BuildPath)\ImportTargets\MSBuild.Lender.Common.Targets"
        />
The plan is to create a local NuGet feed that has all the prerequisite files for the build script. Using the local NuGet feed, install the “global build” package as the first build task. After that, the primary build script can find the import file and proceed normally. Here is the basic solution strategy that I came up with.
To see an example, follow these steps:
1. Create a local NuGet feed. Read more information here: http://docs.nuget.org/docs/creating-packages/hosting-your-own-nuget-feeds
2. Write a NuGet spec file and name it Lender.Build.nuspec. This is simply an XML file. The schema is described here: http://docs.nuget.org/docs/reference/nuspec-reference
<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
<metadata>
<id>_globalBuild</id>
<version>1.0.0</version>
<authors>Lender Development</authors>
<requireLicenseAcceptance>false</requireLicenseAcceptance>
<description>Lender Build</description>
</metadata>
<files>
<file src="ImportTargets\**" target="ImportTargets" />
</files>
</package>
Notice the “file” element. It specifies the source files, which include the MSBuild.Lender.Common.Targets file because the entire ImportTargets folder is added to the package.
3. Using the NuGet Package Explorer, open the Lender.Build.nuspec file.
4. From the Package Explorer, save the package to the local NuGet feeds folder; in this case, the C:\LocalNuGetFeeds folder.
5. Now let’s move on over to where this “_globalBuild” dependency is going to be used. For example, the C:\projects\Lender.Slos folder. In that folder, create a packages.config file and add it to version control. That config file looks like this:
<?xml version="1.0" encoding="utf-8"?> <packages> <package id="_globalBuild" version="1.0.0" /> </packages>
This references the package with the id of “_globalBuild”, which is found in the LocalNuGetFeeds feed. That feed is one of the available package sources because it was added through Visual Studio, under Tools >> Library Package Manager >> Package Manager Settings.
6. From MSBuild, the CI server calls the “Kickstart” target before running the default script target. The Kickstart target uses the NuGet.exe command line to install the global build package. Here is the MSBuild script:
<Project DefaultTargets="Default"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
         ToolsVersion="4.0"
         >

  <PropertyGroup>
    <RootPath>.</RootPath>
    <BuildPath>$(RootPath)\_globalBuild.1.0.0\ImportTargets</BuildPath>
    <CommonImportFile>$(BuildPath)\MSBuild.Lender.Common.Targets</CommonImportFile>
  </PropertyGroup>

  <Import Condition="Exists('$(CommonImportFile)')"
          Project="$(CommonImportFile)"
          />

  <Target Name="Kickstart">
    <PropertyGroup>
      <PackagesConfigFile>packages.config</PackagesConfigFile>
      <ReferencesPath>.</ReferencesPath>
    </PropertyGroup>
    <Exec Command="$(NuGetRoot)\nuget.exe install $(PackagesConfigFile) -OutputDirectory $(ReferencesPath)" />
  </Target>

  <!-- The Rebuild or other targets belong here -->

  <Target Name="Default">
    <PropertyGroup>
      <ProjectFullName Condition="'$(ProjectFullName)'==''">(undefined)</ProjectFullName>
    </PropertyGroup>
    <Message Text="Project name: '$(ProjectFullName)'"
             Importance="High"
             />
  </Target>

</Project>
7. In this way, the MSBuild script uses NuGet to bring down the ImportTargets files and place them under the _globalBuild.1.0.0 folder. On the CI server this happens as multiple build steps. For the sake of simplicity, here are the lines of a batch file that simulate those steps:
%MSBuildRoot%\msbuild.exe "runner.msbuild" /t:Kickstart
%MSBuildRoot%\msbuild.exe "runner.msbuild"
With the kickstart bringing down the prerequisite files, the second run of the build script performs the automated build with the common Targets file properly imported.
Pro .NET Best Practices: Overview
Posted by Stephen Ritchie on December 20, 2011
For those who would like an overview of Pro .NET Best Practices, here’s a rundown on the book.
The book presents each topic by keeping two objectives in mind: to provide reasonable breadth and to go into depth on key practices. For example, the chapter on code analysis looks at both static and dynamic analysis, and it goes into depth with FxCop and StyleCop. The goal is to strike a balance between covering all the topics, discussing the widely-used tools and technologies, and keeping each chapter a reasonable length.
Chapters 1 through 5 are focused on the context of new and different practices. Since adopting better practices is an initiative, it is important to know what practices to prioritize and where to uncover better practices within your organization and current circumstances.
Chapter 1: Ruthlessly Helpful
This chapter shows how to choose new and different practices that are better practices for you, your team, and your organization.
- Practice Selection
- Practicable
- Generally Accepted and Widely Used
- Valuable
- Archetypal
- Target Areas for Improvement
- Delivery
- Quality
- Relationships
- Overall Improvement
- Balance
- Renewal
- Sustainability
- Summary
Chapter 2: .NET Practice Areas
This chapter draws out the areas of .NET and general software development that provide an opportunity to discover, learn, and apply better practices.
- Internal Sources
- Technical Debt
- Defect Tracking System
- Retrospective Analysis
- Prospective Analysis
- Application Lifecycle Management
- Patterns and Guidance
- Framework Design Guidelines
- Microsoft PnP Group
- Presentation Layer Design Patterns
- Object-to-Object Mapping
- Dependency Injection
- Research and Development
- Automated Test Generation
- Code Contracts
- Microsoft Security Development Lifecycle
- Summary
Chapter 3: Achieving Desired Results
This chapter presents practical advice on how to get team members to collaborate with each other and work toward a common purpose.
- Success Conditions
- Project Inception
- Out of Scope
- Diversions and Distractions
- The Learning/Doing Balance
- Common Understanding
- Wireframe Diagrams
- Documented Architecture
- Report Mockups
- Detailed Examples
- Build an Archetype
- Desired Result
- Deliverables
- Positive Outcomes
- Trends
- Summary
Chapter 4: Quantifying Value
This chapter describes specific practices to help with quantifying the value of adopting better development practices.
- Value
- Financial Benefits
- Improving Manageability
- Increasing Quality Attributes
- More Effectiveness
- Sources of Data
- Quantitative Data
- Qualitative Data
- Anecdotal Evidence
- Summary
Chapter 5: Strategy
This chapter provides you with practices to help you focus on strategy and the strategic implications of current practices.
- Awareness
- Brainstorming
- Planning
- Monitoring
- Communication
- Personal Process
- Commitment to Excellence
- Virtuous Discipline
- Effort and Perseverance
- Leverage
- Automation
- Alert System
- Experience and Expertise
- Summary
Chapters 6 through 9 are focused on a developer’s individual practices. These chapters discuss guidelines and conventions to follow, effective approaches, and tips and tricks that are worth knowing. The overarching theme is that each developer helps the whole team succeed by being a more effective developer.
Chapter 6: .NET Rules and Regulations
This chapter helps sort out the generalized statements, principles, practices, and procedures that best serve as .NET rules and regulations that support effective and innovative development.
- Coding Standards and Guidelines
- Sources
- Exceptions
- Disposable Pattern
- Miscellaneous
- Code Smells
- Comments
- Way Too Complicated
- Unused, Unreachable, and Dead Code
- Summary
Chapter 7: Powerful C# Constructs
This chapter is an informal review of the C# language's power, intended both to harness the language's strengths and to recognize that effective use of C# is a key part of following .NET practices.
- Extension Methods
- Implicitly Typed Local Variables
- Nullable Types
- The Null-Coalescing Operator
- Optional Parameters
- Generics
- LINQ
- Summary
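To give a quick taste of the constructs listed above, here is a small illustrative C# snippet (my own example, not taken from the book) that combines an extension method, an optional parameter, the null-coalescing operator, an implicitly typed local variable, and a bit of LINQ:

using System;
using System.Collections.Generic;
using System.Linq;

public static class StringExtensions
{
    // Extension method with an optional parameter.
    public static string OrDefault(this string value, string fallback = "(none)")
    {
        // The null-coalescing operator picks the fallback when value is null.
        return value ?? fallback;
    }
}

public static class Program
{
    public static void Main()
    {
        // Implicitly typed local variable holding a generic collection.
        var names = new List<string> { "Ada", null, "Grace" };

        // LINQ projection that uses the extension method.
        var displayNames = names.Select(n => n.OrDefault()).ToList();

        displayNames.ForEach(Console.WriteLine);   // Ada, (none), Grace
    }
}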
Chapter 8: Automated Testing
This chapter describes many specific practices to improve test code, consistent with the principles behind effective development and automated testing.
- Case Study
- Brownfield Applications
- Greenfield Applications
- Automated Testing Groundwork
- Test Code Maintainability
- Naming Convention
- The Test Method Body
- Unit Testing
- Boundary Analysis
- Invalid Arguments
- Invalid Preconditions
- Fakes, Stubs, and Mocks
- Isolating Code-Under-Test
- Testing Dependency Interaction
- Surface Testing
- Automated Integration Testing
- Database Considerations
- Summary
Chapter 9: Build Automation
This chapter discusses using build automation to remove error-prone steps, to establish repeatability and consistency, and to improve the build and deployment processes.
- Build Tools
- MSBuild Fundamentals
- Tasks and Targets
- PropertyGroup and ItemGroup
- Basic Tasks
- Logging
- Parameters and Variables
- Libraries and Extensions
- Import and Include
- Inline Tasks
- Common Tasks
- Date and Time
- Assembly Info
- XML Peek and Poke
- Zip Archive
- Automated Deployment
- Build Once, Deploy Many
- Packaging Tools
- Deployment Tools
- Summary
Chapters 10 through 12 are focused on supporting tools, products, and technologies. These chapters describe the purpose of various tool sets and present some recommendations on applications and products worth evaluating.
Chapter 10: Continuous Integration
This chapter presents the continuous integration lifecycle with a description of the steps involved within each of the processes. Through effective continuous integration practices, the project can save time, improve team effectiveness, and provide early detection of problems.
- Case Study
- The CI Server
- CruiseControl.NET
- Jenkins
- TeamCity
- Team Foundation Server
- CI Lifecycle
- Rebuilding
- Unit Testing
- Analysis
- Packaging
- Deployment
- Stability Testing
- Generate Reports
- Summary
Chapter 11: Code Analysis
This chapter provides an overview of many static and dynamic tools, technologies, and approaches with an emphasis on improvements that provide continuous, automated monitoring.
- Case Study
- Static Analysis
- Assembly Analysis
- Source Analysis
- Architecture and Design
- Code Metrics
- Quality Assurance Metrics
- Dynamic Analysis
- Code Coverage
- Performance Profiling
- Query Profiling
- Logging
- Summary
Chapter 12: Test Framework
Chapter 12 is a comprehensive list of testing frameworks and tools with a blend of commercial and open-source alternatives.
- Unit Testing Frameworks
- Test Runners
- NUnit GUI and Console Runners
- ReSharper Test Runner
- Visual Studio Test Runner
- Gallio Test Runner
- xUnit.net Test Runner
- XUnit Test Pattern
- Identifying the Test Method
- Identifying the Test Class and Fixture
- Assertions
- Mock Object Frameworks
- Dynamic Fake Objects with Rhino Mocks
- Test in Isolation with Moles
- Database Testing Frameworks
- User Interface Testing Frameworks
- Web Application Test Frameworks
- Windows Forms and Other UI Test Frameworks
- Acceptance Testing Frameworks
- Testing with Specifications and Behaviors
- Business-Logic Acceptance Testing
- Summary
Chapter 13: Aversions and Biases
The final chapter is about the aversions and biases that keep many individuals, teams, and organizations from adopting better practices. You may face someone’s reluctance to accept or acknowledge a new or different practice as potentially better. You may struggle against another’s tendency to hold a particular view of a new or different practice that undercuts and weakens its potential. Many people resist change even if it is for the better. This chapter helps you understand how aversions and biases impact change so that you can identify them, cope with them, and hopefully manage them.
- Group-Serving Bias
- Rosy Retrospection
- Group-Individual Appraisal
- Status Quo and System Justification
- Illusory Superiority
- Dunning-Kruger Effect
- Ostrich Effect
- Gambler’s Fallacy
- Ambiguity Effect
- Focusing Effect
- Hyperbolic Discounting
- Normalcy Bias
- Summary
Why a Book on .NET Best Practices?
Posted by Stephen Ritchie on December 10, 2011
I am a Microsoft .NET software developer. That explains why the book is about .NET best practices. That’s in my wheelhouse.
The more relevant question is, why a book about best practices?
When it comes right down to it, many best practices are the application of common sense approaches. However, there is something that blocks us from making the relatively simple changes in work habits that produce significant, positive results. I wanted to further explore that quandary. Unfortunately, common sense is not always common practice.
There is a gap between the reality that projects live with and the vision that the team members have for their processes and practices. They envision new and different practices that would likely yield better outcomes for their project. Yet, the project reality is slow to move or simply never moves toward the vision.
Many developers are discouraged by the simple fact that far too many projects compromise the vision instead of changing the reality. These two concepts are usually in tension. That tension is a source of great frustration and cynicism. I wanted to let people know that their project reality is not an immovable object, and the team members can be an irresistible force.
Part of moving your reality toward your vision is getting a handle on the barriers and objections and working to overcome them. Some of them are external to the team while others are internal to the team. I wanted to relate organizational behavior to following .NET best practices and to software development.
Knowledge
The team must know what to do. They need to know about the systematic approaches that help the individual and the team achieve the desired results. There are key practice areas that yield many benefits:
- Automated builds
- Automated testing
- Continuous integration and delivery
- Code analysis
- Automated deployment
Of course, there is a lot of overlap in these areas. The management scientist might call that synergy. A common theme to these practice areas is the principle of automation. By acquiring knowledge in these practice areas you find ways to:
- Reduce human error
- Increase reliability and predictability
- Raise productivity and efficiency
Know-how in these practice areas also raises awareness and understanding, creates an early warning system, and provides various stakeholders with a new level of visibility into the project’s progress. I wanted the reader to appreciate the significance and inter-relatedness of these key practice areas and the benefits each offers.
Skill
The team needs to know how to do it. Every new and different practice has a learning curve. Climbing that curve takes time and practice. The journey from newbie to expert has to be nurtured. There are no shortcuts that can sidestep the crawl-walk-run progression. Becoming skilled requires experience. Prototyping and building an archetype are two great ways to develop a skill. Code samples and structured walkthroughs are other ways to develop a skill. I wanted the book to offer an eclectic assortment of case studies, walkthroughs, and code samples.
Attitude
Team members must want to adopt better practices. Managers need to know why the changes are for the better, in terms managers can appreciate. The bottom line is that it is important to be able to quantify the benefits of following new and different practices. It is also important to understand what motivates and what doesn't. It helps to understand human biases. Appreciate the underlying principle that software projects are materially impacted by how well individuals interact. I wanted to highlight and communicate the best practices that relate to human factors such as persuasion, motivation, and commitment.
Pro .NET Best Practices
Here are the links to Pro .NET Best Practices:
Apress: http://www.apress.com/9781430240235
Amazon: http://www.amazon.com/NET-Best-Practices-Stephen-Ritchie/dp/1430240237
Barnes and Noble: http://www.barnesandnoble.com/w/pro-net-best-practices-stephen-d-ritchie/1104143991
Liberate FxCop 10.0
Posted by Stephen Ritchie on June 9, 2011
Update 2012-06-04: It still amazes me that not a thing has changed to make it any easier to download FxCopSetup.exe version 10 in the nearly two years since I first read this Channel 9 forum post: http://channel9.msdn.com/Forums/Coffeehouse/561743-How-I-downloaded-and-installed-FxCop As you read the Channel 9 forum entry you can sense the confusion and frustration. However, to this day the Microsoft Download Center still gives you the same old “readme.txt” file instead of the FxCopSetup.exe that you’re looking for.
Below I describe the only Microsoft “official way” that I know of (no, you're not allowed to redistribute it yourself) to pull out FxCopSetup.exe so that you can install it on a build server, distribute it within your team, or do some other reasonable thing. It is interesting to note the contrast: the latest StyleCop installer is one mouse click on this CodePlex page: http://stylecop.codeplex.com/
Update 2012-01-13: Alex Netkachov provides short-n-sweet instructions on how to install FxCop 10.0 in this blog post: http://www.alexatnet.com/content/how-install-fxcop-10-0
Quick, flash mob! Let’s go liberate the FxCop 10.0 setup program from the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1 setup.
Have you ever tried to download the FxCop 10.0 setup program? Here are the steps to follow, but a few things won't seem right:
1. [Don’t do this step!] Go to Microsoft’s Download Center page for FxCop 10.0 (download page) and perform the download. The file that is downloaded is actually a readme.txt file.
2. The FxCop 10.0 read-me file has two steps of instruction:
FxCop Installation Instructions
1. Download the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1.
2. Run %ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\FXCop\FxCopSetup.exe to install FxCop.
3. On the actual FxCop 10.0 Download Center page, under the Instructions heading, there are slightly more elaborate instructions:
• Download the Microsoft Windows SDK for Windows 7 and .NET Framework 4 Version 7.1 [with a link to the download]
• Using elevated privileges execute FxCopSetup.exe from the %ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\FXCop folder
NOTE:
On the FxCop 10.0 Download Center page, under the Brief Description heading, shouldn’t the text simply describe the steps and provide the link to the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1 download page? That would be more straightforward.
4. Use the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1 link to jump over to the SDK download page.
5. [Don't actually download this, either!] The estimated download time on a T1 is 49 minutes. The download file, winsdk_web.exe, is only 498 KB, but under the Instructions heading this explanation is provided: the Windows SDK is available through a web setup (this page) that enables you to selectively download and install individual SDK components, or via an ISO image file so that you can burn your own DVD. Follow the link over to download the ISO image file.
6. Download the ISO image file on the ISO download page. This is a 570 MB download, which is about 49 min on a T1.
7. Unblock the file and use 7-Zip to extract the ISO files.
8. Navigate to the C:\Downloads\Microsoft\SDK\Setup\WinSDKNetFxTools folder.
9. Open the cab1.cab file within the WinSDKNetFxTools folder. Right-click and select Open in new window from the menu.
10. Switch over to the details file view, sort by name descending, and find the file whose name starts with “WinSDK_FxCopSetup.exe” plus some gobbledygook (mine was named “WinSDK_FxCopSetup.exe_all_enu_1B2F0812_3E8B_426F_95DE_4655AE4DA6C6”).
11. Make a sensibly named folder for the FxCop setup file to go into; for example, create a folder called C:\Downloads\Microsoft\FxCop\Setup.
12. Right-click the “WinSDK_FxCopSetup.exe + gobbledygook” file, select Copy, and paste the file into the C:\Downloads\Microsoft\FxCop\Setup folder.
13. Rename the file to “FxCopSetup.exe”. This is the FxCop setup file.
14. Copy the 14 MB FxCopSetup.exe file to a network share so that the CM, Dev, and QA teams, or anyone on the project who needs FxCop, doesn't have to perform all of these steps.
That’s it. Done.
Apparently, a decision was made to keep the FxCop setup off Microsoft's Download Center. Now the FxCop 10 setup is deeply buried within the Microsoft Windows SDK for Windows 7 and .NET Framework 4 Version 7.1. Since the FxCop setup used to be easily available as a separate download, it doesn't make sense to bury it now.
Microsoft’s FxCop 10.0 Download Page really should offer a simple and straightforward way to download the FxCopSetup.exe file. This is way too complicated and takes a lot more time than is appropriate to the task.
P.S.: Many thanks to Matthew1471's ASP BlogX post that supplied the Rosetta stone needed to get this working.
Problem Prevention; Early Detection
Posted by Stephen Ritchie on June 8, 2011
On a recent project, a developer added one single line of code intended to fix a reported issue. It was checked in with a nice, clear comment. It was an innocuous change. If you saw the change yourself, you'd probably say it seemed like a reasonable and well-considered change. It certainly wasn't a careless change made in haste. But it was a ticking time bomb; it was a devilish lurking bug (DLB).
Before we started our code-quality initiative the DLB nightmare would have gone something like this:
- The change would have been checked in, built and then deployed to the QA/testing team’s system-test environment. There were no automated tests, and so, the DLB would have had cover-of-night.
- QA/testing would have experienced this one code change commingled with a lot of other new features and fixes. The DLB would have had camouflage.
- When the DLB was encountered the IIS service-host would have crashed. This would have crashed any other QA/testing going on at the same time. The DLB would have had plenty of patsies.
- Ironically, the DLB would have only been hit after a seldom-used feature had succeeded. This would have created a lot of confusion. There would have been many questions and conversations about how to reproduce the tricky DLB, for there would have been many red herrings.
- Since the DLB would seem to involve completely unrelated functionality, all the developers, testers and the PM would never be clear on a root cause. No one would be sure of its potential impact or how to reproduce it; many heated debates would ensue. The DLB would have lots of political cover.
- Also likely, the person who added the one line of code would not be involved in fixing the problem because time had since passed and misdirection led to someone else working on the fix. The DLB would never become a lesson learned.
With our continuous integration (CI) and automated testing in place, here is what actually happened:
- First thing that morning the developer checked in the change and pushed it into the Mercurial integration repository.
- The TeamCity Build configuration detected the code push, got the latest, and started the MSBuild script that rebuilt the software.
- Once that rebuild was successful, the Automated Testing configuration was triggered and ran the automated test suite with the NUnit runner. Soon there were automated tests failing, which caused the build to fail and TeamCity notified the developer.
- A few minutes later that developer was investigating the build failure.
- He couldn’t see how his one line of code was causing the build to fail. I was brought in to consult with him on the problem. I couldn’t see any problem. We were perplexed.
- Together the developer and I used the test code to guide our debugging. We easily reproduced the issue. Of course we did; we had a bull's-eye target on the back of Mr. DLB. We quickly identified a fix for the DLB.
- Less than an hour and a half after checking in the change that created the DLB, the same developer who had added that one line made the proper fix to resolve the originally reported issue without adding the DLB.
- Shortly thereafter, our TeamCity build server re-made the build and all automated tests passed successfully.
- The new build, with the proper fix, was deployed to the QA/testers and they began testing, having completely avoided an encounter with the DLB.
Mr. DLB involved a method calling itself recursively: a death spiral, a very pernicious bug. Within just that one line of added code, a seed was planted that sent the system into a stack overflow, which then caused the service host to crash.
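To illustrate the kind of one-line change involved, here is a hypothetical reconstruction (not the project's actual code): a property getter that accidentally reads the property itself instead of its backing field is all it takes to plant that seed.

public class ServiceSettings
{
    private string _baseUrl;

    public string BaseUrl
    {
        // Intended: fall back to a default when nothing is configured.
        // Actual: the getter refers to the property instead of the backing field,
        // so it calls itself recursively until the stack overflows and the
        // service host crashes.
        get { return BaseUrl ?? "http://localhost:4321/Business.Services"; }
        set { _baseUrl = value; }
    }
}

Any automated test that touches the getter trips over the recursion immediately, which mirrors how our CI build flagged the real DLB within minutes of the push.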
Just think about it: having our CI server in place and running automated tests is of immense value to the project:
- Greatly reduces the time to find and fix issues
- Detection of issues is very proximate in time to when the precipitating change is made
- The developer who causes the issue [note: in this case it wasn’t carelessness] is usually responsible for fixing the issue
- QA/testing doesn't waste time being diverted and distracted trying to isolate, describe, reproduce, track, or otherwise deal with the issue
- Of the highest value, the issue doesn’t cause contention, confrontation or communication-breakdown between the developers and testers; QA never sees the issue
CI and automated testing are always running and always vigilant. They're working to ensure that the system deployed to the system-test environment actually works the way the developers intend it to work.
If there’s a difference between the way the system works and the way the developer wants it to work then the team’s going to burn and waste a lot of time, energy and goodwill coping with a totally unintended and unavoidable conflict. If you want problem prevention then you have to focus on early detection.
When To Use Database-First
Posted by Stephen Ritchie on June 1, 2011
Code-centric development using an object-relational mapping (ORM) tool has a workflow that many developers find comfortable. They feel productive using the ORM in this way, as opposed to starting with the database model. There are a number of good posts out there on the Entity Framework 4.1 code-first capabilities: MSDN, MSDN Magazine, Scott Guthrie, Channel 9, and Scott Hanselman’s Magic Unicorn Feature. It makes sense to the object-oriented developer and writing code-first comes very naturally.
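For anyone who hasn't seen it, here is a minimal sketch of the EF 4.1 code-first workflow (the class names are illustrative, not from any particular project): you write plain classes and a DbContext, and the framework derives the database schema from them.

using System.Data.Entity;   // Entity Framework 4.1 (the EntityFramework NuGet package)

// A plain POCO entity; by convention EF maps it to a LoanApplications table
// and treats the Id property as the primary key.
public class LoanApplication
{
    public int Id { get; set; }
    public string BorrowerName { get; set; }
    public decimal RequestedAmount { get; set; }
}

// The context is the gateway to the database; by default, code-first creates
// the database on first use if it does not already exist.
public class LendingContext : DbContext
{
    public DbSet<LoanApplication> LoanApplications { get; set; }
}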
This prompts the question: When would it be better to take a database-first approach?
For many legacy and Brownfield projects the answer is obvious. You have a database that's already designed (you may even be stuck with it), and therefore you choose the database-first approach. This is the defining need for database-first because the database is a fixed point. And so, use database-first when the database design comes from an external requirement or is controlled outside the scope or influence of the project. Similarly, modeling the persistence layer with a model-first approach fits the bill when what you learn about the requirements is expressed in data-centric terms.
Let’s say the project is Greenfield and you have 100% control over the database design. Would a database-first approach ever make sense in that situation?
On-line Transaction Processing (OLTP) and On-line Analytical Processing (OLAP) systems are considered two ends of the data persistence spectrum. With databases that support OLTP systems, the objective is to effectively and properly support the CRUD+Q operations of day-to-day business operations. In databases that support OLAP systems, the objective is to effectively and properly support business intelligence, such as data mining, high-speed data analytics, decision support systems, and other data warehousing goals. These are two extremely different database designs. Many systems' databases live on a continuum between these two extremes.
I once worked on a student loan originations system. It was a start-with-a-clean-slate, object-oriented development project. Initially, the system was all about entering, reviewing and approving loan applications. We talked about borrowers, students and parents, and their multiple addresses. There was a lot about loan limits and interest rates, check disbursements, and a myriad of complicated and subtle rules and regulations related to creating a loan and making a check disbursement. The system was recording the key records and financial transactions and the database was the master repository of this data. In fulfilling these requirements, the system was a success. However, once the system was readied for parallel Beta-testing at the bank things started to go sideways.
Here is some of what we missed by taking a code-first approach:
- Every day the bank manager must run a reconciliation report, which joins in a lot of financial data from a lot of the day's records; no one can go home until the report shows that everything is balanced. The bank manager screamed when the report took over two hours.
- At the end of every quarter, there is an even bigger report that looks at more tables and financial transactions and reconciles everything to the penny. More screaming when this report ran for days and never properly reconciled — the query could never quite duplicate the financial engine’s logic to apply transactions.
- And lastly, every loan disbursement that goes out requires that a form letter, defined by the Dept. of Education, be sent to every student who has a loan originated by the bank. Imagine the tens of thousands of form letters going out on the day they send the money to UCLA. The project almost died when just one form letter to one student took 30 minutes!
- The data migration from the legacy system to the new system was taking nearly a week to completely finish. The bank wasn’t going to stop operations for a week.
What we failed to realize was that the really significant, make-or-break requirements of the system were all reporting or data-conversion related. None of it had been seriously brought up or laid out during system development; however, not meeting those requirements took the project very close to the edge of extinction.
A major lesson learned: look very closely at the question of data persistence and retrieval. Work hard to uncover and define the reporting, conversion, and other data requirements. Make any hidden or implicit database requirements explicit. Find out if the system is really all about the care and feeding of a relational database.
Adding it all up: if the database-specific requirements significantly overshadow the application-specific requirements then a database-first approach is probably your best bet.
Agile Requires Agility
Posted by Stephen Ritchie on May 27, 2011
For a long time there has been a widely held belief, early in the collective unconscious and later described in various methodologies: Effective software development requires key elements, like clear deliverable-objectives, a shared understanding of the requirements, realistic estimates of effort, a proper high-level design, and, the most important element of all, effective people.
What happens when the project context doesn’t meet the minimum, essential conditions for a process like Agile development? The project has many missing, ambiguous, or conflicting objectives, and those objectives are nearly always described as activities, not deliverables. There are major differences between what the project leaders, analysts, developers and testers, each think the system is required to do. Every estimate is either prohibitively pessimistic or ultimately turns out to be overly optimistic. The software architecture collapses because it’s not able to carry the system from one sprint to the next, or it’s overly complicated. The project’s people are not sure what to do, how to do it, or aren’t motivated to do it.
In the field of agricultural development, there is the concept of appropriate technology; they say Gandhi fathered this idea. Agricultural development is more successful when straightforward and familiar technologies are used to move from the current status quo to a new, improved status quo. For example, before a tractor can be used effectively, farmers should first get comfortable using a team of oxen to plow the fields.
Some ideas to move the team's status quo from the current state of readiness to the prerequisite level:
- Rephrase project objectives from activities to deliverables. For example, “write the requirements document for feature Xyz” becomes “Requirement Xyz; verified complete, clear and consistent by QA.”
- Refocus the team away from providing initial estimates, which are often just guesses anyway, toward a timebox and working the prioritized list of deliverables. Use each timebox’s results as the future predictor.
- Listen carefully and ask probing questions to ensure everyone’s on the same page with respect to what the system’s supposed to do; keep coming back to a topic if there are significant differences.
- Find ways to continuously validate and verify that the architecture is up to the task; not under-engineered or over-engineered.
- Look for the tell-tale signs of knowledge, skill, or attitude gaps: team members tentative about what they're supposed to do, wanting more training or time to experiment, feeling underprepared, or generally concerned that the project is not on the right track and won't get better.
A catch-phrase for software development: Agile requires agility. Keep an eye on appropriateness by monitoring the team's level of agility, and positively influence a transition to the next plateau.
HTML5 Shims, Fallbacks and Polyfills
Posted by Stephen Ritchie on May 26, 2011
There is a lot to know about HTML5 shims, fallbacks and polyfills. Let’s start by trying to define the terms and point to some places on the web to get more information.
The whole idea is to provide a way to develop pages in HTML5 and have everything work properly in a browser that doesn’t natively support HTML5 functionality. For example, this approach can enable the new HTML5 features within IE7.
shim /SHim/
Noun: A relatively small library (js, plugin, etc.) that gets in between the HTML5 and the incompatible browser and patches things — transparently — so the page works properly or as close as practicable. Sometimes referred to as a “shiv”. More info: The Story of the HTML5 Shiv
fallback /ˈfôlˌbak/
Noun: A backup plan when your page detects that it’s being displayed in an incompatible browser. More info: Yet another HTML5 fallback strategy for IE
polyfill /ˈpälē fil/
Noun: A patch or shim that is suitable as a fallback for *a whole lot of* missing functionality.
More info: Modernizr and What is a Polyfill? and HTML5 Cross Browser Polyfills
I’m going to roll up my sleeves now and start with this ScottGu post on HTML5 and ASP.NET MVC 3. It looks like it has some good bits on the modernizr.js JavaScript library.
Don’t Comment Out Failing Unit Tests
Posted by Stephen Ritchie on May 25, 2011
While working with a rather large Brownfield codebase, I came upon a set of commented-out unit tests. I uncommented one of these unit tests and ran it.
Sanitized illustrative code sample:
[Test]
public void ProductionSettings_BaseBusinessServicesUrl_ReturnsExpectedString()
{
    // Arrange
    var settings = new ProductionSettings();

    // Act
    var actual = settings.BaseBusinessServicesUrl;

    // Assert
    Assert.AreEqual("http://localhost:4321/Business.Services", actual);
}
As expected, the test failed. Here’s some pseudo-output from that test:
SettingsTests.ProductionSettings_BaseBusinessServicesUrl_ReturnsExpectedString : Failed
NUnit.Sdk.EqualException: Assert.AreEqual() Failure
Position: First difference is at position 17
Expected: http://localhost:4321/Business.Services
Actual: http://localhost:1234/Business.Services
at Tests.Unit.Acme.Web.Mvc.Settings.SettingsTests
.ProductionSettings_BaseBusinessServicesUrl_ReturnsExpectedString()
in TestSettings.cs: line 19
So now what should I do? That output prompts the question: what’s the correct port? Is it 1234, 4321 or is it some other port number? To sort this all out I’ll need to take on the responsibility of researching the right answer.
Almost certainly, Mr. or Ms. Comment-out-er gave me this chore because they did not have the time to sort it all out themselves. Also likely, the person who changed the port number didn't know a unit test was failing, or even that there was a unit test at all. I don't need to know who did or didn't deal with this; I'll leave that to the archeologists.
The larger point is this: if you’re commenting out a failing unit test then you’re missing the point of a unit test. A unit test verifies that the code-under-test is working as intended. A failing test means you need to do something — other than commenting out the test.
If a unit test fails then there are four basic options:
- The code under test is working as intended; fix the unit test.
- The code under test is NOT working as intended; fix the code.
- The code under test has changed in some fundamental way that means the unit test is no longer valid; remove the unit test code.
- Set the unit test to be ignored (it shouldn’t fall through the cracks now), report it by writing up a “fix failing unit test” defect in the bug tracking system, and assign it to the proper person.
Commenting out a unit test means that you’re allowing something important to fall through the cracks. The big no-no here is the commenting out. At the very least, pick option 4.
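For option 4, NUnit's [Ignore] attribute keeps the skipped test visible; it shows up as ignored in every test run instead of silently disappearing. A hypothetical example (the defect number is made up for illustration):

[Test]
[Ignore("Expected port changed; see defect #1234, 'fix failing unit test', assigned to the settings owner")]
public void ProductionSettings_BaseBusinessServicesUrl_ReturnsExpectedString()
{
    // Arrange
    var settings = new ProductionSettings();

    // Act
    var actual = settings.BaseBusinessServicesUrl;

    // Assert
    Assert.AreEqual("http://localhost:4321/Business.Services", actual);
}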
The Virtues of Blogging
Posted by Stephen Ritchie on May 25, 2011
I recently attended an INETA Community Leadership Summit meeting where Scott Hanselman made a point of highlighting the relative power and importance of blogging.
The main point he made was that each of us frequently communicates to and within a limited group. Those same email threads or conversations would benefit the Microsoft community, both IT pro and development, more if that same dialog made it into the blogosphere. A blog post is one of the most effective ways an individual software professional can help the community. Especially if a post provides *constructive* criticism (tact is important), it can positively influence change; many people read and take notice of blog posts.
Also, not all communication forms are equal. Many are ephemeral with limited distribution, but a blog is more permanent and far-reaching.
Consider the time it takes to write an email. Let’s say your email raises an issue, points out an inconsistency, explains how to overcome a technical obstacle, or describes an effective way to perform common tasks. Instead of putting that info in an email that reaches dozens of people try writing it up in a blog post; potentially reaching hundreds or thousands of people.
The post may never be read. The blog site may never be visited; however, if your email has a link to your post, then the content still reaches the same dozen people with the same number of keystrokes. Nonetheless, it's not about followers; it's about adding your voice to the community, without regard to how many people will read it, but in the fervent belief that at least one reader beyond your current circle of influence will read, understand, and appreciate what you have to say. Write it globally; socialize it locally.
Scott delivered a good sermon that I thought I’d share. Let’s see if it has any impact on me. Perchance this is the type of rational argument that motivates me to blog.
I guess you’ll know that I’ve taken the prescription when I create a blog and title my first post: “The Virtues of Blogging”.


