Ruthlessly Helpful

Stephen Ritchie's offerings of ruthlessly helpful software engineering practices.

Monthly Archives: June 2011

Liberate FxCop 10.0

Update 2012-06-04: It still amazes me that not a thing has changed to make it any easier to download FxCopSetup.exe version 10 in the nearly two years since I first read this Channel 9 forum post: http://channel9.msdn.com/Forums/Coffeehouse/561743-How-I-downloaded-and-installed-FxCop As you read the Channel 9 forum entry you can sense the confusion and frustration. However, to this day the Microsoft Download Center still gives you the same old “readme.txt” file instead of the FxCopSetup.exe that you’re looking for.

Below I describe the only Microsoft “official way” that I know of (no, you’re not allowed to redistribute it yourself) to pull out FxCopSetup.exe so that you can install it on a build server, share it within your team, or do some other reasonable thing. It is interesting to note the contrast: the latest StyleCop installer is one mouse click away on its CodePlex page: http://stylecop.codeplex.com/

Update 2012-01-13: Alex Netkachov provides short-and-sweet instructions on how to install FxCop 10.0 in this blog post: http://www.alexatnet.com/content/how-install-fxcop-10-0

Quick, flash mob! Let’s go liberate the FxCop 10.0 setup program from the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1 setup.

Have you ever tried to download the FxCop 10.0 setup program? Here are the steps to follow, but a few things won’t seem right:

1. [Don’t do this step!] Go to Microsoft’s Download Center page for FxCop 10.0 (download page) and perform the download. The file that is downloaded is actually a readme.txt file.

2. The FxCop 10.0 readme file contains just two steps of instructions:

FxCop Installation Instructions
1. Download the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1.
2. Run %ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\FXCop\FxCopSetup.exe to install FxCop.

3. On the actual FxCop 10.0 Download Center page, under the Instructions heading, there are slightly more elaborate instructions:

• Download the Microsoft Windows SDK for Windows 7 and .NET Framework 4 Version 7.1 [with a link to the download]
• Using elevated privileges execute FxCopSetup.exe from the %ProgramFiles%\Microsoft SDKs\Windows\v7.1\Bin\FXCop folder

NOTE:

On the FxCop 10.0 Download Center page, under the Brief Description heading, shouldn’t the text simply describe the steps and provide the link to the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1 download page? That would be more straightforward.

4. Use the Microsoft Windows SDK for Windows 7 and .NET Framework 4 version 7.1 link to jump over to the SDK download page.

5. [Don’t actually download this, either!] The estimated download time on a T1 is 49 minutes. The download file, winsdk_web.exe, is only 498 KB, but under the Instructions heading an explanation is provided: “The Windows SDK is available thru a web setup (this page) that enables you to selectively download and install individual SDK components or via an ISO image file so that you can burn your own DVD.” Follow the link over to download the ISO image file.

6. Download the ISO image file from the ISO download page. This is a 570 MB download, which takes about 49 minutes on a T1.

7. Unblock the file and use 7-Zip to extract the ISO files.

8. Navigate to the C:\Downloads\Microsoft\SDK\Setup\WinSDKNetFxTools folder.

9. Right-click the cab1.cab file within the WinSDKNetFxTools folder and select Open in new window from the menu.

10. Switch over to the Details file view, sort by name descending, and find the file whose name starts with “WinSDK_FxCopSetup.exe” plus some gobbledygook (mine was named “WinSDK_FxCopSetup.exe_all_enu_1B2F0812_3E8B_426F_95DE_4655AE4DA6C6”).

11. Make a sensibly named folder for the FxCop setup file to go into, for example, create a folder called C:\Downloads\Microsoft\FxCop\Setup.

12. Right-click the “WinSDK_FxCopSetup.exe + gobbledygook” file and select Copy. Paste the file into the C:\Downloads\Microsoft\FxCop\Setup folder.

13. Rename the file to “FxCopSetup.exe”. This is the FxCop setup file.

14. Copy the 14 MB FxCopSetup.exe file to a network share so that the CM, Dev, and QA teams, or anyone else on the project who needs FxCop, doesn’t have to perform all of these steps.

That’s it. Done.
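If you need to repeat this liberation on more than one machine, steps 7 through 13 lend themselves to a small script. Here is a rough C# sketch, not a polished tool: it assumes 7-Zip is installed in its default location, and the ISO file name and folder paths are only illustrative examples.

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

class LiberateFxCop
{
    static void Main()
    {
        // Illustrative paths; adjust them to match your own downloads.
        const string sevenZip = @"C:\Program Files\7-Zip\7z.exe";
        const string iso      = @"C:\Downloads\Microsoft\GRMSDK_EN_DVD.iso";
        const string sdkDir   = @"C:\Downloads\Microsoft\SDK";
        const string setupDir = @"C:\Downloads\Microsoft\FxCop\Setup";

        // Step 7: extract the contents of the ISO with 7-Zip.
        Run(sevenZip, String.Format("x \"{0}\" -o\"{1}\" -y", iso, sdkDir));

        // Steps 8 and 9: extract cab1.cab from the WinSDKNetFxTools folder.
        string cab = Path.Combine(sdkDir, @"Setup\WinSDKNetFxTools\cab1.cab");
        string cabDir = Path.Combine(sdkDir, @"Setup\WinSDKNetFxTools\cab1");
        Run(sevenZip, String.Format("x \"{0}\" -o\"{1}\" -y", cab, cabDir));

        // Steps 10 through 13: find the WinSDK_FxCopSetup.exe_... file,
        // then copy it out under the name FxCopSetup.exe.
        string source = Directory.GetFiles(cabDir, "WinSDK_FxCopSetup.exe*").First();
        Directory.CreateDirectory(setupDir);
        File.Copy(source, Path.Combine(setupDir, "FxCopSetup.exe"), true);
    }

    static void Run(string fileName, string arguments)
    {
        using (var process = Process.Start(fileName, arguments))
        {
            process.WaitForExit();
        }
    }
}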

Apparently, a decision was made to keep the FxCop setup off Microsoft’s Download Center. Now the FxCop 10 setup is deeply buried within the Microsoft Windows SDK for Windows 7 and .NET Framework 4 Version 7.1. Since the FxCop setup used to be easily available as a separate download, it doesn’t make sense to bury it now.

Microsoft’s FxCop 10.0 Download Page really should offer a simple and straightforward way to download the FxCopSetup.exe file. The current process is way too complicated and takes far more time than the task warrants.

P.S.: Many thanks to Matthew1471’s ASP BlogX post, which supplied the Rosetta stone needed to get this working.

Problem Prevention; Early Detection

On a recent project, a developer added a single line of code intended to fix a reported issue. It was checked in with a nice, clear comment. It was an innocuous change. If you had seen the change yourself, you would probably say it seemed reasonable and well considered. It certainly wasn’t a careless change made in haste. But it was a ticking time bomb; it was a devilish lurking bug (DLB).

Before we started our code-quality initiative the DLB nightmare would have gone something like this:

  • The change would have been checked in, built, and then deployed to the QA/testing team’s system-test environment. There were no automated tests, and so the DLB would have had the cover of night.
  • QA/testing would have experienced this one code change commingled with a lot of other new features and fixes. The DLB would have had camouflage.
  • When the DLB was encountered the IIS service-host would have crashed. This would have crashed any other QA/testing going on at the same time. The DLB would have had plenty of patsies.
  • Ironically, the DLB would have only been hit after a seldom-used feature had succeeded. This would have created a lot of confusion. There would have been many questions and conversations about how to reproduce the tricky DLB, for there would have been many red herrings.
  • Since the DLB would seem to involve completely unrelated functionality, the developers, testers, and the PM would never be clear on a root cause. No one would be sure of its potential impact or how to reproduce it; many heated debates would ensue. The DLB would have lots of political cover.
  • Also likely, the person who added the one line of code would not have been involved in fixing the problem, because time would have passed and misdirection would have led someone else to work on the fix. The DLB would never become a lesson learned.

With our continuous integration (CI) and automated testing in place, here is what actually happened:

  • First thing that morning the developer checked in the change and pushed it into the Mercurial integration repository.
  • The TeamCity Build configuration detected the code push, got the latest, and started the MSBuild script that rebuilt the software.
  • Once that rebuild was successful, the Automated Testing configuration was triggered and ran the automated test suite with the NUnit runner. Soon automated tests were failing, which caused the build to fail, and TeamCity notified the developer.
  • A few minutes later that developer was investigating the build failure.
  • He couldn’t see how his one line of code was causing the build to fail. I was brought in to consult with him on the problem. I couldn’t see any problem. We were perplexed.
  • Together, the developer and I used the test code to guide our debugging. We easily reproduced the issue. Of course we did; we had been given a bulls-eye target on the back of Mr. DLB. We quickly identified a fix for the DLB.
  • Less than an hour and a half after checking in the change that created the DLB, the same developer who had added that one line made the proper fix to resolve the originally reported issue without adding the DLB.
  • Shortly thereafter, our TeamCity build server rebuilt the software and all of the automated tests passed.
  • The new build, with the proper fix, was deployed to the QA/testers and they began testing, having completely avoided an encounter with the DLB.

Mr. DLB involved a method calling itself recursively: a death spiral, and a very pernicious bug. Within just that one line of added code, a seed was planted that sent the system into a stack overflow, which in turn caused the service host to crash.
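To make that failure mode concrete, here is a contrived sketch, not the project’s actual code, of the kind of one-line change that plants such a seed, along with the sort of NUnit test that flushes it out. The Account class and test names are hypothetical.

using NUnit.Framework;

public class Account
{
    private decimal _balance;

    // The innocuous-looking one-line change: the getter returns the
    // property instead of the backing field, so any read of Balance
    // recurses until the stack overflows and the host process dies.
    public decimal Balance
    {
        get { return Balance; }   // should be: return _balance;
        set { _balance = value; }
    }
}

[TestFixture]
public class AccountTests
{
    [Test]
    public void Balance_Returns_The_Value_That_Was_Set()
    {
        var account = new Account();
        account.Balance = 100m;

        // Reading Balance triggers the runaway recursion. A StackOverflowException
        // cannot be caught in .NET, so the test-runner process crashes, the build
        // step exits with a failure, and TeamCity flags the build as broken.
        Assert.AreEqual(100m, account.Balance);
    }
}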

Just think about it: having our CI server in place and running automated tests is of immense value to the project:

  • Greatly reduces the time to find and fix issues
  • Issues are detected very soon after the precipitating change is made
  • The developer who causes the issue [note: in this case it wasn’t carelessness] is usually responsible for fixing the issue
  • QA/testing doesn’t waste time being diverted and distracted trying to isolate, describe, reproduce, track, or otherwise deal with the issue
  • Most valuable of all, the issue doesn’t cause contention, confrontation, or communication breakdown between the developers and testers; QA never sees the issue

CI and automated testing are always running and always vigilant. They work to ensure that the system deployed to the system-test environment actually works the way the developers intend it to work.

If there’s a difference between the way the system works and the way the developers want it to work, then the team is going to burn a lot of time, energy, and goodwill coping with a conflict that is totally unintended and, by then, unavoidable. If you want problem prevention, then you have to focus on early detection.

When To Use Database-First

Code-centric development using an object-relational mapping (ORM) tool has a workflow that many developers find comfortable. They feel productive using the ORM in this way, as opposed to starting with the database model. There are a number of good posts out there on the Entity Framework 4.1 code-first capabilities: MSDN, MSDN Magazine, Scott Guthrie, Channel 9, and Scott Hanselman’s Magic Unicorn Feature. It makes sense to the object-oriented developer and writing code-first comes very naturally.
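For anyone who hasn’t seen the code-first style, here is a minimal, hypothetical sketch of what it looks like with EF 4.1: plain classes plus a DbContext, with the database schema generated from the code rather than designed up front. The Borrower and Loan names are just for illustration.

using System.Collections.Generic;
using System.Data.Entity;   // EF 4.1, from the EntityFramework NuGet package

// Plain old CLR objects; no database designer involved.
public class Borrower
{
    public int BorrowerId { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Loan> Loans { get; set; }
}

public class Loan
{
    public int LoanId { get; set; }
    public decimal Principal { get; set; }
    public virtual Borrower Borrower { get; set; }
}

// The DbContext defines the model; by default, EF creates the database
// schema from these classes the first time the context is used.
public class LoanContext : DbContext
{
    public DbSet<Borrower> Borrowers { get; set; }
    public DbSet<Loan> Loans { get; set; }
}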

This prompts the question: When would it be better to take a database-first approach?

For many legacy and Brownfield projects the answer is obvious. You have a database that’s already designed (you may even be stuck with it), and therefore you choose a database-first approach. This is the defining case for database-first because the database is a fixed point. And so, use database-first when the database design comes from an external requirement or is controlled outside the scope or influence of the project. Similarly, a model-first approach to the persistence layer fits the bill when what you learn about the requirements is expressed in data-centric terms.

Let’s say the project is Greenfield and you have 100% control over the database design. Would a database-first approach ever make sense in that situation?

On-line Transaction Processing (OLTP) and On-line Analytical Processing (OLAP) systems are considered two ends of the data persistence spectrum. With databases that support OLTP systems, the objective is to effectively and properly support the CRUD+Q (create, read, update, delete, and query) operations that underpin business operations. With databases that support OLAP systems, the objective is to effectively and properly support business intelligence, such as data mining, high-speed data analytics, decision support systems, and other data warehousing goals. These are two extremely different database designs. Many systems’ databases live on a continuum between these two extremes.

I once worked on a student loan originations system. It was a start-with-a-clean-slate, object-oriented development project. Initially, the system was all about entering, reviewing and approving loan applications. We talked about borrowers, students and parents, and their multiple addresses. There was a lot about loan limits and interest rates, check disbursements, and a myriad of complicated and subtle rules and regulations related to creating a loan and making a check disbursement. The system was recording the key records and financial transactions and the database was the master repository of this data. In fulfilling these requirements, the system was a success. However, once the system was readied for parallel Beta-testing at the bank things started to go sideways.

Here is some of what we missed by taking a code-first approach:

  1. Every day the bank manager must run a reconciliation report, which joins a lot of financial data from a lot of the day’s records; no one can go home until the report shows that everything is balanced. The bank manager screamed when the report took over two hours to run.
  2. At the end of every quarter, there is an even bigger report that looks at more tables and financial transactions and reconciles everything to the penny. There was more screaming when this report ran for days and never properly reconciled; the query could never quite duplicate the financial engine’s logic for applying transactions.
  3. Every loan disbursement that goes out requires that a form letter, defined by the Dept. of Education, be sent to every student who has a loan originated by the bank. Imagine the tens of thousands of form letters going out on the day the bank sends the money to UCLA. The project almost died when just one form letter to one student took 30 minutes!
  4. The data migration from the legacy system to the new system was taking nearly a week to completely finish. The bank wasn’t going to stop operations for a week.

What we failed to realize was that the really significant, make-or-break requirements of the system were all related to reporting or data conversion. None of this had been seriously raised or laid out during system development; however, not meeting those requirements took the project very close to the edge of extinction.

A major lesson learned: look very closely at the question of data persistence and retrieval. Work hard to uncover and define the reporting, conversion, and other data requirements. Make any hidden or implicit database requirements explicit. Find out if the system is really all about the care and feeding of a relational database.

Adding it all up: if the database-specific requirements significantly overshadow the application-specific requirements then a database-first approach is probably your best bet.