18 December 2010

Entertaining Statistical Insignificance

Here's an entertaining example of statistical insignificance that ties the presence of cell towers to increased fertility. You'll want to read it all.

13 December 2010

Project Planning is Process Design

Revised 17 December 2010

Project plans aren't discovered, they're designed. More to the point, project planning is process design.

For any particular project, the plan can lay out the work in a long string, using a small team for a long time, or it can break the work into relatively independent components and have lots of people working in parallel for a short time. The first approach, essentially Agile, is the more cost-efficient because it has less coordination and integration overhead. On the other hand, the longer a project lasts, the more likely it is to be hit by low-probability hazards and scope creep. The longer the plan's timeline, the greater the uncertainty in its outcome.

In either case or in between, it's a choice of how to get the job done, and if you want it done fast, it'll cost more.
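
As a rough illustration of that trade-off, here's a toy Python model; the base effort and the per-pair coordination overhead are invented numbers, not measurements:

    # Toy model of the cost/time trade-off (all figures are assumptions):
    # more people working in parallel shorten the calendar, but coordination
    # and integration overhead grows with the number of pairs, so cost rises.
    BASE_EFFORT = 120.0        # person-weeks of "pure" work
    OVERHEAD_PER_PAIR = 0.5    # person-weeks of coordination per pair of people

    for team_size in (1, 2, 4, 8, 16):
        pairs = team_size * (team_size - 1) / 2
        total_effort = BASE_EFFORT + OVERHEAD_PER_PAIR * pairs   # cost
        duration = total_effort / team_size                      # calendar time
        print(f"{team_size:2d} people: {duration:6.1f} weeks, {total_effort:6.1f} person-weeks")

Sixteen people finish in a fraction of the time but burn half again as much effort as one small team working in series.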

05 October 2010

Use Cases Refine Requirements

A necessary element of the strategy is a clear understanding of what's to be achieved. A clearly-defined objective provides the focus, and use cases explain why we're doing anything at all. Every product has use cases, whether they're documented or not. The wise thing is to document them.

For the strategy, we'll keep things at a fairly high level, leaving out a lot of detail. We'll fill in the detail while developing the plan of action and the product design.

14 August 2010

Measurably Improving Your Requirements

Here's a really good article by Timothy Olson on quantifying requirements. In very few words, he pins down the essentials.

11 August 2010

Parkinson's Uncertainty

Parkinson's Law: Work expands so as to fill the time available for its completion.

It's been shown to apply to government bureaucracies and it surely applies to software development. One bit of lore that I've been carrying around for decades and seen repeatedly validated is that work is at best a little early and at worst a lot late, with a ratio of about 4 in either direction. How do we express this in our calculations if historical data is not available to sample?

I've come up with the following algorithm for a time-to-complete distribution expressing Parkinson's Law:
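
In Python, a minimal sketch along those lines might look like the following; the skewed triangular shape, the 10% early margin, and the factor of four are assumptions chosen to express the lore, not calibrated data:

    import random

    def parkinson_sample(estimate, early_margin=0.1, late_ratio=4.0):
        # One possible time-to-complete draw: a triangular distribution
        # peaked at the estimate, "at best a little early" on the left and
        # late_ratio times as far out on the right. Parameters are assumed.
        low = estimate * (1.0 - early_margin)
        high = estimate * (1.0 + late_ratio * early_margin)
        return random.triangular(low, high, estimate)

    # Example: completion times for a 10-day estimate drift late on average.
    samples = [parkinson_sample(10.0) for _ in range(10_000)]
    print(sum(samples) / len(samples))   # roughly 11 days, not 10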

09 August 2010

Tools for the Strategy

This stage of the planning process comes under different names: Strategy, Project Proposal, Business Case, Project Charter, ... The names tend to reflect the degree of approval obtained or being sought, and the kind of detail expected. In any case, this is the first place in the planning process that the planned activities, projected costs, and timelines come together for stakeholder inspection and decision-making. It includes a plan, but probably not at the level of detail used for managing the project.

"How long will it take?" doesn't have just one answer. It has a bunch of them, each with its own probability of being right. Which of the many values we commit to is a risk management choice, not a discovery.

07 July 2010

Political Polls—The Odds

An Ekos poll gives the Conservatives 31% of the vote and the Liberals 27.7%, +/-2.4%, 19 times out of 20--what the media calls a "virtual draw". It's an interesting estimate, but not really useful for making decisions; a Liberal win is within the range of error.

Looking more closely, what these numbers say (but Ekos doesn't) is that the Conservatives are favored to win a plurality with at least 9:1 odds. Probabilities can be more useful than estimates, and they can have a dramatic impact on our understanding of the situation.

The 9:1 odds calculation assumes a Liberal/Conservative zero-sum game. Every vote gained by one party is a loss to the other. If this is relaxed and votes can be taken from or lost to the other parties, the odds favouring a Conservative win max out at about 40:1 (not a typo--that's forty).

These numbers come from a Monte Carlo simulation of 10,000 elections assuming the Ekos numbers represent Gaussian distributions. In the first case, the Conservatives win when they get more than 29.35% of the votes--a majority of the votes in play. In the second case, the Conservatives win when they get more votes than the Liberals.

Ekos hasn't told us enough to gauge the mobility of votes from the other parties, so we can't resolve the difference. On the other hand, they have the data so we're left wondering why they don't report the proper odds.

There's an Excel spreadsheet for this calculation that you can use to see the odds for other polls. It includes a 1000-election simulation. With only two uncertain variables, it's about as simple an example of Monte Carlo simulation as you can get. See http://smpro.ca/crunch/PollOdds.xls
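
For anyone who'd rather work in code than a spreadsheet, here's a rough Python equivalent of the simulation; it assumes the +/-2.4% figure is a 95% interval, so the standard deviation is about 2.4/1.96:

    import random

    N = 10_000
    CONS, LIB = 31.0, 27.7
    SIGMA = 2.4 / 1.96   # read "+/-2.4%, 19 times out of 20" as a 95% interval

    zero_sum_wins = free_vote_wins = 0
    for _ in range(N):
        cons = random.gauss(CONS, SIGMA)
        lib = random.gauss(LIB, SIGMA)
        if cons > (CONS + LIB) / 2:   # zero-sum game: take more than 29.35%
            zero_sum_wins += 1
        if cons > lib:                # votes can move to or from other parties
            free_vote_wins += 1

    print(f"zero-sum odds:  about {zero_sum_wins / (N - zero_sum_wins):.0f} to 1")
    print(f"free-vote odds: about {free_vote_wins / (N - free_vote_wins):.0f} to 1")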

14 June 2010

What Dooms IT Projects? —Again


Ziff's Baseline Magazine treats us to yet another slide show about IT project failure in this deck by Dennis McCafferty.

It trots out the usual suspects: lack of user involvement, unrealistic timelines, poorly developed requirements, etc. This time it's based on an uncited Standish Group report that seeks to explain a 35% failure rate.

As always, when you put it all together, it adds up to poor planning: when you look at what actually happened and ask what was in the plan, you find that the model bears little resemblance to the reality.

To have a chance of success, a project has to have a realistic, comprehensive plan whose model is close to what experience tells us is the reality. It needs a crystal-clear objective, measurable success criteria, intelligently estimated timelines, resource allocations that account for risk, and plans for dealing with unknowns.

I love it when a plan comes together, but not when it's by accident.

04 April 2010

Statistical Insignificance

Our thanks to Tom Siegfried for raising the issue; a simple example is needed:

For reasons known only to him, your lunch companion takes out two coins--a quarter and a nickel. He flips both coins—first the quarter, then the nickel. He repeats this five times and, in each case, if the quarter lands heads, so does the nickel; if it lands tails, the nickel also lands tails. "This," he says, "can't be coincidence; the quarter must be forcing how the nickel lands."
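
The arithmetic behind his confidence is easy to check: under the boring hypothesis of two independent, fair coins, five matches in a row is already "significant" by the usual threshold.

    # Chance of five matched flips if the coins are independent and fair.
    p_match_once = 0.5
    p_five_matches = p_match_once ** 5
    print(p_five_matches)   # 0.03125: below the conventional 0.05 cut-off,
                            # yet the quarter is plainly not forcing the nickel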

19 March 2010

Odds Are, It's Wrong

It’s science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation.

In this article in Science News, Tom Siegfried talks about the misapplication of statistics. His target is scientists who draw statistically-driven conclusions that aren't in fact supported by the data.

It's well worth the read; the lesson goes beyond the purely scientific realm and is applicable to business as well.

10 March 2010

Quantitative Risk Analysis

I may be overreaching, but I include risk analysis as a proper subject of systems analysis. I've done enough TRAs (threat and risk assessments) to justify that position—at least to myself. So here's a risk analysis topic.

Toying with the idea of getting some certification, I took a look at the CISSP and the (ISC)² Common Body of Knowledge. I found one thing odd enough to prompt an exchange of emails with ITSec gurus, who assured me that this was the state of the discipline. The offense lay in a particular statement, paraphrased in various documents:

Purely quantitative risk analysis is not possible because the method is attempting to quantify qualitative items.

That, in the words of Dr. Pauli, is not even wrong.

"Nothing that matters is so intangible that it can't be measured," is almost a tautology.

If it matters, it has an effect. Observing that effect is measuring it. Drawing a distinction between its presence and absence is measuring it. Estimating a range of values or a probability distribution for it is measuring it.

This isn't unimportant. No one can do a cost/benefit analysis that tells them how much they should spend mitigating a "medium-high risk". The effect is that a lot of people are overspending on security based on a "Scary Movie" qualitative risk assessment.
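
By contrast, even a crude quantitative treatment, along the lines of the annualized loss expectancy arithmetic that appears in the same body of knowledge, puts a dollar ceiling on the mitigation budget. The figures below are invented for illustration:

    # Annualized loss expectancy (ALE) = single-loss expectancy x annual rate
    # of occurrence. Every figure here is an assumption made up for the example.
    single_loss_expectancy = 250_000   # dollars per incident
    annual_rate = 0.2                  # one incident every five years

    ale_before = single_loss_expectancy * annual_rate   # $50,000 per year
    ale_after = ale_before * 0.25                        # a control cuts the rate by 75%
    worthwhile_spend = ale_before - ale_after            # $37,500 per year

    print(f"Spend up to ${worthwhile_spend:,.0f} per year on this control.")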

Bottom line: one of an analyst's skills should be measuring the putatively immeasurable.

Any challenges?

04 March 2010

PERT Loses at Monte Carlo

It's pretty clear that calculating with averages is unwise; that's particularly true when we're estimating the resources needed for a project.

If all the tasks were strung out in series and just added together, the errors would tend to "average out." We might be safe in assuming that errors in estimating would be high and low and would cancel each other out. If only real projects were that convenient.
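
A quick Python illustration of why they aren't: put just two of those tasks in parallel (figures assumed) and the errors stop cancelling.

    import random

    # Two parallel tasks, each averaging 10 days with a few days of spread.
    # The project waits for the slower one, so the average project runs late
    # even though the average task does not.
    N = 10_000
    total = sum(max(random.gauss(10, 3), random.gauss(10, 3)) for _ in range(N))
    print(total / N)   # about 11.7 days, not the 10 the averages promise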

24 January 2010

The Project Uncertainty Principle

Plans based on average assumptions are wrong on average.
—Sam Savage, The Flaw of Averages

There's a Dilbert cartoon from 2003 in which Dilbert is asked for a description of his project and its projected cost. He responds by declaring, "The Project Uncertainty Principle says that if you understand a project, you won't know its cost and vice versa." I don't think Adams knew he was on to something.

12 January 2010

You Can't Change Just One Thing


"You can't change just one thing" is an engineering principle that has a wide application beyond engineering. Applied to planning, it is a warning to consider secondary effects during the analysis phase.