21 December 2014
28 August 2013
Under ideal conditions, where there's lots of data, each value in a SIP is valid because it has actually happened, and the frequencies in the SIP match reality. That is, a well-formed SIP is correct by construction. Since the rest is simple arithmetic, avoiding implementation errors is easy and independent validation is straightforward.
MCS with stratified sampling and SIPmath are the same except for where in the workflow the samples are taken.
On the other hand, MCS that generates random values from a curve approximating the data is approximate by construction. We can only hope to approach the fidelity that comes effortlessly with a SIP composed from history.
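To make the contrast concrete, here's a minimal sketch. The demand figures, margin, and 1,000-trial SIP size are all illustrative assumptions of mine, not part of any standard:

```python
import random

# A SIP is an ordered list of equally likely values taken from history.
historical_demand = [103, 97, 110, 95, 101, 99, 104, 98, 102, 96]  # actual observations
unit_margin = 12.50

# SIPmath side: resample actual history -- no curve fitting involved.
trials = 1000
demand_sip = [random.choice(historical_demand) for _ in range(trials)]
profit_sip = [d * unit_margin for d in demand_sip]  # arithmetic is element-wise

# Conventional MCS side: fit a curve (here, a normal) and sample from the fit.
mean = sum(historical_demand) / len(historical_demand)
var = sum((d - mean) ** 2 for d in historical_demand) / (len(historical_demand) - 1)
mcs_demand = [random.gauss(mean, var ** 0.5) for _ in range(trials)]

# The SIP can never produce a demand that history didn't; the fitted curve can.
assert set(demand_sip) <= set(historical_demand)
```

The final assertion is the whole point: every value in the SIP-based result has actually happened, while the fitted curve invents values that never did.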
07 July 2013
ProbabilityManagement.org has been building some serious big name support. The list includes Chevron Corporation, Computer Law LLC, Foundation for Creativity in Dispute Resolution, General Electric, Lockheed Martin, Ortec Consulting Group, and Wells Fargo Bank. We've also been hard at work building demos and tools.
PM.org had an inaugural meeting in San Diego, hosted by Harry Markowitz, and started the ball rolling on an XML standard for SLURPs and SIPs. I mention this because I'm chairman of the standards committee. If you would like to be involved, let me know. Right now we've got an internal review in progress but it will soon be released for public feedback.
There are now a lot of good tutorials at sipmath.org, and the list of extra goodies available to members is growing nicely.
To make sure we stay busy we're starting up a consultancy in Toronto. To go with it, we've fired up a Google+ Community, Applied Probability Management, focused on the implementation side of the probability management discipline. Come see us at http://goo.gl/GIkMz.
If you haven't done it already, I urge you to go to sipmath.org, sign up as a member, and get involved. Feedback on and reviews of the tools we've been building would be appreciated.
08 June 2013
No matter how well-managed they are, projects tend to finish late and over budget. We keep doing things to correct this problem, but project failure rates have remained constant for decades.
It turns out that one of the reasons, perhaps the principal reason, is that the math we use for estimating project cost and duration is fatally flawed; it gives us consistently optimistic estimates.
The fatal flaw is the Flaw of Averages, eloquently described in Sam Savage's book of the same name.
In project planning and estimating terms, that's the Flaw of Expected Values.
Whether it’s about a whole project or a single task, “How long will it take?” doesn’t have just one answer. It has a whole bunch of answers, each with its own probability of being right.
One of those many answers is the expected value - the theoretical average over theoretically many similar projects.
Here's the irony:
If you are really good at estimation and your expected values are reliable, about half your projects should finish late and over budget. Why would you launch a project knowing it has a 50% probability of failing?
That's what "average" or "expected" means; about half the time the number will be smaller and about half the time it will be bigger.
That's not all. The time and cost to do a task have fairly firm minimums, but no maximum short of abandoning the effort. This skews the possibilities toward the late and over-budget side.
And then there's the concurrency tax.
Suppose you have a project with ten concurrent tasks, each with an expected duration of six months. CPM will wrongly estimate that the project will be done in six months.
Since these are all expected finishes, each one has a 50% probability of being late. Heads, we're early, tails, we're late. All of them coming in on time or early is equivalent to tossing ten heads in a row - one chance in 1,024. So the expected project duration should be greater than six months. The question is, "How much greater?" That's a question you can't answer if all you have is expected values.
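The concurrency tax is easy to demonstrate with a small simulation. The sketch below assumes each task's duration is spread symmetrically around its six-month expected value (a uniform 4-to-8-month spread, my illustrative assumption; real tasks skew late, which makes things worse):

```python
import random

random.seed(0)
TRIALS = 10_000
TASKS = 10
EXPECTED = 6.0  # months, the expected duration of each task

on_time = 0
total = 0.0
for _ in range(TRIALS):
    # Symmetric spread around the expected value: each task takes 4..8 months.
    durations = [random.uniform(4.0, 8.0) for _ in range(TASKS)]
    finish = max(durations)  # concurrent tasks: the project ends with the last one
    total += finish
    if finish <= EXPECTED:
        on_time += 1

print("P(project <= 6 months):", on_time / TRIALS)   # about 1 in 1,024
print("Average project duration:", total / TRIALS)   # well above 6 months
```

Even with perfectly symmetric, unbiased task estimates, the project as a whole almost never finishes by the "expected" date, because finishing on time requires every one of the ten coins to come up heads.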
When you choose a target, or make a resourcing decision, you also choose the probability of success (or the chance of failure if the glass is half-empty). Whether you’re conscious of the choice or not, you’re making a risk management decision. Making the choice conscious and informed is where Probability Management and SIPmath come in.
SIPmath is not a proprietary "methodology" or product. It's an easily-applied way of quantifying and calculating uncertainty that's simple enough to do in Excel. It's open standards, open-source code, a growing library of resources, and a non-profit organization to promote and develop the tools and techniques.
A planned cost or duration doesn't have just one value. It has many possible values, each with its own probability of becoming the actual. This uncertainty – many possible values, each with its own probability – applies to every value in the project model.
With SIPmath, we calculate not with single expected values but with many possible values. What CPM does once, we do hundreds or thousands of times. We simulate many different mixes of possible inputs, producing many possible outcomes along with their probabilities.
You still have to commit to plan numbers for cost and duration, if only to coordinate effort. Using SIPmath you can choose your plan numbers with a clear understanding of your odds of success. Learn more at sipmath.org.
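Choosing a plan number then amounts to reading a percentile off the simulated outcomes. A minimal sketch, reusing the concurrent-task model from the post above; the 80% confidence target is an arbitrary illustration, not a recommendation:

```python
import random

random.seed(1)

# Simulated project finishes: the max of ten concurrent 4..8 month tasks.
finishes = sorted(max(random.uniform(4.0, 8.0) for _ in range(10))
                  for _ in range(10_000))

def plan_number(outcomes, confidence):
    """Smallest value that at least `confidence` of the sorted outcomes meet."""
    index = min(len(outcomes) - 1, int(confidence * len(outcomes)))
    return outcomes[index]

# Commit to a duration you have an 80% chance of meeting, not the 50/50 average.
print("50% plan:", round(plan_number(finishes, 0.50), 2))
print("80% plan:", round(plan_number(finishes, 0.80), 2))
```

The choice of percentile is exactly the conscious risk-management decision the post describes: the simulation doesn't pick your plan number, it tells you the odds attached to whichever number you pick.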
09 March 2013
Conventional planning tools produce one or more expected values -- expected finish, expected cost.
"Expected value" is also known as the average or mean. But, an average over what? An average assumes a bunch of things whose values can be added up. Average time or average cost implies a large number of activities whose cost and duration can be averaged.
It also implies that the activities that finish early and below budget will provide the savings to underwrite the activities that finish late and over budget.
More generally, if the calculation of expected value is a valid calculation, the sum of the actual costs of a large number of activities should be close to the sum of their expected costs. Is this what happens in the real world?
Silly question -- it doesn't. Relative to expected values, task and project finishes range from a little early to a lot late, slightly under-budget to major overrun. The sum of the actuals is inevitably greater than the sum of the averages.
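One way to see why the sums diverge: single-point estimates tend to be the most-likely value, and for right-skewed durations the most-likely value sits below the mean. The sketch below makes that assumption (mode-as-estimate, and the specific triangular spread, are mine, for illustration):

```python
import random

random.seed(2)
TASKS = 100

# Right-skewed task durations: a firm minimum, a most-likely value,
# and a long late tail (all in days).
low, mode, high = 8.0, 10.0, 30.0

estimates = [mode] * TASKS                                    # what plans usually carry
actuals = [random.triangular(low, high, mode) for _ in range(TASKS)]

print("Sum of estimates:", sum(estimates))        # 1,000 days
print("Sum of actuals:  ", round(sum(actuals)))   # reliably larger
```

With this skew the true mean of each task is 16 days, so a hundred "10-day" tasks come in around 1,600 days in total; the shortfall never averages out, because there is no early tail to pay for the late one.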
A sure sign of insanity is doing the same thing over and over again expecting a different result.
15 December 2012
I've wrapped up a new release of SDXL. There's a bunch of graphics related goodies in it (including my preferred histogram), as well as a bit of a cleanup. And, a new Reference Manual.
24 November 2012
In many ways, the essence of Probability Management is how to do probabilities by counting stuff – and having a computer do the counting. This monograph focuses on that.
Now available as a paperback and as a Kindle e-book.
PDF format and Excel workbooks:
20 November 2012
You're estimating a project.
Let’s say you have a risk element and the event has a 25% chance of happening. If it does, it will add $100,000 to the cost of a particular task. You’ll resist the temptation to just add $25,000 to the task cost, because that’s not what happens in the real world. It’s one project, not a million transactions, so the average is invalid. In each possible future, it’s $100,000 or nothing.
It’s possible that downstream events would be triggered by the $100,000 while $25,000 would fly under the radar. Also, looking at the range of possible project costs, the high-end numbers would be $75,000 too low, and the low-end numbers would be $25,000 too high.
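A quick simulation makes the point. The figures come from the example above; the 10,000 trials are an arbitrary illustration:

```python
import random

random.seed(3)
TRIALS = 10_000

# The risk: a 25% chance of a $100,000 hit. P x I would book a flat $25,000.
outcomes = [100_000 if random.random() < 0.25 else 0 for _ in range(TRIALS)]

# In every possible future, the $25,000 stand-in misses the actual by a lot.
errors = [abs(actual - 25_000) for actual in outcomes]
print("Possible actuals:", sorted(set(outcomes)))  # [0, 100000] -- never 25,000
print("Stand-in error range:", min(errors), "to", max(errors))
```

No trial ever produces $25,000: the stand-in is $25,000 too high about three-quarters of the time and $75,000 too low the rest.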
So don’t use Probability x Impact. Ever.
08 November 2012
Sam Savage has put another brick in the wall with Distribution Processing and the Arithmetic of Uncertainty, an article in the ORMS Analytics Magazine (2012 Nov-Dec).
The article expands on the concept of SIPs (Stochastic Information Packets) as packaged uncertainty. It shows how to use SIP math and raw Excel to do Monte Carlo Simulation "without the Monte Carlo."
He also introduces SIPmath – an Excel add-in to simplify building models that use SIP math. Once the model is built, the add-in is no longer needed and the simulation can run without it.
Probability Management is on a roll. Read the article and then go to sipmath.com to learn more.
10 September 2012
The main thing we're trying to fix with Probability Management for projects is that conventional tools and techniques give us wrong estimates, and the errors are all one-sided; they consistently underestimate project cost and duration.
Underestimating resources makes it more likely that a project will be approved, and makes it more likely that it will fail. That's a double-whammy that results in more failed projects.
04 September 2012
You see, there's this mystery: Spend a few minutes with Google and you can get a long list of the things that cause projects to fail; we know what they are and how to deal with them. To that easily accessed tradecraft, add the fact that institutions like PMI are certifying over 50,000 project managers a year. Project failure rates should be plummeting. But, for any given industry, failure rates have remained unchanged for decades. This leaves one thing to fix - the math.
Sam Savage has shown us what the problem is, and pointed us at the solution. The Art of the Plan includes my attempt at fixing the problem in project planning.
31 August 2012
By Mark Powell
Have you ever seen a risk register with 500 or more risks on it? It seems that these days a lot of projects have huge risk registers. How does this happen?
Most people believe that this is natural for a large and complex project.
A good friend recently described a proposal for the California High Speed Train that would go from San Diego to San Francisco and Sacramento. His pre-project draft risk register covered everything from track, signals, routes, station interchanges, software, train sets, health and safety, environment, etc., and it was huge. Well, that, of course, is no surprise; it is one big, complex project!
However, I told my friend that when he started that project, his risk register should be empty. I also told him that during the life of that project, he should never have more than about a dozen active risks on his risk register. He scoffed – of course.
Now why would I say such outlandish things for such a large complex project? Two reasons: good Project Management and good Project Planning.
Good project management handles (identifies, assesses, sets up monitoring for and, possibly, mitigates) all those pre-start risks in the multitude of management plans and management systems that are complete at project start. In fact, if you think about it, all of our management plans and management processes are nothing more than plans to execute the monitoring and mitigation of all those risks we identified before project start.
For instance, we know there’s a risk that all project requirements may not be satisfied in the implemented system. But we don’t put a risk for every requirement in a risk register; we develop a verification plan to mitigate all those risks, and work it through a verification process. We know there is always a risk that the subsystems and components of the system as-built may not interface correctly. We don’t put those risks in the risk register; we develop an integration plan to monitor and track interface development and manage system builds to mitigate those risks.
For a big infrastructure project like the California High Speed Train, there will be a slew of environmental risks. Various EPA regulations will be prescriptive with respect to monitoring and mitigation of these risks. Every project has a multitude of budget and schedule risks, but we have Earned Value Management Systems to address those.
Good project management may generate upward of 50 management plans. Each of these plans will describe how the risks identified before project start (most of which we know that all projects will have) will be monitored and mitigated through existing processes. Only those risks that were not identified before project start should ever populate the risk register, and any well-managed project should never have more than a dozen active risks at any one time.
Bad project management dumps all of these risks into a risk register instead of developing all of those plans. It is nothing short of an abdication of project management responsibility. That’s how you will see 500 or more risks in a risk register. It is not poor risk management; it is poor project management, and no real project planning.
Mark Powell is a consultant specializing in Project Management, Systems Engineering, Risk Assessment, and New Business Acquisition. He is regularly sought as a plenary speaker for conferences and symposia, and to provide tutorials and workshops to improve corporate performance.
He is active on a number of discussion groups on LinkedIn that are particularly relevant to this blog post. Invite him to link, or contact him directly at email@example.com for more information.
18 August 2012
It's taken way longer than I thought it would, but I've finally got The Art of the Plan written and published. The e-book version is available from Smashwords in all the useful formats.
The printed version is still in process. I'm guessing early September for release.
The book covers most of the topics I've been writing about in The Art of the Plan blog – from identifying crystal-clear objectives and requirements through to modeling and simulation using Probability Management techniques to produce realistic project plans. There's an Excel workbook loaded with examples to go with it and, of course, it uses SDXL.
13 July 2012
Here's a really good article on closing the gap between project predictions and realization. Jenner covers the well-known sources of error and misrepresentation. Unlike other writers on this topic, he doesn't just wring his hands but responds with well-thought-out prescriptions.
His prescriptions include effective planning (start with benefits and requirements, design the solution later), science (seek disconfirming evidence), Reference Class Analysis, and Probability Management (distributions rather than point forecasts).
In short, this is an article I wish I had written.
28 June 2012
An article in the Harvard Business Review adds more evidence that presenting analytic results as charts instead of numbers improves the interpretation of the data.
"Economists Are Overconfident. So Are You" reports on a study that makes a good case for charts alone, with no numbers – charts plus numbers produced worse interpretations.
It's a good read.