## 19 March 2010

### Odds Are, It's Wrong

> It’s science’s dirtiest secret: The “scientific method” of testing hypotheses by statistical analysis stands on a flimsy foundation.

In this article in Science News, Tom Siegfried talks about the misapplication of statistics. His target is scientists who draw statistically driven conclusions that aren't in fact supported by the data.

It's well worth the read; the lesson goes beyond the purely scientific realm and is applicable to business as well.

## 10 March 2010

### Quantitative Risk Analysis

I may be overreaching, but I include risk analysis as a proper subject of systems analysis. I've done enough TRAs (threat and risk assessments) to justify that position—at least to myself. So here's a risk analysis topic.

Toying with the idea of getting some certification, I took a look at the CISSP and the (ISC)² Common Body of Knowledge. One thing I found was odd enough that I exchanged a few emails with ITSec gurus about it. They assured me that it reflects the state of the discipline. The offense lay in a particular statement, paraphrased in various documents:

> Purely quantitative risk analysis is not possible because the method is attempting to quantify qualitative items.

That, in the words of Dr. Pauli, is not even wrong.

"Nothing that matters is so intangible that it can't be measured," is almost a tautology.

If it matters, it has an effect. Observing that effect is measuring it. Drawing a distinction between its presence or absence is measuring it. Estimating a range of values or probability distribution for it is measuring it.

This matters in practice. No one can do a cost/benefit analysis that tells them how much they should spend mitigating a "medium-high risk". The practical effect is that a lot of people are overspending on security based on a "Scary Movie" qualitative risk assessment.
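To make the cost/benefit point concrete, here is a minimal sketch of a quantitative alternative using annualized loss expectancy (ALE = annual rate of occurrence × single loss expectancy). All the figures are hypothetical; the point is only that once a risk is expressed as a range of dollar estimates, a mitigation budget can actually be compared against it:

```python
def ale(aro, sle):
    """Annualized loss expectancy: events per year times loss per event."""
    return aro * sle

# A "medium-high" risk, restated as a range of quantitative estimates
# (all numbers invented for illustration):
low_estimate = ale(aro=0.1, sle=50_000)    # one breach per decade
high_estimate = ale(aro=0.5, sle=200_000)  # one breach every two years

mitigation_cost = 30_000  # hypothetical annual cost of a proposed control

print(f"Expected annual loss: ${low_estimate:,.0f} to ${high_estimate:,.0f}")
# Now the cost/benefit question is answerable: the control pays for itself
# at the high estimate but not at the low one, so the decision turns on
# narrowing that range -- something "medium-high" can never tell you.
print(f"Justified at high estimate? {mitigation_cost < high_estimate}")
print(f"Justified at low estimate?  {mitigation_cost < low_estimate}")
```

Even a crude range like this supports a decision that a qualitative label cannot.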

Bottom line: one of an analyst's skills should be measuring the putatively immeasurable.

Any challenges?

## 04 March 2010

### PERT Loses at Monte Carlo

It's pretty clear that calculating with averages is unwise; that's particularly true when we're estimating the resources needed for a project.

If all the tasks were strung out in series and just added together, the errors would tend to "average out." We might be safe in assuming that high and low estimating errors would cancel each other out. If only real projects were that convenient.
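One way to see why real projects aren't that convenient is a small Monte Carlo sketch. Assume (hypothetically) two parallel three-task chains that merge at a milestone, each task with a skewed triangular duration. Summing the averages gives one answer; simulating the merge gives a worse one, because the project finishes only when the *slower* chain does:

```python
import random
import statistics

random.seed(1)

def task_days():
    # Skewed estimate: best case 8, most likely 10, worst case 20 days.
    # Mean of this triangular distribution is (8 + 10 + 20) / 3 ≈ 12.7.
    return random.triangular(8, 20, 10)  # args are (low, high, mode)

CHAIN_LEN, TRIALS = 3, 50_000

# The plan you'd get by adding up average durations along one chain:
sum_of_averages = CHAIN_LEN * (8 + 10 + 20) / 3  # 38.0 days

# The project: two parallel chains, and both must finish (a merge point).
finishes = [
    max(sum(task_days() for _ in range(CHAIN_LEN)),
        sum(task_days() for _ in range(CHAIN_LEN)))
    for _ in range(TRIALS)
]

print(f"Plan from averages:     {sum_of_averages:.1f} days")
print(f"Simulated mean finish:  {statistics.mean(finishes):.1f} days")
# Taking the max of two uncertain paths pushes the expected finish past
# either path's average -- errors stop cancelling the moment paths merge.
```

Within a single chain the high and low errors do largely cancel; it's the merge that breaks the arithmetic of averages.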