Here's a really good article by Timothy Olson on quantifying requirements. In very few words, he pins down the essentials.
Greetings, Marc -- just found your website and enjoyed reading your archive. I'm linking (in my signature) to a friend's ebook, Measuring the Performance of Business Analysts, that I think you and your readers would be interested in.

The problem I see with Olson's article is that he cites many metrics that don't have a clear purpose. In Adriana's ebook she stresses the need to choose metrics starting from performance goals, and to avoid subjective measures. For example, if you ask different stakeholders for the probability of a requirement changing, you will most likely get different answers, so "requirement stability (i.e., what is the probability of the requirement changing)" -- one of Olson's examples -- probably isn't a good measure to adopt.
Hi Chris, it's nice to know someone is reading this stuff. About subjective measures: all estimates, expert or otherwise, have some degree of subjectivity. That makes them no less measurements. Everything of consequence is measurable (that's a tautology); it's just that some measurements are more reliable than others.

Referring to your example: scope creep has a probability distribution, and the various stakeholder estimates, along with whatever other input we can get, are a sample set from it. Having a probability distribution lets us make informed decisions about managing the risk.

An important part of my message is that you should not hunt for the "one true number"; it's guaranteed to be wrong. Nor should you throw your hands in the air and say that you can't plan for scope creep because the estimates are subjective. In doing the calculations for a plan, any datum that isn't a system constant should be carried as a discrete probability distribution extracted from the best available sources.
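To make that concrete, here's a minimal sketch of the idea, with hypothetical numbers: treat the differing stakeholder estimates of a requirement's change probability as a discrete distribution rather than averaging them into one "true" number, then carry that distribution through a planning calculation via Monte Carlo simulation. The estimates, the requirement count, and the helper names are all invented for illustration.

```python
import random

# Hypothetical stakeholder estimates of the probability that a given
# requirement changes before release -- a sample set, not one true number.
estimates = [0.10, 0.25, 0.40, 0.15]

# Point summary of the discrete distribution (equal weights).
p_change = sum(estimates) / len(estimates)

def simulate_changes(n_requirements, estimates, trials=10_000):
    """Carry the uncertainty into the plan: for each simulated project,
    draw a change probability from the discrete distribution for every
    requirement and count how many requirements actually change."""
    counts = []
    for _ in range(trials):
        changed = 0
        for _ in range(n_requirements):
            p = random.choice(estimates)  # sample the discrete distribution
            if random.random() < p:
                changed += 1
        counts.append(changed)
    return counts

random.seed(1)
counts = simulate_changes(50, estimates)
mean_changed = sum(counts) / len(counts)
worst_decile = sorted(counts)[int(0.9 * len(counts))]

print(round(p_change, 3))   # expected change probability
print(round(mean_changed, 1))
print(worst_decile)         # a pessimistic-but-plausible outcome to plan for
```

The payoff of keeping the whole distribution is the last line: instead of a single estimate of scope creep, you get a spread of outcomes and can budget for a pessimistic percentile rather than the mean.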