Dr Paul Duignan on Outcomes (bit wonkish): I'm not much into sport as a rule, but I've become an avid watcher of the America's Cup Yacht Race over the last week or so. With only one more win needed, hopes are rising that the Cup will return to New Zealand. If New Zealand wins, planning will start in earnest for the event in four years' time, and no doubt we'll see economic estimates pouring in regarding the potential benefits of hosting.
Fortunately, some are already warning that such estimates usually grossly overestimate the actual earnings from such events. Shamubeel Eaqub talks about 'over-hyped studies that are proven to be absolute b……. after the fact.' See media article here.
Why do people want these estimates in the first place? Obviously, it is to help them deal with strategic risk. The strategic decision that needs to be made in planning for an event like this is how much to invest in preparation, and that investment should bear some relationship to how much the event is going to make.
Being naturally averse to risk in decision-making, people love it when someone comes up with a dollar-value estimate for the ultimate outcome of an event. It allows them to feel that they are rationally working back from this estimate to determine the appropriate level of investment. The truth of the matter, however, is that junk estimates do nothing to address the risk of over-investment. All they do is paper over the fact that decision-makers are often working under conditions of major uncertainty regarding the outcomes they are seeking.
We live in an age that often denies the inherent riskiness of achieving outcomes. The 2007 economic crash was underpinned by people living in a fantasy world in which they thought they were managing risk simply because they were using complex formulas to estimate their risk exposure. What these approaches did, however, was mask the massive underlying risk which lay in wait for them.
Paradoxically what we need to do to manage risk better is to actually acknowledge that many decisions are inherently risky and that there will be failures and money will sometimes be wasted on outcomes that don't eventuate. Junk estimates cannot save us from this decision-making reality.
It's not only the prior estimates of what an event like the America's Cup will make that are problematic; there are also problems with the estimates made afterwards of how much an event actually did make. Shane Vuletich, in the same article, points out that following another recent yachting event in New Zealand - the Volvo Ocean Race - the amount of money spent by people visiting the Auckland waterfront, where the event was hosted, was simply counted up and used as the estimate of how much the event made. There was no consideration of the fact that many visitors would have been present on the waterfront even if no event had been in progress. As Shane put it: 'you have got to demonstrate that the money is caused by the event, that it would not be present without the event. Measurement is really critical.'
While he talks about measurement, the language is a little slippery and could benefit from the technical insights of outcomes theory. What Shane is criticizing is certainly a type of measurement - people are measuring the amount of money spent at a location. But his real objection is not to measurement itself; it is to the type of measurement people are making.
It's useful to tease out two different types of measurement here and also to introduce a third concept - impact evaluation. In Duignan's Outcomes System Diagram we distinguish between measures (indicators) that are controllable by an intervention (the event in this case) and ones that are not-necessarily controllable by it. Indicators that are controllable have the great merit that they are attributable to an event. Attributable means that we know they have been improved by the event. In this case, the amount of money being taken at the waterfront in the course of the event is a not-necessarily attributable indicator of the outcome we are seeking. As Shane says, the money might have been taken regardless of the event, because people would have been visiting the waterfront and spending money anyway. Therefore the raw measure of this outcome is not-necessarily controlled by the event and hence not-necessarily attributable to the impact of the event.
Where we have a situation, as we often do in outcomes work, where the high-level outcome of interest is measured by a not-necessarily controllable and hence not-necessarily attributable measure (indicator), we should not just give up and use the measurement we have regardless. To do so is to indulge in the sloppy thinking that Shane is rightly criticizing. What we need to do is turn to another tool - impact evaluation. This is where one or more of seven possible techniques (impact evaluation designs) are used to work out the actual attributable impact of the intervention itself on the outcome we are looking at. This is done by controlling for other factors that might have caused the measurements we are seeing. This approach is outlined in the outcomes theory principle here.
The natural thing to do in this case is to adjust the money taken during the event by the expected takings on the waterfront in the absence of the event. It would not be hard to make this adjustment if figures are available for the usual revenue level. The particular type of impact evaluation design we would use if we made such an adjustment is known in outcomes theory as a constructed comparison group design.
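The adjustment described above can be sketched in a few lines of code. This is a minimal illustration only - the dollar figures are entirely hypothetical, not actual waterfront takings - but it shows the logic of a constructed comparison group design: subtract the expected baseline takings (what the waterfront would have earned with no event) from the takings observed during the event, leaving only the spending attributable to the event itself.

```python
def attributable_impact(event_takings, baseline_takings):
    """Estimate the spending attributable to the event itself,
    using expected no-event takings as a constructed comparison group."""
    return event_takings - baseline_takings

# Hypothetical figures for illustration only.
event_takings = 1_200_000     # total spent on the waterfront during the event period
baseline_takings = 450_000    # expected takings for the same period with no event,
                              # e.g. the average for the same weeks in previous years

impact = attributable_impact(event_takings, baseline_takings)
print(impact)  # 750000 - far less than the raw takings figure
```

The raw count of 1,200,000 would overstate the event's contribution; the adjusted figure of 750,000 is the estimate of what the event actually caused, which is precisely the attribution point Shane is making.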
One last point that Shane makes in the article is that money payback would be well down his list of cup benefits anyway. He thinks that 'feel-good' factors are more important and that 'sometimes you miss the point by just focusing on the financials'. From the outcomes theory point of view, the best way of ensuring that planners focus on the wider outcomes picture that Shane is pointing to would be to get them to do all their strategy work against a visual outcomes model of all of the outcomes being sought from the event.
Over-dependence in strategic discussions on dollar estimates of how much can be made directly from an event - quite apart from the fact that such estimates are often wildly inaccurate - creates a myopic strategic focus. This works against smart strategic thinking about the whole range of potential outcomes and about how much effort, and what kind of effort, those outcomes warrant. Our work in getting people to always visually model their outcomes before thinking about any aspect of measurement is directed at getting them to see this wider strategic picture.