Merely measuring an improvement in a not-necessarily-controllable indicator (outcomes system building-block three) does not prove that a program or organization caused that improvement.
In an outcomes-oriented world, programs and organizations are under pressure to measure high-level indicators in addition to lower-level indicators at the outputs-type level. Such high-level indicators (outcomes system building-block three) are often not controllable by a particular program or organization. Where this is the case, merely measuring that such a high-level non-controllable indicator has improved does not, by itself, prove that a particular program or organization caused the improvement. This is in contrast to lower-level controllable indicators (outcomes system building-block two), where mere measurement is usually taken to establish that the program or organization improved them.
Many outcomes problems and unintended consequences in outcomes systems arise from the mistaken assumption that measured improvements in high-level outcomes are proof that a particular program or organization made them happen. This can lead to programs or organizations being rewarded for improvements in high-level indicators to which they did not actually contribute, or being punished when they have done everything they can to achieve a high-level outcome which, because it is not controllable by them, has not improved despite their efforts.
Clearly distinguish between controllable and non-controllable indicators in any outcomes system. This can be done by keeping two lists of indicators (or, equivalently, by marking up the controllable indicators within a single list). See the related single-indicator list problem. Only assume that improvement can be attributed to the program or organization in the case of controllable indicators. It is still worthwhile to measure high-level not-necessarily-controllable indicators because, at the end of the day, they are what the work of the program or organization is trying to achieve. Instead of relying on the mistaken belief that a measured improvement in a not-necessarily-controllable high-level indicator establishes that a particular program or organization improved it, you should examine the other five types of evidence which can be provided to show that a program 'works' from Duignan's Outcomes System Diagram. In particular, what is possible in regard to impact evaluation (building-block five) should be considered. However, it may or may not be feasible to undertake impact evaluation in the case of a particular program or organization.
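The single-list-with-markup approach described above can be sketched in code. This is a minimal illustration, not part of outcomes theory itself; all names (the `Indicator` class, the example indicators) are hypothetical, chosen to echo the smoking example later in this entry.

```python
# Hypothetical sketch: one indicator list with controllability marked up,
# so controllable (building-block two) and non-controllable
# (building-block three) indicators are never conflated.
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    controllable: bool  # True: attribution can be assumed; False: it cannot

indicators = [
    Indicator("clients completing smoking-cessation courses", controllable=True),
    Indicator("regional smoking rate", controllable=False),
]

# Attribution of improvement may only be assumed for the controllable subset.
controllable = [i.name for i in indicators if i.controllable]
non_controllable = [i.name for i in indicators if not i.controllable]
```

Splitting the list this way makes it explicit which indicators can serve as accountability KPIs and which require other types of evidence (such as impact evaluation) before attributing improvement.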
An example - an applicant for a regional health management job having a non-controllable indicator as his proposed KPI
An applicant for the position of running a regional health organization was offered the position on the basis that one of his KPIs would be to reduce smoking in the region. Taking a principled stand, he refused to allow this to be one of his KPIs on the grounds that smoking was declining anyway and its improvement would say nothing about his performance. A less ethical applicant would have accepted a reduction in smoking as his KPI. Using not-necessarily-controllable indicators as accountability KPIs in situations such as this can destroy the integrity of the outcomes system in question.