Single indicator list error principle




The principle

All outcomes systems should distinguish between controllable and not-necessarily controllable indicators. 

The problem

Many outcomes systems require programs or organizations to report on what they are doing in the form of a single list of indicators. Often, specific levels of achievement are also set on the indicators in such lists; when this is done, they are called targets.

Indicator lists go under names such as indicators, deliverables, Key Performance Indicators (KPIs), outcomes (when used in the sense of measurements) and results lists. Their primary purpose is to hold programs/organizations to account. When used in this way, programs/organizations are contractually punished if they do not meet the targets in the list, and rewarded if they do. 

Given the current pressure for programs or organizations to be outcomes-focused, indicator lists often contain two different types of indicators: controllable indicators and not-necessarily controllable indicators. Controllable indicators are ones which are under the control of the program or organization. In contrast, not-necessarily controllable indicators may be influenced by the program plus a number of external factors.

Controllability is very important in the context of such lists because the mere measurement of a controllable indicator is normally taken as evidence that the program/organization has caused it to happen. For example, the number of YouTube videos produced by a mental health organization as part of a campaign fighting discrimination against people with mental illness may be counted. The mere measurement of this number is normally taken as evidence that the organization produced the videos in question. In technical outcomes language, this is known as 'changes in the indicator are attributable to the program/organization'. This aspect of controllability, and therefore attributability, makes such indicators very desirable when indicator lists are used for accountability purposes - there will normally be no dispute about whether or not the program/organization made what is being measured happen. 

In contrast, the mere measurement of an improvement in an indicator which is not-necessarily controllable by a program does not prove that the program/organization has caused it to improve. That is, it is not seen as being an indicator which is automatically attributable to the program/organization merely by virtue of having measured it. 

For instance, there might be an improvement in a higher-level outcome of the campaign, say a measure of the actual level of discrimination by the population against people with mental illness. The mere measurement of this, by itself, does not prove that the improvement is attributable to the program/organization. This is because it's not controllable by the organization - other factors can also influence it. There may be ways of establishing such attribution but they rely on impact evaluation (number 5 in the outcomes system diagram above) rather than just indicator collection (number 3 in the diagram above).

So, while it seems desirable to include higher-level not-necessarily controllable indicators in indicator lists because they measure outcomes, it also seems undesirable to include them because they are not controllable and therefore, not automatically attributable to the program or organization merely by their measurement and hence in many instances not appropriate for use as direct accountabilities.

Whatever position one may take regarding the above issue, where higher-level indicators are not-necessarily controlled by a program, just having a single list of indicators can cause confusion. This arises because of people's expectations about what types of items such a single list should be used for - i.e. accountability. 

Hybrid lists consisting of controllable and not-necessarily controllable indicators, if not marked up as such, are potentially confusing. Observers who notice that the list contains two conceptually distinct types of entities can become disturbed at what they see as a problem. They usually describe the problem as: 'the indicator list contains some things we can be accountable for and some that we can't be held to account for because we can't control them'. There are three possible responses in situations where this concern is expressed:   

  • The list can be purged of lower-level (controllable) indicators. But an indicator list consisting of just higher-level non-controllable indicators can be criticized because they are not attributable to the organization (i.e. the program/organization can't prove that it changed them) and are usually not thought to be appropriate as accountabilities.
  • The list can be purged of higher-level non-controllable indicators. But a list made up of just lower-level controllable indicators can be criticized because they are too 'low-level' and 'outputy', and such a list usually gives no picture of what the program is seeking to achieve in terms of high-level outcomes.
  • A hybrid list can be retained which is a mix of lower-level controllable indicators and higher-level not-necessarily controllable indicators. But this is likely to continue to be criticized for being conceptually muddled as to the type of items which should be in the list (i.e. it attracts both of the above criticisms at the same time!).

The solution

Always allow indicator lists for a particular program/organization to include both controllable and non-controllable indicators. This can be described as a double-list approach rather than a single-list approach. The first list will be a list of controllable indicators and the second will be a list of non-controllable indicators. The same effect can also be achieved by using a single list which is clearly marked up to show which indicators are controllable. In outcomes theory, the convention of putting an @ after an indicator name is sometimes used to signify an accountability indicator (in many situations, these are the same as controllable indicators). In the visual approach used within outcomes theory, there does not have to be a separate 'list' of indicators, as the indicators are mapped directly onto the visual outcomes model and marked up with an @. 
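As a rough illustration of the mark-up convention, the sketch below shows how a single hybrid list using the @ suffix could be mechanically split into the two lists described above. The indicator names and the helper function are invented for illustration; they are not part of outcomes theory itself.

```python
# Sketch: splitting a hybrid indicator list into the two lists of the
# double-list approach, using the '@' suffix to mark accountability
# (controllable) indicators. Names below are hypothetical examples.

def split_by_controllability(indicators):
    """Return (controllable, not_necessarily_controllable) lists.

    An indicator name ending in '@' is treated as controllable and is
    returned with the marker stripped off.
    """
    controllable = [name.rstrip("@") for name in indicators if name.endswith("@")]
    not_controllable = [name for name in indicators if not name.endswith("@")]
    return controllable, not_controllable

# A mocked-up hybrid list for the anti-discrimination campaign example
hybrid_list = [
    "Number of campaign videos produced@",       # controllable
    "Number of community workshops delivered@",  # controllable
    "Level of discrimination against people with mental illness",  # not necessarily controllable
]

controllable, not_controllable = split_by_controllability(hybrid_list)
```

Either presentation (two separate lists, or one list with the @ mark-up) carries the same information; the point is simply that the controllability distinction is made explicit rather than left implicit.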

The use of this approach is shown in the mocked-up example below:


An example

The United Nations uses a Results-Based Management system. In a review of their own performance management system, they critiqued the content of their indicator list. They were concerned that it contained two different types of items, controllable and non-controllable, and presented this as a problem which they were not sure how to solve. The solution proposed above (a dual indicator list approach) would have solved this technical problem.