Features of boxes that are allowed to be included within outcomes models (‘what’s in and what’s out’) principle



“Boxes within an outcomes model can be: relevant; influenceable; measurable; controllable; attributable; and/or accountabilities”

“Trying to manage risk around accountability by leaving out not-necessarily controllable boxes can mean outcomes models don’t seem sufficiently outcomes-focused”

“To keep accountability clear, accountable boxes can just be marked up as such within a more comprehensive outcomes-focused outcomes model”

The principle

Boxes included within an outcomes model can have one or more of the following features: relevance; influenceability; measurability; controllability; attributability; and accountability. The only requirement is that every box be relevant to higher-level boxes within the model. While all boxes in an outcomes model must be relevant, they can have none, one, or more of the other features. (For example, a relevant box that has none of the other features could be a risk of some sort.)

The problem

There are often disputes about the types of boxes that can be included within outcomes models. Outcomes models are models of the high-level outcome(s) being sought by an intervention and the steps it is believed need to occur for them to be achieved. Outcomes models go by names such as: logic models; theories of change; program theories; intervention logics; results roadmaps; results chains; strategy maps etc.

These disputes can take the form of people insisting: ‘That box can’t go into the model because it’s not measurable’, or ‘That box can’t go into the model because we can’t control it’ etc.

This insistence about what should, and should not, be included within outcomes models arises in some cases from parties wanting to manage risk around what they will be held accountable for. They don’t want to be held accountable for boxes they do not control. This is a legitimate risk for them to worry about.

However, if they try to manage the risk by insisting on limiting what goes into an outcomes model, there are implications for how comprehensive the model will end up being. This type of narrowing-down means that the outcomes model that is produced is likely to leave out the higher-level boxes that are being sought, that is, the outcomes of the intervention. This exposes people working off such narrowed-down outcomes models to the criticism that they are not being sufficiently ‘outcomes-focused’.

The solution

Within outcomes theory, outcomes models are used both as a conceptual tool (the implicit outcomes model underpinning any intervention) and as a practical tool for working with outcomes systems of any type (e.g. strategy, performance management, evaluation, risk management systems).

They function best for both conceptual and practical purposes if they provide a comprehensive picture of all of the boxes relevant to an intervention and its outcomes. Different boxes in the model will have different features. In practical applications, boxes within outcomes models can be marked up with the particular features they have (e.g. whether or not they are measurable, controllable, accountabilities etc.).

Intervention workers have a legitimate concern about not wanting to be held accountable for not-necessarily controllable boxes. However, it is best to include these boxes in the outcomes model. You can then simply mark up the boxes (a sub-set of boxes within the model) for which the intervention will be held accountable. This allows for the full outcomes story to be told within the outcomes model. At the same time, because accountabilities are marked up within the model, funders and providers can be totally clear about who is going to be held accountable for what.
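The mark-up approach described above can be sketched in code. This is an illustrative sketch only, not part of outcomes theory or any DoView product: the `Box` structure and feature names are hypothetical, chosen to mirror the features listed in the principle. The point is that accountability becomes a flag on a box rather than a reason to exclude it from the model.

```python
from dataclasses import dataclass

# Hypothetical representation of an outcomes-model box, marked up with
# the features it has. Relevance is the only required feature.
@dataclass
class Box:
    name: str
    relevant: bool = True
    influenceable: bool = False
    measurable: bool = False
    controllable: bool = False
    attributable: bool = False
    accountable: bool = False  # marked up as accountable, not filtered out

def accountable_boxes(model):
    """Return the sub-set of boxes the intervention is held accountable for."""
    return [box for box in model if box.accountable]

# A comprehensive model keeps high-level, not-necessarily controllable
# boxes alongside the accountable sub-set.
model = [
    Box("Improved community health"),  # high-level outcome, kept in the model
    Box("Clinics deliver screening programs",
        controllable=True, measurable=True, accountable=True),
    Box("Economic downturn reduces attendance"),  # relevant risk, no other features
]

print([box.name for box in accountable_boxes(model)])
# → ['Clinics deliver screening programs']
```

Under this sketch, funders and providers can read accountability directly off the marked-up boxes while the full outcomes story stays in the model.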

An example

A chief executive of a government agency produced an outcomes model in his annual planning documentation which was much lower-level than the outcomes models produced by other government agencies. In response to questioning about why his outcomes model was so minimalistic he said: ‘I am not going to put anything in my model that I don’t control because I don’t want to be held to account for boxes I can’t control’.

The end result of this approach for this chief executive was that he was seen as being less ‘outcomes-focused’ than other agency chief executives, because they had included higher-level, but not-necessarily controllable, boxes within their models.
