Dr Paul Duignan on Outcomes: Currently in the middle of constructing a vast DoView outcomes model for a consortium with hundreds of projects and multiple outcomes at many levels. There is something really satisfying about bringing the work of hundreds of people together within the one visual model. The exciting part is knowing that it's going to let them quickly communicate to funders and stakeholders what it is they're attempting to do with their work. It is particularly satisfying when they are doing globally important work; it makes the hours you put in really worth it.

People use DoView outcomes software to draw a wide range of different types of causal models (logic models, results chains, program theories, intervention logics etc.). But when I'm consulting I build models according to a strict set of rules. These embody principles from outcomes theory and mean that the model will be free from major technical problems. The larger the model and the more people who have input into it, the more chance there is that it will develop structural problems, because people who do not understand the underlying principles of outcomes model construction insist on doing things that compromise its structure.

The technical points we argued for in this case, and which were accepted, include:

Ensuring that the DoView is large enough to communicate what it is the consortium is doing - so often programs fail to communicate sufficient detail about what they are doing because they fall into the trap of thinking they have to cram everything onto a page or two. The model needs to be as big as it needs to be to tell their story at a sufficient level of detail.

Including the non-measurable. A common mistake is to demand that all boxes within a model be measurable. This means that you end up with no formal way of reasoning about the currently unmeasured (see the sketch after the next point). From a strategic perspective these are often the most interesting things to focus on when you develop your strategy. Measurement infrastructure is a function of what you did in the past, not necessarily what you want to do in the future. So if you don't have a measure for something, it may be that you have not yet developed measurement infrastructure for it, not that it is unimportant going forward.

Getting them to mainly just talk in terms of 'boxes' rather than sinking beneath the chaos of the terminological madness. Just on the madness: I am currently in a LinkedIn discussion where someone made the fatal mistake of asking what is the difference between a result and an outcome. There have been 166 comments so far, with people making utterly authoritative statements such as 'an outcome is …' and 'a result is …', only for someone to come along a couple of comments later and insist that they are something else. Total chaos. Of course it is all avoidable if you just talk in terms of 'boxes' within a visual model. You just need to say 'this box makes this box happen' etc., and you get to construct your causal pathway in half the time it takes to play the terminological game.
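To make the 'boxes' way of talking concrete, here is a minimal sketch, in Python, of a model as boxes joined by 'makes happen' links. It is purely illustrative and is not any DoView format or API; the Box, Model and makes_happen names, and the optional measure and tag fields, are my own assumptions. Note that a box with no measure is still a first-class part of the model, which is the point about including the non-measurable.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Box:
        name: str
        measure: Optional[str] = None  # None = no measure yet; the box stays in the model
        tag: Optional[str] = None      # e.g. 'output' or 'impact' - a label, not a position

    @dataclass
    class Model:
        boxes: dict = field(default_factory=dict)
        links: list = field(default_factory=list)  # (cause, effect) pairs

        def add_box(self, box):
            self.boxes[box.name] = box

        def makes_happen(self, cause, effect):
            # 'This box makes this box happen' - no result/outcome terminology needed.
            self.links.append((cause, effect))

    m = Model()
    m.add_box(Box("Run training workshops", measure="number of workshops delivered"))
    m.add_box(Box("Farmers adopt new practice"))  # not yet measurable, but still modeled
    m.makes_happen("Run training workshops", "Farmers adopt new practice")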

Avoiding the confusion in visual modeling where the attempt is made to structure the causal flow within the model on the basis of attribution (being able to prove that you changed a box). Saying that you don't have to construct models in this way is probably one of the most heretical aspects of my approach. The problem occurs when people insist on putting columns of outputs (or, at a higher level within the model, columns of impacts) within the causal flow of the model. Columns of these disrupt the proper layout of the causal flow. Of course you can identify both outputs and impacts, but they may lie at various points within the flow of the causal chain.

It must be noted that what I'm talking about here is the use of the term impacts to mean more than just the last row of boxes in a model, above the boxes you are calling outcomes. That convention, as often used in international development, does not cause a problem in itself. Problems only arise if the definition of the term impact demands both a position in the causal chain of the model (e.g. at the top) and attributability to the intervention (i.e. it has to be something you can prove you have changed). The trouble is that you cannot dictate the level at which attribution will occur within a model, and hence you cannot demand that attributable boxes sit at the top of your model, because attributable boxes will often be lower down. So you end up, in effect, dragging boxes that should sit lower in the causal model up to the top and thereby distorting the visual representation of the causal flow.

There is no problem with identifying outputs, or identifying impacts, with a code in the box or the color of the border or whatever you like. It is only demanding that they be located in a particular position within the causal flow that causes problems. In this case we created another drill-down DoView page, separate from the pages showing the general causal flow, beneath the highest-level boxes, and on that page simply listed all of the impacts (things the intervention thought it could prove were attributable to it).
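Continuing the illustrative sketch above, an impacts drill-down 'page' then becomes just a filter on the tag (impacts_page below is a hypothetical helper of mine, not anything in DoView): an impact is picked up wherever it sits in the causal flow, rather than being dragged to the top of the model.

    def impacts_page(model):
        # A drill-down 'page': list every box tagged as an impact,
        # regardless of its position in the causal flow.
        return [box.name for box in model.boxes.values() if box.tag == "impact"]

    m.add_box(Box("Local uptake of new practice", tag="impact"))  # attributable, but mid-chain
    print(impacts_page(m))  # ['Local uptake of new practice']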

Anyway, my rules for drawing outcomes DoViews in the way I do them are here. It is almost never a matter of stopping people from doing things they want to do; it is a matter of showing them how they can model what they want in the most elegant way and avoid running into technical problems later.

Much of my approach is initially regarded as heresy by those who first hear how I want to build models. The great thing, however, is that once you talk people through it and show them the implications of building the DoView this way, they understand what I am saying and appreciate that a properly structured model, consistent with the principles of outcomes theory, avoids many problems further down the track.

Discuss this posting on the DoView LinkedIn Community of Practice.

Follow Duignan on Twitter.com/PaulDuignan and see About.Me/PaulDuignan for more info.