
Common outcomes system problems fixed by outcomes theory

Duignan, P. (2009). Using Outcomes Theory to Solve Important Conceptual and Practical Problems in Evaluation, Monitoring and Performance Management Systems. American Evaluation Association Conference 2009, Orlando, Florida, 11-14 November 2009.


Regulatory enforcement indicator problem

Deterioration in an indicator of regulatory enforcement may reflect increased enforcement activity rather than an increase in transgressions.

Single indicator list problem

Using just a 'single list' of indicators for provider/doer accountability is a mistake. Such lists often contain two types of indicators without clearly distinguishing between them: those which are controllable by the provider/doer and those which are not-necessarily controllable. Differentiating between these two types (using a 'double' rather than 'single' indicator list) is desirable.
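The distinction above can be sketched as a minimal data structure. This is only an illustration of the 'double list' idea, not anything from the source; the class name, the example program, and its indicators are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DoubleIndicatorList:
    """Separates indicators a provider/doer can control from those it
    cannot necessarily control, instead of mixing both in one list."""
    controllable: list = field(default_factory=list)
    not_necessarily_controllable: list = field(default_factory=list)

def accountable_indicators(d: DoubleIndicatorList) -> list:
    # Only controllable indicators are appropriate for judging
    # provider/doer performance.
    return d.controllable

# Hypothetical example for a smoking-cessation program:
indicators = DoubleIndicatorList(
    controllable=["cessation courses delivered", "clients enrolled"],
    not_necessarily_controllable=["regional smoking prevalence"],
)
```

Here `regional smoking prevalence` is still measured and tracked, but the provider is not held to account for it, because factors outside the program also drive it.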

Anyone can use the above material, with acknowledgement, when doing evaluation planning for their own organization or for for-profit or not-for-profit consulting work. However, you cannot embed the approach into software or web-based systems without our permission. If you want to embed it in software or web-based systems, please contact general@doview.com.

*Reference to cite in regard to this work: Duignan, P. (2009). A concise framework for thinking about the types of evidence provided by monitoring and evaluation. Australasian Evaluation Society International Conference, Canberra, Australia, 31 August – 4 September 2009.



Building-block one is essential for all of the other building-blocks because it is how you identify your priorities, providing the basis for tightly aligning all the other building-blocks to those priorities at both the outcomes level (what you want to happen) and the project level (what you are going to do).

Building-blocks two and three are complementary: any comprehensive outcomes system needs to measure both not-necessarily controllable indicators (two) and controllable indicators (three).

Often, the controllable indicators in building-block three do not reach to the top of the outcomes model in building-block one. In such cases, you cannot attribute changes in high-level outcomes to a program solely on the basis that you have measured improvements in a not-necessarily controllable indicator (building-block two).

When you face this situation, there is only one place you can go to attempt an impact attribution statement: building-block four, impact evaluation. Impact evaluation uses more one-off evaluation designs, rather than the routine indicator data collection used in two and three.
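The reasoning in the last two paragraphs can be expressed as a small decision rule. This is a sketch of the logic as described, nothing more; the function and parameter names are mine, not the author's.

```python
def can_claim_impact(controllable_reach_top: bool,
                     high_level_indicator_improved: bool,
                     impact_evaluation_done: bool) -> bool:
    """Decide whether an impact attribution claim about high-level
    outcomes is defensible, following the building-block logic above."""
    if not high_level_indicator_improved:
        return False  # nothing to attribute in the first place
    if controllable_reach_top:
        # Controllable indicators span the full outcomes model, so
        # routine indicator data can support the attribution claim.
        return True
    # An improvement in a not-necessarily controllable indicator alone
    # proves nothing about attribution; only building-block four
    # (impact evaluation) can support the claim.
    return impact_evaluation_done
```

For example, an improved high-level indicator with no impact evaluation and controllable indicators that stop short of the top of the outcomes model yields no defensible attribution claim.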

You should usually only do building-block four (impact evaluation) once you have already done as much of five (implementation optimization evaluation, which is non-impact evaluation) as is necessary to optimize the chances of the program succeeding.

For evidence-based (as opposed to hypothetically-based) cost-effectiveness and cost-benefit analysis in building-block six, you need robust effect-size estimates from building-block four. These are only produced by some of the impact evaluation designs which may be possible under four.

Building-block seven determines the rewards and punishments for a program depending on whether or not it reaches certain results on building-block three, and sometimes on two, four, or other building-blocks.
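The dependencies spelled out in the paragraphs above can be summarized as a small prerequisite table. This is an illustrative sketch only: the numbering follows the building blocks as described (one: priorities; four: usually after five; six: needs effect sizes from four; seven: draws on three), and everything else here is my own simplification.

```python
# Prerequisites implied by the text. Block 1 underpins everything;
# block 4 should usually follow block 5; block 6 needs effect-size
# estimates from block 4; block 7 rewards/punishes mainly on block 3.
PREREQUISITES = {
    1: set(),
    2: {1},
    3: {1},
    4: {1, 5},
    5: {1},
    6: {1, 4},
    7: {1, 3},
}

def ready_to_start(block: int, completed: set) -> bool:
    """A building block is ready once all its prerequisites are done."""
    return PREREQUISITES[block] <= completed
```

For example, evidence-based cost-effectiveness analysis (six) is not ready if impact evaluation (four) has not yet produced effect-size estimates.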



