An article setting out the argument for always constructing an outcomes model (program logic) first and then mapping indicators back onto the model. The main reason for this is that you can then immediately see what your list of indicators is, and is not, measuring. This is not apparent from looking at a flat, unmapped indicator list.
An article describing a very common problem in indicator and performance management system design, where the system relies exclusively on the specification of a single list of indicators which an individual, program or organization is to be held to account for achieving. This becomes a problem when it is accompanied by an insistence that the list be at a high 'outcomes' rather than 'outputs' level in a situation where it is hard to attribute changes in high-level outcomes to a particular individual, program or organization.
A set of indicators mapped back onto a visual outcomes model is a very useful way of specifying accountabilities in outcomes-focused contracting. It enables clear identification of those indicators which will just be tracked and those indicators for which the individual, program or organization will be held to account. Such a visual model can be attached to the back of contracts to clearly specify the accountabilities which are being contracted for.
A set of outcomes models (program logics) which have already been developed for a number of programs in a number of areas. You can click through them in their web page versions and, if you like any of them and they are relevant to you, you can immediately print them off as PDFs. If you have DoView outcomes software installed, you can immediately download the DoView file of the model and start amending it to reflect what is happening in your program.
If you want to map indicators onto an outcomes model, you need to build the outcomes model in a particular way. In particular, it needs to be able to include currently non-measured outcomes so that, once you have done the mapping, you can see which steps and outcomes you can currently measure and which you cannot. Use this tip sheet yourself, or hand it out when working with a group so that everyone is clear about the rules you are using to build your model. If anyone wants to know where the 13 rules come from, refer them to the article on which it is based - a set of formal standards for building outcomes models (program logics).
A one-page 'tip sheet' which describes the ways in which a well-constructed program logic (outcomes model) developed, for instance, for indicator work, can be used for strategic planning, prioritization, evidence-based practice, working out how different programs or activities are believed to contribute to joint outcomes, getting staff outcomes-focused, performance measurement and tracking progress, evaluation planning and outcomes-focused contracting. Hand this tip sheet out to the group with which you are building your program logic model and they will get a clear idea of how their organization can leverage the work they are doing on the model for indicator development. This approach is set out in more detail in the article here.
A full workbook showing you the process for facilitating a group when drawing a program logic model (outcomes model), for example what size the group should be. It then leads you through all of the stages in developing and using such a model for strategic planning, prioritization, monitoring and performance management, evaluation, evidence-based practice and outcomes-focused contracting and delegation. The whole Easy Outcomes system of which this workbook is part is set out at EasyOutcomes.org.