Dr Paul Duignan on Outcomes:
The Zuckerbergs' problem
Last week Mark Zuckerberg announced that he wants to give away 99% of his Facebook shares. It seems easy: he just needs to employ grantmaking staff to give the money to grantees who will use it to do good stuff.
However there are several technical problems that will face Mark Zuckerberg's grantmakers regardless of whether they're working through a foundation or through the limited liability company structure that he's setting up in this case to distribute his funding. These are the same problems that face all foundations or funders, whether they are working in philanthropy or in other nonprofit areas like government. The underlying problem is how to identify exactly what a foundation/funder wants to achieve and how it wants to achieve it. It then needs to have a mechanism to make sure that its grantees' outcomes and priorities are aligned with what it wants and that they are working in the most effective and efficient way they can to achieve these.
At the moment the way this is usually done is through foundations/funders preparing screeds of guidance about what they want and grantees preparing long applications setting out what they're planning to do. Those of us who have been on grant assessing committees know where this all ends up. We get faced with reading through endless pages of text-based applications, pondering funding criteria and trying to figure out whether each application is worth funding or not. Often you suspect that the money ends up going to those projects that can afford to employ the best wordsmiths rather than being channeled into the most worthy projects.
One response to the text overload problem has been to put word limits on funding applications. However we know that fixing the social, environmental, educational and the other problems that foundations and government funders focus on is often complex. Just demanding that the length of potential grantees' applications be limited can prove counterproductive. It can prevent foundations/funders and potential grantees from adequately describing the complexity of what they're trying to do.
The truth of the matter is that foundations/funders need an agile methodology for surfacing and comparing their outcomes with those of potential grantees. It needs to be a methodology which, while being simple, still allows them to adequately model complexity when this is required. Such a methodology should let you overview outcomes and priorities while also allowing you to drill down to a sufficient level of detail whenever you have to go more in-depth. Adequately modeling foundation/funder and grantee outcomes and priorities lies at the heart of a set of five tasks that any foundation/funder must address if it is going to spend its money wisely. The exciting thing is that visual outcomes modeling offers a new methodology for undertaking these tasks, and in this article I'm going to look briefly at how it can help with each of them. In a second article I'll be looking at how to do this in practice.
First, foundations/funders must articulate their outcomes and priorities
It's obvious that if foundations/funders can't clearly articulate their outcomes and priorities, it's unlikely that they'll be able to allocate their funding in a way that will achieve the outcomes they're seeking.
From an outcomes theory point of view (the theory of how we identify, intervene with, and measure success in achieving outcomes), it's always faster to identify and work with outcomes in a visual rather than a traditional textual format. Foundations/funders writing about the outcomes they want using only text-based documents, grantees writing about how they're going to achieve these, and grant assessing committees reading through all this material is incredibly time consuming in comparison to using a visual outcomes modeling approach.
If you think about what foundations/funders are trying to do here, it's as follows: they have a mental model of the outcomes they want to achieve and their grantees also have a mental model of what they're wanting to do. It's the job of grant assessing committees to compare these two mental models to see if they're in alignment. Traditionally the way we've worked is to try to get people to translate their mental model into text. We've then expected other people to read through all that text and re-translate it back into their own mental model so that they can think about the outcomes and priorities that are being focused on. In the meantime, people spend a considerable amount of time wordsmithing all the text they're producing at each stage in this process.
A visual outcomes modeling approach is much more direct. It is attempting to, in effect, suck the mental model out of the foundations' or funders' heads, do the same with the grantees' model and then quickly compare these models visually - it's potentially a much faster and more transparent approach.
What I'm arguing for here is for foundations/funders to fully articulate their outcomes and priorities using a visual format within a sufficiently technical and fit-for-purpose outcomes model. The particular format they want to use is up to the specific foundation/funder. It's important at this stage that people experiment with a range of possible formats for visual modeling to see which format ultimately works best as a way of structuring the interaction between foundations/funders and grantees.
Getting foundations/funders to 'fully articulate their outcomes' are the operative words here. Of course, foundations/funders often already have a PowerPoint slide with a few boxes on it outlining their outcomes, or some sort of graphic of their outcomes on their website. While there's nothing wrong with using these to summarize a foundation's/funder's outcomes, I'm not talking about the use of a few small diagrams here. What I'm pushing for is the construction of a comprehensive technical visual outcomes model which is capable of providing a full framework for thinking about, and working with, a foundation's or funder's outcomes.
Visual outcomes models go by many names and aspects of them are already employed in the grantmaking business under names such as: program logics, theories of change, program theories, results chains, intervention logics, log frames etc. What I'm trying to do is to encourage foundations/funders to take the possibilities offered by visualization to their logical conclusion and explore whether a fully visual outcomes modeling approach can ultimately be used to underpin the entire funding process.
Once they've built a comprehensive visual model of their outcomes, foundations/funders can use this to communicate to stakeholders what exactly it is that they're trying to achieve. For instance, if the visual outcomes model is in the form of a drill-down model, they can put it up on their website and stakeholders can overview it and drill down to see what the foundation/funder wants to change in the world. This is much faster than stakeholders having to read through many pages detailing what a foundation/funder is wanting to achieve. When clicking through a properly constructed and presented technical visual outcomes model, a stakeholder quickly becomes convinced that the foundation/funder has thought through their theory of change (the way a program or intervention of any sort works). A foundation's or funder's theory of change needs to surface how they think that their funding is going to impact on the real world problems they're attempting to address by giving funding to their grantees.
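To make the drill-down idea concrete, here is a minimal sketch of how a drill-down outcomes model can behave: a stakeholder first sees only the top-level outcomes, then expands a branch for more detail. The outcome names, the nested-dictionary representation, and the `render` function are all invented for illustration; they are not part of any particular foundation's model or the DoView product.

```python
# Hypothetical sketch: a drill-down outcomes model as a nested dictionary.
# The overview shows only top-level outcomes; increasing `depth` "drills down",
# like expanding boxes in a clickable visual model.

model = {
    "Improved community wellbeing": {
        "Better education outcomes": {
            "Higher literacy": {},
            "Higher school completion": {},
        },
        "Better health outcomes": {
            "Reduced preventable illness": {},
        },
    },
}

def render(model, depth=1, level=0):
    """Return the model as indented lines, down to the chosen depth."""
    lines = []
    for outcome, sub in model.items():
        lines.append("  " * level + "- " + outcome)
        if sub and level + 1 < depth:
            lines.extend(render(sub, depth, level + 1))
    return lines

print("\n".join(render(model, depth=1)))  # the overview a visitor sees first
print("\n".join(render(model, depth=3)))  # the fully drilled-down model
```

The same tree can be shown at any level of detail, which is the point of a drill-down model: the overview and the full complexity are views of one underlying structure, not two separate documents.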
Second, improving the way foundation/funder-grantee outcomes are matched
The second challenge for foundations/funders, once they have articulated their outcomes, is to work out how they can quickly assess whether potential grantees' outcomes and priorities align with theirs.
If foundations/funders have successfully set out their outcomes within a comprehensive technical visual outcomes model as suggested above, they can obviously use this to quickly communicate to potential grantees what they're wanting to do. The whole process can become even more efficient if grantees submit their proposals in a visual outcomes model that is in a similar format to the foundation or funder's model. The two models can then be compared in various ways to see if there is alignment between the foundation or funder's model and the potential grantee's outcomes and priorities.
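As a rough illustration of how two models in a similar format could be compared, here is a sketch that flattens a funder's and a grantee's outcomes models into sets of outcome paths and measures their overlap. Everything here is an invented assumption for illustration: real alignment checking would be done visually and with judgment, not by a naive set intersection, and the outcome names and `alignment` function are hypothetical.

```python
# Hypothetical sketch: comparing a funder's and a grantee's outcomes models
# (both represented as nested dictionaries) to see how much they overlap.

def flatten(model, prefix=""):
    """Flatten a nested outcomes model into a set of dotted outcome paths,
    e.g. 'education.literacy'."""
    paths = set()
    for outcome, sub in model.items():
        path = prefix + outcome
        paths.add(path)
        if isinstance(sub, dict):
            paths |= flatten(sub, path + ".")
    return paths

def alignment(funder_model, grantee_model):
    """Share of the funder's outcomes that also appear in the grantee's model."""
    f, g = flatten(funder_model), flatten(grantee_model)
    return len(f & g) / len(f) if f else 0.0

funder = {"education": {"literacy": {}, "numeracy": {}}, "health": {}}
grantee = {"education": {"literacy": {}}}

print(alignment(funder, grantee))  # prints 0.5: half the funder's outcomes match
```

Even this crude comparison shows the efficiency argument: once both parties express their outcomes in a shared structured format, checking alignment becomes a direct comparison rather than a reading exercise.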
Third, improving grantees' ability to understand and articulate their outcomes and priorities
Whether or not foundations/funders decide to use a visual approach to working out if potential grantees' outcomes and priorities are aligned with theirs, they're going to be interested in a third task - building grantees' ability to understand and articulate their own outcomes and priorities. Given the advantages of visual outcomes modeling, foundations/funders should be thinking about how they can promote visual outcomes modeling by potential grantees simply in order to build grantees' capability as effective providers of services. Again, the exact format of the visual modeling with which foundations/funders want to do this is up to the particular foundation/funder. Ideally we will have a series of different approaches to visual modeling being used and we can then all learn more about the pros and cons of different formats for this type of visual modeling work.
Obviously, foundations/funders that are getting their potential grantees to use visual modelling as part of their grant assessment process will be particularly interested in building potential grantees' capability in this area.
Fourth, promoting the use of evidence-based practice by grantees
The fourth task that conscientious foundations/funders are likely to be interested in is also one related to building grantee capability. In this case it is grantees' ability to be guided by evidence-based practice, or as it is also sometimes called, evidence-informed practice.
The problem of getting grantees to actually use evidence-based practice (rather than just agreeing that it's a great idea in principle) is not just that there's a lack of evidence available. In fact we've seen a spectacular growth in many evidence-based repositories (e.g. the Cochrane Collaboration in the health sciences and the Campbell Collaboration in the social sciences). Regardless of how much evidence we have available we need to face the fact that grantee staff are hyper-busy delivering in their area of work. We need a realistic focus on the problem of how we can get grantees and other providers to actually apply the evidence that is available to guide what they do in their day-to-day work.
Visual outcomes models can assist with this problem in three ways. First, technical experts can be asked to review a program to see if it is evidence-based. However, comprehensively reviewing a program to help ensure that it is evidence-based can be tricky and time consuming because you first need to work out what's happening in the program. Only then can you go on to figure out whether or not it is using an evidence-based approach. Wading through program documentation and talking to program staff is a time-consuming way to come to understand exactly what a program is doing. On the other hand, if a program is required to produce a comprehensive visual model of what it is planning to do, this provides a very quick and accessible way for an expert reviewer to work out whether or not it is actually employing evidence-based practice.
The second way that visual outcomes models can be used to promote evidence-based practice is peer review between programs. This can work where a number of programs are working on the same topic in different locations. If such programs have visual outcomes models of what they're trying to achieve, they can easily compare these with each other. Differences in approach are immediately apparent and this can form the basis for discussion between them about the reasons they have adopted these different approaches.
There is a third way that visual outcomes models can be used to encourage evidence-based practice. When programs are using visual models as a practical tool to guide their work, evidence-based practice can be hardwired into the visual outcomes model they use right at the start of the process. Foundations/funders can either do this hardwiring themselves or contract this work to others.
If programs go on to use such evidence-based visual outcomes models in their work, this will drive evidence-based practice into the heart of what they're doing. This approach, which makes it easy for programs to be evidence-based, seems more likely to succeed than simply telling busy program staff that they need to go to the evidence-based literature and use it to inform their practice. It is an elegant approach which embeds evidence-based practice into a tool (the visual outcomes model) which programs are already using themselves in their day-to-day work for other reasons. The beauty of it is that programs do not even need to know that they're using evidence-based practice for their practice to be increasingly informed by it.
Fifth, making monitoring and evaluation planning easier
The fifth challenge for foundations/funders is how to make monitoring and evaluation planning easier for themselves and for their grantees. This too can become something of a paper battle: monitoring and evaluation plans are prepared and reviewed by the foundation/funder, the plans are then executed, and monitoring and evaluation reports are prepared. Again, it is possible to make monitoring and evaluation planning more efficient by using visual outcomes models as the basis for such planning. As with visual outcomes modeling itself, the exact format for monitoring and evaluation plans based on a visual outcomes model should be determined by the foundation/funder itself. Experimenting with different formats in this area can also help us work out the best approach.
In this article, I've outlined five ways in which technical visual outcomes models can potentially increase the efficiency and effectiveness of grantmaking for people like Mark Zuckerberg and for other foundations and government funders. In the second part of this series I'll be looking at how they can actually do this in practice.