

When we first developed DoView, we spent a lot of time building a massive help system which you can access within DoView from Help > DoView Help. But these days no one has time to access a help system.

Help for software is increasingly in the form of 2 minute videos on a range of topics. We’ve just set up a video library, DoView TV, which will be our library of really short help videos. We already have ones on: how to format a box; drilling down under a page and page jumps; and how to use clones.

We’ll be adding to these. Please let us know if there are particular subjects you’d like to have a short help video on. And you can go directly to these videos via the short web address doview.com/tv.


_______________________________________________________

Get in touch with us and subscribe to our Outcomes and Strategy Tips Newsletter. Download a trial copy of DoView® software.














We’ve just found out that we’ve been made a Gartner 2017 Cool Vendor. Being recognized as a Gartner Cool Vendor is as good as it gets in the software business. As the Gartner report says, sometimes simplicity and availability define elegance - that’s DoView! It’s so great that what we’ve been trying to do with DoView has now been recognized internationally.

We’ve stuck with our vision to produce a piece of software that’s hyper-simple to use and has been optimized for use in the boardroom or in planning meetings to discuss strategy and to make sure you’ve achieved business alignment. 

We’ll be blogging more on this. Here’s some early media on the Gartner recognition. http://istart.co.nz/nz-news-items/doview-kiwi-software-vendor/.

_______________________________________________________

Get in touch with us and subscribe to our Outcomes and Strategy Tips Newsletter. Download a trial copy of DoView® software.














DoView is used in 50+ countries by a wide range of users including, for instance: planners of a multi-billion overhaul of Denmark’s railway system; private companies doing business planning; government agencies using it for strategic planning; and nonprofits using it for health and other projects.

For some time now we’ve structured the DoView website along the lines of the different functions that DoView can be used for: strategic planning; measuring indicators; evaluating impact etc.   

But given the diversity of our users, this approach presents a problem: specific examples that speak to one customer group often don’t resonate with the other groups.

We’ve been thinking about this at the same time as we’re seeing mounting interest from the private sector in using DoView as an agile tool for Project Portfolio Management. Project Portfolio Management is what you need to do before you do detailed Project Planning. It consists of working out which projects you’re going to need to do to achieve your priorities.

So we’ve now restructured our frontpage so that users immediately get channeled into one of three areas: private users; government users; and nonprofit users.

Once people have clicked down into the right ‘channel’ we can then use the exact language that they speak and tailor the examples we use to ones that will resonate with them.

We would love to hear any feedback you have on this new structure for the website. Please do get in touch if you have any comments. 

_______________________________________________________

Lift your outcomes game by subscribing to the DoView® School of Outcomes Newsletter. Download a trial copy of DoView® software.














We are starting to revamp the DoView website. You will see progressive changes over the next few weeks. We have a new-look front page and will be making it easier for you to access what you need. If you have any comments about the website please get in touch.

_______________________________________________________

Lift your outcomes game by subscribing to the DoView® School of Outcomes Newsletter. Download a trial copy of DoView® software.














A Three Minute Outcomes video by Dr Paul Duignan.

_______________________________________________________

Lift your outcomes game by subscribing to the DoView® School of Outcomes Newsletter. Download a trial copy of DoView® software.












If you’re a user who's enjoying DoView Standard but intrigued by what DoView Pro has to offer, or if you’re wondering which version to purchase, DoView Pro has three additional features:

  1. Syncing functionality
  2. Ability to add corporate colors
  3. Presentation mode which allows handles around boxes to be turned off.

Corporate colors allow you to have an additional color palette incorporating your corporate colors so that the style of your models conforms to your corporate standard. While you can make any element in DoView any color that you like (Right-click > Change Color > Color Palettes > Custom Color), the added benefit of corporate colors is that all of your corporate colors appear in one additional palette.

Presentation mode (View > Presentation Mode) turns off the small green ‘handles’ on the border of boxes and other elements when you select them - it gives a slightly more polished look when presenting models on a data projector.

Now Syncing . . . what’s this feature all about?

It lets you create a master file and collaborate with others when editing a DoView file.  

Syncing can be useful, for instance, if you have a high-level outcomes model and you want different groups within an organization to go through and show how their project boxes link to the boxes in the higher-level model. You can delegate this linking to each group by having a separate folder for them to do their editing in.

How do you use syncing?

First you create a Master DoView file and include one folder in it for each of the people you want to collaborate with. 

You then email them the Master file (often you include their name or work title in the name of the folder so they know which one they should edit).

They can then edit anything within their folder and they email the DoView file back to you. 

What happens next is that you open your own master file in DoView Pro and you also open the edited version(s) of the file(s) they've sent back to you. 

You can then go to their folder in their edited version of the file and do a Right-Click. Select Copy Folder to Sync. You then go to their folder within your master file, do a Right-Click and select Paste and Sync Folder.

This will then update your master file with any edits they have made. The important point is that if they have made connections between their boxes and, say, some boxes in another folder, these changes will also be updated.

If a number of people are amending folders in this way, you should tell them that they can update links between their own folder and the rest of the master file, but not change links involving other folders that others will be editing (otherwise the file might get into a mess).
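
To make the merge behavior concrete, here is a minimal sketch of the semantics just described (illustrative Python only - DoView’s actual implementation is not published, and the folder, box and team names are invented). Pasting and syncing a folder replaces that folder’s contents in your master file and carries across any links its boxes make to boxes elsewhere in the file:

    # Illustrative sketch of the folder-sync semantics described above -
    # not DoView's actual code. A "file" is modeled as folders of boxes
    # plus a set of links between boxes.
    master = {
        "folders": {"Team A": ["A1"], "Team B": ["B1"]},
        "links": set(),                    # (from_box, to_box) pairs
    }
    edited_by_team_a = {
        "folders": {"Team A": ["A1", "A2"], "Team B": ["B1"]},
        "links": {("A2", "B1")},           # Team A linked a new box to B1
    }

    def paste_and_sync_folder(master, edited, folder):
        """Replace one folder's boxes in the master with the edited version
        and carry across any links that involve boxes from that folder."""
        master["folders"][folder] = edited["folders"][folder]
        folder_boxes = set(edited["folders"][folder])
        master["links"] |= {link for link in edited["links"]
                            if link[0] in folder_boxes or link[1] in folder_boxes}

    paste_and_sync_folder(master, edited_by_team_a, "Team A")
    print(master["folders"]["Team A"])  # ['A1', 'A2']
    print(master["links"])              # {('A2', 'B1')} - cross-folder link kept

The point the sketch illustrates is the one above: the merge brings in not just the edited folder’s boxes but also any connections those boxes make to boxes outside the folder.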


Are you a DoView Pro user? Let us know what you think of the syncing functionality.

Back to the DoView Blog.




_______________________________________________________

Lift your outcomes game by subscribing to the DoView® School of Outcomes Newsletter. Download a trial copy of DoView® software.











State of the current discussion about different types of outcomes models

There is ongoing discussion in outcomes and evaluation circles about the various ways in which outcomes models should be structured. My summary of the current state of this discussion is: 

  1. There are a range of different ways of drawing such models which attempt to describe the outcomes and steps being used within interventions. 

  2. Such models can be in: narrative text, tables, visual models or mathematical format.

  3. There are many different names for these models and these names do not necessarily signify models with entirely different rules for modeling (that is, people might draw models structured in different ways but call them by the same name, and people might draw models structured in the same way but call them by different names). However, the term ‘logic model’ is often taken (at least by a number of people) to mean a model structured under headings similar to: inputs, activities, outputs, outcomes.

  4. In addition to logic models, the names that are used for such models include: theories of change, program theories, strategy maps, strategy models, intervention logics, outcomes models, outcomes chains, outcomes hierarchies, program logics, results chains, and log-frames (a very specific type of tabular format used in international development).

  5. There are potential pros and cons associated with different ways of drawing such models and different types of models may be of more, or less, use for different purposes.


Comparison of two types of models 

All of these different types of models can be drawn in DoView® software.* I’ve just written an article comparing the pros and cons of just two of these types of models. The first type is the Traditional Inputs/Activities/Outputs/Outcomes Logic Model. Models of this type are set out under inputs etc. (or similar) headings for either the rows or columns of the model. They often also have a table associated with them to set out the assumptions underpinning the program.

This traditional logic model is compared with the Duignan Multi-Layered Outcomes Model. This is a model drawn according to the 13 Outcomes Modeling Rules developed within outcomes theory.

The Chocolate Chip Cookie workshop exercise used by outcomes and evaluation trainers - in which a model is built for the process of cooking chocolate chip cookies - is used to illustrate and discuss the differences between these two types of modeling. Below is a model for the exercise using the traditional logic model format.


[Image: the Chocolate Chip Cookie exercise drawn in the traditional logic model format]


The second type of model for the same exercise - a Duignan multi-layered model - is shown below. Click through the drill-down webpage version to see the detail set out on the different layers. 


[Image: the Chocolate Chip Cookie exercise drawn as a Duignan multi-layered outcomes model]


The full article talks about the pros and cons of these two different types of modeling.

You can find the full article here on Linkedin Pulse.


_______________________________________________________

Lift your outcomes game by subscribing to the DoView® School of Outcomes Newsletter. Download a trial copy of DoView® software.

* Full disclosure: Dr Paul Duignan is involved in the development of DoView® software.

Dr Paul Duignan.











Many outcomes specialists and evaluators use Hallie Preskill and Darlene Russ-Eft's [1] Chocolate Chip Cookie Exercise when they're running evaluation or outcomes workshops. It's a great exercise because at the end you can reward participants with chocolate chip cookies when they do well!

The same exercise can be used for teaching a range of topics in evaluation and outcomes, including how to draw theories of change, logic models, intervention logics, results chains and outcomes models. We've drawn a simple DoView® Best Practice Template™ which you can use with any group when you're teaching them about outcomes and evaluation topics.

This is just a basic DoView® outcomes model. We use it in a variety of ways in our strategy, outcomes and evaluation training and I hope to post an article in the future with more detail on how it can be used. In the meantime, we thought that we would put up the template as a very simple example of an outcomes model/theory of change/logic model so anyone using DoView® can start using it, and so that I can illustrate a couple of points about drawing outcomes models.

Below is the overview page of the template. It's built using the standard DoView® outcomes modeling rules. This is just the top overview page; check out the clickable webpage version of the model. A PDF of the template is available, as is the DoView® file which created it (download a trial of DoView® software and then edit and play around with the file).

[Image: overview page of the Chocolate Chip Cookie template]

A drill-down page

Below is a drill-down page within the template showing the detail that lies underneath the Well-cooked chocolate chip cookies box. All of the other boxes in the overview can be drilled down into in the same manner in the webpage or DoView® software version here.


[Image: drill-down page under the Well-cooked chocolate chip cookies box]


Including boxes that you cannot control or influence

The first point I'd like to make when looking at this model relates to the rules for drawing outcomes models. The DoView® outcomes modeling rules allow the inclusion of boxes that may not be controllable or even influenceable by a program but which are essential to the satisfactory achievement of outcomes. On the drill-down page below regarding Satisfactory Chocolate Chip Cookie eating experience there is a box - Appropriate company to eat with.

Depending on the nature of the Chocolate Chip Cookie Cooking program, it may be that this particular box is outside of the control, or even the influence, of the program. The program may only be responsible for cooking the cookies. However, the philosophy behind the DoView® approach when you're trying to build a comprehensive outcomes model [2] is that it should include everything that is necessary to achieve final outcomes. Otherwise it is just a partial program-centric model rather than a full world-centric model focused on what is needed to achieve outcomes in the outside world.


[Image: drill-down page for the Satisfactory Chocolate Chip Cookie eating experience box]



This approach ensures that the visual outcomes model is a tool which encourages those using it to have a fully outcomes-focused approach rather than to just limit their attention to only what their program is doing. Of course at the end of the day you also need to be crystal clear about direct accountabilities. This is dealt with in the DoView® approach by including indicators on the model and marking-up which are direct program accountabilities. There is an example available on this point. I will also talk about this in a later article on how you can use the Cookie Cooking exercise in workshops to illustrate a range of points about strategy, outcomes and planning.

Labeling boxes inputs, activities, outputs, outcomes etc.

Another thing to note is that on the Well cooked chocolate chip cookies page above, the boxes on the left could be described as 'inputs' if you were building an outcomes model using the traditional columns approach. You can easily build a traditional column-type model in DoView® software, but there are also advantages in building free-form models such as we have done in this template. I will discuss the pros and cons of the different approaches in more detail at a later stage but just wanted to bring to your attention the way this template model has been formatted. 

Pictures in boxes

Lastly, DoView® software lets you put pictures into the boxes shown in strategy templates and outcomes models. It does take a bit more time to locate pictures for your models. However, the reason this feature was initially included in DoView® software was that people working with low literacy communities and mixed language groups in international development wanted to use it. Having pictures in the boxes in the model meant that they could show people a visual outcome model and discuss it with the audience even when the audience could not read the text in the boxes. However, in the DoView® Best Practice Templates™ we normally release, we do not usually use pictures. For instance, this one on best practice for a mental health service.

Poster version

The Chocolate Chip Cookie Template DoView model has been drawn in the form of an overview page and a series of drill-down pages. This approach makes the model much more accessible and easy to work with than an approach which attempts to draw the model on one single big page. It also means that the model can be represented across different platforms: as a DoView electronic model, as a webpage model, and as a letter-size printed model. However, the different drill-down pages in the model can be combined onto a poster version so that readers can get an overview of the model. Here is the poster version below. 


[Image: poster version of the Chocolate Chip Cookie template]


Initial points on how to use this template in a workshop

I will provide more detail in a later article on how to use this template in workshops; however, in the meantime, one way you could use it would be to get participants to download a trial copy of DoView® and then draw their own outcomes model using the DoView® outcomes modeling rules.

They could then open the DoView® file of this template. They could modify it if the model they built included boxes not in the template and then use the outcomes model for other purposes as part of the workshop. For example, putting on indicators, evaluation questions, traffic-lighting areas for program improvement etc.

This means that your workshop participants would get to play around with drawing outcomes models, plus they would have a go at using a piece of software for building theories of change/logic models/intervention logics/results chains. DoView® is very easy to use and most people in a workshop pick up how to use it within minutes.

Any initial comments?

Do you have any comments on the initial points I've raised above: whether you normally include boxes that are not controllable or influenceable by a program within a model; whether you like outcomes models that use the traditional inputs, outputs etc. columns; and whether you think including pictures in boxes adds value? You might also like to critique the content of the template or have comments on how you might be able to use it in a workshop. Put any comments you have up on the Linkedin Pulse version of this article.

[1] Preskill, H. & Russ-Eft, D. (2005). Building Evaluation Capacity. Sage Publications.

[2] DoView software can be used to model any type of theory of change/outcomes model/intervention logic/logic model/strategy map/results chain/logframe, including those that are in the traditional column format used in logic models.


_______________________________________________________

Lift your outcomes game by subscribing to the DoView School of Outcomes Newsletter. Download a trial copy of DoView.

* Full disclosure: Dr Paul Duignan is involved in the development of DoView software.

Dr Paul Duignan








Ask The Outcomes Guru - 'No outcomes question too simple or too hard'

Question

M&E and capacity building specialist Lesley Williams (Linkedin Profile) asks: How do linear models address the complexity in which we work?

Answer

There are two issues here: the first is the size of the model being built and the second is whether or not a model is ‘linear’. What I’m striving for in my work with visual outcomes modeling is to encourage the development of models that are ‘simple but not simplistic’. This relates to both the size of the model and the number, and type, of interactions (lines and arrows) drawn between boxes within the model. Here’s an example of a multi-page model that I built in the form of a best practice template for a mental health service.

Size of models

In terms of size, a multi-page model such as the mental health one provides enough room to model a fair degree of complexity. I think that a model should include as many sub-pages as are needed to fully describe the program that’s being modeled.  

In contrast to multi-page models, people often draw single-page visual models of programs. In my view, regardless of their usefulness as an overview, a single-page model does not adequately convey the complexity of what’s happening within most programs. Don’t get me wrong, I’m not complaining about people drawing program overviews - it’s just that you should be able to instantly drill down to lower-level pages to get access to as much detail as you need.

Should models be ‘linear’?

While I think that models can be large, we need to be careful about making them too visually complex at this stage in socializing the widespread use of visual modeling in planning and evaluation. 

The first point is that the flow of time is generally viewed as linear and I don’t think there is a problem with this being reflected in our modeling. However, the causal relationships between boxes that are linked within models are by no means linear in the mathematical sense that an increase in one box will be followed by the same amount of increase in another box. All sorts of causal relationships are possible between boxes, with thresholds and cutoff points etc. In addition, there may be feedback loops in operation.
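
To pin down the ‘mathematical sense’ being used here, a purely illustrative contrast (my own notation, not part of outcomes theory): a linear relationship between an upstream box x and a downstream box y would be

    y = \beta x

so that an increase in x produces a proportional increase in y, whereas a threshold relationship might look like

    y = \begin{cases} 0 & \text{if } x < \tau \\ \beta (x - \tau) & \text{if } x \ge \tau \end{cases}

so that nothing happens downstream until x passes the threshold \tau. Relationships of the second kind - along with feedback loops - are exactly what simple left-to-right arrows gloss over.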

Ultimately I would like us to be able to model all of this visually and I am hoping that we will gradually build the appetite for this amongst stakeholders at the same time as we improve our software visualization tools to do so (animation and Virtual Reality would be cool in this regard). At the moment I think that any software you use to draw your model needs to at least let you connect any box with any other box (if you have a multi-page model this needs to work across pages) and let you model feedback loops where one box influences, and is influenced by, another box.

While I use box-linking extensively to show the relationship between projects and higher-level outcomes in ‘line-of-sight’ alignment, at the moment I normally don’t do a lot of linking of boxes within the higher-level modeling of outcomes and the steps leading to them in the models I use when communicating with stakeholders. 

The way I put it to stakeholders who are looking at my models is that there are many links and potential feedback loops between the boxes on the page. I actually put a note to that effect on the bottom of most pages in my models. I then just show a general movement of time with single arrows pointing from left to right (as in the mental health example). I use a gray filled arrow to signify a ‘then’ statement and a non-filled arrow to represent an ‘and’ statement and leave it at that.

First rule: ‘Don’t scare stakeholders’

As I said before, I think that further down the track we will get better ways of visually modeling complexity which will be less overwhelming. Equally importantly, stakeholders will have more of an appetite for more complex modeling. At the moment there is a trade-off between modeling complexity and turning off stakeholders.

The cause of visual modeling can potentially be set back by years if just a few high-level stakeholders don’t get our models. This happened in the case of the diagram shown to General McChrystal in Afghanistan, where he said something along the lines of: it would be easier to win the war than to understand this diagram!

I talk about this example in this Fulbright Seminar.

Keep it simple for stakeholders but go to town in backroom work

Here is an example of an outcomes model with links and one without. I think that the second model is more likely to be accepted by stakeholders at the present time. However, this does not mean that the more complex model could not be used as a back-room analytical tool.

[Image: outcomes model with linking lines and arrows shown]


[Image: the same outcomes model with just left-to-right arrows]



Conclusion

In conclusion, I think that we need to allow models to be as large as they need to be. There will be a linear aspect arising from the fact that time passes in a linear fashion. But there will be non-linear aspects from thresholds, cutoffs and feedback loops. While we can try to model as much complexity as possible in back-room work, there is an argument for trying to keep models reasonably accessible in stakeholder-facing work until we increase stakeholders’ appetite for visually modeling complexity and we get better at it from the software point of view.

_______________________

If you would like to discuss this post you can do so on the Linkedin Pulse version of it.

Lift your outcomes game by subscribing to the DoView School of Outcomes Newsletter

Ask The Outcomes Guru any question about outcomes.

Full disclosure: Dr Paul Duignan is involved in the development of DoView software.

Dr Paul Duignan












TheOutcomesGuru.com

We’ve just launched The Outcomes Guru (TheOutcomesGuru.com) service and hope to answer any questions people have about strategy, outcomes, performance measurement, evaluation, impact measurement etc.

You can see answers to questions and discuss them in the DoView Linkedin Group

People tend to have lots of questions about these areas and Dr Paul Duignan has extensive experience dealing with these issues in real-world consulting situations across all sectors.

There’s no question too simple or too complex for you to ask.

Ask questions by putting them up on the DoView Linkedin Group or by contacting us directly. Your questions can be anonymous or have your name associated with them.

_______________________________________________________

Lift your outcomes game by subscribing to the DoView School of Outcomes Newsletter.















There are millions of mental health services around the world. Can we make it easier for those funding, setting up, running, or reviewing such programs to share best practice? Is it possible to represent and quickly communicate the essential ingredients required in any effective Mental Health Service?

The Mental Health Service DoView® Best Practice Template™ has been designed to summarize the key elements needed in such programs within an accessible visual model. It can be used for a range of planning and other purposes, which include: contracting for such services; planning them; determining priorities; monitoring service implementation; improving service performance; and identifying how to measure impact. There is more information available on how the template can be used for these functions.

DoView® Best Practice Templates™ are presented in the form of visual models of the outcomes that a program is trying to achieve and how it is planned to achieve them. The templates are built according to a specific set of rules to ensure that they can be used for a range of program and organizational purposes. They are edited in DoView® software (currently used in over 50 countries).  An overview page of the mental health service template is shown below. It sets out the high-level areas in which the service has to function well. Beneath each of the colored boxes, more detailed drill-down pages elaborate the content of each summary box.  

The DoView® visual best practice model covers areas such as: having a well run service; well trained, supported and effective staff; sufficient and effective partnerships with other services; and appropriate interventions undertaken with the client.


[Image: overview page of the Mental Health Service DoView® Best Practice Template™]


The screenshot below shows just one of the drill-down pages - Appropriate interventions undertaken with client. It sets out all of the things that need to happen for this to occur.

A clickable webpage version of the DoView® best practice model is available. Within the webpage version, if you click on the boxes with triangles in them you will be able to drill down to the more detailed pages.


[Image: Appropriate interventions undertaken with client drill-down page]


In this template, the list of final outcomes on the Good mental health and related outcome for clients drill-down page has been specified as a list from the Carer and User Expectations of Service (CUES) mental health outcomes tool. On the left of the outcome list from the CUES is a list of the CUES statements translated into outcomes that are controllable by mental health workers (and therefore more appropriate for use as direct accountabilities).

For example, the CUES outcome of enough money to meet basic needs is translated into quality budget advice and benefit application assistance. However, anyone editing and using the DoView® mental health service strategy template could easily modify the outcomes to a set they liked more, either based on a different mental health outcomes tool, or using a list that the service has developed itself in consultation with its clients and stakeholders.


[Image: outcomes drill-down page of the Mental Health Service DoView® template]


In addition to being represented within DoView® software, as a clickable webpage model and as a printed letter-sized model, a DoView® Best Practice Template™ can also be represented in a poster version, as can be seen below. This is a powerful way to quickly show staff, funders and stakeholders the overall roadmap of the service's outcomes and how it is going to get to them.



[Image: poster version of the Mental Health Service DoView® template]


If you have experience with mental health services you might have some views on whether this DoView® Best Practice Template™ captures the essential ingredients needed in such services. Feel free to comment below if you think that new boxes need to be added or any need to be changed. Also, if you're interested, download DoView® software and the template to edit it, play around with it and adapt it to your particular program.

_______________________________________________________

Lift your outcomes game by subscribing to the DoView School of Outcomes Newsletter.

* Full disclosure: Dr Paul Duignan is involved in the development of DoView software.

Dr Paul Duignan










You can tag major elements in DoView (boxes and the elements under the advanced menu - indicators, questions and items). 

You can use tags to categorize these elements in any way you like. For instance, you might like to categorize your boxes into outputs and outcomes and indicators into current and proposed.

A '#' is used at the start of tags as it has now become the universal symbol for a tag thanks to its use on Twitter.

You can have as many tags as you like on any one element. 
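
As a conceptual sketch of what this gives you (illustrative Python - this is not DoView's internal data model, and the element names are invented), each element carries a set of tags and you can pull out every element with a given tag:

    # Conceptual sketch only - not DoView's internal data model.
    # Each element carries a set of tags; tags start with '#'.
    elements = {
        "Box: Referrals made":       {"#output", "#proposed"},
        "Box: Client wellbeing":     {"#outcome"},
        "Indicator: Referral count": {"#indicator", "#current"},
    }

    def elements_with_tag(tag):
        """Return the names of all elements carrying the given tag."""
        return [name for name, tags in elements.items() if tag in tags]

    print(elements_with_tag("#outcome"))  # ['Box: Client wellbeing']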

Tags are entered in the box that appears at the bottom right of the screen beneath the page list. You only see the box when you click on an element that can have tags. You can see what the box looks like in the screenshot below. To enter additional tags, put a comma after the first tag (e.g. after 'This is a tag'). 




Essential ingredients of a children or young people in care program - A DoView Strategy Template

Dr Paul Duignan on Outcomes: 

Anyone working in the children and young people in care sector knows that programs for this population group need to involve careful planning, risk management and supervision. Can we make it easier for those working on such programs to share best practice? Is it possible to represent and quickly communicate the essential ingredients of a children or young people in care program?

The Children and Young People in Care DoView® Best Practice Template™ is an attempt to summarize the key ingredients of such programs within an accessible visual model which can be used for a range of planning and other purposes. These purposes include: contracting for such programs; planning them; determining priorities; monitoring program implementation; improving program performance; and working out how to measure impact. There is more information available on how the template can be used for these functions.

A DoView® Best Practice Template™ is presented in the form of a visual model of what it is that a program is trying to achieve and how it plans to achieve it. It can be edited in DoView® Software. The overview page below shows the high-level outcomes for a children or young people in care program and what needs to happen in order to achieve these outcomes. Beneath each of the colored boxes, more detailed drill-down pages elaborate the content of each summary box.

The DoView® visual model covers areas such as: having a well run provider arranging placement for the child or young person; making sure that caseworkers are experienced and effective; vetting potential caregivers; making sure that the child or young person is cared for properly; caring for the caregivers; keeping in touch with the natural family where possible; ensuring that the child or young person thrives in all of the important areas of life; and ensuring that there is appropriate cultural work undertaken with the child or young person.  


[Image: children-in-care-doview-caseworker]


The screenshot below shows just one of the drill-down pages - ensuring that caseworkers are experienced and effective. It sets out all of the things that should happen for this to occur.

A clickable webpage version of the DoView® visual model is available. Click on the boxes with triangles in them to see the drill-down pages.


[Image: children-in-care-doview-overview]

If you work in children or young people in care programs, or have some experience with this type of program, you might have some views on whether this DoView® Best Practice Template™ captures the essential ingredients needed in such programs. Feel free to comment below if you think that new boxes need to be added or any need to be changed. Also, if you're interested, download DoView® Software and the template to edit it, play around with it and adapt it to your particular program.

_______________________________________________________

If you would like to discuss this post you can do so on the Linkedin Pulse version of it.

Lift your outcomes game by subscribing to the DoView School of Outcomes Newsletter.

* Full disclosure: Dr Paul Duignan is involved in the development of DoView software.

Dr Paul Duignan















In North Carolina, Leigh Hayden, a lead planner/evaluator for a North Carolina government agency, is using DoView to keep things in mind. (Sorry for the obscure pun. James Taylor once wrote a song called Carolina in My Mind.)

Leigh writes "As a planner/evaluator for a North Carolina government agency, DoView allows me to easily construct logic models for my organization, its units and individual programs. I am able to link potential projects, evaluation questions and performance indicators for agency prioritization and planning. 

DoView helps our agency and partners conceptually understand the resources and actions necessary to meet goals. It aids long-term planning and visually communicates desired outcomes and agency progress."

Information on using DoView in these ways is available for strategic planning and for monitoring and evaluation.  

Download a DoView trial now.

Back to the DoView Blog.





Chicago Public Schools was experimenting with the use of Small Learning Communities (SLC) within several high-needs schools. In the SLC approach, small groups of students are aligned with small groups of core teachers throughout the four years of high school. The idea is for deeper relationships to develop between, and among, students and teachers. It is believed that this can enhance continuity of instruction and increase teachers’ interest in students’ futures.

A DoView outcomes model (logic model) was used in the project to identify its outcomes and to plan its evaluation. 

For those with an interest in education, or in logic modeling, the DoView outcomes model is described here.

Download a DoView trial now.

Back to the DoView Blog




Dr Paul Duignan on Outcomes: 

People often try to structure outcomes around their current organizational structure. In fact, it’s better to base outcomes on what you want to change in the real world. Organizational structures come and go; when you’re specifying your outcomes, what you should be modeling is the real outside world.

Sometimes management, or others, put arbitrary limits on the number of outcomes that should be produced. This violates the principle that you should be modeling the real world. This is set out in the Real World Outcomes Principle within outcomes theory. Check out the principle here, and for a longer discussion of some of the implications, look here.

Get tips on outcomes by signing up to the DoView School of Outcomes Tips Newsletter. Follow me on Twitter.

Dr Paul Duignan










A muddle in the middle - dealing with the outputs-outcomes connection

Dr Paul Duignan on Outcomes: 

Outcomes Agony Aunt

One of the cool things about being an outcomes theorist is when people tell you about their outcomes problems. It's a bit like being an Agony Aunt columnist but you're providing advice on outcomes rather than people's romantic problems. The moment you mention you're involved in measuring outcomes people start telling you about their difficulties. You can usually identify how their problem is arising - it will be because an outcomes theory principle is being violated in some way. Using the conceptual framework provided by outcomes theory makes it easy to diagnose and provide fixes for most of the common outcomes problems people describe to me.

 
A touch of theory - levels within outcomes models

To make it easier to talk about the particular problem I want to discuss in this article, we need a little bit of theory. From an outcomes theory point of view, when people are talking about developing lists of outcomes, whether they know it or not, they're talking about setting up an outcomes system. An outcomes system is any system that is attempting to do any of the following: identify; prioritize; improve; measure; attribute; contract; delegate or hold parties to account for outcomes of any type. An important part of any outcomes system is an outcomes model. Conceptually an outcomes model can be thought of as a set of boxes within a visual model showing high-level outcomes and all of the steps it's believed are needed to achieve them.

Outcomes models have three major levels. At the top is where we find boxes that are often described as goals, results or outcomes. At the bottom is where we find boxes often referred to as outputs, activities or projects. It's the area in the middle that I want to talk about here. It's the job of the middle boxes within an outcomes model to detail the logical connection between the boxes at the bottom and the ones at the top of the model; this is often called a 'theory of change'. The need to do this in any outcomes modeling is captured in the Detailing the Middle outcomes theory principle. In cases where the logic of this connection has not been spelt out properly, we're confronting what's called the 'problem of the missing middle'. The function of showing the logical connection between the bottom and the top levels of an outcomes model is central to outcomes modeling - it's the reason that outcomes models are sometimes referred to as logic models.

Effect of constraining the number of allowable high-level outcomes

Recently I was talking to a colleague who had several problems that can easily be formulated in terms of the levels within an outcomes model. As is often the case, my colleague had first been asked to develop a set of high-level outcomes for the work that they're involved in. But, as also often happens, there were constraints placed on how these 'high-level outcomes' were to be itemized and described. My colleague had been instructed that there should only be a limited number of high-level outcomes and for what seemed in essence presentational reasons, they were not allowed to spell out much detail beneath them.

These types of constraints, on how many high-level outcomes there should be and how detailed they should be allowed to be, are often imposed by people setting up outcomes systems. However, they have considerable consequences when it comes to working with outcomes. The result of applying these constraints is that the set of high-level outcomes that is produced will inevitably only describe the very highest levels of the relevant outcomes model.

Placing constraints that effectively push the high-level outcomes that are being formulated right to the top of an outcomes model has implications for the 'middle' of the model. In essence, it expands the size of the remaining 'gap' - the 'middle' that needs to be bridged between the bottom (outputs, projects or activities) and the top of the model (results, outcomes or goals). This occurs because the constraints have forced the outcomes that are being specified right up to the top of the model. The gap will be wider than in a case where one is allowed to have as many high-level outcomes as one likes and/or where you're allowed to include as much detail as you like. So the consequence of imposing the constraints is that more detailing will have to take place in the middle. This is because the middle is larger than it would have been if a less restrictive approach had been taken to formulating high-level outcomes.

 

What should determine the number of outcomes?

The second issue that my colleague was facing was that the area of activity for which outcomes were being developed was a large one undertaken by a single administrative grouping within their organization. This single administrative grouping included a significant number of sub-groups involved in somewhat varied activities. My colleague had been instructed to develop a small set of outcomes, but diversity in what a set of sub-groups within a wider administrative group is trying to achieve makes it hard to produce a small set of outcomes which can adequately encompass the work of the group as a whole.

The Real World Outcomes Principle within outcomes theory states that the number of outcomes struck for any type of activity should be dictated by what is being attempted in the real world, not by the often arbitrary way in which sub-groups that are undertaking different activities have been clustered within an organizational structure. To put it simply, if people are doing a lot of different stuff they're going to need to have sufficient outcomes to cover all of it. In such situations it's a mistake to blindly force people to develop outcome sets which only have a limited number of outcomes.


The constraints that had been placed on my colleague in doing their outcomes work translated into the two problems they talked to me about. The first one was that some of the people from some of the sub-groups within the wider administrative group for which outcomes were being prepared could not 'see' what they were doing within the small set of high-level outcomes that were being developed. The inevitable result of this was considerable unfruitful argument about the wording of the small number of high-level outcomes my colleague was producing. Given the diversity in what the sub-groups were doing, it is unlikely that any wordings could be arrived at which would lead to all the sub-group members being satisfied that what they were doing was adequately represented in the outcome set.

 

Detailing the missing middle one activity at a time

The second problem that arose from the constraints imposed in this case was regarding detailing the 'missing middle' between the high-level outcomes and the bottom of the model. As I've said, requiring that there only be a small number of high-level outcomes inevitably results in a large 'gap' between the few specified high-level outcomes and the lower-level outputs, activities or projects which are being undertaken. Obviously this 'missing middle' in the outcomes model needs to be detailed in some way in order to comprehensively set out the theory of change that is being attempted. In my colleague's instance a common strategy was being employed to do this. The strategy was to get them to write a block of text for each activity being undertaken within the group. In each instance this block of text was meant to set out the 'rationale' for why the particular activity was being undertaken. The purpose of a rationale of this type is to show how it's believed that each of the activities being done is going to contribute to high-level outcomes. The idea in this approach is for the missing middle to be separately detailed for each of the activities in turn.

My colleague was complaining that they thought that detailing all of the activities in this way was repetitive and inefficient. They were well aware that despite how much work they put in, very few people would ever read the screeds of text they were having to write. Developing this outcomes set was an administrative task that they had to do on top of their normal work duties and it was taking up a great deal of time that could have been more usefully employed.

In addition to the inefficiency of this approach, even when completed, the whole collection of textual rationale statements for each activity would not provide a particularly efficient way for anyone to rapidly overview how well the bottom level of the model is connected to the top - the overall theory of change for the work that the group as a whole was doing.

Using a visual outcomes model to detail the missing middle

The approach being adopted in my colleague's case is a common approach to detailing the middle of an outcomes model. However outcomes theory would suggest that the problem of the missing middle be dealt with in a somewhat different way. First, it would suggest that a comprehensive visual outcomes model should be built for the work of the group. This would be built according to the rules for building outcomes models and would not suffer from the constraint around the number of high-level outcomes and limited detailing of top level outcomes that my colleague was having to confront.  Building an outcomes model would allow the identification of as many high-level outcomes as are needed to represent the work that is being done by all of the sub-groups within the wider group. It would also provide as much space as is needed within the visual model to document the full theory of change set out in the steps in the model linking the bottom with the top levels of the model.

In effect this approach would be developing a standardized model of the way in which activities were believed to be influencing high-level outcomes. This allows one to articulate the missing middle once and for all and not have to do it individually many different times for each separate activity. Common pathways for different activities only need to be detailed once within the visual model when using this approach. In contrast, using the individual blocks of text approach to articulating individual rationales for each activity ends up with duplication of all common pathways within different rationale statements.

 

Showing an activity's logic within a wider context and checking for alignment

Of course, even when using a visual modeling approach and you've developed a common model of the logic of what you're trying to do, you still need to be able to identify the particular pathway of an individual activity. If you're using suitable outcomes software (e.g. DoView*), this can easily be done within a visual outcomes model by including a box for each activity and linking it to each box within the higher levels of the model that it is focused on.

This makes showing the individual theory of change for any particular activity easy. In outcomes software, clicking on an activity box will show up all of the higher-level boxes within the model which it is linked to. This will show exactly what the activity is attempting to influence within the outcomes model and spell out the particular pathway for that specific activity. This has the advantage of showing the theory of change for the particular activity not just on its own, but also within the larger context of the whole model.

Once the individual pathways for different activities have been mapped onto an outcomes model, this opens up various possibilities for further analysis which are not available through the text-based approach to 'articulating the middle'. Again, with suitable outcomes software which counts the number of activity boxes mapping onto each box within the higher levels of the outcomes model, you can quickly identify gaps and overlaps in the mix of activities that are being undertaken. This is done by using a visual 'line-of-sight' alignment approach where you can see how many activities are, or are not, linked to which boxes within the higher-levels of the model. (For an example of line-of-sight alignment see the screenshots at the end of the page here).

It's not clear how those wanting a block of text rationale to be separately developed for each activity believe that one can efficiently check for alignment between activities and higher-level outcomes. Presumably the idea is that someone reads all of the statements setting out the rationale for each activity and thinks about whether there are any gaps or overlaps. However, the 'cognitive load' of doing this - the amount of mental energy required - is high. As soon as there are a number of activities involved, it is very hard for any reader to remember what was stated in each of the rationale statements and so they cannot make a rigorous judgment regarding whether there are, or are not, any gaps or overlaps.

In conclusion, as I said at the start, when you're being an outcomes Agony Aunt and listening to people's outcomes problems, outcomes theory makes it very easy to diagnose their problems. It is also easy to suggest ways in which their problems could be overcome by applying outcomes theory principles to the way in which they are working with outcomes in their particular setting.  

* Full disclosure: Dr Paul Duignan is involved in the development of DoView software.

If you want to comment on this article you can do so on the Linkedin version of it.

Get tips on outcomes by signing up to the DoView School of Outcomes Tips Newsletter. Follow me on Twitter.

Dr Paul Duignan












It has always been part of our DoView philosophy to encourage groups of users to swap DoView files as they build and improve their strategy maps, outcomes models, theories of change and other visual models. 

To do this, when we released the Mac version of DoView, we put a lot of effort into making sure that files saved in the PC version of DoView can also be opened and edited in the Mac version.

Sometimes the Mac and PC versions of a piece of software can’t interchange files, or there are formatting problems when you open a file created on the other operating system.

With DoView files you can happily edit one on a PC and email it to a colleague who has a Mac. They can edit it on their Mac and send it back to you. All the while you can be assured that it will keep all of the formatting you have done.

Download a DoView trial now.

Back to the DoView Blog





Alignment is the most important strategic task

There are many situations in organizations where you need to prove that what you’re doing is tightly focused on your outcomes. For instance, organizational alignment is all about this - you need to show that your projects or activities are focused on your priorities. In educational and training settings you need to prove that what you’re teaching is focused on curriculum outcomes. It can be argued that alignment - correctly targeting the use of resources - is the most important thing in organizational strategy.

It’s hard to quickly prove that you’ve achieved alignment and this is why DoView’s Counting Links Function is so revolutionary. It lets you show alignment quickly - by showing it visually.

How to count links in DoView

First simply link boxes - Left-Click on the blue diamond in the middle of a box you’ve selected and hold down the mouse button while you pull the connecting arrow out and over the box you want to connect to. Connect as many boxes as you like in this way, for instance a set of project or activity boxes with a set of outcome boxes.

Remember that if you have selected the menu item View > Show Off Page Selected Step, you can make links between boxes on different pages within DoView. This function is unique to DoView. It means you can build multi-page models that are easy to read and click through, while still linking boxes even if they appear on different pages within your model.

Viewing the number of links

Now just click on Count Links (it’s on the right in the Tool Bar at the top of the screen). A little number will appear - at the top and bottom of each box if View > Model Direction > Bottom-To-Top is selected. Alternatively, the numbers will appear on the right and left of boxes if Left-To-Right model direction is selected.

Checking alignment is now easy because you can see how many boxes (e.g. projects or activities) are focused on each priority outcome. You can also see if there are boxes which have a lot of projects or activities focused on them even though they are not priorities. This lets you identify gaps and overlaps in allocating your effort.

Seeing exactly which boxes are linked

If you do a Right-Click > This is The Result Of on an outcome box you will see a listing of all of the project or activity boxes linking to it. Alternatively use Right-Click > This Makes Happen on a project or activity box to show a list of all of the outcomes boxes that that project or activity is focused on. You can also get a PDF listing of all of the links using the menu item File > Print As PDF and selecting the checkbox Include Details (In a Separate File).
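
To make the mechanics concrete, here is a minimal sketch of what Count Links and the This is The Result Of listing are doing conceptually (illustrative Python with invented box names - DoView does all of this for you on screen):

    from collections import Counter

    # Invented example: one (project box, outcome box) pair per arrow drawn.
    links = [
        ("Project A", "Outcome 1"),
        ("Project B", "Outcome 1"),
        ("Project C", "Outcome 2"),
    ]
    outcomes = ["Outcome 1", "Outcome 2", "Outcome 3"]

    # Count Links: tally how many arrows end at each outcome box.
    counts = Counter(target for _, target in links)
    for outcome in outcomes:
        n = counts[outcome]
        note = "  <- gap: no project is focused on this" if n == 0 else ""
        print(f"{outcome}: {n} linked projects{note}")

    # 'This is The Result Of': list every project box linking to an outcome.
    def result_of(outcome):
        return [source for source, target in links if target == outcome]

    print(result_of("Outcome 1"))  # ['Project A', 'Project B']

A zero count on a priority outcome is a gap; a large count on a box that is not a priority signals an overlap in how effort is being allocated.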

Look at the page here to see how it’s done in practice in strategic planning.  

Check out how DoView can be used for curriculum mapping.

Affordable Enterprise Portfolio Management

Enterprise portfolio management is a fancy name for this type of organizational alignment work. You can pay thousands for complex enterprise portfolio management software and platforms, or you can dip your toes in the water and prototype this approach for a few years using very affordable DoView software. Remember that the DoView approach is scalable - you can link hundreds of project and outcome boxes.

If you have any questions about this approach, please post them as comments to this article in the DoView Linkedin Group.


Download a DoView trial now.

Back to the DoView Blog








Guest Post from Avril Blamey:

I’m a freelance planning and evaluation consultant based in Scotland and have been using DoView since 2004, with most of my clients working across areas such as: health improvement; employability; leisure; social care; local government; and energy.

I use the models developed in DoView both to support outcome-focused planning and to develop bespoke monitoring and evaluation solutions and frameworks for clients. Lots of my work involves theory-driven evaluation approaches and so I tend to build logic models and theories of change as a first step with most clients.

DoView is intuitive to use but the most important thing for me is that it allows me to embed more detailed models and maps under an organization’s strategic model.  That way you can share the simple strategic picture but also capture the detail in terms of underlying programme theories and assumptions for future testing/evaluation and so strengthen the programme as it develops. Having both strategic and more detailed models aids the identification of success criteria and indicators that are essential for effective monitoring.

The DoView team are great at answering questions about IT issues and about building and linking more complex models – despite the time differences!

Avril Blamey, Avril Blamey and Associates. Contact Avril at Avrilblamey.co.uk.


______________________________

Download a DoView trial now

Back to the DoView Blog





Evaluating stuff is hard and our mission is to make it easier for people to plan evaluations using our visual approach. We had a request regarding how you could best use DoView to evaluate a video campaign. 

We’ve prepared a resource showing how to do this. It’s at DoView.com/u/evaluate-video-campaign.html.

If you have to evaluate anything like this, have a play around with it.

As we say around here, Happy DoViewing. 

Melissa Bethwaite

Download a DoView trial now.

Back to the DoView Blog



[Image: two mountains on New Zealand's central plateau]

Dr Paul Duignan on Outcomes: 

High up on the central plateau

We had crossed the snow covered South Crater and we were standing beneath the flank of a steeply rising ridge covered in snow. It was a perfect day in the mountains of the New Zealand central plateau. We only needed to climb up the final ridge to Red Crater and we would easily complete our crossing. (For those who want to know, the ridge I am talking about lies between the two mountains in the photo above).

The four of us looked at each other and decided to continue up the ridge despite the snow. There were hot pools on the other side of the mountain near the hut where we were planning to stay. The pools have since been destroyed by an eruption, but at the time of our trip you could still access them. There's no doubt that our decision-making was influenced by the thought of sinking into the warm water of the pools and watching the sunset if we made it to the other side of the mountain.

We climbed up the flank of the ridge and then started making our way up it towards the Red Crater. We didn't have crampons as we'd not expected this much snow and at the bottom of the ridge the snow was soft. Our party included an inexperienced 14-year-old - the nephew of one of my hiking companions. At one stage I stopped and took a drink from my water bottle. When I was about to put it back into my pack, I dropped it. It slithered off down the steep snow slope. We all just stood and watched in silence.

The seriousness of the situation dawned on us. If the 14-year-old slipped - well, if any of us slipped - would we go the way of the water bottle? Fortunately, we were already close to the top of the ridge and at that moment a climber appeared on the ridge-line carrying an ice ax. I scrambled up to him, borrowed his ax, climbed back down and cut steps for the 14-year-old, who was now tiring. We all eventually made it to Red Crater in one piece.

It was sunset now and we’d made it to our destination hut. I leaned back in the hot pools, surrounded by snow. In this perfectly relaxed setting I started reflecting on our decision to climb the ridge without crampons and ice axes.

Were we right or wrong? It was certainly true that we'd been 'successful'. We had got over the track as we'd planned and we were now enjoying the fruits of our adventurousness here in the hot pools. If we'd turned back we would have had none of this.

However, were we right to go on as we had done without crampons and ice axes and with an inexperienced 14-year-old in our party? The answer is certainly no. It was a mistake, regardless of the fact that the outcome turned out to be positive. We'd taken too much of a risk and we could all have been sitting around a hospital bed at this moment, or worse. The thought of the hot pools had seduced us and we'd carried on regardless of the risk.  

 

To the US Federal Reserve

It's a long way from a ridge on a mountain in New Zealand’s central plateau to a meeting of the US Federal Reserve trying to decide whether to raise interest rates or not. Recently the Fed made the call to raise them.

Economist Paul Krugman criticized the decision because he believes that current economic indicators don’t justify an interest rate hike. In his criticism of the Fed, entitled Fed Follies, he touched on this issue of whether success on its own is enough. He writes: '. . . it will be quite some time before we have evidence about whether the Fed's judgment of the economy's trajectory was right. (I think this was an ex ante mistake even if it turns out OK ex post . . .)'.

Even after cutting through the economic jargon of ex ante (before the fact) and ex post (after the fact), what Krugman's saying here seems somewhat paradoxical. Even if the Fed's judgment is 'right', it's still a 'mistake', and presumably 'wrong' in some sense for them to decide to raise interest rates at this point in time.

How can we make sense of Krugman's comments and translate them into a generalizable principle which speaks to a broad range of other decision-making situations? I'm interested in identifying such principles so as to guide strategic thinking and decision-making on any topic.  

 

Outcomes theory and the When success is not enough principle

I’m an outcomes theorist and my job is, obviously, to study outcomes theory. Outcomes theory attempts to formally state the set of decision-making and other principles we should use when we're trying to take action in any area to achieve outcomes. These principles need to be general enough to be relevant whether you’re climbing a mountain or trying to manage an economy. The theory’s focus is on how we identify, prioritize, measure, intervene in, attribute and hold people to account for outcomes of any kind in any field.

The operative outcomes theory principle here is called When decisions are not always vindicated by their outcomes, or, more colloquially, the When success is not enough principle.

This principle clarifies the issue of whether a decision can be vindicated (i.e. 'made right') by whether its desired outcomes are achieved or not. There are really two definitions of 'right' in operation in such situations. The first is whether a decision was 'right' at the time it was made - whether it took into account all of the information and factors that should have been taken into account. Our decision did not take into account the risk we were exposing ourselves to and therefore it was 'wrong' at the time. Krugman's also arguing that the Fed's recent decision is similarly wrong regardless of whether it turns out to be 'right' in the end. Based on his reading of the current economic indicators, he thinks that interest rates should not be raised. Therefore he's arguing that the Fed's decision to raise them at the current time is not consistent with this data and is therefore 'wrong' regardless of the ultimate outcome.

Outcomes theory’s When success is not enough principle states that in a number of situations (specifically risk management situations), the 'rightness' or 'wrongness' of a decision should, in the first instance, only be based on the information the decision-maker has at the time they make the decision. A decision which might have been regarded as 'wrong' on the basis of what was known at the time cannot suddenly change to 'right' on the basis of later information regarding whether desired outcomes were achieved.  

 

The principle in action – never leave safety to luck

The When success is not enough principle is most appropriate in situations where the cost of a one-off failure is so high that you want to make sure that people take very serious steps to avoid it. You don't want to reward them just because they got lucky.

The principle lies behind effective risk management systems, for instance,  in airline safety. Pilots are punished for deviating from routines and procedures that they're required to follow when flying. It doesn't matter if no crash results from their failure to follow the correct procedure. The point is that at the time when they were making their decisions they should have followed the standard safety procedures. Their decision to deviate from safe practice was still wrong regardless of the outcome and in the airline business they are disciplined accordingly. The When success is not enough principle highlights the fact that relying on luck and ‘things just working out’ is not enough in situations where major risks are being managed.

‘Seeing’ the principle in action by using a visual model

Looked at in the form of a visual outcomes model (an approach used extensively in outcomes theory), this principle is all about the level at which accountability is set. I've illustrated this with a DoView outcomes model of our adventure in the mountains. The DoView model shows the steps involved in us having a safe hiking trip. I've traffic-lighted the DoView with how I think we performed on the various steps - red is hopeless and green is good. Note that I’ve traffic-lighted Appropriate gear for the conditions likely to be encountered green. We did have sufficient gear if we had kept our hike at an appropriate technical level. Our errors were all around the risk management boxes shown within the DoView model below.


[DoView model: When success is not enough]


The point is that our level of accountability was not at the box at the extreme right of the outcomes model - Planned trip completed. In this case, the appropriate level to set accountability is at the Safe hiking trip box, where safety is operationally defined in terms of following the right risk management procedures. Because we failed at this level (the number of red traffic-lights leading up to this box clearly shows our failure) we failed in our accountability, despite the green traffic-light on the Planned trip completed box.

In business decision-making, a visual model like this is a very powerful way of clarifying the level at which parties should be held accountable. In practice, people sometimes print out a PDF of the relevant DoView model showing the level of accountability and attach it to their contracts with providers. This ensures that everyone is clear about the level at which accountability kicks in and it avoids any confusion on the part of any of the parties involved. Without a visual model there often can be a lack of clarity about the level at which accountability is being set. 
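To make the idea concrete, here's a minimal sketch in Python of judging accountability at the Safe hiking trip box rather than at the Planned trip completed box. This is not a DoView feature, just a toy illustration, and the two middle step names and all of the statuses are invented:

    # Toy sketch of the When success is not enough principle. The two middle
    # step names and all statuses are invented, not from a real DoView file.
    trip_model = {
        "Appropriate gear for the conditions likely to be encountered": "green",
        "Route matched to the party's experience": "red",
        "Willingness to turn back when risk is too high": "red",
        "Safe hiking trip": "red",
        "Planned trip completed": "green",
    }

    ACCOUNTABILITY_BOX = "Safe hiking trip"

    def accountability_met(model, accountability_box):
        """Judge success only at the box where accountability is set,
        never at the final outcome boxes further to the right."""
        return model[accountability_box] == "green"

    print(accountability_met(trip_model, ACCOUNTABILITY_BOX))  # False - we failed,
    # even though trip_model["Planned trip completed"] is "green" (we got lucky).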

If you’re interested in downloading this model to play around with it, or to build your own model, get a DoView free trial here and get the DoView file of the model here. You can find the rules for drawing this type of outcomes model here.

It should be noted that the When success is not enough principle is not the only principle that is concerned with the relationship between acting and the final outcomes that an action is seeking to achieve. The final outcomes of actions can be relevant in the wider context of working out what level of risk management is required over a number of similar decision-making situations.

In addition, other principles come into play in situations where people are willing to tolerate some failures in decision-making within an overall larger system - for instance, in evolutionary, experimental, or entrepreneurial systems, which consist of many attempts to solve problems. These attempts are likely to include some failures as well as some successes - that is how such systems are set up to operate. In such situations, final outcomes and 'success' are more relevant to incentivization than in the type of risk management cases where the When success is not enough principle is more relevant. I'll be discussing these other outcomes theory principles at a later stage.

As with all outcomes theory principles, the When success is not enough principle's purpose is to help us clarify the conceptual basis of how we think about, and work with, outcomes. The more we can identify and communicate general principles across different topics, the more sophisticated we will be at setting up the strategy and accountability systems that we rely upon throughout society.

If you want to comment on this article you can do so on the Linkedin version of the article.

Get tips on outcomes by signing up to the DoView School of Outcomes Tips Newsletter. 

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Check out our Linkedin DoView Community of Practice.

Back to the DoView Blog.  


Dr Paul Duignan on Outcomes: 

The Zuckerbergs' problem

Last week Mark Zuckerberg announced he wants to give away 99% of his Facebook shares. It seems easy: he just needs to employ grantmaking staff to give the money to grantees who will use it to do good stuff.

However there are several technical problems that will face Mark Zuckerberg's grantmakers regardless of whether they're working through a foundation or through the limited liability company structure that he's setting up in this case to distribute his funding. These are the same problems that face all foundations or funders, whether they are working in philanthropy or in other nonprofit areas like government. The underlying problem is how to identify exactly what a foundation/funder wants to achieve and how it wants to achieve it. It then needs to have a mechanism to make sure that its grantees' outcomes and priorities are aligned with what it wants and that they are working in the most effective and efficient way they can to achieve these.

At the moment the way this is usually done is through foundations/funders preparing screeds of guidance about what they want and grantees preparing long applications setting out what they're planning to do. Those of us who have been on grant assessing committees know where this all ends up. We get faced with reading through endless pages of text-based applications, pondering funding criteria and trying to figure out whether each application is worth funding or not. Often you suspect that the money ends up going to those projects that can afford to employ the best wordsmiths rather than being channeled into the most worthy projects.

One response to the text overload problem has been to put word limits on funding applications. However we know that fixing the social, environmental, educational and the other problems that foundations and government funders focus on is often complex. Just demanding that the length of potential grantees' applications be limited can prove counterproductive. It can prevent foundations/funders and potential grantees from adequately describing the complexity of what they're trying to do. 

The truth of the matter is that foundations/funders need an agile methodology for surfacing and comparing their outcomes with those of potential grantees. It needs to be a methodology which, while being simple, still allows them to adequately model complexity when this is required. Such a methodology should let you overview outcomes and priorities while also allowing you to drill-down to a sufficient level of detail whenever you have to go more in-depth. Adequately modelling foundation/funder and grantee outcomes and priorities lies at the heart of a set of five tasks that any foundation/funder must address if it is going to spend its money wisely. The exciting thing is that visual outcomes modeling offers a new methodology for undertaking these tasks and in this article I'm going to look briefly at how it can help with each of them. In a second article I'll be looking at how to do this in practice.


First, foundations/funders must articulate their outcomes and priorities

It's obvious that if foundations/funders can't clearly articulate their outcomes and priorities, it's unlikely that they'll be able to allocate their funding in a way that will achieve the outcomes they're seeking.

From an outcomes theory point of view (the theory of how we identify, intervene with, and measure success in achieving outcomes) it's always faster to identify and work with outcomes in a visual rather than a traditional textual format. Foundations/funders writing about the outcomes they want just using text-based documents; grantees writing about how they're going to achieve these; and,  grant assessing committees reading through all this material, is incredibly time consuming in comparison to using a visual outcomes modeling approach.

If you think about what foundations/funders are trying to do here, it's as follows: they have a mental model of the outcomes they want to achieve and their grantees also have a mental model of what they're wanting to do. It's the job of grant assessing committees to compare these two mental models to see if they're in alignment. Traditionally the way we've worked is to try to get people to translate their mental model into text. We've then expected other people to read through all that text and re-translate it back into their own mental model so that they can think about the outcomes and priorities that are being focused on. In the meantime, people spend a considerable amount of time wordsmithing all the text they're producing at each stage in this process.

A visual outcomes modeling approach is much more direct. It is attempting to, in effect, suck the mental model out of the foundations' or funders' heads, do the same with the grantees' model and then quickly compare these models visually - it's potentially a much faster and more transparent approach.

What I'm arguing for here is for foundations/funders to fully articulate their outcomes and priorities using a visual format within a sufficiently technical and fit-for-purpose outcomes model. The particular format they want to use is up to the specific foundation/funder. It's important at this stage that people experiment with a range of possible formats for visual modeling to see which format ultimately works best as a way of structuring the interaction between foundations/funders and grantees.

Getting foundations/funders to 'fully articulate' their outcomes is the key phrase here. Of course, foundations/funders often already have a Powerpoint slide with a few boxes on it outlining their outcomes, or some sort of graphic of their outcomes on their website. While there's nothing wrong with using these to summarize a foundation's/funder's outcomes, I'm not talking about the use of a few small diagrams here. What I'm pushing for is the construction of a comprehensive technical visual outcomes model which is capable of providing a full framework for thinking about, and working with, a foundation's or funder's outcomes.

Visual outcomes models go by many names and aspects of them are already employed in the grantmaking business under names such as: program logics, theories of change, program theories, results chains, intervention logics, log frames etc. What I'm trying to do is to encourage foundations/funders to take the possibilities offered by visualization to their logical conclusion and explore whether a fully visual outcomes modeling approach can ultimately be used to underpin the entire funding process. 

Once they've built a comprehensive visual model of their outcomes, foundations/funders can use this to communicate to stakeholders what exactly it is that they're trying to achieve. For instance, if the visual outcomes model is in the form of a drill-down model, they can put it up on their website and stakeholders can overview it and drill-down to see what the foundation/funder wants to change in the world. This is much faster than stakeholders having to read through many pages detailing what a foundation/funder is wanting to achieve. When clicking through a properly constructed and presented technical visual outcomes model, a stakeholder quickly becomes convinced that the foundation/funder has thought through their theory of change (the way a program or intervention of any sort works). A foundation's or funder's theory of change needs to surface how they think that their funding is going to impact on the real world problems they're attempting to address by giving funding to their grantees.

 

Second, improving the way foundation/funder-grantee outcomes are matched

The second challenge for foundations/funders, once they have articulated their outcomes, is to work out how they can quickly assess whether potential grantees' outcomes and priorities align with theirs.

If foundations/funders have successfully set out their outcomes within a comprehensive technical visual outcomes model as suggested above, they can obviously use this to quickly communicate to potential grantees what they're wanting to do. The whole process can become even more efficient if grantees submit their proposals in a visual outcomes model that is in a similar format to the foundation or funder's model. The two models can then be compared in various ways to see if there is alignment between the foundation or funder's model and the potential grantee's outcomes and priorities.
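As a deliberately simplistic sketch of what this comparison amounts to (the outcome labels below are invented, and in practice the comparison is done visually in DoView rather than by matching label strings), you can think of alignment checking as a set comparison:

    # Toy Python sketch: checking alignment between a funder's outcomes and a
    # grantee's proposed outcomes. All labels are invented for illustration.
    funder_model = {"Reduced youth unemployment",
                    "Improved school retention",
                    "Stronger community networks"}
    grantee_model = {"Improved school retention",
                     "Stronger community networks",
                     "Better family housing"}

    print("Aligned:", funder_model & grantee_model)
    print("Funder outcomes the grantee doesn't address:", funder_model - grantee_model)
    print("Grantee outcomes outside the funder's model:", grantee_model - funder_model)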

Third, improving grantees' ability to understand and articulate their outcomes and priorities

Whether or not foundations/funders decide to use a visual approach to working out if potential grantees' outcomes and priorities are aligned with theirs, they're going to be interested in a third task - building grantees' ability to understand and articulate the grantee's outcomes and priorities. Given the advantages of visual outcomes modelling, foundations/funders should be thinking about how they can promote visual outcomes modeling by potential grantees simply in order to build grantees' capability as effective delivery providers of services.  Again, the exact format of the visual modeling with which foundations/funders want to do this is up to the particular foundation/funder. Ideally we will have a series of different approaches to visual modeling being used and we can then all  learn more about the pros and cons of different formats for this type of visual modeling work.

Obviously, foundations/funders that are getting their potential grantees to use visual modelling as part of their grant assessment process will be particularly interested in building potential grantees' capability in this area.

Fourth, promoting the use of evidence-based practice by grantees

The fourth task that conscientious foundations/funders are likely to be interested in is also one related to building grantee capability. In this case it is grantees' ability to be guided by evidence-based practice, or as it is also sometimes called, evidence-informed practice.

The problem of getting grantees to actually use evidence-based practice (rather than just agreeing that it's a great idea in principle) is not just that there's a lack of evidence available. In fact we've seen a spectacular growth in many evidence-based repositories (e.g. the Cochrane Collaboration in the health sciences and the Campbell Collaboration in the social sciences). Regardless of how much evidence we have available we need to face the fact that grantee staff are hyper-busy delivering in their area of work. We need a realistic focus on the problem of how we can get grantees and other providers to actually apply the evidence that is available to guide what they do in their day-to-day work.  

Visual outcomes models can assist with this problem in three ways. First, technical experts can be asked to review a program to see if it is evidence-based. However, comprehensively reviewing a program to help ensure that it is evidence-based can be tricky and time consuming because you first need to work out what's happening in the program. Only then can you go on to figure out whether or not it is using an evidence-based approach. Wading through program documentation and talking to program staff is a time consuming approach to getting to understand exactly what a program is doing. On the other hand, if a program is required to produce a comprehensive visual model of what it is planning to do, this provides a very quick and accessible way for an expert reviewer to work out whether or not it is actually employing evidence-based practice.

The second way that visual outcomes models can be used to promote evidence-based practice is peer review between programs. This can work where a number of programs are working on the same topic in different locations. If such programs have visual outcomes models of what they're trying to achieve, they can easily compare these with each other. Differences in approach are immediately apparent and this can form the basis for discussion between them about the reasons they have adopted these different approaches.

There is a third way that visual outcomes models can be used to encourage evidence-based practice. When programs are using visual models as a practical tool to guide their work, evidence-based practice can be hardwired into the visual outcomes model they use right at the start of the process. Foundations/funders can either do this hardwiring themselves or contract this work to others.

If programs go on to use such evidence-based visual outcomes models in their work, this will drive evidence-based practice into the heart of what they're doing. This approach, which makes it easy for programs to be evidence-based, seems more likely to succeed than just telling busy program staff that they need to go to the evidence-based literature and use it to inform their practice. It is an elegant approach which embeds evidence-based practice into a tool (the visual outcomes model) which programs are using themselves in their day-to-day work for other reasons. The beauty of it is that programs do not even need to know that they're using evidence-based practice for their practice to be increasingly informed by it.


Fifth, making monitoring and evaluation planning easier

The fifth challenge for foundations/funders is how to make monitoring and evaluation planning easier for themselves and for their grantees. Again this becomes something of a paper battle as monitoring and evaluation plans are prepared and reviewed by the foundation/funder. The monitoring and evaluation plans are then executed and monitoring and evaluation reports are prepared. Again, it is possible to make monitoring and evaluation planning more efficient by using visual outcomes models as the basis for such planning. As with visual outcomes modeling itself, the exact format for such monitoring and evaluation plans based on using a visual outcomes model should be determined by the foundation/funder itself. Experimenting with different formats in this area can also help us work out the best approach.


Conclusion

In this article, I've outlined five ways in which technical visual outcomes models can potentially increase the efficiency and effectiveness of grantmaking for people like Mark Zuckerberg and for other foundations and government funders. In the second part of this series I'll be looking at how they can actually do this in practice.

Get tips on outcomes by signing up to the DoView School of Outcomes Tips Newsletter. Follow on Twitter.

If you want to comment on this article you can do so on the Linkedin version of the article.

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Check out our Linkedin DoView Community of Practice.

Back to the DoView Blog.  



You can use indicators, questions and items within DoView for monitoring, evaluation and research planning.  Put indicators next to the steps and outcome boxes they measure to see which boxes are being measured.

Use evaluation or research questions in the same way to quickly see which level they’re focused on within your outcomes model. Put in further information about the indicator or question with a Right-Click > Edit Notes or a Double-Click on the gray bar in the Details Table at the bottom of the screen.

Display these details with a Right-Click > Show Details. The item object can represent anything you like (e.g. evaluation or research projects).

See the last few screen shots on the page here for examples. 

  

Linkedin Community of Practice.

Download a DoView trial now.

Back to the DoView Blog



Dr Paul Duignan on Outcomes: 

Sitting on the edge of your seats for the next instalment?

I know that you're all sitting on the edge of your seats waiting for my next instalment in this series on the latest enhancements to the New Zealand public sector management system. As those who read my first piece know, Graham Vaughan-Jones (Executive Director, NextEra) and I (from DoView Outcomes Systems) are going where few souls are brave enough to go - taking a deep dive into the arcana of NZ's public sector strategy, budgeting and reporting system.

We're looking at the latest enhancements (2013) to New Zealand's ongoing reform of its public sector strategy, budgeting and reporting system. And why should anyone be interested in how the New Zealand reforms, commenced in the 1980s, are working out in the fullness of time? Well, NZ's been something of a poster boy/girl for changing the way a public sector deals with public administration. So how it's all panning out is something that people who follow this sort of stuff should be interested in knowing.


Kicking the tires - what are the key issues?

So far we've been looking at what departments are producing and talking to people from various departments and central agencies. We've identified an initial set of topics which we'd like to get to the bottom of. I've outlined these below and will be commenting on some of them in more detail in later articles.


Statements of Strategic Intent (SoSIs) and 4-Year Plans (4YPs) - how do they relate?

A very important aspect of changes to any public sector strategy, budgeting and reporting system is the changes to the form and content of the documents that departments are required to produce. Two of the important documents now being prepared by New Zealand government departments are the Statement of Strategic Intent (SoSI) and the 4-Year Plan (4YP). The first question we've surfaced is how these two are intended to relate to each other, because they cover similar material and time periods. What's been suggested by the people we've talked to so far is that the SoSI 'front-ends' the 4YP. I'm hoping to talk more about this when we immerse ourselves deeper into how the distinction is playing out in practice.


Should the 4-Year Plans be for public consumption or more for internal government purposes?

A second question is who the audience for the 4-Year Plan should be. Should it just be an internal document for communicating between the departmental level and the relevant Ministers? Or, on the other hand, should it be a public document? At the moment, when 4-Year Plans are released they have significant portions redacted - the portions relevant to their use for communicating 'free and frank advice' between officials and Ministers. We're interested in perspectives on this issue.


Votes versus departments and the good old attribution problem

The third issue is one that particularly fascinates me. As I understand it, 2013 has brought about a change from trying to work out if departments are achieving what they say they are trying to do, to focusing on whether a Vote is achieving its 'intentions'. (A Vote is a lump of money allocated to achieving a particular purpose). Some see this new approach as having less of an 'attribution' problem. Attribution is about figuring out whether something caused something else to happen. In modern public administration systems, the more parties are pushed to specify higher-level outcomes, the more the attribution problem rears its ugly head. Obviously, lower-level activities and products which are controlled by a single department (often known as outputs) are trivial to attribute. You only need to measure that they have happened in order to know who made them happen - the party that controlled them. Anyway, I'm hoping to dive into this issue in a later article and will be looking at the application of two outcomes theory principles - Non-necessarily controllable indicator measurement not attributable and Impact evaluation and controllable indicators not reaching to the top of an outcomes model - and their relevance to this issue.


Specifying higher-level sector strategy

The fourth issue which we've picked up on so far is the relationship between higher-level sector strategies and lower-levels of strategy. If you want joined-up government you need to have a way of specifying wider sector strategy. There are lots of fascinating issues in here. For instance how you should define a sector and how you can best coordinate strategy between the many different stakeholders involved. I hope to talk about these in a later piece.


Use of a list of results set as targets

The fifth issue is the role of targets within the current system. The current government has employed a system of setting ‘results’ for the public service called Better Public Services (BPS). These are in the form of ten results in five different areas. For example, for long-term welfare dependence: reducing the number of people who have been on a benefit for more than 12 months; in education: increasing participation in early childhood education. The use of a fairly short list of overall targets for a whole country is an interesting mechanism and I’m planning to look at this in more detail in a later article.


Upgrading the guidance

The 'guidance' is the documentation used by the central agencies to inform government departments about how they should be implementing the 2013 changes. It’s the main mechanism through which changes to public administration get translated into practice. We've been looking at the current guidance and examining how, now that it has been used for a while, it can be improved and we will be reporting back on that.


Ensuring necessary capability to implement the 2013 changes

We know from our experience with change management in government systems that the success of any changes such as those in 2013 relies on staff within government departments having the capability to effectively implement them. This includes skills, knowledge, tools and motivation. As I noted in my first article, one aspect of this might be the use of a Group Action Planning approach. In this approach, one person from each department comes together on a regular basis to: increase their own skills, knowledge and motivation; feed this information and their motivation to progress change back to their colleagues in their own departments; and lastly, provide feedback on system-level enhancements which could improve implementation. We'll be looking at this and at other ways that capability could be built to optimize the bedding down of the 2013 changes.

Paul Duignan, PhD Outcomes Theorist - follow on Twitter and on my Blog.

Part 1 of this article.

If you want to comment on this article you can do so on the Linkedin version of the article.

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Check out our Linkedin DoView Community of Practice.

Back to the DoView Blog.  




Dr Paul Duignan on Outcomes: 

New Zealand - the Rock Star of public sector management innovation

For anyone who's obsessed with public sector management issues, New Zealand has been something of a rock star in the area of public sector reform. It’s the domain of the select group of people who identify themselves as public sector management wonks. (A side note: public sector management wonks are admittedly just a small portion of humanity, but they're a group we certainly need. If governments get their public sector management arrangements wrong they're likely to waste significant amounts of money and end up failing to achieve the outcomes we're all seeking from them).


Changes: 1987 - outputs focus for departments; 2000 - outcomes focus at departmental level; 2013 - list of targets and reduced reporting requirements

In the 1980s New Zealand moved from focusing on inputs to using a tightly specified set of outputs to achieve accountability for individual government departments. The idea was that these sets of outputs would be selected on the basis of the outcomes being sought by the government. However this selection of outputs would not happen within government departments themselves but further up within the political decision-making hierarchy (Ministers and their advisors). Then around the year 2000 the next move was made. This put a greater emphasis on outcomes and measuring impacts, getting government departments involved in this rather than just Ministers and their advisors.

Since 2013 NZ's been working on bedding down a third round of enhancements to its public sector strategy, management, planning, budgeting and reporting system. This new set of enhancements has included driving and focusing the public sector with a set of public sector targets. In addition there have been legislative amendments to reduce the requirements around how strategy, planning and associated documentation are set out. If you want a detailed snapshot of the first two waves of change within the reform process, check out Derek Gill's book The Iron Cage Recreated: The Performance Management of State Organisations in New Zealand.


My colleagues and I are pretty obsessed with this stuff - someone has to be!

As part of my work on strategy and outcomes with governments and other organizations, I've been consulting on these issues to the NZ government central agencies (the bodies that provide control and guidance to the public service) and individual departments since just before the 2000 enhancements. It may be only a minority interest but I find it fun watching how a whole public service negotiates and deals with the various waves of change that have been taking place.

Recently I've been working with colleague Graham Vaughan-Jones, Executive Director at NextEra and another public sector management and strategy specialist, looking at how the 2013 set of enhancements is bedding-in. The NZ public service has been implementing the 2013 enhancements for several years now and it's a great time to take stock. It provides the opportunity to work out how we can further optimize the most recent round of changes.

 

How to best implement reform in complex distributed systems

As I've learnt in my work with the NZ government and elsewhere, making changes across a whole public sector - particularly when it comes to such arcane matters as public sector strategy, planning, management and reporting - is a fascinatingly complex thing. It starts with an intention in the mind of the government ministers who introduce proposed changes. This intention is then translated into legislation. The legislative requirements are then translated into technical guidance. The guidance then fans out and is implemented in various ways in all of the different agencies and departments within a public service. Of course, there's plenty of room for different interpretations to emerge and for personalities and interdepartmental politics to play their part in the way it all ends up being applied.

I've always thought that implementing such reforms in distributed systems can be helped by a process that I developed called DoView Group Action Planning. You use it when you're implementing reform within a complex system where there are multiple, somewhat independent, parties working on implementing similar changes. When I was asked to write a report for the NZ central agencies prior to the 2000 reforms I suggested that effectively implementing those reforms could be assisted by using such a process. There might be value in considering if this approach could be used with the 2013 set of changes.

In summary, the process is designed to avoid a situation where a central body wanting a number of different agencies to implement change has to work with each individual agency one-by-one to introduce such reforms. That is often the way people attempt to do it and it involves many bilateral discussions between the central agencies and individual departments. DoView Group Action Planning works in a much more multilateral way. It brings together a group of people (one coming from each department in which the changes are being implemented) and gets them to work collaboratively on assisting the introduction of the reforms over a number of years.

In my experience this completely changes the dynamic of the reform process. There is always more buy-in and you get all sorts of peer-to-peer learning, pressure and support emerging. Equally importantly, it gives the agencies where the reforms are being implemented a vehicle to provide collective feedback which they can send back up the chain to the central agencies about common issues that they're all facing.

 

The Devil is in the detail

As with all the various waves of changes, the 2013 enhancements have seen changes in the format of the various statements, information and pieces of reporting documentation that are now required. I'll not go into them in detail here, but it's always fascinating to see how such documentation ends up being structured. There's always room for different variants and at this stage in the reform process - a couple of years out - it's a great time to see how the different departments are preparing their documentation.

At the moment Graham has been doing a lot of the leg work looking at what's happening in departments as they respond to the new 2013 requirements and its associated documentation. While that sounds easy, it's actually complicated getting your head around different sets of technical documentation from different departments. The Devil is in the detail. I know this as I was involved in such an exercise when the 2000 enhancements were bedding-in. I reviewed all of the outcomes documentation from each of the NZ government departments for the NZ Treasury and rated it for its thoroughness and consistency with the legislative requirements.

 

How far will they be taking the use of visual models?

From my theoretical perspective - outcomes theory - all of this documentation in the 2013 round, as with the earlier rounds of the reforms, basically consists of statements about outcomes and about the steps that are thought to be the best way of getting to them. As usual, departments are articulating this in text, tables and, in some cases, visual models of what they are attempting to achieve. With Graham, I've been looking at, and comparing, some of the visual representations of what departments are trying to do when describing their strategies and outcomes. I'll be writing later articles about what we're finding.

From my point of view, it's exciting that the reforms allow for more discretion in the way in which departments identify and report their strategy. Obviously being a psychologist and strategist interested in the benefits to strategy and outcomes from visualization, I'm keen to see how far they will push the visualization paradigm in the 2013 set of changes. 

Almost all of the documentation I assessed from the 2000 changes included at least some visual representation of outcomes (the documents I reviewed were called Statements of Intent, or SOIs). I'm interested in the extent to which departments will be able to advance their use of visual outcomes modelling to make it easier to formulate, capture and communicate their strategy, outcomes and priorities. Some examples of how you can use outcomes model visualisation in this way for public sector strategy, management, planning, budgeting and reporting are here.

In future articles I'll report on these issues and on all of the other intricacies of how this round of the reform process is going.

If you want to comment on this article you can do so on the Linkedin version of the article.

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Check out our Linkedin DoView Community of Practice.

Back to the DoView Blog.  



One of the endless challenges in communicating with your users (and potential users) is how you structure your website. 

We are making incremental improvements to our website as we try to improve its usability for our extended DoView community - those in 50+ countries using the DoView approach and/or our software (and those who are wanting to find out how our approach can add value).

Just today we’ve tweaked our menu list. We’ve changed Download to Free Trial, Training to Learn and included a new menu item Examples. The examples (sector and other user case studies) used to be hidden away at the bottom of the front page and we suspect that many people did not see them. Here is the Examples page if you want to check it out.

We have many examples and templates we want to get up on the DoView website and we will be adding to the Examples page over time.

It will be interesting to track how these tweaks change usage of our site.

If you have any comments on how we should improve our website please let us know.  

Linkedin Community of Practice.

Download a DoView trial now.

Back to the DoView Blog





Download the template. Get a trial copy of DoView to edit it in.


Our users in 50+ countries use DoView in different ways. Today we’re delighted to share with you an evaluation plan template developed by Maggie Jakof-Hoff. Maggie is a highly experienced evaluator who has worked in many different sectors and settings. 

One of the great things about Maggie’s template is that it summarises the whole of an evaluation plan in just a few DoView pages. This is totally part of our mission at DoView - to replace those multipage text-based evaluation plans with much more concise and accessible ones.

In the video above, Maggie explains how her template is set out. You can use it to plan any evaluation in any area. Maggie has refined it over a number of years to bring it to its present form. 

If you install a trial copy of DoView you can download Maggie’s template from here for your own use.

You can also download a PDF version of the template.

Have fun with the template. If you have any comments or questions feel free to post them up on our Linkedin Community of Practice.

Download a DoView trial now.

Back to the DoView Blog



Many non-profits and charities really want to use DoView but their funding is currently terribly tight. We’re also very concerned about this, so we’ve been doing some research into pricing strategies. It seems to be an incredibly complex topic. 

Obviously we want to continue developing DoView and supporting our customers (we don’t currently charge for support). But to do this we need to strike a sustainable price.  

On the other hand, the more customers we have, the lower we can afford to set the price, because our sales volumes increase.

We're really serious about making the world a better place, that’s what got us into this business in the first place. We’d like to get DoView into the hands of as many people as possible because we know it can make such a difference. All the time we hear satisfied customers raving about how DoView has improved the efficiency and effectiveness of their organization.  

So we’ve decided to experiment with pricing in November and really reduce our price for non-profits and charities. If we can build the volume of sales high enough, then we can keep the price lower. 

Here’s where you come in. Regardless of whether you own DoView yourself, if you know of anyone in a non-profit or charity who might be interested in benefiting from this November discount, please email them with the link to http://doview.com/buy.html. As I said above, the more sales we can get, the lower we can keep the price.

Lastly, while I'm on the subject of making the world a better place: do you know about our DoView Donor program? This is where we give away free DoView licenses to international development programs. For each copy of DoView we sell, we can make a free license available to a program that cannot afford to buy DoView.

So when you buy DoView you’re also often helping someone in a development project better define their outcomes. And as we all know this can often enable them to get additional funding because they can better communicate their outcomes and impact to their funders.

As we say around here, Happy DoViewing. 

Melissa Bethwaite

Download a DoView trial now.

Back to the DoView Blog




DoView School of Outcomes

From October to December, we’ll be running a set of 6 workshops in Wellington NZ on outcomes, results-based management, impact measurement, intervention logics, theories of change etc. Register for one or more.

Check out the three minute video above for information on them. We’ll be teaching our visual planning approach which lets you put your outcomes house in order once and for all.

You can then deal with all of the other outcomes stuff that gets thrown at you in the public and non-profit sector - think RBA, ILM, BS, SROI, SIP, PIF, BPS, EBP, 4YPlans etc. 

Find out more at DoView.com/school-of-outcomes/courses and register here.

Follow Dr Paul Duignan on Twitter.com/PaulDuignan; contact him here

Back to the DoView Blog.  




Next week, Dr Paul Duignan will be chairing the 3rd Annual Strategic Management Accounting Forum in Auckland on 16 September. This is being attended by accountants wanting to upskill in the strategic management accounting area. 

On the next day (17 September) he’ll be running a workshop on The Toolbox and Competencies of the Strategic Management Accountant.

This workshop will provide techniques for developing strategy at any level using DoView Visual Strategic Planning. This approach is incredibly powerful and can be applied to many strategic issues that strategic management accountants face.

This is a fantastic event and we hope you can make it along. If you want to go, register below. The DoView School of Outcomes will also be announcing more workshops in Wellington, New Zealand before the end of the year. Plus we’ll be announcing our first online training offerings in the very near future. 

We look forward to seeing you at a workshop in the near future. 

Link for registering.

Follow Dr Paul Duignan on Twitter.com/PaulDuignan; contact him here

Back to the DoView Blog.  



Most people reading this will be well aware of the SMART planning acronym which has been drummed into us over the years. It insists that objectives or outcomes should be Specific, Measurable, Achievable, Relevant and Timebound. Of course, there’s nothing wrong with this advice if we see it as a general principle for our overall approach to outcomes - ultimately measurement is really important.

However in my view it is a mistake to insist that we should let measurement limit our thinking at the point when we first start trying to identify our outcomes. I know that this is a heresy of sorts, but why do I want to downplay focusing on measurement at the start of the planning process?

The outcomes we identify should be our statement of what we are trying to achieve. Working out what these are should not be constrained by whether or not we happen to be able to measure them at the current time.

If we insist on our outcomes always being measurable from the very beginning of our strategic thinking process, we have no way of thinking about, representing and discussing outcomes that we cannot currently measure.  

Thinking more deeply about measurement, we need to realize that measurement costs money. It requires that infrastructure has been put in place, protocols developed and people employed to collect and analyze the data. 

By definition, measurement will be most intense in those areas which we have focused on in the past. But most of us these days are wanting to be innovative whether we are working in the public or private sectors. It is likely that we will be thinking about new ways of formulating or combining outcomes, and identifying at least some new steps in the processes we think need to happen to achieve our outcomes. 

Therefore we want to use a planning methodology which encourages us to ‘think out of the box’ when we are initially identifying our outcomes. Once we have identified our outcomes plus the steps leading to them we can, of course, move onto looking at the question of measurement. 

What is a good tool for making sure we use an ‘outcomes first, measurement second’ approach in our planning? One way of doing this is to use the DoView Planning approach. In this we first draw a visual model of the outcomes we’re trying to achieve and only then, once we’ve drawn the model, do we go through it, look at each box and ask the question: ‘Can we measure this?’

It may be the case that we need to develop some new measurement infrastructure in order to measure a new outcome or one of the steps we believe leads to that outcome. If that is the case then so be it. It is much better to approach it in this way than to have never been allowed to think about the outcome because it was something that up until now we have not been able to measure.
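A minimal sketch of this two-pass way of working (the box names and the set of currently measurable boxes below are invented for illustration):

    # Toy Python sketch: outcomes first, measurement second. Draw the model
    # first, then walk each box asking 'can we measure this?'.
    model_boxes = [
        "Students more engaged in class",
        "Improved attendance",
        "Better exam results",
    ]
    measurable_now = {"Improved attendance", "Better exam results"}

    for box in model_boxes:
        if box in measurable_now:
            print(f"{box}: measure with existing infrastructure")
        else:
            print(f"{box}: keep in the model; plan new measurement infrastructure")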

This way of working makes sure we’re not locked into a set of outcomes that are defined by the measurement legacy and infrastructure we happen  to have in place. 

You can see a very simple example of working in this way in Diagram 2 here. I will be posting other examples up on this blog in the future.

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Discuss this post in the Linkedin DoView Community of Practice.

Back to the DoView Blog.  



DoView Version 5.0 is coming up - do you want to participate in our Beta testing program? 

We've been working on DoView Version 5.0 and we hope to release it within 3-4 weeks.

As usual, we’re going to provide new releases of DoView on both the Mac and PC platforms so that you can interchange files between the two.

If you want to participate in the Beta testing program for Version 5.0 contact us using the contact form and we'll give you access to the new version. We'll be blogging here over the next couple of weeks about the new features Version 5.0 will include.  What's in it for you? You get to try DoView 5.0 before everyone else!

One thing to note is that while DoView 5.0 (for Mac and PC) will be able to read files from all earlier versions of DoView, earlier versions of DoView will not be able to read files created in DoView 5.0.

And yes, DoView 5.0 will be a free upgrade to licensed users of DoView 4 and earlier!

Happy DoViewing

Download a DoView trial now.

Back to the DoView Blog.  





Dr Paul Duignan on Outcomes 

We’re constantly innovating in the face-to-face and on-line workshops we run for governments and nonprofits on outcomes, strategy, evidence and communicating impact. We’ve run a number of workshops in the last month or so which have given us the opportunity to improve the experience further. 

We vary our workshop style from time to time and lately we’ve been experimenting with changing it around three issues. First, how theoretical or practical to be; second, whether to tightly follow a slide-set, or to use a workshop process fully responsive to the flow of the workshop; and, third, whether to use a pre-prepared outcomes model in the workshop or to develop an outcomes model in real time with the participants on a topic they have suggested.

The results on the first two points seem to confirm what we generally used to do in the majority of our workshops in the past. This has been to take a theoretical approach, but to illustrate it with practical examples, and to 'go with the flow' in the workshops rather than just follow a strict slide-set. The result on the last question is not quite so clear but probably again it’s what we’ve done traditionally -  developing the model in real time with the group on a topic they've selected themselves.

We are continuing to innovate because we want to improve what we are providing. At the moment we’re looking in detail at how to make the exercises in our workshops more entertaining for participants, while still having them communicate the key underlying principles we’re wanting to get across.

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Discuss this post in the Linkedin DoView Community of Practice.

Back to the DoView Blog





Dr Paul Duignan with a three minute summary of a recent training workshop he did in Canberra Australia with participants from the Australian Public Service on evidence-based policy and practice. 

He was joined by Rob Richards from Evidentiary. Rob covered how to develop evidence questions, evidence collection and synthesis while Paul talked about the use of visual outcomes models for identifying the evidence you need to collect, embedding evidence within models that are used by practitioners and how this can overcome the problem of ensuring that practitioners actually use evidence. 

Back to the DoView Blog.  



Hi there, I’m Melissa Bethwaite, DoView’s newest employee. I have joined DoView as a Digital Marketing Coordinator. I come from a software marketing background and I am passionate about digital marketing.

My goal here at DoView is to share as much knowledge with our user community as I can via the web. I hope to be able to reach out to our community of users in a variety of ways that help them to use DoView to its full potential and, in turn, to do their jobs more efficiently.

In the next few months you will see some positive changes here at DoView Headquarters. We will be enhancing our website and providing many free online resources such as regular blog posts, e-newsletters and social media updates. We’ll also be providing affordable online education resources, with e-books available for sale and training courses conducted by Outcomes Specialist Dr Paul Duignan.

As they say around here, Happy DoViewing. 

Download a DoView trial now.

Back to the DoView Blog.  




With DoView version 4.0 we updated the interface and dropped the 'group' function from the toolbar at the top of the screen so we could include the 'count links' function. Count links shows the number of boxes connected to any box in a model. It's a very cool function we wanted to make more prominent. It lets you visualize whether you have 'line-of-sight' alignment between boxes within your model - say, between your projects and your priority outcomes when doing strategic planning.

Keeping the interface simple is one of our design principles at DoView and we don't want the top toolbar to become too cluttered. A cluttered toolbar is confusing for participants when you're facilitating a group DoView session - it makes it look like you're 'playing with your software' rather than helping the people in the room build a model. 

Today one of our long-time users emailed us a support query as to where ‘group’ had gone. 

Rest assured that it has not disappeared from DoView. All you need to do is right-click anywhere on the page and you will see Group as one of the menu items. 

Download a DoView trial now.

Back to the DoView Blog.  





[Chart from Paul Krugman's blog post on UK and France GDP since the crisis]

Dr Paul Duignan on Outcomes 

Paul Krugman is today criticizing those who think that the UK is doing better than France in terms of progressing out of the global financial crisis. While the UK's GDP growth rate is up, if you look at overall progress since 2007, France has done better. 

Krugman calls this the growth rate fallacy - 'no matter how badly an economy has done over an extended period, you proclaim success after a year or two of good growth'. 

This is a common problem in indicator tracking and is a twist on the outcomes theory principle of 'equality of input, equality of outcome', which I wrote an op-ed about a while ago in regard to school measurement. Here's the formal statement of the principle:

It's not enough to just look at the level of an indicator or even the rate of improvement of an indicator when comparing the relative success of different instances. You also need to know something about the base they are coming off before you can make a judgment on how they are doing compared with each other.  
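
To see the principle in action, here's a minimal worked sketch (in Python, with made-up GDP index numbers - not actual UK or France statistics):

    # Hypothetical GDP indices (base year = 100); illustrative only,
    # not actual UK or France statistics.
    gdp = {
        "Country A": [100, 97, 94, 94, 95, 96, 99],     # deep slump, fast recent growth
        "Country B": [100, 99, 98, 99, 100, 101, 102],  # shallow slump, steady growth
    }

    for country, series in gdp.items():
        latest_growth = (series[-1] / series[-2] - 1) * 100  # most recent year's growth rate
        since_base = (series[-1] / series[0] - 1) * 100      # overall progress from the base year
        print(f"{country}: latest growth {latest_growth:+.1f}%, "
              f"change since base {since_base:+.1f}%")

Country A has the better recent growth rate but is still below its base year, while Country B's recent growth looks unremarkable yet it is ahead overall - exactly the comparison the growth rate fallacy obscures.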

Paul Duignan, PhD, follow on Twitter.com/PaulDuignan; contact me here. Discuss this post in the Linkedin DoView Community of Practice.

Back to the DoView Blog


Image source: http://krugman.blogs.nytimes.com/2015/01/02/britains-success-story/  



[Chart from Paul Krugman's blog showing the fall in the number of uninsured]

Image source: Urban Institute

Dr Paul Duignan on Outcomes (a bit wonkish):  

Paul Krugman today argues that Obamacare has had an impact on the percentage of uninsured people in the US. He provides classic examples of the type of reasoning behind two of the seven impact evaluation designs that are used within outcomes theory to attribute improvements in an outcome to an intervention - time-series designs and constructed comparison group designs.

Outcomes theory is an approach which provides a formal and systematic way of thinking about all of the problems you face when you start thinking about, and working with, outcomes and the interventions that it is hoped will improve them. What we are discussing here is one specific topic in outcomes theory: how and when we can attribute a change in an outcome to a particular intervention. In the outcomes theory Outcomes System Diagram, impact evaluation designs are the second place you go when you are wanting to attribute improvements in an outcome to an intervention. The principle from outcomes theory that covers this is discussed formally at this link.

Using an outcomes theory approach, your initial tactic when wanting to attribute outcomes to an intervention such as Obamacare is to see if you have a controllable indicator reaching to the top of your outcomes model. Usually you won't, but it's worth thinking in this way if you want to be systematic in your approach to outcomes work. If you do have a controllable indicator reaching high up your outcomes model (a visual model setting out all of your outcomes and the steps you believe will lead to them), then attribution of improvements in outcomes is simple.

Merely measuring the controllable indicator and showing that it has occurred establishes attribution of an improvement in the outcome, simply by virtue of its measurement. By definition, the controllable indicator has been caused by the intervention (that's what 'controllable' means), so there's no possible dispute about attributing it to the intervention. If it is at the 'top' of your outcomes model, then you've established high-level attribution. 

An example of an intervention with a controllable indicator near the top of its outcomes model is immunization for diseases where you get a high rate of protection from immunization - e.g. measles, mumps and rubella, where a course of immunization protects more than 95% of children. This means that just measuring the controllable indicator that you've immunized a certain number of children, by virtue of its measurement, proves that you've reduced morbidity amongst that group (the high-level outcome). In other words, by just measuring the number of children immunized you have achieved attribution of improvements in the high-level outcome to your intervention - end of story.

Of course, in the case of proving whether Obamacare is affecting the outcome of reducing the percentage of uninsured people in the U.S., there will be a range of factors influencing this outcome. By definition, this makes the percentage of uninsured people a not-necessarily controllable indicator when looking at Obamacare as an intervention. Because of this, merely measuring that the number of uninsured people is falling, as is the case with Obamacare, does not, in itself, establish that Obamacare is causing this to happen. In other words, we have an attribution problem.

In such situations, outcomes theory tells us that there's only one other tactic we can employ to establish attribution, and that is looking at what is possible in terms of one-off impact evaluation designs. (If you did not go to the link before where this is set out formally as an outcomes theory principle, have a look now if you have time.)

In his blog post, Paul Krugman is trying to argue that Obamacare is making a difference - he's attempting to counter those who are claiming that the reduction in the number of uninsured people is simply a result of the economy improving and not a result of Obamacare. In outcomes theory terms he is trying to 'attribute' a reduction in the uninsured to the introduction of Obamacare.

He takes the graph above, which shows a fall in the number of uninsured, and in response to those who are saying: 'It's not Obamacare, it's the improving economy', he replies: 'But it isn't. The decline is too sharp, too closely associated with the enrollment period to be driven by the at best gradual improvement in the job market.'

This is classic time series analysis reasoning. The logic of the time series approach to impact attribution is that you have a series of observations plus a clear point in time when an intervention commenced. If you look at a graph of a high-level outcome at the point when the intervention started (allowing for any credibly argued lag required for the intervention to kick in) and you see the outcome improving, you can claim that you've established attribution because of this coincidence of timing. There are various ways that statistics can be used on time series with lots of data points, but the basic reasoning is what we see Krugman using here.
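
To spell out that bare logic, here's a minimal sketch with made-up numbers (not the actual uninsured-rate data): compare the change right after the intervention starts with the gradual pre-intervention trend.

    # Hypothetical quarterly uninsured rates (%); illustrative only.
    uninsured_rate = [17.9, 17.8, 17.8, 17.7, 17.6, 16.2, 15.1]
    intervention_at = 5  # index of the first post-intervention observation

    pre = uninsured_rate[:intervention_at]
    background_trend = (pre[-1] - pre[0]) / (len(pre) - 1)  # average pre-intervention change
    post_change = uninsured_rate[intervention_at] - uninsured_rate[intervention_at - 1]

    print(f"average pre-intervention change per quarter: {background_trend:+.2f} points")
    print(f"change in the first post-intervention quarter: {post_change:+.2f} points")
    # A drop far steeper than the background trend, timed to the intervention,
    # is the informal basis for attributing the improvement to the intervention.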

He then adds to his line of impact attribution argument by adopting the rationale behind another of the seven possible impact evaluation design types used in outcomes theory - constructed comparison group impact evaluation designs. He takes a graph produced by the Urban Institute (where, by the way, I had the pleasure of undertaking a Fulbright Senior Scholar Award a few years ago). This graph breaks down reductions in those who are uninsured based on whether states are helping implement Obamacare or blocking it. This is done by looking at whether they are expanding Medicaid (helping states) or not (blocking states).



[Chart: decline in the uninsured, broken down by pro-Obamacare and anti-reform states]


Image source: Urban Institute

This graph provides a conceptually different line of argument attempting to attribute improvements to Obamacare. It allows a comparison arising from a 'naturally occurring experiment' which is one way in which a constructed comparison group impact evaluation design can be used. Naturally occurring experiments can be contrasted to specifically set up true randomized experiments (the first type of impact evaluation design within outcomes theory) in that there's no experimenter who assigns units to either receive the intervention or to remain as untreated controls. 

Looking at the Urban Institute graph, Krugman's logic here is that: '. . . an improving economy can't explain why the decline in uninsured is three times as large in pro-Obamacare states as it is in anti-reform states.'

Note that, from a technical point of view, this constructed comparison group design does not have to rely on there being multiple measurements over time (in contrast to the time series design), even though the graph shows a series of observations. It could, in theory, just use a 'before and after' observation.
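
Here's a minimal sketch of that 'before and after' comparison group logic, with hypothetical percentages standing in for the Urban Institute figures:

    # Hypothetical percentages of uninsured (before, after); illustrative only.
    before_after = {
        "pro-reform states":  (17.0, 11.0),
        "anti-reform states": (17.0, 15.0),
    }

    declines = {group: before - after
                for group, (before, after) in before_after.items()}
    for group, decline in declines.items():
        print(f"{group}: decline of {decline:.1f} points")

    # If a common factor (say, an improving economy) were driving the whole
    # decline, both groups should fall by roughly the same amount.
    ratio = declines["pro-reform states"] / declines["anti-reform states"]
    print(f"decline is {ratio:.0f}x larger where the intervention operates")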

Arguments about impact attribution, where, as is often the case, you don't have a controllable indicator near the top of an outcomes model (as there is in the case of immunization), always have to look at the possibility of impact evaluation designs for establishing attribution. You can either rely on a single design or on a combination of designs, as is the basis of the argument Paul Krugman advanced today. 


Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here. Discuss this post in the Linkedin DoView Community of Practice at http://tinyurl.com/doviewplanningln.

Back to the DoView Blog.






Image source: Alan Laroche

Dr Paul Duignan on Outcomes:  

Measuring stuff costs money. This was brought home by a group of 6th graders in the U.S. recently when they started demanding payment for being involved in piloting academic tests. 

The tests are part of the system for English and Math assessments in schools. The kids pointed out that the company that was developing the tests would make money from them, so why shouldn't they? Link.

Their demands highlight the fact that measuring many things is neither trivial nor cost-free. There's often a huge infrastructure needed to generate the measurements we use in our outcomes work. 

Decision-makers often blithely say to a program or organization, 'you must measure your outcomes'. However, what's always needed is a consideration of the feasibility and affordability of measurement in the particular sector. 

This varies enormously between sectors, and decision-makers need to take it into account when setting up accountability arrangements. If they don't, then they run the risk of privileging programs and organizations in which it's either cheap and easy to measure results or there's a legacy of investment in measurement within the sector. 

Innovative programs are, by definition, more likely to face measurement issues because there's not the legacy of measurement behind them. Once again this illustrates the point that an overly simplistic approach to measurement can work against innovation and effective strategy.

Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here. Discuss this post in the Linkedin DoView Community of Practice at http://tinyurl.com/doviewplanningln.

Back to the DoView Blog.





Today we're releasing DoView Standard 4.0 for PC and Mac. In addition, we're releasing the new DoView Pro 4.0. We're also totally upgrading this DoView website. During the next day we'll be ironing out any issues that may emerge with the new website. If you have any comments or need any assistance, please contact us on the Comments page. Thank you for your patience if any issues do arise in the next 24 hours. We will be blogging more about the new releases in the coming days.

Download DoView 4.0 trial now.

Back to the DoView Blog.



[Image: Paul Duignan seminar flyer]

Dr Paul Duignan on Outcomes:  

I will be presenting this week in New Zealand on how my outcomes theory work has developed further since I won the Fulbright Senior Scholar Award and worked at the Urban Institute in Washington, D.C. More information is available from the Fulbright website.

Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here.

Back to the DoView Blog.



[Image: guard in uniform]


Dr Paul Duignan on Outcomes:  This guy is not actually guarding a DoView model, but he embodies the eternal vigilance that an organizational Keeper of the DoView Model is expected to maintain at all times. It's also essential that you look kind of glum while you're doing it.

It's not mandatory to actually dress up in a uniform (unless you want to, of course), but someone in an organization does need to be charged with making sure that the organization's DoView model maintains its integrity. 

The concept is for an organization's DoView outcomes model to become the central 'DNA' of the organization. It should be a representation of the outcomes the organization is trying to achieve and of all of the steps it is planning to take along the way. All key organizational discussions should take place against the one DoView model. Here is an example of a DoView for a regional health organization.

That is why DoView Software is optimized to work in a range of formats - when data-projected from within DoView; when made into a webpage model; when printed out as a small booklet for a senior management team; as a standard letter-size printout; and as a printed poster version. This means it is always easy to have at hand and to amend if necessary.

Your organizational or project DoView therefore needs to be maintained, updated and protected by a single person. It's just like an important organizational spreadsheet which the Chief Financial Officer would not just trust anyone with. 

In larger organizations with multiple departments that are using DoView, it's also a good idea to have a group which is authorized to make decisions about amending the organizational DoView. They should make the ultimate decisions about what boxes are in and out, consistency of wording, etc. This might be an existing management group, or one set up specifically for this purpose.

This does not mean that others won't also be using DoView Software right across the organization for a range of purposes at different levels. In some instances they might be looking after sub-parts of the DoView file that have been delegated to a department or individual. 

Note that the soon-to-be-released DoView Pro will allow folders to be delegated within DoView and then synced - we'll blog more on this soon. This will make delegating specific parts of your DoView really easy.

The main point is that if your DoView is to do the work it's designed to do then it needs to be looked after properly, uniform or no uniform!


Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here.

Back to the DoView Blog.

Image source: http://morguefile.com/

[Screenshot: steps and indicators with traffic-light and priority icons]


Here is another posting with more information on new features in the soon-to-be-released DoView 4.0 - this time we're talking about traffic-lights and priority icons. Any element in DoView 4.0 (boxes, and the elements from the advanced right-click menu - indicators, questions and items) will be able to be traffic-lighted with traffic-light icons. In earlier versions, this could only be done for boxes (steps). Traffic-lights can be used when working with DoView to create dashboards of any type. 

One example is when you create a Performance Improvement DoView. This is where you analyze an organizational process, identify problem areas by traffic-lighting the relevant boxes, and then specify how you're going to fix these problem areas (usually the ones you've traffic-lighted red or yellow). 

In addition, a new priority icon has been added in DoView 4.0 which can be placed on any of the DoView elements listed above. The new priority icon is useful whenever you want to prioritize any element. One way of using this is when doing DoView 'line-of-sight' analysis - you can prioritize a box (e.g. an outcome) and then link to it the other boxes that are focused on it (e.g. projects). 

DoView line-of-sight analysis is the process where you check whether your project boxes are focused on your priority outcome boxes or whether they are just focused on a number of lower-priority boxes. You do line-of-sight analysis by turning on the count links function from the view menu. In DoView 4.0 you'll also be able to just click on the count links icon in the toolbar at the top of the screen to make the number of links in and out of a box appear in white numerals on any box that has links to it. Information on doing line-of-sight analysis.
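
If you like to see the idea spelled out, here's a rough sketch of what count links is tallying, assuming a model represented simply as a list of links between named boxes (the names and the representation are illustrative, not DoView's file format):

    from collections import Counter

    # Illustrative model: each tuple is a link from one box to another.
    links = [
        ("Project 1", "Priority outcome"),
        ("Project 2", "Priority outcome"),
        ("Project 3", "Lower-priority outcome"),
    ]

    link_counts = Counter()
    for from_box, to_box in links:
        link_counts[from_box] += 1  # a link out of a box
        link_counts[to_box] += 1    # a link into a box

    for box, count in sorted(link_counts.items()):
        print(f"{box}: {count} link(s)")
    # 'Priority outcome' shows 2 links - two projects have line-of-sight to it.
    # A priority box with no links at all would flag an alignment gap.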

We've included BAU ('business as usual') as one of the priority icons, in addition to A, B, C, D and E, because people often want to include this as a type of priority.

The screenshot at the top shows elements with the icons on them and the one at the bottom shows you how to select these icons to put on a box or other element. You select them from the small window at the bottom of the screen that appears whenever you click on an element that can have a traffic-light or priority icon placed on it.

P.S. Our designers have also tweaked the traffic-light colors so they're somewhat more user-friendly for people with red-green color-blindness. 

Download DoView 3.06 trial now.

Back to the DoView Blog.



[Screenshot: entering tags]

The new tag feature in DoView 4.0 will let you put tags on the major elements that you use in DoView (boxes and the elements under the advanced menu - indicators, questions and items). 

You can use tags to categorize these elements in any way you like. For instance, you might like to categorize your boxes into 'outputs' and 'outcomes', and your indicators into 'current' and 'proposed'.

We've used a '#' at the start of the tags as it has now become the universal symbol for a tag thanks to its use within Twitter. 

You can have as many tags as you like on any element. 
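
As a rough way of picturing what tags give you - this is an illustrative sketch, not DoView's actual file format - think of each element as carrying a list of tags you can filter on:

    # Illustrative only: elements carrying free-form tags, filtered by tag.
    # The element names and this representation are made up.
    elements = [
        {"name": "Deliver workshops",      "tags": ["#outputs"]},
        {"name": "Improved wellbeing",     "tags": ["#outcomes"]},
        {"name": "Workshop attendance",    "tags": ["#indicator", "#current"]},
        {"name": "Wellbeing survey score", "tags": ["#indicator", "#proposed"]},
    ]

    def with_tag(elements, tag):
        """Return the names of elements carrying the given tag."""
        return [e["name"] for e in elements if tag in e["tags"]]

    print(with_tag(elements, "#indicator"))  # both indicators
    print(with_tag(elements, "#proposed"))   # just the proposed indicator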

Tags are entered in the box that appears at the bottom left of the screen beneath the page list. You only see the box when you click on an element that can have tags. You can see what the box looks like in the screenshot below. To enter additional tags, put a comma after the first tag (e.g. after 'This is a tag').

During the coming week we'll be blogging about the new priority icons, which you can also see in the box below.  


Download DoView 3.06 trial now.

Back to the DoView Blog.






DoView 4.0 is ready for release soon and we're starting a series of blogs about its new features. Also watch out soon for a detailed posting on the brand-new DoView Pro 4.0 version which will include a completely new syncing feature to let different people work on the same DoView and then sync it into a master file.

Against the advice of our accountants, we'll be making this one a free upgrade (as we've done in the past for our loyal customers). What does 4.0 include?

First, we've totally redone the icons and streamlined the interface to make DoView consistent with the current clean aesthetic used throughout the software and apps world. We've already blogged in more detail about this feature.

Second, we've given you 'tags' - so you can categorize any element (box, indicator, question etc.) with a tag. Tags appear as '#tag' at the bottom of any element. This means you can categorize your elements as 'outputs', 'outcomes' - whatever you like.

Third, in response to user feedback, we've introduced a cool new Smart Layout feature that makes tidying up the layout of columns and rows really easy.

We've also introduced a new 'priority' icon next to the traffic light icon at the top of elements and we now have both these icons available on all elements within a DoView model (indicators, questions etc.).

Back to the DoView Blog.


[Image: musical rest symbols of different lengths]


Dr Paul Duignan on Outcomes:  Being obsessed with visualization and outcomes, I often find myself reflecting on how things are visualized in other domains. At the moment, I'm dabbling in music theory as part of an attempt to improve my guitar playing. Any evaluators who were subjected to my rendition of the Rolling Stones' 'Sympathy for the Devil' at our recent national evaluation conference dinner will have some sympathy for this endeavor.

Anyway, my initial impression is that musical notation could do with a little optimization in the visualization department. Just one example is in regard to musical rests - pauses of different lengths that one makes when playing. The symbols used to represent them are shown above (source: Wikipedia, http://en.wikipedia.org/wiki/Rest_(music)).

In our work in visualizing outcomes and designing DoView outcomes software we have worked from a set of principles so as to optimize DoView as a visualization tool. The most basic is, of course, 'keep it simple' and a related one is 'consistency of representation'. This means that if you are representing the same basic thing, you should always try to represent it in a similar way. 

The symbols from the quaver onwards in the sequence above (quavers, semiquavers, demisemiquavers and hemidemisemiquavers - I will not be getting into the language they use in music here!) conform to this principle of consistency in representation. However, the first four (long, breve, semibreve and minim) use an entirely different system and are somewhat arbitrary in the way they vary the use of the underlying symbolic form - a simple block of black. 

No doubt there's a long history to these symbols which a music historian could tell us all about and I don't hold out any hope of reforming musical notation. I'm sure that if I looked on the web I would find many schemes for rationalizing the way music is written.

However, in contrast to a field like music, in emerging fields such as outcomes visualization we still have the luxury of being able to explore a range of visualization possibilities. This means we have the time to ensure that we make our visualizations as simple as we possibly can while still communicating what we need to communicate. 

On a related note (excuse the pun), if you're interested in thinking about the level of complexity that should be represented in outcomes work, check out the outcomes theory principle of Representing complexity in outcomes models - 'simple but not simplistic'.


Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here.

Back to the DoView Blog.


[Image: the gap between outputs and outcomes]


Dr Paul Duignan on Outcomes:  One of the hazards of my job is that I end up at dinner parties buried in protracted technical discussions about other people's program planning woes. Dry stuff indeed, but often safer than religion or politics if you get the wrong person!

Over dessert a while ago I got into a detailed discussion with a program manager who had found himself dealing with a new governance group. The new group did not know a great deal about the program.  

At the first meeting they started querying how the program activities were actually going to achieve the very high-level outcomes that had been set for the program. This is actually encouraging in that it is something that a governance group should concern itself with!

However, the program manager's complaint was that he had a clear concept of this connection and that it had been implicitly understood by the previous governance group, but now the new group was querying it.

The problem here is probably just one of communication. We are living in an environment where there is a widespread demand that outcomes be 'true outcomes'. As a result, the wording of outcomes is being struck at a very high level. This tends to open up a 'gap' between program activities and outcomes, because outcomes are heading off into the stratosphere in the quest to be 'true outcomes'. 

Dealing with this problem is discussed in the outcomes theory principle that All levels of a program's outcomes model should be visible.

This principle holds that every program needs to work out the links between its activities/outputs and outcomes, but that just working this out is not sufficient. If the rationale for the links is buried in long text-based program documentation it's not really of much use in the cut and thrust of real-world program management. Just a few comments at a governance group can be enough to cast doubt on whether a whole program is justified. 

One approach is to refer governance groups to some detailed text-based documentation that exhaustively sets out the links between activities/outputs and outcomes. However in the middle of a governance group meeting it's usually not practical to tell people they need to read 50 pages of documentation describing the program rationale and that that will clarify everything for them!

Even if you've circulated it previously, you really can't be assured that they've all read such documentation. What a program manager usually does when challenged by a governance group in this way is try to verbally summarize the links between activities and high-level outcomes as they see them. Depending on the time available, their eloquence, and the predisposition of the governance group members, this may, or may not, work.

A better way is to always use a full visual outcomes model of the program as the basic framework against which all discussions about the program take place. In this case the program manager could have put a poster of the model on the wall and distributed copies of the model to the governance group at its first meeting. They could also have framed discussion about program activities/outputs against such a visual model.

When governance groups work with such a visual model they tend to immediately see that the work has been done setting out the links between program activities/outputs and high-level outcomes.

The accessibility of the visual model also means that the governance group is in a better position to quickly query the logic of the connection between activities/outputs and outcomes if they wish. A visual outcomes model provides them with a technical working tool to trace through the logic of claims about the way in which a program will have an effect on high-level outcomes.

The fact that a visual outcomes model can be used for all aspects of program planning, monitoring and evaluation means that there are plenty of opportunities to get a governance group familiar with it and to short-circuit the type of problem this program manager found himself facing. 

And it's not just governance groups that can be worked with in this way. Any group of stakeholders or decision-makers can be communicated with using a visual model, thereby avoiding the problem of a 'gap' opening up between lower-level activities/outputs and higher-level outcomes.  

 

Paul Duignan, PhD. You can follow me on Twitter.com/PaulDuignan or contact me here.

Back to the DoView Blog.