'There's nothing so practical as a good theory.' - Kurt Lewin
What is outcomes theory?
Why organizations should have something called an 'outcomes system'.
Why visual organizational outcomes models ('Outcomes DoViews') are the most effective way of working with organizational outcomes.
What are the six types of evidence that can be used to show that a program 'works'?
What is outcomes theory?
- Outcomes theory is a new theory which identifies all of the basic concepts needed to specify, work with, measure, attribute and hold parties to account for outcomes of any type. It has provided the conceptual basis for constructing the DoView Visual Planning approach.
- It improves the coherence of outcomes thinking in: strategic planning, monitoring, performance management, evidence-based practice, evaluation, delegation and contracting. It can be applied to: projects, programs, joint ventures, coalitions, sectors and also when thinking about the outcomes of governments as a whole.
- It encourages the use of visual outcomes models whenever working with outcomes because of the advantages they provide over traditional text-and-table approaches to working with outcomes.
- It encourages organizations, projects and programs to think in terms of having an 'outcomes system' (just like organizations have an 'accounting system') rather than traditional thinking which puts all the focus on producing 'outcomes reports'.
- It provides conceptual tools (e.g. Duignan's Sources of Evidence That a Program Works Diagram) to reduce the confusion currently encountered when doing outcomes work, such as confusion about the level of outcomes for which parties should be held accountable.
In contrast to many other organizational strategic planning, results-based measurement and outcomes systems, DoView Visual Planning rests on a solid theoretical foundation. Some other outcomes systems have just been built up piecemeal as they are used in the field. Such systems tend to run into problems when they try to deal with difficult questions such as: what should you do about hard-to-measure outcomes; how do you attribute changes in specific outcomes to particular parties; and how do you set accountabilities when it's hard to attribute changes to any one particular party? DoView Visual Planning deals with these challenges in an elegant and coherent way because it has a sound conceptual basis drawn from outcomes theory.
Outcomes theory clarifies the common set of conceptual issues lying behind a range of different program, organizational or sector work.
Several insights from outcomes theory are discussed below to give a flavor of how outcomes theory can inform strategic planning and results-based organizational and strategy work.
Why organizations should have something called an 'outcomes system'
Outcomes theory argues that it's useful to think in terms of all programs, organizations, whole governments etc. needing an 'outcomes system'. This is similar to organizations having to have a formal 'accounting system'. In the accounting realm, the accounting reports published by any organization are produced out of its underlying accounting system. Accounting systems are, in turn, based on the principles and conventions from accounting theory and applied in practice by the accounting profession.
However, in the outcomes area, many organizations currently feel that it's enough to simply think in terms of producing outcomes and results reports of various types, such as strategic plans, annual reports, etc. Often such organizations do not have anything that they could point to as an outcomes system underpinning the outcomes and results reports they produce. In addition, few organizations think in terms of using outcomes theory and its conventions to inform the construction of their outcomes systems.
A project, program or organizational outcomes system is a formal system for specifying, prioritizing, measuring, attributing and holding parties to account for the outcomes their organization is attempting to achieve. It includes both high-level outcomes for the organization and the lower-level steps it's believed are needed to achieve them. It also identifies priorities and can include other elements such as projects, indicators and evaluation questions.
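As a rough illustration of the elements such a system brings together, the sketch below uses hypothetical field names (they are not a DoView or outcomes-theory specification) to show outcomes, steps, priorities, projects, indicators and evaluation questions sitting in one underlying system from which reports are then produced:

```python
# Illustrative sketch only: the elements an outcomes system brings together.
# Field names and example entries are hypothetical, not a DoView standard.

outcomes_system = {
    "high_level_outcomes": ["Improved community wellbeing"],
    "lower_level_steps": ["Deliver community programs", "Train local staff"],
    "priorities": ["Deliver community programs"],
    "projects": ["2024 pilot rollout"],
    "indicators": ["Programs delivered per quarter"],
    "evaluation_questions": ["Did the pilot improve participation?"],
}

# Reports are produced *from* the underlying system, just as accounting
# reports are produced from an underlying accounting system.
def outcomes_report(system):
    """Summarize how many entries the system holds under each element."""
    return {element: len(entries) for element, entries in system.items()}

print(outcomes_report(outcomes_system))
```

The point of the sketch is the direction of dependency: the report is derived from the system, rather than the report being the only artifact the organization maintains.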
Outcomes theory provides the basis for constructing sound outcomes systems from which an organization can be assured of producing robust outcomes and results reports. Just as an underlying system is the appropriate way to set up accounting within an organization, it should also be the way an organization's outcomes work is set up.
Why visual organizational outcomes models ('Outcomes DoViews') are the most effective way of working with organizational outcomes
Outcomes theory argues that dealing with outcomes in anything but the simplest of cases can quickly become very complex. Traditionally, outcome sets have been represented in text, bullet-point or table format within strategic plans, monitoring plans and reports. Unfortunately, such formats cannot efficiently represent all of the complex relationships that may exist between different levels of outcomes and lower-level steps within any outcomes set.
For instance, many textual 'list-based' approaches unintentionally result in 'siloed' outcomes sets. Siloed outcomes sets are ones which only allow each lower-level step to be linked to a single higher-level outcome. Outcomes theory identifies this as a technical mistake because in the real world, good lower-level steps (e.g. strategies) may influence a number of higher-level outcomes. For instance, a school camp for children may be attempting to improve educational, physical and social outcomes all at the same time. It does not make any sense for planners to be forced to put this strategy under only a single higher-level outcome. However, this can occur unintentionally just because of the format (e.g. a table or list of bullet points) being used to list the outcomes set. A properly structured visual representation of an outcomes set avoids this.
Programs and organizations need efficient ways to represent and work with the complexity in their outcomes sets, and outcomes theory argues that the most efficient way of doing this is in the form of a visual outcomes model. Such outcomes models set out in boxes all of the higher-level outcomes that are being sought and the lower-level steps that it is believed will lead to these outcomes.* One of the technical requirements of any software used to draw such models is that it should allow any box in the model to potentially be linked to any other box in the model - thus avoiding unintentional siloization.
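To make the siloization point concrete, an outcomes model can be thought of as a directed graph in which any step may be linked to any number of higher-level outcomes, rather than a tree in which each step has exactly one parent. The sketch below is illustrative only (the class and names are hypothetical, not part of DoView), using the school camp example above:

```python
# Illustrative sketch: an outcomes model as a directed graph.
# A siloed (tree) format allows each step only one parent outcome;
# a graph lets one step support several higher-level outcomes at once.

class OutcomesModel:
    def __init__(self):
        # step -> set of higher-level outcomes the step supports
        self.links = {}

    def link(self, step, outcome):
        """Link a lower-level step to any higher-level outcome."""
        self.links.setdefault(step, set()).add(outcome)

    def outcomes_for(self, step):
        """All higher-level outcomes a step supports (sorted for display)."""
        return sorted(self.links.get(step, set()))

model = OutcomesModel()
# The school camp strategy supports three higher-level outcomes at once,
# something a single-parent list or table format cannot represent.
for outcome in ["Educational outcomes", "Physical outcomes", "Social outcomes"]:
    model.link("School camp for children", outcome)

print(model.outcomes_for("School camp for children"))
```

The design choice that matters here is simply that the linking structure is many-to-many, which is the software requirement the paragraph above describes.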
What are the six types of evidence that can be used to show that a program 'works'?
It is important for outcomes systems to be structured in a way that does not create internal contradictions. It is easy for such contradictions to be hard-wired into outcomes systems (an example - the 'single indicator list problem' is discussed below). Outcomes theory provides a framework for clearly identifying the basic building blocks of any outcomes system. These are set out in Duignan's Sources of Evidence That a Program Works Diagram below.
Duignan's diagram above argues that every organization, program etc. should have thought about providing the following six types of evidence:
- An outcomes model (e.g. an Outcomes DoView) setting out its high-level outcomes and the steps leading to them; the current priorities the program is focused on; and showing 'line-of-sight' between its current activities and its priorities.
- Measures of progress - routinely measured indicators of change in boxes in the Outcomes DoView, without assuming that the mere measurement of such changes proves that it is the program that has caused them to change.
- Controllable indicators - a subset of indicators which are controllable by the program, so that measuring them is sufficient to show that it is the program which has changed them. Where controllable indicators do not reach the 'top' of the Outcomes DoView, there is an attribution problem: measurement of high-level outcomes, on its own, will only show that they have changed, not that the program has changed them.
- Impact evaluation - more one-off, specific evaluation efforts which attempt to address the attribution gap identified in the previous bullet point, for instance experiments and time-series analysis. It should be noted that no program should promise, before the fact, that robust impact evaluation can be done; it may not be appropriate, feasible or affordable. This does not mean, however, that a program should go unfunded merely because it is hard to robustly prove its impact.
- Economic and comparative evaluation - evaluation which attempts to compare one program with another. This can be in terms of program impacts, or by translating program impacts into the common metric of dollars.
Duignan's diagram can be used to analyze any outcomes system to see if it includes all of the six types of evidence and clearly distinguishes between them. Why should outcomes systems distinguish between the types of evidence? If they don't, confusion and contradictions can arise within an outcomes system. In particular, there should always be a clear distinction in an outcomes system between controllable (evidence type 3) and not-necessarily controllable progress indicators (evidence type 2). The problems which arise when this distinction is not made in an outcomes system are described below.
Outcomes theory simplifies outcomes work by using a fully visual approach, which means that it does not need many of the distinctions between terms required when working in non-visual ways. However, there are several distinctions which outcomes theory cannot eliminate and which are essential to working with outcomes systems. One of these is the distinction between a step or outcome and its measurement (called an indicator in outcomes theory). This distinction is often neglected in other ways of working with outcomes. In particular, outcomes systems which focus only on measurable outcomes, without allowing for the possibility that one may be seeking outcomes which are not currently measurable, fail to make this crucial distinction. Another essential distinction in outcomes theory is that between a controllable and a non-controllable outcome (or indicator). This distinction is discussed in the next section.
Want to explore the question of simplifying terms used in outcomes work in more detail, or need a reference for the points made in this section? See the Outcomes Theory Knowledge Base article: Simplifying terms when working with outcomes.
Distinguishing between controllable and non-controllable indicators - the 'single indicator list problem'
One of the tasks of outcomes theory is to identify common technical problems which occur within real-world outcomes systems and to point out the confusion and inconsistencies these problems create. Further, outcomes theory has the job of showing how such technical problems can be prevented by setting up outcomes systems in accordance with sound outcomes theory principles (e.g. using the Duignan Diagram above). One example of such a problem is the 'single indicator list problem'. This problem arises when an outcomes system includes a single list of accountabilities for an organization (or other party) being held to account by those who control it.
In the past, in an 'outputs' orientated world, just having a single list of accountabilities did not cause any practical problem because the only indicators which were included within the list were those that were controllable by the organization. Because the indicators in the accountability list were always controllable indicators (i.e. ones that were only influenced by the party being held to account) there was obviously no problem in holding the party to account for all of the indicators within the list.
However, things become more complicated in the current 'outcomes-orientated' world. We are now in a context in which it is not sufficient to just hold parties to account for the lower-level activities which only they control (often referred to as outputs). In the new world in which we're living, there's pressure to hold organizations to account for 'higher-level outcomes'. The problem is that these higher-level outcomes are often influenced by a range of additional factors over and above what the organization does - i.e. they are not necessarily controllable indicators, called progress indicators in Duignan's Diagram.
If only a single list of accountabilities is used, what usually happens is that the list will end up containing both controllable indicators (usually at a lower-level) and not-necessarily controllable indicators (progress indicators) at a higher-level. This is because the attempt is being made to include higher-level outcomes within the list.
Because such single lists contain a mix of controllable and not-necessarily controllable indicators, they are vulnerable to criticism. This criticism takes the form either that the lists include too many lower-level 'output-type' indicators, or that some of the indicators are higher-level non-controllable outcomes and therefore cannot be used as 'accountability indicators'. The response to this criticism can lead to another problem identified by outcomes theory: the often futile quest to find non-output, controllable, higher-level indicators.
From an outcomes theory point of view, using a single list of indicators is a recipe for confusion. However, the problem is easily solved by using two lists, distinguishing between indicators on the basis of their controllability. Organizations can then be held directly to account for the indicators they control (included in the first list). They can also be expected to show how they're attempting to influence the higher-level indicators included in the second list (the progress indicators, as they are called in the Duignan Diagram), without being held directly to account for them.
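The two-list solution can be sketched in a few lines. The indicator names below are invented for illustration; the only point carried over from the text is the split on controllability:

```python
# Illustrative sketch: resolving the 'single indicator list problem'
# by splitting one mixed accountability list into two lists,
# based on whether each indicator is controllable by the program.
# Indicator names are hypothetical examples.

indicators = [
    {"name": "Number of camps run", "controllable": True},
    {"name": "Staff training sessions delivered", "controllable": True},
    {"name": "Regional literacy rates", "controllable": False},
    {"name": "Community health outcomes", "controllable": False},
]

# List 1: direct accountabilities - controllable by the program.
accountability_list = [i["name"] for i in indicators if i["controllable"]]

# List 2: progress indicators - the program shows how it attempts to
# influence these, but is not held directly to account for them.
progress_list = [i["name"] for i in indicators if not i["controllable"]]

print(accountability_list)
print(progress_list)
```

A single mixed list invites the criticism described above; making controllability an explicit attribute of each indicator forces the distinction to be recorded rather than assumed.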
A good way of working in this way is for a program to present the information about how it is attempting to influence higher-level indicators in the form of a visual outcomes model (e.g. an Outcomes DoView). The practical way all of this is handled within DoView Visual Planning is to simply map indicators onto the program or organization's visual Outcomes DoView and mark up those indicators which are controllable, and hence being used as direct accountabilities (by convention, an @ sign is placed next to such indicators). The Outcomes DoView can then be used for a discussion between funders and providers about the organization or project's strategy regarding its attempts to influence higher-level outcomes. In outcomes theory this is called analyzing whether there is 'line-of-sight' between activities and priorities. Using this visual approach, discussions between funders and providers become much more sophisticated and coherent than when working with just a single list of indicators. See DoView Visual Planning on this site for how to work in this way.
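The @ markup convention described above amounts to a simple labelling rule when indicators are mapped onto the model. The helper below is a hypothetical sketch of that rule, not DoView functionality (the source only specifies the convention that an @ sign marks controllable, directly accountable indicators):

```python
# Illustrative sketch of the markup convention: when indicators are
# mapped onto an Outcomes DoView, an '@' sign marks those that are
# controllable and hence used as direct accountabilities; the rest
# are progress indicators. Indicator names are hypothetical.

def mark_up(indicator, controllable):
    """Prefix controllable (directly accountable) indicators with '@'."""
    return f"@ {indicator}" if controllable else indicator

mapped = [
    mark_up("Number of camps run", True),      # direct accountability
    mark_up("Regional literacy rates", False), # progress indicator
]
print(mapped)
```

Because the distinction is visible on the model itself, funders and providers can see at a glance which indicators a party is directly accountable for and which it is only expected to influence.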
* Outcomes Models (also called Outcomes DoViews when they are drawn in DoView Outcomes Software) are a sub-set of the wider set of models used in various planning and organizational monitoring and evaluation work. They are distinguished by being drawn according to a particular set of rules to ensure that they are fit-for-purpose. Other names for similar types of models, which may not all conform to the rules used when drawing outcomes models, are: program logics, theories of change, logic models, intervention logics, ends-means diagrams and strategy maps.