This is the fourth in a series of six articles that discuss outcome measurement: what it is, how to do it, and most importantly, how it will help your organization. The content of these articles draws heavily on a framework designed by the United Way of America, in addition to the author’s experience and research. While this is not the only system for measuring outcomes, it has been proven effective by many organizations in both Canada and the United States. Watch for the following articles on the third Monday of each month.
In last month’s article, we defined outcomes and logic models. This month we will learn how to select indicators to measure outcome achievement and identify influencing factors that can affect whether or not that achievement occurs. Once these indicators and influencing factors have been selected, you will be ready to design, test and implement your data collection methods.
What is an outcome indicator?
An outcome is the change intended by a program’s activities. An outcome indicator is a measurable statistic that tells us whether that change has happened. For an indicator to be useful it must be specific and unambiguous, observable and measurable, and linked to outcome achievement. A well-chosen indicator will not only represent the achievement of an outcome, but will also help to summarize your program’s level of performance.
Selecting outcome indicators is often viewed as the most challenging aspect of outcome measurement, but while it does require care and consideration, it doesn’t need to be scary, or even that complicated. In its simplest form, an indicator is just a description of change that has already happened. We might not be able to know directly if our friend is happy, but we can see a smile and hear laughter — these are indicators of the happiness outcome.
Painting a clear picture
Some outcomes have indicators that are easily identified, while others are less obvious. Aim to capture every aspect of an outcome using quantifiable traits that describe the desired change once it has occurred. These may include positive traits describing something achieved, or negative traits describing something avoided. One outcome may have multiple indicators, provided they are all needed for a complete description.
Subjective terms, such as “substantial” or “adequate,” are ambiguous and should be avoided. While numerical targets are very specific, they should not be assigned unless data exists to support them. If this is your program’s first experience with outcome measurement, it is advisable to define success as any gain or reduction until objective targets can be set.
Aggregate outcome data is a reflection of your program’s performance; consider this when deciding which numerical format to use. A percentage is often the best choice, but should always be accompanied by an absolute number to provide context. In some cases, such as where the indicator is the number of actions taken, an absolute number is all that can be reported. Occasionally ratios, rates and other expressions may be more appropriate. Choose the format that best summarizes your program’s performance when presented as an aggregate.
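As a simple illustration of the recommended format, the sketch below pairs a percentage with the absolute numbers that give it context. The function name and wording are assumptions for the example, not part of any standard reporting tool.

```python
def format_indicator(achieved, total):
    """Report an outcome indicator as a percentage, accompanied by
    the absolute numbers so readers can judge the context.

    Hypothetical helper for illustration only.
    """
    if total == 0:
        return "no participants measured"
    pct = 100 * achieved / total
    return f"{pct:.0f}% ({achieved} of {total} participants)"

print(format_indicator(42, 60))  # "70% (42 of 60 participants)"
```

Reporting “70% (42 of 60 participants)” tells a very different story than “70% (7 of 10 participants)”, which is why the absolute number should always accompany the percentage.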
Selecting your indicators
The process of selecting indicators requires a great deal of effort and discussion from your workgroup. You may find it helpful to start the process with questions, including:
- How will we know when the outcome has occurred?
- What does the outcome look like when it occurs?
- What will we see?
Once potential indicators have been identified, you can adopt, adapt, or omit these suggestions by asking refining questions, including:
- Can we observe and measure this indicator?
- Does this tell us if the outcome has been achieved?
- Could we know if an outcome has been achieved without this information?
Remember, you may amend your indicators after designing your data collection methods, or after examining the results of your pilot test. Once you have selected your indicators, check your work against the following criteria:
- Is there at least one indicator per outcome?
- Does each indicator measure an important aspect that is not measured by another indicator?
- Do they clearly identify the change they are seeking to measure?
- Do they all provide data that will summarize the performance of the program? Will the reported data convey this performance effectively?
Once all your indicators meet these criteria, your next step is to identify what factors could influence outcome achievement.
Influencing factors
Your program’s activities are just one of many forces at work in your participants’ lives. Some of these, such as steady work or good health, may make it easier for a participant to achieve an outcome, while others, such as cognitive impairment or lack of housing, may act as a hindrance. Similarly, variances in your program implementation, such as location, method, or duration, may also affect performance. By identifying and measuring these influencing factors, you can understand and improve your program’s performance.
When identifying your influencing factors consider each outcome individually, as many factors will influence some outcomes but not others. Don’t collect information simply because the opportunity is there, as having too many factors selected will dilute the data and render comparisons useless. As always, strive for concise, meaningful choices; if you measure the wrong things, you will manage in the wrong direction.
Participant factors vs. organizational factors
Influencing factors come in two basic varieties: participant factors and organizational factors. Participant factors belong to the participant, affect them individually, and change with each participant. Organizational factors belong to your organization, affect groups of participants, and will change with variances in delivery. Knowledge of both is needed to bring excellence to your programming.
If you find that participant factors are strongly affecting outcome achievement, this may indicate the need for different service options for different participant groups. If you find that results are dependent on organizational factors, you can use this knowledge to refine your methods. Some delivery methods may have high overall success, but fail in challenging cases; others may succeed with those cases but have a lower overall success. Being able to articulate differences in outcome achievement ensures effective methods are not lost due to perceived poor performance.
Participant factors can include demographic information, socio-economic status, health status, education, and so on.
You may find it helpful to define levels or streams to describe the difficulty participants may face in achieving an outcome. This simplifies data analysis by grouping participants by level of difficulty, creating a basis for comparison, and reduces the possibility that difficult cases will be avoided to boost performance. The number of streams and their meaning will vary from program to program, but they should always describe the ease or challenge in achieving an outcome due to influencing factors specific to the participant.
The criteria for these streams must be clear and unambiguous, as inconsistent interpretation will quickly render the categories meaningless upon implementation. In addition to documenting the criteria, draft assessment procedures to ensure your staff act consistently when streaming participants.
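To show what unambiguous streaming criteria might look like in practice, here is a minimal sketch. The factors (employment, housing, health barrier) and the cut-offs are illustrative assumptions, not a recommended rubric; the point is that every assessor applying the same documented rules reaches the same stream.

```python
def assign_stream(employed, stable_housing, health_barrier):
    """Assign a participant to a difficulty stream using explicit,
    documented criteria, so assessments are consistent across staff.

    Factors and thresholds are assumptions for illustration.
    """
    # Count the participant-specific barriers to outcome achievement.
    barriers = sum([not employed, not stable_housing, health_barrier])
    if barriers == 0:
        return "low difficulty"
    if barriers == 1:
        return "moderate difficulty"
    return "high difficulty"

print(assign_stream(employed=False, stable_housing=True,
                    health_barrier=False))  # "moderate difficulty"
```

Because the criteria are mechanical rather than judgment calls, two staff members assessing the same participant cannot land in different streams.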
Organizational factors refer to who is delivering the service, and how it is delivered.
You can evaluate the methodology by tracking the amount, duration, delivery format, participant sourcing, or any other data that indicates how the program was delivered. By tracking this data you can test and improve your current methods, and evaluate new methods with less risk while yielding comparative results. This is especially useful if multiple levels of participant difficulty demand different methodologies, as you can cross-tabulate results to compare.
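The cross-tabulation described above can be sketched as follows. The record layout (stream, delivery method, achieved) is an assumption for the example; real programs would draw these fields from their own data collection.

```python
from collections import defaultdict

def cross_tabulate(records):
    """Cross-tabulate outcome achievement by participant stream and
    delivery method. Each record is (stream, method, achieved).
    Returns {(stream, method): (achieved_count, total_count)}.
    """
    table = defaultdict(lambda: [0, 0])
    for stream, method, achieved in records:
        cell = table[(stream, method)]
        cell[0] += int(achieved)  # outcomes achieved in this cell
        cell[1] += 1              # participants in this cell
    return {key: tuple(cell) for key, cell in table.items()}

# Illustrative records, not real program data.
records = [
    ("high difficulty", "group sessions", True),
    ("high difficulty", "group sessions", False),
    ("high difficulty", "one-on-one", True),
    ("low difficulty", "group sessions", True),
]
print(cross_tabulate(records))
```

A table like this makes it visible when, say, group sessions succeed overall but underperform with high-difficulty participants, which is exactly the comparison the text describes.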
Whenever more than one team, unit, or location works with separate groups of participants, knowing who delivered a service will enable you to compare team performance and provide team-specific feedback. Be careful how you communicate this, as some may see it as an attempt to manage performance rather than evaluate methods. Outcome measurement is intended to evaluate the performance of a methodology, not the staff charged with implementation. Staff should be held accountable only for the correct implementation of their area of responsibility; reviewing with staff how they are evaluated can highlight that difference and prevent misunderstandings.
You’re not done yet…
Once indicators and influencing factors have been selected for measurement, it’s time for your workgroup to update key stakeholders (staff, volunteers, etc.) on the progress to date. Your workgroup should also solicit feedback on the work done since the last update as this may reveal needed changes.
With this feedback in hand, your workgroup is now ready to decide how the chosen data will be collected and used.
In next month’s article, we will learn how to plan and test your data collection methods.
Eli Bennett has been serving the Canadian philanthropic sector for seven years. A graduate of Humber’s Fundraising and Volunteer Management postgraduate program under Ken Wyman, Eli has extensive experience raising millions of dollars through various media across Canada. Currently, Eli is applying his passion for objective management to service provision and program design. If you have any questions on applied measurement in the philanthropic sector, please contact Eli at elibennett@gmail.com.