This is the last in a series of six articles that discuss outcome measurement: what it is, how to do it, and most importantly, how it will help your organization. The content of these articles draws heavily on a framework designed by the United Way of America, in addition to the author’s experience and research. While this is not the only system for measuring outcomes, it has been proven effective by many organizations in both Canada and the United States.

In last month’s article, we learned how to design and pre-test your data collection tools and methods. In this month’s article, we will use a pilot test to evaluate our outcome measurement system, including data analysis and reporting. We will conclude by looking forward to the ongoing management of the system and important uses for the data.

Prepare for take-off

After working so diligently on your outcome measurement system, your group is finally ready to see it in action! As tempting as it may be to race off towards the horizon, you might want to do one last fly-by to ensure everything is working properly. A pilot test is a full run-through, similar to a dress rehearsal, with everything in place as it would be when the system goes live. You will save time, energy, and a great deal of expense and frustration by making any needed corrections prior to a full roll-out. During this practice run, look for missing outcomes, poorly defined indicators, and cumbersome data procedures. Using the data from the pilot, you can also identify analysis and reporting issues.

Before you begin your pilot test, however, you need to develop a strategy. Unless your program has only a few participants, you might want to measure only a subset rather than the whole group. If your program runs in multiple locations, units, classes, or participant groupings, you might choose to measure only a few of these divisions. For best results, use random sampling procedures and ensure that your chosen subset is representative of your entire participant base.
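If your participant records can be exported from your spreadsheet or database as a CSV file, a short script can draw the random subset for you. The sketch below is a minimal example in Python; the file name, sample size, and fixed seed are illustrative assumptions rather than recommendations.

import csv
import random

def sample_participants(path, sample_size, seed=42):
    """Draw a simple random sample of participant records from a CSV export."""
    with open(path, newline="") as f:
        records = list(csv.DictReader(f))
    random.seed(seed)  # fixed seed so the same pilot subset can be reproduced later
    return random.sample(records, min(sample_size, len(records)))

# Example: draw 50 participants from a hypothetical export of the full roster
pilot_group = sample_participants("participants.csv", 50)
print(f"Selected {len(pilot_group)} participants for the pilot test")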

Up, up, and away

Since you will be running the pilot exactly as you intend to implement the system, this would be a good time to review the timeline your group developed and revise it to reflect your planned data collection procedures. Be sure to allow time to recruit and train those performing the data collection and entry, and to monitor the process, in addition to running the test. Those collecting and entering data during the pilot should have the same qualifications as those who will do the work in the full roll-out, and should be fully trained prior to beginning the test.

While people gather and enter data, someone must be assigned to monitor the entire data process so it can be evaluated later. This person should know beforehand what data is needed, track the collection attempts, protect privacy, and check the quality of incoming data sheets. This monitoring can reveal problems in process and structure, and help you gain a clearer understanding of the time, money, and other resources needed. Ongoing monitoring is an integral part of your outcome measurement system, so plan to include this role after launch.

While conducting the pilot test, try to avoid making changes to the system, even to correct an obvious process error. Changing course mid-test weakens the final results by dividing data into pre-change and post-change findings, potentially yielding data samples that are too small to be significant. A mid-test change is only appropriate if the error will render your findings meaningless, and the post-change data will still be significant and representative of the whole program.

Check your work

Once you have completed your pilot test, it’s time to put all this hard-earned data to work. However, before you can analyze and report the data, you must first ensure that it is correct. Modern spreadsheet applications are powerful enough to handle data entry, analysis, and reporting for all but the most complicated program measurement systems. In addition, they have tools such as input filters and logical dependence formulas that can help you ensure the validity of the data entered. While these can help and should be used where possible, they are no substitute for proper quality assurance.
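If you prefer to check records in code rather than in the spreadsheet itself, the same kinds of rules can be written as a short script. The following is a minimal sketch in Python; the field names and the acceptable score range are hypothetical.

def validate_record(record):
    """Flag basic validity problems in a single data-entry record (hypothetical fields)."""
    problems = []
    if not record.get("participant_id"):
        problems.append("missing participant ID")
    score = record.get("knowledge_score")
    if score is not None and not 0 <= score <= 10:
        problems.append("knowledge score outside the 0-10 range")
    # Logical dependence: a follow-up date only makes sense if a follow-up was completed
    if record.get("followup_date") and not record.get("followup_completed"):
        problems.append("follow-up date recorded without a completed follow-up")
    return problems

# Example: this record would be flagged for an out-of-range score
print(validate_record({"participant_id": "P-014", "knowledge_score": 12}))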

The best method of quality assurance is to double-enter a portion of the data. Depending on the amount of data being entered, this could be between 10% and 100% of the total volume. Once the data has been double-entered, have someone else compare the two data sets for discrepancies. Perfection in data entry is an unrealistic expectation; strive to keep the error rate below 1%, while an error rate over 5% indicates a problem that must be corrected.
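If both passes of data entry can be exported to CSV, the comparison and error-rate calculation can be automated. The following is a minimal sketch in Python; the file names are placeholders, and it assumes both files list the same records in the same order.

import csv

def entry_error_rate(first_pass_path, second_pass_path):
    """Compare two independently entered copies of the same records, field by field."""
    with open(first_pass_path, newline="") as f1, open(second_pass_path, newline="") as f2:
        rows_first = list(csv.reader(f1))
        rows_second = list(csv.reader(f2))
    total = mismatches = 0
    for row_a, row_b in zip(rows_first, rows_second):
        for value_a, value_b in zip(row_a, row_b):
            total += 1
            if value_a.strip() != value_b.strip():
                mismatches += 1
    return mismatches / total if total else 0.0

rate = entry_error_rate("entry_pass_1.csv", "entry_pass_2.csv")
print(f"Data entry error rate: {rate:.1%}")  # aim for under 1%; over 5% signals a problem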

Know the numbers

Once you are assured the data is correct, it’s time to calculate the aggregate statistics you will use to evaluate and communicate your program’s performance. Percentages are often helpful but should always be accompanied by the numerical amount to provide weighting. Averages are also frequently used, although they can be significantly affected by outliers (extreme high or low scores). Alternatively, consider choosing a performance threshold and quoting the percent and amount scoring above or below that threshold. For example, “78% of participants showed great or moderate improvement in nutritional knowledge.”
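As a small illustration, here is how that threshold figure might be calculated outside a spreadsheet, using Python. The scores and cut-off below are hypothetical; the point is simply that the report quotes both the percentage and the underlying count.

# Hypothetical improvement scores, coded 0 (no improvement) to 3 (great improvement)
scores = [3, 2, 0, 3, 1, 2, 2, 3, 0, 2]

threshold = 2  # "moderate improvement" or better
above = [s for s in scores if s >= threshold]

print(f"{len(above) / len(scores):.0%} of participants ({len(above)} of {len(scores)}) "
      "showed great or moderate improvement")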

You can learn how different characteristics affect outcome achievement by showing results according to participant difficulty, demography, or other factors. Pivot tables (or other cross-tabulation tools) are especially well suited for this task, and learning to use them is time well spent. Cross-tabulation can reveal important findings that might otherwise have been misinterpreted.
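Spreadsheet pivot tables will serve most programs well. For anyone working in Python instead, the pandas library offers an equivalent cross-tabulation; the column names and data below are hypothetical.

import pandas as pd

# Hypothetical pilot data: one row per participant
data = pd.DataFrame({
    "age_group": ["youth", "adult", "youth", "senior", "adult", "senior"],
    "improved":  [True, True, False, True, False, True],
})

# Cross-tabulate outcome achievement by demographic group
table = pd.crosstab(data["age_group"], data["improved"], normalize="index")
print(table)

Each row then shows the share of that group achieving the outcome, which makes differences between groups easy to spot before they are misread as overall trends.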

Sharing your findings

Statistics are critical, but convey little without context. Take some time to review and explain your findings in writing; this will help readers understand the story your numbers present. Use this opportunity to relate key findings, as well as mitigating factors that may have reduced performance. Factors could be external, such as economic changes, or internal, such as staffing difficulties. You should also present your plan moving forward; this demonstrates forethought and can shift the perception of poor performance.

This text forms the foundation of your outcome reporting. Present your data visually, using tables, charts, graphs, or maps to make it as self-explanatory as possible. If the full data set needs to be included, it is often best to place it in an appendix. Upon completing a draft, seek feedback from staff and volunteers to ensure the report is complete and easy to understand.

Review your pilot test

With your report in hand, it’s time to review the results of your pilot test. Start by reviewing your outcomes:

  • Did you get the data you needed?
  • Did you measure what you intended to measure?
  • Does the data represent important outcomes for which your program should be held accountable?

You should be able to say “yes” to all of these questions. Next, review the following processes:

  • Data collection instruments
  • Data collector training
  • Data collection procedures
  • Data entry procedures
  • Resources needed to collect and analyze data
  • Monitoring procedures

Make any needed revisions, and track the rationale for each change to inform future decisions. If the revisions are not major and random sampling techniques were used, your pilot data can become the start of your historical program data.

The fruits of your labour

You are now ready to launch your outcome measurement system! Be sure to continue to review your system regularly, and adjust as your program continues to evolve. Communicate your findings to staff, volunteers, funders, and your various publics; the uses for and benefits from this work are numerous:

  • Improve staff and volunteer motivation, retention, and direction
  • Inform planning at all levels: strategic, programming, resource allocation, training, etc.
  • Increase capacity to recruit skilled staff and promote the organization and its programs
  • Build partnerships, benchmark your work, and share best practices
  • Retain and increase funding by providing real data to development staff

Most importantly, you will be able to focus the entire organization on delivering the mission. And after all, that’s what it is all about, right?

Eli Bennett has been serving the Canadian philanthropic sector for seven years. A graduate of Humber’s Fundraising and Volunteer Management postgraduate program under Ken Wyman, Eli has extensive experience raising millions of dollars through various media across Canada. Currently, Eli is applying his passion for objective management to service provision and program design. If you have any questions on applied measurement in the philanthropic sector, please contact Eli at elibennett@gmail.com.