Creating Positive Change with Data and Evaluation – Part III: Impact Evaluation


Brian Beachkofski leads Third Sector’s efforts in data, evaluation, and modeling, including developing and executing strategy, capturing best practices, and providing training.

This post is based on a recent speech I gave at the Harvard University Center for Education Policy Research for a meeting of the Proving Ground partners. Proving Ground is an initiative committed to helping education agencies meet their practical needs by making evidence cheaper, faster, and easier to use. Their goal is to make evidence-gathering and evidence-use an intuitive part of how education agencies conduct their daily work.

This is part three of a three-part series.


The connection between outcomes and payment requires a rigorous evaluation. Whether we’re determining if a Housing First model improves non-housing outcomes, working with kids to prevent future justice involvement, or giving kids better tools to learn, we owe them the fulfillment of our promises. When Third Sector works with people, we tell them that if they work with us, their lives will be better. Many people have heard similar words many times and haven’t seen their lives improve. They rightly feel betrayed, helpless, and distrustful. We owe it to them to make sure that our promises are real, that we don’t become one more person making empty promises. The key to keeping our promises is to test ourselves and rigorously evaluate our programs to see if they are improving lives.

Rigorous evaluation can mean many things. It can mean a Randomized Controlled Trial (RCT). It can mean a quasi-experimental design. What it can’t mean is anecdote. Anecdotes are stories, and stories resonate with people. They make ideas seem real, and because they are personal narratives, they are often compelling. But for the same reason, they are by definition non-universal. I like the phrase “data is not the plural of anecdote.”

But anecdotes are useful even if they aren’t evidence. Personal stories bring data to life, but they don’t replace evaluation. They add the needed human dimension to the evidence and data. They appeal to emotions while data appeals to the mind. Used together, each makes the other more impactful.

Let’s take the example of a school district providing learning software for use in the classroom. The district wants to make the biggest impact it can, which means choosing the most effective package based on prior evidence and then evaluating the impact in its own classrooms. One approach is a matched-pair design: provide the new material to a portion of the schools in the district, with each participating school matched to a similar school on predictors of reading level. One school in each pair gets the new material while the other continues with the existing materials; one provides the baseline and the other the estimate of the intervention’s impact.
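To make the design concrete, here is a minimal sketch in Python of how such a matched-pair evaluation might be set up and analyzed. The school names, reading scores, and single-predictor pairing rule are all hypothetical; a real evaluation would match on several predictors and test the estimate for statistical significance.

```python
import random
import statistics

# Hypothetical data: (school, prior-year reading score). All names
# and numbers are illustrative, not drawn from any real district.
schools = [
    ("A", 61.2), ("B", 60.8), ("C", 72.5), ("D", 71.9),
    ("E", 55.1), ("F", 54.6), ("G", 68.0), ("H", 67.4),
]

# Sort by the matching predictor so adjacent schools are the most
# similar, then pair them off.
schools.sort(key=lambda s: s[1])
pairs = [(schools[i][0], schools[i + 1][0])
         for i in range(0, len(schools), 2)]

# Within each pair, randomly pick one school to receive the new
# software; its partner keeps the existing materials as the baseline.
random.seed(42)
assignments = [(a, b) if random.random() < 0.5 else (b, a)
               for a, b in pairs]

# After the school year, the impact estimate is the average
# within-pair difference in the outcome (treated minus baseline).
# These end-of-year scores are also hypothetical.
end_of_year = {"A": 64.8, "B": 62.3, "C": 75.0, "D": 74.1,
               "E": 58.9, "F": 56.0, "G": 70.2, "H": 69.5}
diffs = [end_of_year[t] - end_of_year[c] for t, c in assignments]
print(f"Estimated impact: {statistics.mean(diffs):+.2f} points")
```

Because each school is compared only against its closest match, differences in baseline reading level are largely netted out of the impact estimate, which is what makes this quasi-experimental design more credible than a simple before-and-after comparison.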

After a program launches, the stakeholders meet regularly, using quarterly data to monitor the program, the evaluation, and how closely the program is adhering to the model. The governance committee helps with mutual accountability and program learning.

This contrasts with traditional procurements, where performance only comes out at the end of a school year. If the observed impact doesn’t meet the provider’s claims, we often hear the response that the model requires elements that weren’t implemented. There is no mutual accountability, nor is there a formal feedback and coordination loop during program execution.

In our example, the software provider helps make sure that the system is being used as intended. The schools provide real-time feedback on the system as it’s being used. The evaluation provides data for the learning feedback loop. Moreover, the evaluation means the district can explain to the board and the community how effective the spending was.

It’s important to use our scarce resources efficiently, and Pay for Success approaches to procurement offer a great opportunity to fix these incentive issues. The direct connection between payment and better outcomes reduces the incentive to overstate impact, builds an understanding of baseline outcomes, and creates an evaluation feedback loop that supports mutual accountability. Data and evaluation hold the potential to align incentives, provide accountability, and, most importantly, improve outcomes for those we’re trying to help.
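To illustrate the incentive mechanics, here is a minimal sketch of how an outcomes-based payment might be computed from an evaluated impact estimate. The base amount, per-point rate, and cap are entirely hypothetical and not drawn from any actual Pay for Success contract; real rate cards are negotiated per project.

```python
def outcome_payment(measured_impact: float,
                    base_payment: float = 100_000.0,
                    rate_per_point: float = 20_000.0,
                    cap: float = 250_000.0) -> float:
    """Hypothetical outcomes-based payment: a base amount plus a
    per-point rate on evaluated impact above zero, up to a cap.
    All dollar figures are illustrative."""
    bonus = max(0.0, measured_impact) * rate_per_point
    return min(base_payment + bonus, cap)

# A provider whose program shows no measured impact earns only the
# base; payment grows with rigorously evaluated gains.
print(outcome_payment(0.0))   # 100000.0
print(outcome_payment(3.5))   # 170000.0
print(outcome_payment(12.0))  # 250000.0 (capped)
```

Because the bonus depends on the evaluation’s impact estimate rather than on the provider’s own claims, overstating results earns nothing; only outcomes that survive rigorous measurement are rewarded.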