Updates from the Administrative Data Pilot: Weaving data for a holistic picture of program outcomes

In late 2016, Third Sector and Stanford’s Center for Poverty and Inequality (Stanford) were awarded a Social Innovation Fund (SIF) grant under the Administrative Data Pilot (ADP) category. After a rigorous competition, three sub-grantees were chosen to receive two years of technical assistance from Third Sector and Stanford to develop administrative data infrastructure and outcomes contracting processes, culminating in September 2019. This blog post illuminates how these three projects are using quantitative and qualitative data to improve outcomes for participants in social service programs.

“Not everything that can be counted counts, and not everything that counts can be counted.” This famous aphorism, variously attributed to the physicist Albert Einstein and the sociologist William Bruce Cameron, suggests two strategies for making sense of data:

  1. Sift through the data to find what is relevant and worthwhile, and
  2. Investigate what else matters that isn’t captured in that data.

Third Sector is helping governments participating in the Administrative Data Pilot do both by blending quantitative analysis of participant data from multiple datasets with qualitative input from service providers and participants. When these two forms of data are used in tandem, they become durable drivers of change: quantitative data gauges and measures the qualitative, and qualitative data describes and contextualizes the quantitative. Together, they create a comprehensive account of a program’s impact by uncovering both the facts and the stories behind the data. With all three government agencies participating in the ADP now having established data usage or sharing agreements (each a small victory in itself!), we are taking a step back to identify how each site is leveraging insights from both quantitative and qualitative data.

Using qualitative data to put “what can be counted” into context

Speaking with service providers, participants, and other stakeholders is surfacing insights that matter. Discussions with people on the ground also give analysts a look under the hood at what processes or circumstances could have affected data collection. In two ADP projects, service providers play a large role in collecting and validating information. Unsurprisingly, the people who enter data, such as frontline staff, have deep insight into how accurate or reliable the data are. Those insights are helping government agencies to troubleshoot poor data quality and come up with solutions to improve it, such as digital tools or participant incentives.

Stanford’s retrospective analysis is investigating program impact and validating factors that could have contributed to participants’ long-term outcomes. Stakeholder feedback guides this analysis by centering key factors and intersectional issues, such as the combination of languages spoken in the home and providers’ access to interpreters. Those clues help project teams draw conclusions about why some program elements appear to be more effective. This contextualization is essential when making program changes or designing incentive structures based on quantitative data.

Using quantitative data to guide conversations about “what can’t be counted”

Participant listening sessions are helping one ADP site unearth drivers of success that are not in the datasets, such as the resources or perspectives that participants hold. Quantitative data can play another role here, guiding an equity-focused approach by revealing disparities among subgroups. If data analysis identifies that people of certain races, ages, or geographies do not achieve the same long-term outcomes, listening sessions can elevate the voices of these groups and uncover how services can better meet their particular needs.
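To make the subgroup-disparity idea concrete, here is a minimal sketch in Python using pandas. The table layout and column names (achieved_outcome, plus whichever grouping field is passed in, such as race or age_group) are hypothetical placeholders for illustration, not the ADP sites’ actual fields or analysis code.

```python
# Illustrative sketch only: column names ("achieved_outcome" and the grouping
# column) are assumptions, not drawn from the ADP datasets.
import pandas as pd

def outcome_rates_by_subgroup(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Compare long-term outcome rates across subgroups to flag disparities."""
    rates = (
        df.groupby(group_col)["achieved_outcome"]
          .agg(participants="count", outcome_rate="mean")
          .sort_values("outcome_rate")
    )
    # Gap between each subgroup and the overall rate, used to decide
    # which groups to prioritize for listening sessions.
    rates["gap_vs_overall"] = rates["outcome_rate"] - df["achieved_outcome"].mean()
    return rates

# Example: flag subgroups whose outcome rate trails the overall rate.
# disparities = outcome_rates_by_subgroup(participants_df, "race")
# print(disparities[disparities["gap_vs_overall"] < 0])
```

A summary like this doesn’t explain why a gap exists; that is precisely where the listening sessions come in.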

Quantitative data is also supporting stakeholder conversations by acting as a rudder, steering those conversations toward challenges and opportunities for improvement on critical topics. Variables such as participation level, provider effort, or health history are evaluated to predict success and to suggest whether improvement ideas should focus on engaging participants or on helping them transition to the next step. Two communities have even used data to map participants’ flows through the program’s phases, giving agencies a clearer picture of how provider performance matched or differed from their expectations.
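As a rough illustration of that flow mapping, the sketch below counts transitions between program phases from a simple event log. The event-log layout (participant_id, phase, entered_on) is an assumption made for this example, not the participating communities’ actual data model.

```python
# Illustrative sketch only: the event-log fields ("participant_id", "phase",
# "entered_on") are hypothetical, not the ADP sites' actual schema.
import pandas as pd

def phase_transition_counts(events: pd.DataFrame) -> pd.DataFrame:
    """Count how participants flow from one program phase to the next."""
    ordered = events.sort_values(["participant_id", "entered_on"])
    # Pair each phase with the participant's next recorded phase.
    ordered["next_phase"] = ordered.groupby("participant_id")["phase"].shift(-1)
    transitions = ordered.dropna(subset=["next_phase"])
    # Rows: current phase; columns: next phase; values: participant counts.
    return pd.crosstab(transitions["phase"], transitions["next_phase"])

# Example: see where participants stall between enrollment and completion.
# print(phase_transition_counts(events_df))
```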

Quantitative data is important for understanding drivers and outcomes objectively; qualitative data is important for contextualizing those findings. Together they tell a more complete story of participants’ day-to-day lived experience than either does alone. These three projects demonstrate the value of using both quantitative and qualitative data for better decision-making around program improvements and new contracting structures.

Stay tuned for more updates and lessons as the work continues in Washington, San Diego, and Santa Cruz!