We’ve Got a Big Problem

There is a problem with how effectively we help people receiving social services and public benefit programs. It’s a problem we have been thinking, talking, and writing about for years. It’s a problem that, once you see it, you can’t unsee. It’s also a problem you’re likely familiar with, whether you have direct experience with the dynamics themselves or you’ve been frustrated by how they impact your work. In February, we organized a convening at Georgetown University, in collaboration with Georgetown’s Massive Data Institute, to discuss how so many of us can be frustrated by the same problem yet have been unable to make real headway toward a solution.

For as long as social services have existed, people have been trying to understand how to manage and evaluate them. How do we determine what to scale and what to change? How do we replicate successes and minimize unsuccessful interventions? To answer these questions, we have tried to create, use, and share evidence about these programs to inform our decision-making. However – and this is a big however – despite our collective efforts, we have difficulty determining whether there has been an increase in the use of evidence or, more importantly, whether there has actually been an improvement in the quality and impact of social services and public benefit programs.

This is because the incentives for creating, using, and sharing evidence about these services and programs are fundamentally flawed. Services and programs rely on funding to continue operating, and funding bodies prioritize positive results. This doesn’t sound so problematic at first blush – successful projects receiving more funding seems logical – but in practice it means that organizations delivering social services and public benefit programs have no incentive to create or share evidence showing that a program doesn’t work. As a result, mostly positive evidence is created, shared, and used, which degrades the quality of evidence overall. Maybe you’ve heard of a related issue that afflicts academic research, known as the “file drawer problem”: only statistically significant, positive results are published, and most research never sees the light of day because it doesn’t support the research hypothesis.

It’s not (just) about barriers to evidence

This is a complicated issue. The dynamics at play make it difficult to zoom out from specific organizational struggles and look at how incentives shape evidence practices more broadly. Often, the conversation focuses on what seems like an immediate issue, such as building organizations’ capacity to improve the technology and analytical skills needed to access and work with data. However, this is only one piece of a bigger puzzle that must also consider and address incentives. From our work at DARO, we know that even in contexts where evaluation data is accessible, there are still disincentives to create, use, and share evidence.

The Georgetown convening in February helped us move the conversation beyond the common talking points on evidence practices that often pull discussions away from incentives. We explicitly asked the group to look past barriers like capacity and access and to consider instead what would happen if those issues were solved: Would removing all barriers to creating or using evaluations actually lead to more evidence-informed social services? Are incentives misaligned with organizations creating, using, or sharing evidence? How does this affect our ability to learn and improve our programs? With this framing, we heard from many attendees that significant obstacles to creating and using evidence remain, even when barriers to data on interventions are removed.

Keep the conversation going 

We also heard from February convening attendees that it’s rare for people with different roles and perspectives to come together and discuss the unspoken, shared reality of how incentives impact their work. Existing forums are ill-suited to the conversation: they usually focus on one type of stakeholder, which makes it difficult to develop a shared language for understanding the bigger problem.

The February convening brought together people from public policy, philanthropy, nonprofit management, and more, and key themes emerged about why it is so challenging to work collectively on solutions to such a complex problem. You can read our convening report, which summarizes our discussions and the big takeaways, here.

We think having more conversations that include a range of users and uses of evidence is a critical first step in understanding the complicated issue of incentives. We need to better understand how incentives impact our ability to create, use, and share evidence, and to evaluate the effectiveness of our social services and programs. Convenings like the one we co-hosted in February can help further our understanding of the problem and identify possible paths forward to improve evidence practices.

Are you interested in learning more? We are continuing to find new ways to bring people together to work on this issue. We’ll be sharing more insights and initiatives aimed at improving the evidence and evaluation landscape in the social services sector on our blog. You can keep up with our posts by following us on LinkedIn.
