People keep citing this statistic that only 1% of federal spending is evidence-based.
In a way, this actually isn’t so bad. A lot of federal spending ought to have such obvious effects that it doesn’t need a formal impact evaluation. (For instance, nobody needs a randomized experiment to figure out whether all the tanks we sent over to Iraq had their intended effect—though of course whether those intentions were a good idea is another question.) I’m sure that if we actually ran impact evaluations on all government programs, we’d find that much more than 1% of them had the effects we’d expect.
On the other hand, as the tank example shows, that’s a pretty weak statement. There’s a lot of difference between “having the effects we’d expect” and “being a good way to accomplish policy goals.”
Take teaching as an example. There are a couple of bits of federal spending here that actually are backed by evidence, like Small Schools of Choice, which is often held up as a success of evidence-based policy:
In 2002 New York City closed 31 large, failing high schools and replaced them with small schools of choice (SSC) that featured specialized curriculums, close associations with outside groups such as businesses and non-profit organizations, and teachers and principals who developed their school philosophy together and advertised it to students and parents. Students entering high school (at grade 9) were allowed to apply to several schools. As a result of student (and parent) self-selection, 105 of the SSCs were oversubscribed. This overflow of students caused the New York school system to randomly assign students to the SSCs and other types of schools. This procedure was repeated for four consecutive years, creating the opportunity to study four cohorts with a total of about 21,000 students who were assigned randomly to either an SSC or a different type of school. Nearly 95% of the students were black or Latino and nearly 85% of the students were from low-income families as measured by eligibility for free or reduced-price school lunches.
As shown in several reports by the research firm MDRC (see here and here), the SSC schools have produced substantial impacts on two measures that have been difficult to impact in previous education evaluations:
Students in SSCs had significantly higher graduation rates than control students (71.6% vs. 62.2%).
Students in SSCs had significantly higher rates of enrollment in colleges (49.0% vs. 40.6%).
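To get a feel for what "significantly higher" means with samples this large, here is a rough two-proportion z-test on the reported graduation rates. The ~21,000 total comes from the quoted passage; the even 50/50 split between SSC and control students is my assumption for illustration, not a figure from the MDRC reports.

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """z-statistic for the difference between two sample proportions,
    using the pooled estimate of the common proportion for the standard error."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Graduation rates: 71.6% (SSC) vs. 62.2% (control),
# assuming ~10,500 students per arm (hypothetical split of the ~21,000 total).
z_grad = two_prop_z(0.716, 10500, 0.622, 10500)
print(z_grad)
```

With samples in the tens of thousands, even the 9.4-point graduation gap yields a z-statistic far beyond any conventional significance threshold, which is why a difference of this size in a study of this scale is so hard to dismiss as noise.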
This is evidence-backed, but it only got that way by accident! The only reason they did a rigorous evaluation on it is because students were assigned to the oversubscribed schools by lottery for a completely unrelated reason. So although it’s “supported by evidence” in some sense, it’s still not exactly evidence-driven, which is what we should be aiming at. We’re still looking under the lamppost.¹
On the other hand, designing an evidence-driven school system from the top down is, frankly, a task that I wouldn’t want to entrust to any one body. In some sense that would actually be in tension with the idea of evidence-backed policy, because a single body would be in danger of becoming a partisan of its own ideas and resisting admitting failure or changing course.
I’m sure a top-down designed evidence-backed system would end up no worse than the current one, but it seems like a better idea might be to design a meta-system that allows systemic innovations to percolate efficiently, rather than trying to solve all the object-level problems with schooling in one fell swoop. I’m basically stealing this idea from Lant Pritchett, who talks about the distinction between “spider systems” (where the decisionmaking is all centralized and everything feeds into/out of the head) and “starfish systems” (whose different pieces can move around independently—the starfish as a whole follows a gradient of which limbs are pulling more strongly).
Thinking of ways to make schooling (and big systems in general) starfishier seems fairly productive. This task has two parts:
Allowing different pieces of the system (e.g. individual schools or school systems) to experiment independently
Making sure the different systems are incentivized to move in the right direction
Of course, a classic example of a starfishy system is a competitive market, where individual firms do their own thing, and they’re incentivized to do whatever provides the most value because that makes them the most money. So perhaps Pritchett is just rebranding capitalism to make it more palatable to the left, which tends to be more strongly in favor of school reform and less excited about the power of free markets.
But I think the good parts of starfishiness can probably be extracted from market-based systems and made to work elsewhere. Plenty of corporations are not markets internally, but still have innovations percolate from the bottom up—like early Google, or Valve, which is famous for having no bosses and a pretty wacky but awesome corporate structure generally. In policy-land, China is often brought up (PDF) as an example of having strongly experimentally-driven policies.
The Chinese example demonstrates pretty thoroughly that starfishiness can be divorced from free-market idealism. A more potent political obstacle might be its elitist/technocratic overtones, which seem to be more acceptable in China than in the US.
¹ There’s also a pretty serious selection-bias problem in the study, since only the oversubscribed schools were evaluated. For all we know, splitting up the schools might have had no causal effect on the average student’s outcome, because the non-oversubscribed schools ended up worse.