A while ago (because I’m slow at blogging) Zach Groff attempted a quantitative estimate of the impact of participating in collective action on the Effective Altruism Forum.
The comments there did a good job of criticizing Zach’s specific quantitative estimate, but I want to make a broader point: I’m actually having a hard time imagining a quantitative cost-benefit argument that I would find very convincing here. The problem is that the causality is so hard to assess that the best you can do is get really wide bounds via cheesy counterfactuals (à la Carl Shulman’s point that even if the Black Lives Matter protests eliminated every police shooting death for a decade, this would only save ~10,000 lives, and that’s assuming an implausibly large effect). But in many cases you can’t get useful bounds even this way, and so you’re left guessing that the cost-effectiveness estimate falls somewhere between “everyone should do this all day” and “better than a poke in the eye with a sharp stick.”
Under these circumstances, the effective altruist tendency towards quantitative analysis, even when highly speculative, probably misses the forest for the trees. Rather than see Zach try to multiply a bunch of large benefits by small probabilities and end up with something big but with even bigger error bars, I’d prefer that he (and other EAs) focus on robustly showing some effect at all. I’d ask them to start from the other direction: what’s the evidence that mass collective action works in the first place? OK, there’s the Tea Party/rainy day study; what else? What about niche issues? Is there any reason to think that open-borders protesters will be less forgotten than all the other ones Zach mentions? What’s the counterfactual (how well do movements without collective action do)?
Maybe once we’ve answered those questions we could return to the quantitative estimation problem. But first we need to get the basics right: a thorough qualitative understanding of how collective action works.