Effective altruists and outsiders

The effective altruism community somewhat frequently has outsiders come in claiming that one particular intervention is Definitely The Most Effective Thing Ever. These people tend to start a bunch of discussions about their favored intervention, don’t talk very much about anything else, and argue very forcefully for their point of view. This causes the community at large to ignore them, which is (understandably) upsetting: it seems hypocritical for the effective altruism community to claim to value a spirit of open-minded inquiry and then go around shunning outsiders who are arguing for a particular intervention.

I think it’s very important that we learn to talk to outsiders productively. And I largely understand where outsiders are coming from: from their perspective, the EA community looks like an amazing untapped resource they could be using for their preferred cause! And it’s so amazing that they need to push really hard to mobilize it. Here’s my attempt at explaining why that’s a bad strategy, even though their intervention of choice is probably awesome.

Right now the EA movement is very young. There are lots of interventions that seem promising, and there’s not very much research on any of them. Even within the domain of global health and poverty, GiveWell has three interventions that it can’t decide between, and they think it’s likely that global health isn’t even the best option (just a good one that’s easy to assess). When an organization as objective, rigorous, and thorough as GiveWell doesn’t know which interventions will be the best, it makes us pretty skeptical of anyone else who claims to have certainty.

Overall, there are four possible explanations for how such a person could disagree with what I’ll call “mainstream EA” and still be really sure of their intervention:

  1. They have significantly different epistemological views from mainstream EA (I’d say MIRI is an example of this, given their disagreement over astronomical waste);

  2. they have significant information that mainstream EA thought doesn’t (obviously I can’t cite examples here);

  3. they are avoiding significant biases that affect mainstream EAs (anti-aging research organizations like SENS think they fall into this category);

  4. they aren’t actually engaging with the idea of effective altruism and are just trying to use the existence of a group of enthusiastic people for their own ends.

Hopefully, as an outsider, you think you belong to one of the first three. The problem is that the base rate of (4), as compared to the others, is really high, so you need a ton of evidence to outweigh that. And arguing repeatedly about your favored intervention actually isn’t a good way to produce such evidence, since it’s hard to make sufficiently persuasive arguments purely through text; furthermore, this plan is exactly what someone in (4) would come up with.

Instead, the best kind of evidence that you fall into one of the first three categories would be trying to engage with the community on its own terms—to cooperate in the epistemic prisoner’s dilemma that you and we fall into. For example, I was recently at the EA summit and Eliezer Yudkowsky (founder of MIRI) was also there. Honestly, I somewhat expected him to harp on AI risk for the whole conference. But even Eliezer was trying to contribute to EA thought on other topics—see his new post on LessWrong about GiveDirectly. I think because he was willing to engage on other topics unrelated to AI risk, Eliezer actually ended up persuading more people about his views than he would have if he had soapboxed the entire time.

I don’t mean to say that it’s bad for outsiders to be enthusiastic about their intervention. Believing strongly in one thing can be a great motivator. (In fact, if you’re very enthusiastic about getting EAs interested in something, maybe you could start doing some research of the type that GiveWell does on it! That would be super valuable.) It’s just that from an outside view, it’s very hard to distinguish people who have a reason to be confident from people who are trying to bend the EA community to their own goals.

Comments

Jonah Sinick

The claim “intervention X is the most effective” is a very strong claim. Substantiating it requires examining every class of interventions and making a compelling argument, on a case-by-case basis, for why X is superior.

I think that it’s reasonable for the effective altruist community to ask outsiders who make such claims to attempt to substantiate their claims in this way.

Regarding the example of Eliezer engaging with the details of GiveDirectly: I agree that this is an example of somebody engaging with the EA community on its own terms, but I’m wary of viewing it as an exemplar. I think that a norm of the type “first learn the things that we know, and then we’ll listen to what you say” can be conducive to groupthink. Even when outsiders are wrong, they may have valid arguments, and it’s better to solicit these arguments from them than to repel them by imposing a barrier to entry.

In practice, making a credible case for the superiority of an intervention will require engaging with at least some of the details of at least some of the interventions that the EA community has considered. But I prefer the framing “Here are the interventions that we’ve considered: why do you think that yours is better?” over “There’s a strong prior that people who are trying to push their favored cause aren’t engaging with the idea of effective altruism, so for us to listen to you, you have to signal that you’re serious by engaging with the community on its own terms.”

Ben

Jonah, I’d agree with you except that often the people pushing intervention X aren’t even arguing that “intervention X is the most effective”–they’re jumping straight to “you guys should focus on intervention X”, and as a result the arguments that they give end up being unconvincing. (Well, sometimes they get as far as claiming that intervention X is the most effective, but seemingly because they believe that saying “X is the most effective” is some kind of password that will get us to pay attention, rather than an argument that has to be supported.)

So I’m not arguing that we need a norm of “first learn the things that we know, and then we’ll listen,” at least not for object-level things. But for fundamental meta-level things like our standards of rigor, methods of discourse, epistemology, and basic values, I think we do need to insist that people pay attention to what we currently think.

Ultimately, it’s a question of signal-to-noise ratio. I agree that in the abstract, as you say, “it’s better to solicit these arguments from [outsiders] than to repel them by imposing a barrier to entry.” The problem is that the vast majority of outsiders don’t yet have a good argument, so trying to engage with every outsider who argues for their own intervention is probably not a productive use of our time. And for those who don’t have a good argument yet but stick around, the barrier to entry helps them create one, so it’s doubly effective as a filter.

Jonah Sinick

Thanks Ben.

I’m sympathetic to the point about there being a need for a strong filter. I know that in practice, a large majority of outsiders will fall into your category (4).

I’m proposing that “explain why you think that intervention X is the most effective” is a better filter than “learn our standards of rigor, methods of discourse, epistemology, and basic values.”

I know that you might have more experience than I do interacting with such people, and therefore have a better sense for what would work well.

I’ve repeatedly (though not frequently) had the experience of somebody who initially seemed to be reasoning poorly, exhibiting motivated cognition, or holding different values go on to make good points that hadn’t occurred to me.

I think that the problem of groupthink is hard to overestimate. In many cases, after leaving a community that I had been a part of for a while, I realized that I had unknowingly acquired a number of unsound beliefs while I was a part of the community, some of which seem obviously wrong in retrospect. The cost of interacting with people who are likely wrong is high, but the cost of not getting pushback from people with different world views is also high.

Jess Riedel

What bias does SENS think the EA movement is succumbing to?

Ben

Jonah, thanks, that’s interesting. I haven’t had either of the two experiences very often, so you may just have evidence I don’t. Can you give some examples of both?

Jess, status quo bias: the same one they think everyone else has, which causes pro-deathism. And to be honest, I think they’re right, at least to some extent. We might reject SENS as EA even if we didn’t have status quo bias, but some people definitely have it; viz. all the threads in the EA Facebook group with people beating the dead horse of “but death is such a good thing, really, when you get down to it!”

Jess Riedel

Hmm, OK, thanks Ben. I always figured the differences between most EAers and SENSers were questions of time-scale, confidence, and the value of future people. Of the EAers who think X-risk or other far-future concerns trump near-term stuff like malaria, I haven’t heard any criticize SENS research except insofar as it distracts from even more efficient causes. I don’t know anything about the Facebook group, though.

With regard to your question for Jonah: I’ve seen EAers use the alleged reasoning mistakes of outsiders as a reason to dismiss outsider evidence/arguments many more times than I’ve seen them use the identification of such mistakes constructively. Some examples: (1) Dismissing the strong warning signal that the works EAers hold to be important and correct often receive very little mainstream academic citation or discussion. (2) Dismissing the historical evidence for the effectiveness of traditional human institutions, either because of small changes in environment or because the supporters of those institutions often use imperfect reasoning. (3) Dismissing the opinions of people with experience in the philanthropic world (especially with regard to which causes sound good but don’t work in practice, or vice versa) simply because no RCTs exist. (4) Dismissing the possibility that a certain cause is the most effective just because its supporters support it for reasons other than it being the most effective.

Ben

Jess, can you be more specific about those examples? Like, on the level of actual concrete events that occurred? Not that I don’t believe you, but I haven’t noticed this occurring myself, which probably means I’m doing the same thing.

Jonah Sinick

Ben — email me for examples.

Jess Riedel

(1) People frequently citing and discussing the various papers put out (and only occasionally published) by MIRI, with no acknowledgement that they have attracted extremely little attention from academia. When pressed on this, they often respond either that academia doesn’t care about this stuff (with no attempt to research whether that’s true, which is non-obvious but potentially extremely valuable to know) or by citing very broad (non-topic-specific) biases that prevent academics from working on it.

(2) The very high proportion of EAers who are polyamorists, polyphasic sleepers, non-traditional meditators, etc., combined with their response when asked why they think these things have not been adopted by otherwise mainstream people: “Oh sure, there are many CEOs out there who would stand to make millions more if they had 10% more waking hours, but they are falling prey to status quo bias.”

(3) I once personally brought to an EA event a very bright person with tremendous experience actually building a philanthropic organization. But this person was non-technical (and didn’t really fit the EA culture/mannerisms/jargon), and his organization was not aimed at optimizing good. He had a whole lot to teach us, but we were generally not interested in learning from him. (This applies equally to me.)

(4) More than likely, many of the situations you are describing in the original blog post.

Jess Riedel

Ha, looks like Jonah was a lot more tactful than me…

Ben

Jess, thanks for the examples.

The thing about donating to SENS is that it doesn’t actually cause more of an aging cure to exist than would have existed otherwise. Because it’s in people’s own interest to cure their aging once the cure is discovered, the only thing donating to SENS does is speed up the process of finding a cure, so most of the benefits are realized today, not in the far future: if they speed up research by a year, an additional year’s worth of people will never have to experience age-related morbidity/mortality.

Also, empirically it just is true that people (including EAs), when presented with SENS, will make bad arguments for why death is good, and it seems much more plausible to me that this is because they’re biased than that they actually have a good reason for rejecting SENS (not caring about far-future concerns) but refuse to give their true rejection.
