A critique of effective altruism

I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.

How to read this post

Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. It does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.

Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)

If you want to comment, I’ve cross-posted to Less Wrong, which has high-quality discussion and a better comment system than the one I hacked together. If you have a Less Wrong account, please comment on that one!

Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.

Abstract

Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement, effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.

By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.

Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.

Below I introduce various ways in which effective altruists have failed to go beyond the social-satisficing algorithm of establishing some credibly acceptable alternatives and then picking among them based on essentially random preferences. I exhibit other areas where the norms of effective altruism fail to guard against motivated cognition. Both of these phenomena add what I call “epistemic inertia” to the effective-altruist consensus: effective altruists become more subject to pressures on their beliefs other than those from a truth-seeking process, meaning that the EA consensus becomes less able to update on new evidence or arguments and preventing the movement from moving forward. I argue that this stems from effective altruists’ reluctance to think through issues of the form “being a successful social movement” rather than “correctly applying utilitarianism individually”. This could potentially be solved by introducing an additional principle of effective altruism—e.g. “group self-awareness”—but it may be too late to add new things to effective altruism’s DNA.

Philosophical difficulties

There is currently wide disagreement among effective altruists on the correct framework for population ethics. This is crucially important for determining the best way to improve the world: different population ethics can lead to drastically different choices (or at least so we would expect a priori), and if the EA movement can’t converge on at least its instrumental goals, it will quickly fragment and lose its power. Yet there has been little progress towards discovering the correct population ethics (or, from a moral anti-realist standpoint, constructing arguments that will lead to convergence on a particular population ethics), or even towards determining which ethics favor which interventions.

Poor cause choices

Many effective altruists donate to GiveWell’s top charities. All three of these charities work in global health. Is that because GiveWell knows that global health is the highest-leverage cause? No. It’s because global health was the only cause with enough data for GiveWell to say anything very useful. There’s little reason to suppose that this correlates with being particularly high-leverage—on the contrary, heuristic but less rigorous arguments for causes like existential risk prevention, vegetarian advocacy and open borders suggest that these could be even more efficient.

Furthermore, our current “best known intervention” is likely to change (in a more cost-effective direction) in the future. There are two competing effects here: we might discover better interventions to donate to than the ones we currently think are best, but we also might run out of opportunities within the current best known intervention and have to switch to the second-best. So far we seem to be in a regime where the first effect dominates, and there’s no evidence that we’ll reach a tipping point very soon, especially given how new the field of effective charity research is.

Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides. And anyway, donating to a cause you believe is not the best possible one (except for its example-setting value), in order to make a point about figuring out the best possible course of action and then doing it, seems perverse.

Non-obviousness

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.

Efficient markets for giving

It’s often claimed that “nonprofits are not a market for doing good; they’re a market for warm fuzzies”. This is used as justification for why it’s possible to do immense amounts of good by donating. However, while it’s certainly true that most donors aren’t explicitly trying to purchase utility, there’s still a lot of money that is.

The Gates Foundation is an example of such an organization: they’re effectiveness-minded, with $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent only a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses.
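To make the implicit arithmetic concrete, here’s a minimal back-of-envelope sketch. The spending figure below is a hypothetical placeholder (the post doesn’t give one), and the comparison figure for GiveWell-style charities is a rough ballpark rather than a sourced estimate:

```python
# Back-of-envelope sketch of the "better exchange rate" claim above.
# The spending figure is a hypothetical placeholder, not a sourced number.

lives_saved = 6_000_000   # 80,000 Hours' estimate for the vaccine programs
assumed_spend = 10e9      # hypothetical: a "relatively small part" of the $60B endowment

cost_per_life = assumed_spend / lives_saved
print(f"Implied cost per life saved: ~${cost_per_life:,.0f}")
# => Implied cost per life saved: ~$1,667

# GiveWell-style best guesses for top charities at the time were on the order
# of a few thousand dollars per life saved, so even under this generous
# spending assumption Gates looks at least competitive -- which sets up the
# puzzle in the next paragraph.
```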

So why not just donate to the Gates Foundation? Effective altruists need a better account of the “market inefficiencies” that they’re exploiting that Gates isn’t. Why didn’t the Gates Foundation fund the Against Malaria Foundation, GiveWell’s top charity, when it’s in one of their main research areas? It seems implausible that the answer is simple incompetence or the like.

A general rule of markets is that if you don’t know what your edge is, you’re the sucker. Many effective altruists, when asked what their edge is, give some answer along the lines of “actually being strategic/thinking about utility/caring about results”, and stop thinking there. This isn’t a compelling case: as mentioned before, it’s not clear why no one else is doing these things.

Inconsistent attitude towards rigor

Effective altruists insist on extraordinary rigor in their charity recommendations—see, for instance, GiveWell’s work. Yet for many ancillary problems—donating now vs. later, choosing a career, and deciding how “meta” to go (between direct work, earning to give, doing advocacy, and donating to advocacy), to name a few—they seem happy to choose between the not-obviously-wrong alternatives based on intuition and gut feelings.

Poor psychological understanding

John Sturm suggests, and I agree, that many of these issues are psychological in nature:

I think a lot of these problems take root in a commitment-level issue:

I, for instance, am thrilled about changing my mentality towards charity, not my mentality towards having kids. My first guess is that, from an EA and overall ethical perspective, it would be a big mistake for me to have kids (even after taking into account the normal EA excuses about doing things for myself). At least right now, though, I just don’t care that I’m ignoring my ethics and EA; I want to have kids and that’s that.

This is a case in which I’m not “being lazy” so much as just not trying at all. But when someone asks me about it, it’s easier for me to give some EA excuse (like that having kids will make me happier and more productive) that I don’t think is true, and then I look like I’m being a lazy or careless altruist rather than not being one at all.

The model I’m building is this: there are many different areas in life where I could apply EA. In some of them, I’m wholeheartedly willing. In some of them, I’m not willing at all. Then there are two kinds of areas where it looks like I’m being a lazy EA: those where I’m willing and want to be a better EA… and those where I’m not willing but I’m just pretending (to myself or others or both).

The point of this: when we ask someone to be a less lazy EA, we are doing one of two things: (1) helping them do a better job at something they want to do, or (2) pushing them either to do more than they want to or to admit they are “bad”.

In general, most effective altruists respond to deep conflicts between effective altruism and other goals in one of the following ways:

  1. Unconsciously resolve the cognitive dissonance with motivated reasoning: “it’s clearly my comparative advantage to spread effective altruism through poetry!”
  2. Deliberately and knowingly use motivated reasoning: “dear Facebook group, what are the best utilitarian arguments in favor of becoming an EA poet?”
  3. Take the easiest “honest” way out: “I wouldn’t be psychologically able to do effective altruism if it forced me to go into finance instead of writing poetry, so I’ll become an effective altruist poet instead”.

The third is debatably defensible—though, for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work.

Furthermore, EA norms do not proscribe even the first two, so the group never develops a habit of noticing when its members are engaging in a certain amount of motivated cognition. This is quite toxic to the movement’s ability to converge on the truth. (As before, effective altruists are still better than the general population at this; the core EA principles are strong enough to make people notice the most flagrant motivated cognition that runs afoul of them. But that’s not nearly good enough.)

Historical analogues

With the partial exception of GiveWell’s history of philanthropy project, there’s been no research into good historical outside views. Although there are no direct precursors of effective altruism (worrying in its own right; see above), there is one notably similar movement: communism, where the idea of “from each according to his ability, to each according to his needs” originated. Communism is also notable for its various abject failures. Effective altruists need to be more worried about how they will avoid failures of a similar class—and in general they need to be more aware of the pitfalls, as well as the benefits, of being an increasingly large social movement.

Aaron Tucker elaborates better than I could:

In particular, Communism/Socialism was a movement that was started by philosophers and continued by technocrats, who thought reason and planning could make the world much better, and that if they coordinated to take action to fix everything, they could eliminate poverty, disease, etc.

Marx totally got the “actually trying vs. pretending to try” distinction AFAICT (“The philosophers have only interpreted the world; the point is to change it” is a quote of his), and he really strongly rails against people who unreflectively try to fix things in ways that make sense to the culture they’re starting from—the problem isn’t that the bourgeoisie aren’t trying to help people, it’s that the only conception of help the bourgeoisie have is one that’s mostly epiphenomenal to actually improving the lives of the proletariat: giving them nice bourgeois things like education and voting rights, but not doing anything to improve the material conditions of their lives, or to fix the reasons why they lacked those things in the first place and couldn’t simply create them themselves.

So if Marx got the pretend/actually try distinction, and his followers took over countries, and they had a ton of awesome technocrats, it seems like it’s the perfect EA thing, and it totally didn’t work.

Monoculture

Effective altruists are not very diverse. The vast majority are white, “upper-middle-class”, intellectually and philosophically inclined, from a developed country, etc. (and I think it skews significantly male as well, though I’m less sure of this). And as much as the multiple-perspectives argument for diversity is hackneyed by this point, it seems quite germane, especially when considering e.g. global health interventions, whose beneficiaries are culturally very foreign to us.

Effective altruists are not very humanistically aware either. EA came out of analytic philosophy and spread from there to math and computer science. As such, its members are too hasty to dismiss many arguments as moral-relativist postmodernist fluff, e.g. that effective altruists are promoting cultural imperialism by forcing a Westernized conception of “the good” onto the people they’re trying to help. Even if EAs are quite confident that the utilitarian/reductionist/rationalist worldview is correct, the outside view says that really engaging with a greater diversity of opinions is very helpful.

Community problems

The discourse around effective altruism in e.g. the Facebook group used to be of fairly high quality. But as the movement grows, the traditional venues of discussion are getting inundated with new people who haven’t absorbed the norms of discussion or standards of proof yet. If this is not rectified quickly, the EA community will cease to be useful at all: there will be no venue in which a group truth-seeking process can operate. Yet nobody seems to be aware of the magnitude of this problem. There have been some half-hearted attempts to fix it, but nothing much has come of them.

Movement building issues

The whole point of having an effective altruism “movement” is that it’ll be bigger than the sum of its parts. Being organized as a movement should turn effective altruism into the kind of large, semi-monolithic actor that can actually get big stuff done, not just make marginal contributions.

But in practice, large movements and truth-seeking hardly ever go together. As movements grow, they get more “epistemic inertia”: it becomes much harder for them to update on evidence. This is because they have to rely on social methods to propagate their memes rather than truth-seeking behavior. But people who have been drawn to EA by social pressure rather than truth-seeking take much longer to change their beliefs, so once the movement reaches a critical mass of them, it will become difficult for it to update on new evidence. As described above, this is already happening to effective altruism with the ever-less-useful Facebook group.

Conclusion

I’ve presented several areas in which the effective altruism movement fails to converge on truth through a combination of the following effects:

  1. Effective altruists “stop thinking” too early and satisfice for “doesn’t obviously conflict with EA principles” rather than optimizing for “increases utility”. (For instance, they choose donations poorly due to this effect.)
  2. Effective altruism puts strong demands on its practitioners, and EA group norms do not appropriately guard against motivated cognition to avoid them. (For example, this often causes people to choose bad careers.)
  3. Effective altruists don’t notice important areas to look into, specifically issues related to “being a successful movement” rather than “correctly implementing utilitarianism”. (For instance, they ignore issues around group epistemology, historical precedents for the movement, movement diversity, etc.)

These problems are worrying on their own, but the lack of awareness of them is worse. The monoculture is worrying; the lackadaisical attitude towards it is more so. The lack of rigor is unfortunate; the fact that people haven’t even noticed it is the real problem.

Either effective altruists don’t yet realize that they’re subject to the failure modes of any large movement, or they aren’t motivated to do the boring legwork of e.g. engaging with viewpoints that the inside view says are annoying but the outside view says are useful in expectation. Either way, this bespeaks worrying things about the movement’s staying power.

More importantly, it also indicates an epistemic failure on the part of effective altruists. The fact that no one else within EA has done a substantial critique yet is a huge red flag. If effective altruists aren’t aware of strong critiques of the EA movement, why aren’t they looking for them? This suggests that, contrary to the emphasis on rationality within the movement, many effective altruists’ beliefs are based on social, rather than truth-seeking, behavior.

If it doesn’t solve these problems, effective-altruism-the-movement won’t help me achieve any more good than I could individually. All it will do is add epistemic inertia, as it takes more effort to shift the EA consensus than to update my individual beliefs.

Are these problems solvable?

It seems to me that the third issue above (lack of self-awareness as a social movement) subsumes the other two: if effective altruism as a movement were sufficiently introspective, it could probably notice and solve the other two problems, as well as future ones that will undoubtedly crop up.

Hence, I propose an additional principle of effective altruism. In addition to being altruistic, maximizing, egalitarian, and consequentialist, we should be self-aware: we should think carefully about the issues associated with being a successful movement, in order to make sure that we can move beyond the obvious applications of EA principles and come up with non-trivially better ways to improve the world.

Acknowledgments

Thanks to Nick Bostrom for coining the idea of a hypothetical apostasy, and to Will Eden for mentioning it recently.

Thanks to Michael Vassar, Aaron Tucker and Andrew Rettek for inspiring various of these points.

Thanks to Aaron Tucker and John Sturm for reading an advance draft of this post and giving valuable feedback.

Comments

Vaniver

First, good on you for attempting a serious critique of your views. I hope you don’t mind if I’m a little unkind in responding to your critique, as that makes it easier and more direct.

Second, the cynical bit: to steal Yvain’s great phrase, this post strikes me as the “we need two Stalins!” sort of apostasy that lands you a cushy professorship. (The pretending-to-try vs. actually-trying distinction seems relevant here.) The conclusion (“we need to be sufficiently introspective”) looks self-serving from the outside. Would being introspective happen to be something you consider a comparative advantage? Is the usefulness of the Facebook group measured by how intellectually stimulating and rigorous you find the conversations, or by how many dollars are donated as a result of its existence?

Third, the helpful bit: instead of saying “this is what I think would make EA slightly less bad,” consider an alternative prompt: ten years from now, you look back at your EA advocacy as a huge waste of your time. Why?

(Think about that for a while; my answer to that question can wait. These sorts of ‘post-mortems’ are very useful in all sorts of situations, especially because it’s often possible to figure out information now that suggests whether a plan is likely to succeed or fail, or to build in safeguards against particular kinds of failure. Here, I’m focusing on the “EA was a bad idea to begin with” sorts of failures, not the “EA’s implementation disappointed me, because other people weren’t good enough” sort, a la a common response to communism’s failures.)

Philosophical differences might be lethal. It could be the case that there isn’t a convincing population ethics, and EAers can’t agree on which causes to promote, and so GiveWell turns into a slightly more effective version of Charity Navigator. (Note that this actually showed up in Charity Navigator’s recent screed: “we don’t tell people which causes to value, just which charities spend money frivolously”.)

It might turn out that utilitarianism fails, for example, because of various measurement problems that could be swept under the rug until someone actually tried to launch a broad utilitarian project, at which point their impracticality would become undeniable. (Compare to, say, communists ignoring problems of information cost or incentives.)

Consider each of the four principles. It’s unlikely that maximization will fail individually: if you know that one charity can add 50 human QALYs with your donation and another can add only 20, you’ll go with the first. Gathering the data is costly, but analysts are cheap if you’re directing enough donations. But it could fail socially (as in http://xkcd.com/871/): any criticism of another person’s inefficiency might turn them off charity, or off you. EA might be the hated hipsters of the charity world. (I personally don’t expect this to be a negative on net, because of the huge quality difference between charitable investments: if you have half as many donations used ten times as well, you’ve come out ahead. But it could turn out that way.)

Similarly, consequentialism seems unlikely to fail, but what consequences we care about might be significantly different. (Maximizing fuzzies and maximizing QALYs look different, but the first seems like it could be more effective charity than the second!)

Egalitarianism might fail. The most plausible hole here seems to be the existential risk / control the singularity arguments, where it turns out that malaria just doesn’t matter much in the grand scheme of things.

Altruism might fail. It might be the case that people don’t actually care about other people anywhere near the level that they care about themselves, and that the only people who do are too odd to build a broad, successful movement. (Dipping back into cynicism, I must say that I found the quoted story about kids amusing. “My professed beliefs are so convincing, but somehow I don’t feel an urge to commit genetic suicide to benefit unrelated people. It’s almost like that’s been bred into me somehow.”) Trying looks sexy, but actually trying is way costlier and not necessarily sexier than pretending to try, so it’s not clear to me why someone wouldn’t pretend to try. (Cynically again: if you do drop out of EA because you landed a spouse and now it just seems so much less important than your domestic life, it’s unlikely you’ll consider past EA advocacy a waste if it helped you land that spouse, but likely you’ll consider future EA advocacy a waste.)


Ben

Vaniver–first of all, thanks for the feedback! This is exactly the kind of response I want, and please don’t worry about being unkind.

Second, I just realized that these comments are going to get pretty unwieldy for discussion, so I cross-posted to Less Wrong. Do you mind pasting your comment over there so we have threading (and so people can upvote you properly)?

Thanks again for your response! I’ll wait to go more in-depth until you paste it on LW if you want to.


Vaniver

Done!


Diego Caleiro

I have written a lengthy response that deals with only one of the points in the critique above, the suggestion that, as a whole, the Effective Altruist movement is pretending to really try, here: http://lesswrong.com/r/discussion/lw/j8v/in_praise_of_tribes_that_pretend_to_try/

My main argument is that pretending to try is quite likely *a good thing*, in the grand scheme of EA.

Disclaimer: I support the EA movement.


Jo

You missed a point in the monoculture section: we tend to sound like snobs claiming the moral high ground. Consider the problem of spreading EA beyond the academic sector: it’s incredibly difficult to talk to people who don’t know what altruism is and won’t immediately understand that when you mention ‘doing what is ethically right’ you mean it in a thoughtful ‘I want to do the most good’ way, not a ‘hey, look at me, I am better than you’ way.
