I’m Ben! I’m a junior at Harvard studying math and computer science. When I’m not doing that, I enjoy making things, coding, reading, playing piano, singing, contra dancing, hiking, effective altruism, and other strange activities.


Replaceability in altruism



I’ve been thinking a lot lately about GiveWell’s difficulty in finding good giving opportunities. Shockingly enough, despite lots of inefficiencies in the market for other people’s QALYs, it turns out that if you get a group of impressive people to run a transparent charity with an evidence-based intervention, you will probably get funding. (Surprise!) This has made me unsure how effective it really is for effective altruists to donate to charities like GiveWell’s recommended ones.

In other words, replaceability applies to altruistic decisions just as much as to career choice. While it’s hard to get people fully into the mindset of effective altruism, there are some things, like funding proven charities, where you can easily persuade outsiders to help. So, to maximize our leverage as altruists, we should focus on areas where replaceability applies the least. What comparative advantages do we have?

Starting new things

Starting a thing is scary. It takes a lot more agency and dedication than just donating a chunk of income. Fortunately, the EA community seems to select (somewhat) for agenty, dedicated people, probably by dint of filtering for people who think “I should try really hard to help people” and then start trying really hard. For instance, people proposed that a fundraising organization would be a highly cost-effective charity; one year after the linked article was published, it exists.1

Starting a thing doesn’t have to be full-time, of course. There are probably many really helpful projects, both small and large, where actually doing the project would be more effective than earning money and paying someone else to do it. For instance, I and some other folks are exploring the possibility of developing web apps for the EA community. It seems valuable to have these apps built and maintained by people who care about EA, not just people who did it for hire—valuable enough that I think we should do it ourselves.

Working for EA organizations

All the EA organizations that I’ve talked to have mentioned difficulty in finding people. This is despite the fact that I know many people in the movement who seem like they would be quite good candidates. I wonder if this is because the earning-to-give meme has propagated so strongly that everyone decides they would rather earn money and fund someone else working there, and then they don’t apply, leading to a shortage of qualified applicants. At any rate, few enough people have the required skills and attitudes to work at e.g. GiveWell, Giving What We Can, or 80,000 Hours that working there seems pretty non-replaceable.

Building community/ideology

So far, we’ve done a great job of growing without compromising on intellectual standards. The Facebook group, for instance, is about six times the size it was when I joined, but the discussion is still lively and interesting. But we’ll need to devote lots of resources to making sure this continues through the next few orders of magnitude, and that’s not something we can easily outsource. There’s some progress being made right now, on Wikipedia editing and discussion media, but I think we can do much more to ensure that as the community grows we maintain its high quality of thought.

Being risk-neutral in donations

Donors driven by signalling, prestige, or warm fuzzies tend to be unhappy when charities they donate to don’t get results. But effective altruists know that individually, we should just be maximizing expected outcomes, and if that requires a high-risk strategy, so be it. In other words, even if we’re personally risk-averse, we should be altruistically risk-neutral. This (hopefully) means that we can operate something like philanthropic venture capitalists—fund pie-in-the-sky ventures that are too risky for most donors, and thus collect a risk premium (paid in QALYs, not dollars, but it’s the same idea).
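As a toy illustration (all numbers here are made up), risk-neutral giving just means ranking options by expected QALYs and ignoring variance:

```python
# Toy illustration with made-up numbers: a risk-neutral donor ranks
# options by expected QALYs, ignoring variance.

def expected_qalys(outcomes):
    """outcomes: list of (probability, qalys) pairs whose probabilities sum to 1."""
    return sum(p * q for p, q in outcomes)

# A "safe" charity: reliably produces 100 QALYs per grant.
safe = [(1.0, 100)]
# A "risky" venture: usually fails, occasionally a big win.
risky = [(0.9, 0), (0.1, 1500)]

print(expected_qalys(safe))   # 100.0
print(expected_qalys(risky))  # 150.0: higher expected value despite usually failing
```

A risk-averse donor would shy away from the second option; a risk-neutral one prefers it, which is where the “risk premium” comes from.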

Funding meta things

It’s relatively easy for object-level effective interventions to get funds, because they can appeal to those even without the effective-altruist mindset. For “meta” organizations like 80,000 Hours and Giving What We Can, though, that’s not the case, so fundraising is harder for them. So donations from effective altruists are much less replaceable.

Well, these are the things I can think of off the top of my head. I’m guessing that because of replaceability concerns, at least one of these would be more effective than donating to GiveWell’s top charities for most people. Thoughts?


  1. A year is still a lot of lag time, but I think we’re getting better at this as the movement grows and the people who comprise it get more do-things-y. 

7 comments


Personal takeaway: this argument has caused me to slightly shift my assessment of whether research or ETG is a more altruistic career choice for me, in the direction of research.


One important reason (at least for some) to give to object-level causes like GiveWell’s top charities is signalling. All the other things you mention are a bit weirder to common sense than “giving money to the poor” (or “buying malaria nets for the poor”), and one gives ammunition to lazy cynicism if one’s energies are entirely meta-directed (“So, you all give your money to this EA org, which happens to use this money to pay your salary? Riiight”).

Obviously, this concern is much less significant if one isn’t very public-facing (advocacy, leadership, etc.).


While I agree that “if you get a group of impressive people to run a transparent charity with an evidence-based intervention, you will probably get funding”, I don’t see how it follows that this makes funding GiveWell-recommended charities less appealing. Even if AMF is decently funded (as it is), that doesn’t diminish the power of your marginal dollar any. The same applies to GiveDirectly and SCI. On the flip side, regarding working for EA orgs, I would imagine that if a charity is well run, each additional worker will inherently be working on something less useful as it gains more staff, assuming they prioritize correctly. That’s not to say that working at, say, GiveWell wouldn’t be extremely high impact; it’s just something to consider.


Lucas, it makes donating to GiveWell’s recommended charities less appealing for a couple of reasons:

  1. It makes doing other things, like starting a new organization, more appealing. For instance, if you start a strong new direct intervention, you could likely find non-EA funding for it, which gives your resources significant leverage.

  2. GiveWell saturated VillageReach with funding. AMF was holding substantial cash in reserve at the end of 2012 because its attempts to spend the money quickly fell through. So the net effect of your donation is to speed up saturating its current charity. Given the scarcity of good opportunities, it’s not clear that this speedup is very helpful.

Even if AMF is decently funded (as it is), that doesn’t diminish the power of your marginal dollar any. … On the flip side on working for EA orgs, I would imagine that if it is a well run charity each additional worker will inherently be working on something less useful as it gains more staff assuming they prioritize correctly.

So you don’t expect AMF to be subject to diminishing returns, but you do expect this for EA orgs? Can you explain your reasoning here?


As far as I’m aware (and I could be wrong), AMF simply wasn’t ready to deal with the surge of funding, which is an abnormal situation, and it now has room for many millions of dollars. The point that donating may be less useful than doing start-ups, since funding seems easy to get, may well be true; my point was just that it doesn’t devalue GW charities.

As for why to expect diminishing returns at EA orgs: their first few staff will be doing the most important things they can, and any additional staff will likely be doing something less important. A super-simplified version (maybe too simple; I’m tired and can’t come up with something better) would be researchers. One researcher would focus on the highest-priority topic. Every additional researcher would focus on something farther down the priority line. Essentially, the more people you get, the more likely they will be put on less core/important stuff. This is not at all a knock-down argument; I was just pointing out that well-run orgs having funding doesn’t necessarily diminish their returns, but we should consider diminishing returns at some non-direct-impact charities.


Lucas, we’ll probably have more information on AMF in six months or so when the 2013 figures come out. However, if GiveWell’s money moved continues to grow apace (approximately doubling every year), they’ll move $10m to AMF this year and $20m next. Given GiveWell’s current room-for-more-funding analysis, which puts the bound at $35-67m, it looks like saturation within the next couple of years is still a worry.
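To make that arithmetic explicit (a sketch using the figures quoted in this comment; the yearly doubling is an extrapolation of the trend, not a forecast):

```python
# Money moved to AMF, assuming the ~doubling trend quoted above:
# $10m this year, $20m next, and so on.
grants = [10.0 * 2 ** n for n in range(4)]  # yearly grants in $m

cumulative, total = [], 0.0
for g in grants:
    total += g
    cumulative.append(total)

print(cumulative)  # [10.0, 30.0, 70.0, 150.0]
# Against the $35-67m room-for-more-funding range, cumulative giving
# passes the lower bound in year 3 and the upper bound the same year.
```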

Also, maybe I wasn’t clear in my previous comment, but I understand why EA orgs might be subject to diminishing marginal returns to people. (However, note that the importance of the work they’re doing isn’t the only factor; the quality of the people is probably more important.) Instead, I was asking why you don’t apply the exact same argument to GiveWell’s charities with money—if anything it seems more likely to apply there, since unlike people, all money is the same:

Even if AMF is decently funded (as it is), that doesn’t diminish the power of your marginal dollar any.

Anyway, this is a tangent, since the original post was about replaceability; I think the effect of replaceability is probably much larger than that of diminishing returns, and in addition replaceability varies much more across different options.


“…it looks like saturation in the next couple years is still a worry”

I don’t think “worry” is the correct word - every low-hanging fruit dispensed with should be hailed as a success. EA for me is all about driving up the cost of reducing suffering (i.e. eliminating easy-to-alleviate suffering). It should be viewed as a project that it’s actually possible to complete!*

I agree with the sentiment of the post though: EtG should perhaps be viewed more as a fallback position for EAs who aren’t up for doing something more meta or high risk.

*to as great an extent as is practically possible (i.e. eliminating large-scale poverty)
