Common objections to earning to give

June 2013

Recently the Washington Post’s Wonkblog had some in-depth coverage of a bunch of people who are earning to give.

It’s really exciting to see effective altruism’s ideas hitting the mainstream. This article in particular also has a very interesting new feature: a large comments section full of people who don’t already agree with us.

The level of discourse there is pretty low, but if we want the ideas behind effective altruism to spread thoroughly, they have to be able to survive in that kind of environment. So I decided it might be instructive to hold my nose, read the comments section and try to draw lessons about how we can refine our ideas for a wider audience.

Reading through the negative responses, there are roughly five principal components of the objections. Some combination of these components accounted for all the attacks against earning to give that I saw (I read through the whole thread up to 12 PM on June 1st, although my eyes may have glazed over a couple of times). So here are the five main strains:

You’re supporting the system!

One common response is that it’s still not OK to participate in the Wall Street “system”.

If you make commodities more expensive through trading and excessive speculation, than donate your a portion of your profits to starving people, it’s a bit disingenuous, because you are part of the reason that the price of their food is beyond their reach. –mhdrgdajveeah

Nonsense. Wall Street is ruining this country. Dirty, rotten, greedy scum. All of them. So what if they give away some of their money. How about the hard working people who have been ruined by Wall Street? –dubhlaoich

The problem you run into –

If you are making the money by financing a company that is doing more harm than you can possibly do good from your profit.

For instance a company may want to exploit emerging markets for tobacco products. –Mark in Colorado

These kinds of retorts are why it’s very important to mention replaceability: that if you don’t work in an unethical industry, someone else (who’s probably more unethical)1 will take your place. Once people hear the replaceability argument, it often seems so obvious in retrospect that they don’t mention it again except as a counter-argument when they realize that other people haven’t figured it out.

But in fact, it seems replaceability (and more generally, thinking about marginal, rather than total, effects) is highly non-obvious to most people. So I think we need to talk about replaceability a lot more, especially in mainstream sources.

What about big problem x?

This type of comment complains that those who are earning to give should be working on some piece of a bigger problem. In this case the main target is “fixing the system”:

I suppose that’s noble to play against the greedy at their own game, but I’d rather see the ‘game’ simply not exist. Or how about simply assessing a fair tax rate on trades, to support our government’s efforts to eradicate malaria, among many other things? –naomi94112

and, just what ecocide is being supported or expanded by the companies whose stocks are being blindly manipulated for profit? Wouldn’t it be better to attempt to truly change the world by changing US policy toward the environment, for example? –getsmart4

We need the smart, dedicated people who want to do the right thing to put their energies and talents directly into reforming a system that is dragging people down, or at least not enabling it. –jgunne

We need to emphasize more that we care about these problems, we just can’t do much about them. Take financial reform. The financial industry currently records profits of about $120 billion a year (source). If proposed reforms halve those profits, the industry should be willing to spend up to $60 billion a year on lobbying to prevent such reform, which means we’d need to muster the same kind of resources to pass such reform. That’s about the total estimated cost of curing malaria by 2020 (source), every year. So just because something is a big problem doesn’t mean it’s the best problem to work on.
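The arithmetic behind that comparison can be sketched as a simple rational-actor model. All the figures below are the post’s own rough estimates, not verified data, and (as a commenter points out below) the model itself is a big simplification:

```python
# Back-of-the-envelope model: an industry should be willing to spend up to
# its foregone profit to block a reform. All figures are rough estimates
# from the post, in billions of dollars per year.
annual_profits_bn = 120       # approximate finance-industry profits, $bn/yr
profit_loss_fraction = 0.5    # proposed reforms halve those profits
malaria_cure_cost_bn = 60     # rough total cost of curing malaria by 2020, $bn

# Maximum "rational" lobbying spend to prevent the reform
max_lobbying_spend_bn = annual_profits_bn * profit_loss_fraction
print(max_lobbying_spend_bn)  # 60.0

# Reformers would need resources comparable to the entire estimated
# malaria-eradication budget -- every single year.
print(max_lobbying_spend_bn >= malaria_cure_cost_bn)  # True
```

This is only meant to illustrate the scale of the numbers involved, not to be a serious model of political economy.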

You should attack the causes, not the symptoms.

Another common argument is that solving disease is curing a symptom, not a cause.

I’ve nothing against charity, but the belief that this sort of charity will “save the world” is naive and ignores the structural conditions that perpetuate poverty. So long as property and wealth continues to be concentrated in a small number of hands – Wall Street being emblematic of this – there will continue to be those deprived of the basic necessities of life. –GeorgeStevens

I do find this talk about saving lives curiously bloodless. It’s a little like pro-lifers forcing women to give birth to their fetuses, only for them (the pro-lifers) to walk away from the resulting infant and call back to them, “You’re on your own now!” These do-gooders are just kicking the can a little further down the road. –barnesgene

I don’t know too much about development economics, but I think there’s a fairly clear response here: it’s not a cause/symptom relationship, it’s a vicious cycle. Poor countries have no health infrastructure, so they have high disease burden, so they’re economically disadvantaged, so they can’t build health infrastructure. Attacking any portion of the loop is helpful, and disease happens to be the easiest to solve from outside.

But we can’t ever know how much good we’re doing!

The trend that is most annoying to me personally is the insistence that since we don’t know things for sure, we shouldn’t do anything.

Is it really ethical (or smart) to fund mosquito nets in lieu of a seeing eye dog on the premise that thousands of nets can be bought for one dog? Can they really know per-emptively [sic] the value will come from the distribution of those nets versus the placement of one service dog? And if they do know, what are the criteria that they use to justify their decision? Is it reasonable to assume that mosquito netting will make a contribution to African society in way that one seeing eye dog cannot make too somebody in this country? –cleverrevolution

A lot of the comments stem from the ultimate problem with the sort of Utilitarian arguments Singer presents. Because of the incredible complexity of the world and the actions of our consequences, attempts to treat the maximization of minimal global goods often end in absurdity or obscurity. Which action, given the infinitude of unforeseeable consequences of any action, will maximize goods? Unfortunately, much more difficult to say than just ‘save this girl!’ Then, how far do you go? The best objection I’ve seen in this vane is Parfit’s repugnant conclusion. For those interested: http://plato.stanford.edu/entries/repugnant-conclu… –Calicles

It’s no accident that the only objector who actually engaged with philosophy fell under this category: the counter-argument is a bit more nuanced and difficult to explain. Unfortunately, I have no idea how to boil it down into a form that can survive and replicate in a comment thread–though I think it’s pretty important that we figure something out.

Anyway, the basic form of the counter-argument is that, sure, we don’t know what the end result will be, but we can make a really good guess. Good enough that using such estimates gets way, way better results than doing what feels intuitively right, or whatever other rule you’re picking actions by. I’ve been reading a lot of Slate Star Codex recently, so I’ll refer interested readers to Scott Alexander for a more detailed analysis of this problem.

But deontology/virtue ethics!

Possibly the most important thing we can learn from these comments is that lots of people just aren’t consequentialists–not even because of philosophical objections like Calicles, but because the default morality that most people grow up with isn’t consequentialism.

I don’t know anything about hedge funds but isn’t that generally unethical?? The ends do not justify the means. –GerriM

Oh, please. Saving the world is Jesus’ job. Working on Wall Street makes you a financial industry employee, not a savior of anything. If you want to do the world some good, then act like a man. Remember manhood? Remember value? Remember not speculating and gambling, but building and constructing? Stop ripping people off with fake math. Call it like it is.

It’s not the money, but the merit, that gives a man the glow of sunshine.

Figure it out, kid. –agx48

Trigg’s mistake is dicounting the human element that makes his money able to do something. Without the human element the money doesn’t make a difference. Such a petulant child. –Frazil

I think this might be the most important objection to get better at answering–not only to get people to accept earning to give, but to raise the sanity waterline more generally. I hope to touch more on what exactly is going on with these commenters in a future post. But the upshot is that making consequentialism more viscerally appealing to people is a really important thing to do.

Conclusion

This is only intended to be one step in the process of responding to mainstream views of earning to give. I don’t have very many thoughts yet about how to make effective-altruism memes more resilient to objections like these. I mostly just want to do the legwork of categorizing the most common objections, so that more skilled rhetoricians than I can figure out how to spread memes that answer them.


  1. UPDATE: I no longer think we should repeat this part of the point to a broad audience, at least until we have more evidence for it. See the discussion with Adam Shriver in the comments for details. 


Nick Ryder

This was a really cool breakdown Ben. Thanks for compiling!


William MacAskill

Ben, I love this blog! Thanks for doing some of my work for me (and better than I would have done)! On your last point: I don’t think that EtG is inconsistent with either deontology or virtue ethics - only superficially so (because people don’t factor in the replaceability effect). I have a forthcoming paper about this, available on my academia.edu page. Trouble is that the arguments take much longer than can be expressed in a reasonable comment-reply.


Ben

Will, I’m sure it’s not incompatible with most kinds of deontology/virtue ethics–my worry is specifically about the kind of “folk deontology” that many of the commenters seem to subscribe to, where you just say “the end doesn’t justify the means!!” to anything you don’t like, and leave it at that.


Jeff Kaufman

I’m not sure defending replaceability in comment threads is that helpful: http://www.jefftk.com/news/2013-06-03


Adam Shriver

There needs to be more care in how the replaceability argument is stated. If the argument is just, “if I don’t take the job, someone else will,” fine. But it starts to sound implausible when it’s put in the terms (as it often is) that it’s actually better for an ethical person to take the job than “someone who’s probably more unethical.” This seems like nothing more than speculation, which stands out in a movement that prides itself on respect for data-driven approaches. Is there any evidence of effective altruists working for unethical companies and causing the companies to behave more ethically? What justifies the confidence that the ethical employee will make the company more ethical rather than the opposite occurring?


Ben

@Adam, the parenthetical note about an altruist being more ethical was intended to be just that–a parenthetical note, not the main claim, which as you say is more rigorously justified. (Though if you’re campaigning for data-driven approaches, you might do well to take all of replaceability with a grain of salt, as I don’t think anyone has tried to find out the actual elasticity of demand for workers in finance firms; I have a hunch that depending on which firm you work for, you may not actually be replacing anyone.)

Personally, however, I think altruists would do well to let go of some of our attachment to that which is precisely and rigorously quantifiable. Measurability bias exists and is something we should guard against, and it seems likely to me that many of the best things we can accomplish are also the hardest to quantify: for instance, compounding effects on productivity from curing malaria or schistosomiasis, or benefits from reducing existential risk.


Adam Shriver

@Ben, as I said, the claim that it’s “better” for ethical people to take the jobs is widespread; I wasn’t specifically focusing on your parenthetical, although it does seem to be an example of what I meant. Another example, from the 80,000 hours FAQ: “The cause of many of the problems with banking today is probably that bankers are often not the sort of people who care about the impact of their decisions on others. Arguably, the best way to fix the problem is for people who do care to get into the industry.”

Again, this strikes me as nothing more than wishful speculation.

David Brooks recently wrote an article critical of the movement where he said: “Gradually, you become a different person. If there is a large gap between your daily conduct and your core commitment, you will become more like your daily activities and less attached to your original commitment. You will become more hedge fund, less malaria.”

Of course, that might be true, or it might not. We have no way of knowing. More importantly, it’s a bad argument because he’s just throwing out an intuition and not providing any evidence.

But the claim “Nice person makes hedge fund nicer” is the mirror image of the claim “Hedge fund makes nice person more hedge fundy.” The problem isn’t just that the claim isn’t supported by quantifiable data; it’s that there’s no evidence marshalled in support of the claim at all. Which means there’s no more reason to believe it than to believe the claim from Brooks.


Ben

@Adam: Good point–the amount of direct evidence for the two is nearly the same. They’re not exactly mirrors for a number of reasons that mean I still have a better prior probability, but we should get more evidence before repeating this claim to a broad audience.


Ben

(Addendum: I’ve updated the post to strike out the parenthetical, and made a note as to why.)


Adam Shriver

Thanks Ben. And though I’ve been effectively trained by the academy to focus first on the parenthetical with which I disagreed, I should say that I thought your responses to the various common objections were excellent. Cheers.


Benjamin Todd

@Adam

I totally agree it would be nice to have more data on the point about whether it’s realistic to expect people to improve finance from inside. I also think the position often advanced on 80k is more reasonable than you make out.

Either one or the other is true:

  1. There are opportunities to improve finance from within that don’t benefit the people inside finance, and thus are only taken by people with more ethical motivations than the people already in finance.

  2. There are no such opportunities.

If there are no opportunities (case 2), then putting EAs into finance has no effect (except insofar as they increase the overall scale of finance - as Ben mentions, it’s not clear how much people add at the margin and it might be quite a bit).

If these opportunities do exist (case 1), then, assuming EAs are more ethically motivated than the people already in finance, we’d expect EAs to improve finance from within.

I agree we don’t have a study showing that EAs are on average more ethically motivated than the people already in finance, but it seems very likely to me.

It also seems likely to me that at least some opportunities exist. That’s because I don’t think market incentives always track what’s socially best (due to the existence of market failures), so there will be situations when social and selfish motivations diverge, and socially motivated people could take advantage of these situations to do good.

That’s the theoretical argument, but there also seem to be real examples in the history of finance: times when people have defrauded others (e.g. Madoff) in order to make more money for themselves, times when people have created financial instruments that make them a lot of money for several years before blowing up, etc.


John Maxwell IV

Here’s a cynical perspective on the Washington Post thread. If you read about the evolution of altruism, it seems that the role of punishment was important… specifically, for an altruistic group to prosper, it has to punish defectors. (I remember reading research suggesting that people will also punish people who are significantly more altruistic than the group average, which also seems like an important fact, but I can’t track that research down.) Anyway, maybe the people commenting on the article were feeling threatened that the standard for what constituted a good person was being raised. A lot of their comments look like rationalizations to me; for instance, let’s say that the Post was to do an article on AMF’s efforts only, without mentioning the effective altruists involved. Could you imagine commenters responding that AMF was “just kicking the can down the road”? (It really isn’t, by the way, see here.)

See also the cynical comments in this HN thread about Andreessen Horowitz giving away their earnings: https://news.ycombinator.com/item?id=3891043

So if people feel threatened by effective altruism, how can we solve that problem? Perhaps it’d be better to frame it as an interesting purpose you can give yourself in life rather than some kind of strict moral obligation. That’s how I see it myself: http://lesswrong.com/lw/hel/keeping_choices_donation_neutral/8you Having fun revitalizes me to do more EA work, and EA work gives me a goal to work towards and a purpose in life. I don’t think there has to be a huge self-sacrifice aspect.

Getting yourself (and other people) to donate money is a bit of an emotional engineering problem.


Craig

Nobody knows for sure which path is more effective, there are too many variables.

It all comes down to opinion.

My opinion is that avoiding a corrupt system and working directly, rather than infiltrating that system, is a better path and more personally fulfilling.

You cite replaceability, which I personally believe to be wrong. If you believe it to be moral and right, that is fine.

I think that logic is dangerous. “Might as well use this gun, because if I don’t use the bullets to kill someone evil, someone else might use them to kill someone who is good.”

Well then, I am just paving the way for someone to create new bullets.

You don’t have to agree with me, but I will continue to believe the Confucius logic of teaching a man to fish.

The original article stated that high-earners will have equal or more impact than a teacher. I completely disagree. If half or even a quarter of those students’ attitudes are shifted towards a more service-minded ideal, the earned income of that teacher will equal much more than the Wall Street trader, in my opinion of course.

Thanks for this discussion.


Carl Shulman

“If proposed reforms halve those profits, the industry should be willing to spend up to $60 billion a year on lobbying to prevent such reform, which means we’d need to muster the same kind of resources to pass such reform. “

This is a very questionable model of political activity. Victory in political struggles is not very well predicted by funding levels (changes in funding levels predict better, but also reflect non-financial changes), and certainly not in such a linear way. And no industry has ever been able to muster such a large share of profits for lobbying (how could they even spend that much money, let alone avoid a decisive backlash?).