Note: this should probably not be the first thing you read about effective altruism. It’ll give you a pretty biased impression! If this would be your first exposure, try something from the reading list I compiled instead.
There’s a joke in the effective altruist movement that it contains “all kinds of people—mathematicians, economists, philosophers, and computer scientists.”
We’re entrepreneurs and economists. CEOs and scientists. Students and philanthropists.
I’ve met exactly one person over 50 who’s active in the EA community. (It’s Peter Singer.)
I know maybe two political conservatives. Three non-philosophy liberal arts majors. One artist. One lawyer. Two journalists. One person who’s active in politics. One teacher. Maybe five people total in the 25 largest US occupations.
And I know I’m missing out because of it. Those three liberal arts majors are some of my favorite people to talk to about effective altruism—because they’re not predictable! I already know the utilitarian-computer-programmer answer to most conversations about effective altruism (being one myself). Whereas my friends who have studied history or English or social studies have read things I haven’t and thought in ways I haven’t. Absorbing their view of the world lets me think about things in ways that Utilitarian Computer Programmer #1,001 just can’t.
The numbers I listed above are pretty abysmal, but you’d think it wouldn’t be hard to improve them. For instance, about half of GiveWell’s staff have liberal arts degrees (though most of them don’t engage that much with the broader EA community). The top 25 occupations make up a third of the workforce—50 million out of 150 million. So what gives? Why are we doing so poorly at piquing the interest of anyone different from us?
Though I may be able to understand it, there’s no way I can go to a town hall meeting talking about “Utilitarians”. Somebody would thunder punch me in the throat mid-sentence.
I’m sorry to say that I don’t feel welcome at EA Global. And sadly, I didn’t feel very welcome at the EA conference last year either.
This actual question from the application to attend is one of many things that have struck me as elitist, exclusive, and unfamiliar:
“Describe some non-technical system you’ve hacked and how you hacked it.”
Nobody is trying to make people feel unwelcome. But that’s not enough. Even if we don’t consciously exclude people, we fall into patterns of in-group jargon, we steer the conversation towards our favorite subjects, we let the newbies sit quietly in a corner at meetups. We talk about how low-socioeconomic-status people “aren’t valuable recruits,” hold barbecue-themed EA meetups, and feature only tech CEOs and Peter Singer in the advertisement for our annual mega-meetup.
I emphatically don’t mean that the people who did these things were bad people with bad ideas. In fact, the problem is precisely that they weren’t. This is the kind of thing that happens by default, unless we think carefully about how we’ll be interpreted—not just by people who think like us, but by people who think very differently.
As you can tell, I’m not a philosophy student (one of the reasons I rarely post in this space).
Speaking from my own introduction to EA, at first touch the feel of “the EA community” was very exclusive and hostile, because even though I was already essentially a consequentialist and already wanted to have a high positive impact in the world, I hadn’t thought through what most of the “EAs” I was meeting had spent months or years thinking about already, and even if something made sense as soon as I heard it and considered it, I felt negatively judged for not having already thought of it just because I hadn’t happened to have previously had any prompting to consider that specific thing from any of the conditions of my life up until then.
—EA Forum poster
Okay, so what can we actually do? Here are some non-exhaustive suggestions.
Get interested in other people’s experiences. Don’t try to convince them to “become an EAer”; try to get them to convince you of something. Figure out how their perspective can add to effective altruism, not what effective altruism should subtract from them.
Experiment with thinking about different parts of effective altruism. Lots of people love getting into the philosophical weeds or geeking out about QALY weightings, but there are vast expanses of conversational territory that we leave relatively unexplored—and that would be far more interesting to outsiders.
Make special outreach efforts. People who are the only representative of some group tend to feel uncomfortable and have bad experiences, even if they’re treated the same as anyone else. They’re also less likely to be reached without special effort, since most people’s friends are similar to them. That means it helps to put in extra effort just to counter the distorting effect of everyone else being a utilitarian computer programmer.
For a concrete example, suppose you’re running a meetup. Extend special invitations to people you think could add a new perspective to a discussion about effective altruism. Ask other meetup regulars to do the same. Then ask the specially-invited people to invite their friends the next time. It particularly helps if one of the people who would otherwise be perceived as an outsider is hosting the event.
Once people are there, keep tabs on the conversation to make sure it’s not drifting off into an attractor that bores people. (I love getting into the weeds of QALY weighting as much as anyone, but I’d rather not impose this view on other people!) If there are new folks who don’t seem engaged in the large group, it might help to break into smaller groups, as these can be less intimidating.
If you’re presenting something to a group, try to look at it as an outsider would (or even better, several outsiders). Would they feel out of place? Try not to use too much in-group jargon or present the group as totally homogeneous. You don’t need to lie or distort things—but if you find yourself caught between stretching the truth and making the event feel exclusionary, think of ways to fix that next time.
Editor’s note: The comment thread stopped being useful. Further comments on this post will be deleted with extreme prejudice.