Some folks think that effective altruist organizations “spend already too long evaluating and not enough time growing.” Some people read my post on being welcoming and worried that being too welcoming to people would dilute the quality of the EA movement, or require us to compromise on our principles. These two statements seem to be somewhat in tension. How much should EAers prioritize explicit attempts at movement growth?
In my opinion, not much, especially if it comes at the expense of evaluation.
For one thing, we need to practice what we preach. Part of effective altruism’s main contribution to the discourse about how to improve the world is that we should be more thoughtful, more careful, and smarter about how we do our altruism. The drive towards transparency, self-evaluation, thoughtfulness, and caution is an integral part of this message.
In that sense, transparency and self-evaluation may not even come at the expense of growth: they’re part of what makes EA compelling in the first place. GiveWell appealed to their niche and grew fast because of, not despite, their incredible depth and thoroughness. They’ve barely devoted any time to growing, yet they’ve been incredibly successful, nearly doubling in money moved every year since their founding.
But what’s more, I think transparency and self-criticism are still incredibly important for first-order reasons as well. They’re an integral part of what makes me confident that effective altruism is going to do awesome things in the long term. It’s not just that we have good ideas for how to improve the world right now, it’s that we have the ability and desire to change our minds when we realize the stuff that we’re doing isn’t optimal—which should happen often if we’re doing things right!
It’s compelling to think that we don’t need to focus on transparency and self-evaluation as much any more, because we’ve already figured out a ton about how to improve the world, and nobody has mounted very credible or convincing attacks on the core EA arguments. It’s tempting to think that we should shift from exploration mode into exploitation mode—that we should stop concentrating on figuring out what to do and start to optimize for doing it.
But I think that’s ultimately a seductive error. In the grand scheme of things, we’re still very early in the lifespan of effective altruism. We have so much more room left to improve our ideas! It would be a terrible shame if we shifted prematurely into exploitation while still perched atop a local hill, with taller mountains visible in the distance.