You shouldn’t call everything that doesn’t optimize utility a failure. There are degrees of failure, and it’s important to calibrate how harshly you judge something to how bad it actually is.
If you spend all of your judgment on telling people (or yourself) that it’s a failure to donate $90 when you could give $100, then you don’t have any judgment left to spend on telling them (or you) that $90 is still better than $0. But then anyone who can’t manage the full amount has no incentive¹ to donate anything at all. I predict that, in general, this type of situation would cause people with a binary conception of failure/non-optimality vs. success/optimality to burn out at a much higher rate than those with properly calibrated reward gradients, who focus on making their good-faith best effort rather than on doing the one optimal thing.
Edited from a post in the Effective Altruism Facebook group.
¹ Well, there’s still incentive to donate for the parts of the mind that respond to altruistic utilitarian arguments—just not the ones that only care about social reward/punishment. But it’s pretty important to have those parts on board too.