Moral arguments

Thanks again to Gautam Mohan for inspiration and edits!

A while ago I wrote about how different moral systems are often logically irreconcilable, and why we disagree so much more about morals than facts:

When we’re thinking about morals and not facts, though, things change a bit. We’re still able to keep our rules of inference: we don’t deny that one’s moral system should be “logical”. But logic’s rules of inference can’t turn an “is” into an “ought”. Moral reasoning requires an independent set of “ought” axioms, and it’s here that the trouble begins. Factual reasoning evolved to help us understand something that, at its most basic level, doesn’t change: the laws governing how the world works. But evolutionarily, moral reasoning exists largely to help individual organisms navigate a complex social landscape in a way beneficial to the species as a whole. This landscape is constantly shifting and much, much harder to understand. As a result, we haven’t converged on a single evolutionarily favored moral system (if such an optimal system even exists) the same way we have for descriptive reasoning.

This is why we see such a massive profusion of moral systems, and why normative debates like consequentialism vs. deontology often seem fundamentally irreconcilable in a way that factual debates don’t. In factual debates, our species largely agrees on the rules, but in moral debates we have no such luck.

A pessimist might conclude from this logical irreconcilability that it’s pointless ever to debate ethics. If someone starts off with different moral axioms from you, how can you ever change their mind? You have no common ground to work with!

Fortunately or unfortunately, one important feature of human brains is that they’re not actually pure abstract entities of reason. Your typical person will accept many types of argument that aren’t strictly logical, such as the Argument from Aesthetics (“someone wrote a good book in which X, therefore X”), Argument from Warm Fuzzies (“X makes my hindbrain feel good about itself, therefore X”), and Argument from General Awesomeness (“Wouldn’t it be cool if X? Therefore, X!”).

For instance, I had a friend in high school who took a class on Russian literature with me. By the end of the semester, he was a firm believer in Tolstoy’s anarchist-pacifist philosophy. This wasn’t because Tolstoy’s logical defense of his beliefs was particularly strong; it was because his aesthetic treatment of them was brilliant and incredibly appealing. Not for nothing is he a giant of Russian literature; I would agree with Tolstoy too if it would make my life as beautiful as his books.

In fact, I think these non-logical techniques account for most, if not all, successful moral arguments. My immediate reflex is to be worried by this. After all, in descriptive reasoning, such appeals are often regarded as a “dark art” for encouraging bias and promoting sophistry over truth-seeking.

On reflection, though, I don’t think it’s bad to use so-called “dark arts” for normative debates. There, extra-logical appeals are really all we have. The “truth” is fuzzier (if it exists at all), and what counts as a “bias” is less clear. To continue the math analogy from before, it’s like convincing someone to switch to a different set of axioms. Within a single system of reasoning, abandoning your rules of inference will lead you into contradiction. But when translating between systems, those rules don’t even have any meaning, so there’s no point in trying to use them.

So if you want people to listen to your moral claims, don’t browbeat them with logic when they live in a different set of axioms, even if you normally value intellectual rigor and avoid rhetoric. Instead, just make your ethics seem more viscerally appealing. That’s how you actually get people to listen.

Comments

Alexander Gabriel

Agree.
