The is-ought problem

Thanks to Gautam Mohan for inspiring this post and providing massively helpful edits.

A problem I’ve struggled with is the feeling that I don’t have a firm foundation for my moral beliefs. In fact, I’m not sure it’s possible to have one, even in principle. Basically, this is because of the is-ought problem: it appears to be impossible to make a logical transition from descriptive claims (about facts and how the world works) to normative claims (about what actions one should take). Here’s an analogy that I think sheds some light on the is-ought problem, if you’ll forgive the math-nerd mode.

Any system of abstract thought, such as one we use to reason about facts or morals, is based on rules of inference. These rules are steps of an argument that are always legal to make if the previous steps hold. For example, in deductive logic it is a rule of inference that if you know $P$ is true, and you know that $P$ implies $Q$, then you know that $Q$ is true. This rule of inference is called modus ponens (because the only thing more pretentious than breaking out the formal logic is breaking out the Latin).
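
In the standard notation from logic textbooks (nothing specific to this post; it’s just the conventional way to draw an inference rule, with premises above the bar and the conclusion below), modus ponens looks like this:

$$\frac{P \qquad P \rightarrow Q}{Q} \quad \text{(modus ponens)}$$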

Rules of inference only tell you how to get from one valid statement to another, though. They can get you from $P$ to $Q$, but they can’t give you a valid $P$ to start with. So a system of abstract thought also has to have axioms: things fundamentally asserted to be true. For instance, a common axiom in natural-number arithmetic is that $x=x$ for any natural number $x$. The beauty of arithmetic, and of descriptive logic, is that we can make astoundingly complex inferences from axioms and rules of inference that seem as common-sense and trivial as this.
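
To make “complex inferences from trivial axioms” concrete, here’s a minimal sketch in the usual Peano-arithmetic style (the axioms $x + 0 = x$ and $x + S(y) = S(x + y)$ are my assumed starting point, not something from the post). Writing $1 = S(0)$ and $2 = S(1)$, even $1 + 1 = 2$ falls out of a chain of axiom applications:

$$1 + 1 = 1 + S(0) = S(1 + 0) = S(1) = 2.$$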

A major question is: why do our axioms for math and descriptive logic seem so natural? Why does everyone (largely) agree on one particular system for making inferences about how the world works? After all, you could perfectly well imagine someone rejecting a rule like modus ponens; for a great example, see Lewis Carroll’s “What the Tortoise Said to Achilles”. Alternatively, someone might deny certain axioms of descriptive reasoning, like Occam’s Razor—the idea that when choosing between hypotheses, one should favor the simplest.¹ ² But while such strange beliefs are possible, in practice there seems to be a broad consensus on a system for factual reasoning.

I think this is partly explained by evolution. For whatever reason, the world we live in operates by one particular set of axioms and rules of inference. People (more generally, organisms) who “believe” in this system—i.e., whose brains happen to reason according to the same axioms and rules—are better at modeling the world, so they’re better at predicting and responding to novel situations. As a result, they survive longer, have more offspring, and propagate their particular reasoning systems in the gene pool, rather than “bad” systems that don’t conform to how the world works.³

When we’re thinking about morals and not facts, though, things change a bit. We’re still able to keep our rules of inference: we don’t deny that one’s moral system should be “logical”. But logic’s rules of inference can’t turn an “is” into an “ought”. Moral reasoning requires an independent set of “ought” axioms, and it’s here that the trouble begins. Factual reasoning evolved to help us understand something that, at its most basic level, doesn’t change—the laws governing how the world works. But evolutionarily, moral reasoning exists largely to help individual organisms navigate a complex social landscape in a way beneficial to the species as a whole. This landscape is constantly shifting and much, much harder to understand. As a result, we haven’t converged on a single evolutionarily favored moral system (if such an optimal system even exists) the same way we have for descriptive reasoning.

This is why we see such a massive profusion of moral systems, and why normative debates like consequentialism vs. deontology often seem fundamentally irreconcilable in a way that factual debates don’t. In factual debates, our species largely agrees on the rules, but in moral debates we have no such luck.


  1. The mathematically rigorous cousin of Occam’s Razor is Solomonoff induction, which asserts that, in the absence of evidence, the “shorter” a model’s description is, the more likely it is to be true. This is handwavy because I don’t want to footnote a footnote; I suggest you jump down the Wiki rabbit-hole if you want to learn more (a rough sketch of the prior appears just after these notes). ↩︎

  2. Interestingly, Hume—who also articulated the is-ought problem—came to a similar conclusion: that “inductive logic” based on Occam’s Razor couldn’t be justified from other, “deductive” axioms. Hume took this to mean that inductive reasoning could never be justified, but induction doesn’t seem any more or less unjustified than the other axioms we reason by. ↩︎

  3. Organisms who didn’t believe Occam’s Razor, for instance, might not be able to infer that jumping off a tall enough cliff would kill them: nothing in their observations would distinguish the hypothesis “gravity always points down” from “gravity always points down except when you jump off a sufficiently large cliff”. ↩︎
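
For the curious, here is the rough sketch promised in the first footnote. A common (and still simplified) statement of the Solomonoff prior, where $\ell(h)$ denotes the length in bits of the shortest program describing hypothesis $h$, is

$$P(h) \propto 2^{-\ell(h)},$$

so each extra bit of description length halves a hypothesis’s prior probability. That’s why the cliff-exception hypothesis in the third footnote starts out strictly less probable: it takes more bits to write down.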

Comments

Anonymous

I’d venture morality is more related to psychology than it is to math.

Anonymous

i.e. blurry, and ever in need of reevaluation.

Luke Muehlhauser

See Pluralistic Moral Reductionism and By Which It May Be Judged.

Anders

This is an important topic. I am not sure I share your pessimism about axiomatic systems for ethics. I think the vast majority of mankind have internally inconsistent ethical beliefs, and that we could make a lot of progress by convincing people to adopt a formalist / hypothetico-deductive approach to ethics. One can then choose between internally consistent ethical ought-axioms based on which model best captures our moral intuition. Like in any formalist philosophy, we cannot prove the axioms, but if nothing else, this approach ensures that ethics moves beyond ad-hoc arguments about the meaning of words.

Ben

Anders, I agree that moral axioms are useful to enforce the internal consistency of one’s intuitions. But I don’t think it’s plausible that we could use those intuitions to get everyone to agree on a single set of axioms, because people’s intuitions vary. Sure, some of the consequentialist/deontologist debate is empty semantics. But some of it is because there are e.g. people who would genuinely refuse even to pull the lever in the trolley problem.

Anonymous

“…whose brains happen to reason according to the same axioms and rules—are better at modeling the world, so they’re better at predicting and responding to novel situations” ← This is very well put.
