Useful ideas in debate

Over the past couple of years I’ve picked up a number of tools that are really useful during debates and discussions. Unfortunately, I got a lot of them from reading LessWrong and other places on the Internet, and if you’re just looking for good truth-seeking debate techniques, that’s way too much material to slog through. And as far as I know there’s no concise summary of the best ones. So I thought I’d start one. Please chip in with your own in the comments!

Tabooing words

Lots of arguments are secretly about definitions. Broadly, this kind of argument is not useful.

For example, the question “If a tree falls in the forest, and nobody hears it, does it make a sound?” is entirely about words. From Eliezer Yudkowsky’s classic essay Taboo Your Words:

Albert: “A tree falling in a deserted forest makes a sound.”

Barry: “A tree falling in a deserted forest does not make a sound.”

Clearly, since one says “sound” and one says “not sound”, we must have a contradiction, right? But suppose that they both dereference their pointers before speaking:

Albert: “A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].”

Barry: “A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].”

Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If “acoustic vibrations” came into dispute, we would just play Taboo again and say “pressure waves in a material medium”; if necessary we would play Taboo again on the word “wave” and replace it with the wave equation. (Play Taboo on “auditory experience” and you get “That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes…”)
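(For reference, the wave equation being gestured at here is the standard one for a disturbance $u$ propagating at speed $c$: $$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$$ So even “wave” can be cashed out in terms that don’t mention waves.)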

The point is that there’s not a specific thing that sound “should” mean. Language is a tool; definitions don’t come from some Platonic realm; they exist because they’re useful and carve up reality into convenient categories. Unless someone is abusing definitions really egregiously, if you find yourself thinking too hard about meanings, it’s probably more interesting to abort and go back to whatever more productive stuff you were thinking about before. Tabooing the relevant word is a good way to do this.

Sneaking in connotations

Sneaking in connotations is what causes some definitional arguments.

My favorite use case is during debates about “what is art?” On the surface, such arguments seem like they’re about what the definition of art should be. But again, there’s nowhere for the “should” to come from except for what’s useful to us, in terms of delineating categories. And given how muddy the various definitions of art are, it doesn’t seem like any definition is going to actually help you communicate. So why argue?

But words like “art” are special, because anything that gets called “art” gets an instant halo of goodness and worthiness around it. People keep arguing about “whether video games can be art” because some people hear “video games” and think five-year-olds playing Mortal Kombat, and they don’t think that that deserves the same halo as something Rembrandt slaved over for months.

Other words subject to this kind of argument include “love”, “happiness”, “good”, “evil”, “right”, “wrong”, etc. Lots of them, really.

Steelmanning

Steelmanning is the opposite of strawmanning. It’s what you do to your “opponent” if you’re actually having a truth-seeking argument, rather than just pretending to.

Paul Graham wrote an essay, How to Disagree, proposing a hierarchy of seven levels of disagreement of increasing quality, ranging from “name-calling” (DH0) to “refuting the central point” (DH6). Steven of Black Belt Bayesian added an eighth level:

When an argument is made, you learn about that argument. But often you also learn about arguments that could have been made, but weren’t. Sometimes those arguments work where the original argument doesn’t.

If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them.

To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.

As the name suggests, to steelman an argument is to make the strongest possible version of that argument before you refute it.

The least convenient possible world

People who are given the doctor parable tend to try and weasel their way out of it. From the eponymous LessWrong post:

You are a doctor in a small rural hospital. You have ten patients, each of whom is dying for the lack of a separate organ…. A traveller walks into the hospital…. You realize that by killing this traveller and distributing his organs among your patients, you could save ten lives. Would this be moral or not?

I don’t want to discuss the answer to this problem today. I want to discuss the answer one of my friends gave, because I think it illuminates a very interesting kind of defense mechanism that rationalists need to be watching for. My friend said:

It wouldn’t be moral. After all, people often reject organs from random donors. The traveller would probably be a genetic mismatch for your patients, and the transplantees would have to spend the rest of their lives on immunosuppressants, only to die within a few years when the drugs failed.

…[H]e completely missed the point and lost a valuable opportunity to examine the nature of morality.

So I asked him, “In the least convenient possible world, the one where everyone was genetically compatible with everyone else and this objection was invalid, what would you do?”

Fighting the hypothetical is a natural reaction when you’re afraid you might have to make a judgment that won’t please everyone. But usually, refusing to judge is siding with one side by default; and delaying the decision by avoiding thinking about it doesn’t help. And anyway, the point of asking this hypothetical question isn’t to figure out what to do in real life. It’s to probe the corner cases of ethics in order to figure out how one’s morals actually work. So fighting the hypothetical isn’t just useless, but counterproductive and logically rude. To forestall it, postscript your hypotheticals with “in the least convenient possible world.”

Breaking intuition

Intuition-breaking thought experiments are more or less the flip side of fighting the hypothetical.

Simplified thought experiments are one of the primary tools of moral debate. The idea is that if your ethics gives a repugnant answer in certain hypothetical scenarios, it’s not capturing some important aspect of morality and therefore needs refinement. But sometimes, if the structure of the experiment breaks our intuitions, a repugnant-seeming answer shouldn’t be held against it.

For instance, many people use the doctor question above as a counter-argument against utilitarianism: the utilitarian answer (in the least convenient possible world, you morally should kill the healthy traveller to save the ten transplantees) is so counterintuitive as to be repugnant, so utilitarianism is unacceptable.

But our moral intuitions aren’t structured to deal with simple, clear-cut scenarios. Selective pressures on these intuitions act in the real world, where nothing is simple and clear-cut, and surprising or unusual things are likely to go wrong. So our intuitions tend to add uncertainty even when logically we’ve been told that we’re completely sure in the hypothetical, and favor solutions that hold up well under uncertainty.

In the least convenient possible world of the doctor parable, you’re 100% sure that harvesting the healthy traveller’s organs will save all ten patients with no ill effects. But a mere human will almost never have such certainty, and so our intuitions shouldn’t be expected to give the “right” answer in that situation.

Reason to suspect vs. reason to believe

Argument from authority (“Einstein believes that toves are slithy, therefore toves are slithy”) is widely regarded as a fallacy. If Einstein believes toves are slithy, then tell me the process that got him to believe it, not just the result!

Yet when Andrew Wiles tells me¹ $$\forall n \in \mathbb{N}: n > 2 \implies \not\exists a, b, c \in \mathbb{N} : a^n + b^n = c^n$$ (Fermat’s Last Theorem), I believe him, even if I don’t understand his proof. Why is that? Isn’t this just an argument from authority?

Well, it turns out that experts are quite often right about things, so “experts say toves are slithy” is usually evidence that toves are slithy. When people say that appeal to authority is fallacious, what they really mean is that it’s strictly worse evidence than giving a convincing argument: if Einstein’s beliefs and the convincing arguments point in different directions, it’s much more likely that Einstein has fallen prey to some bias than that he’s right by accident. If you want to avoid semantic confusion when talking about experts’ beliefs, you can say that Einstein’s opinion gives you reason to suspect that toves are slithy; a convincing argument gives you reason to believe it.

Nerdy aside: this can be made very precise using belief networks. Let $E_T$ be “Einstein believes toves are slithy”; $A_T$ be “there exists a good argument that toves are slithy”; and $T$ be “toves are slithy”. Then the causal diagram goes $T \to A_T \to E_T$ (plus other factors like “Einstein wants toves to be slithy” pointing to $E_T$). So if we observe $E_T$, it might be confounded by other factors, but if we observe $A_T$, we’re much more certain about $T$: in fact, that observation screens off $E_T$ completely.
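For the curious, here’s a minimal sketch of that network in Python, small enough to check by brute-force enumeration. All the probabilities are made-up numbers for illustration, not anything from a real model:

```python
# Minimal sketch of the chain T -> A_T -> E_T.
# All probabilities below are illustrative assumptions.

from itertools import product

p_t = 0.5  # prior P(T): toves are slithy

# P(A_T | T): a good argument is far likelier to exist if the claim is true
p_a_given_t = {True: 0.9, False: 0.1}

# P(E_T | A_T): Einstein usually believes things backed by good arguments,
# but sometimes believes things for other reasons (bias, wishful thinking)
p_e_given_a = {True: 0.95, False: 0.2}

def joint(t, a, e):
    """P(T=t, A_T=a, E_T=e), factored along the chain."""
    p = p_t if t else 1.0 - p_t
    p *= p_a_given_t[t] if a else 1.0 - p_a_given_t[t]
    p *= p_e_given_a[a] if e else 1.0 - p_e_given_a[a]
    return p

def p_t_given(**evidence):
    """P(T=True | evidence), by brute-force enumeration."""
    num = den = 0.0
    for t, a, e in product([True, False], repeat=3):
        vals = {"t": t, "a": a, "e": e}
        if any(vals[k] != v for k, v in evidence.items()):
            continue
        den += joint(t, a, e)
        if t:
            num += joint(t, a, e)
    return num / den

print(p_t_given(e=True))          # ~0.76: reason to suspect
print(p_t_given(a=True))          # 0.90: reason to believe
print(p_t_given(a=True, e=True))  # 0.90 again: A_T screens off E_T
```

Observing $E_T$ alone nudges you toward $T$ (reason to suspect); observing $A_T$ moves you further (reason to believe); and once $A_T$ is known, also observing $E_T$ changes nothing, which is exactly the screening off.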

The thing to note is that it’s often much costlier to observe whether there’s a good argument than to observe experts’ opinions—for example, to observe whether Wiles’s argument was good I’d need a PhD in math. So just calling argument from authority a “fallacy” doesn’t capture the subtlety involved. It’s weaker evidence, but sometimes still useful.


  1. For the purposes of this statement, we take $0 \notin \mathbb{N}$, because it’s prettier that way.

Comments

Thomas

Good post, but I think in your part about intuition you gloss over an important point about philosophical reasoning in general.

A premise is always needed to reason. It will be an unprovable axiom, by definition. So we cannot really do better than using intuition for choosing our axioms. Reasoning can tell you whether two premises are logically compatible or not. (i.e. your thought experiment shows you that “all lives are worth the same” & “people have a right to life” are incompatible premises).

So it is not enough merely to talk about “breaking intuition”, because if you take away intuition then you will find yourself with nothing to stand on. In ethics, we must decide which intuitions we are more attached to.

Ben

@Thomas: The point isn’t that you should ignore the results of all thought experiments because they break your intuition (although I see how you might have inferred that from my phrasing). Rather, thinking about this helps you figure out which thought experiments you should care most about your set of premises “getting right.” For instance, if I have some premises that output the correct answer in a lot of thought experiments that don’t have as many intuition-breakers as the doctor experiment, I probably won’t be too worried if they output a wrong answer in the doctor experiment.

Miles

With all the questions in philosophy that boil down to definitions, Eliezer somehow managed to pick one that doesn’t! I imagine the following dialogue:

George Berkeley, Bishop of Cloyne: If a tree falls in a deserted forest, it doesn’t make a sound.

Eliezer Yudkowsky: Taboo your words! By “sound” you clearly mean “auditory experience”, whereas I mean “vibration of air”. Thus, I have dissolved our argument.

GB: No, I actually mean “vibration of air”.

EY: What!?

GB: I hold that nothing exists unless it is perceived.

EY: What!?!?

GB: In fact, the sound is just a stand-in for a larger statement: the tree, the forest, etc, don’t exist at all while they’re not being perceived, so they can’t make a sound.

EY: What!?!?!?!?

GB: Fortunately, God is all-seeing, so we can avoid the problems caused by spontaneous silvological existence failure.

EY: I’m… glad we had this discussion.
