Probability, uncertainty, and markets

I originally posted this as a comment on another blog, but it recently came up again. I’m posting it here in edited form.

A common Less Wrong meme is that you should try to put subjective probabilities on any events that you care about, and that it’s important to calibrate your sense of probability so that you can avoid being systematically over- or underconfident. For instance, a popular side activity at CFAR workshops is running prediction markets that reward correctly stating your probabilistic beliefs. Some people I’ve run into at meetups will go so far as to make fun of you for not being willing to put a number on things. If you claim Knightian uncertainty—that you’re too uncertain even to quantify your uncertainty—you probably won’t be taken seriously.

I think that claims of Knightian uncertainty are not only reasonable but can even be the optimal move, once you step from the abstract setting of “probabilities inside your head” into the more concrete environment of “probabilities that you would like to base actions on.” The rest of this post explains that position.


An interesting idea I’ve picked up from reading some of the literature on automated market making is that agents can distinguish between “internal probabilities”—the kind that you are forced to have by things like Cox’s theorem1—and “actionable probabilities” that you would be (for instance) willing to bet on.

In theory, if you’re a rational agent whose subjective probability that, say, aliens exist is $p$, then on a prediction market3 you should be willing to buy the corresponding contract at any probability $p - \epsilon$ (giving you an expected profit of $\epsilon$) or sell it at $p + \epsilon$ (for the same). In other words, unless the market agrees with you very precisely, you should rationally be willing to trade on it. But even if you’re sure that your probabilities are well-calibrated, you’re probably not about to write down a bunch of your subjective probabilities and start market-making on a prediction market. Why?
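
To spell out the arithmetic, here’s a minimal Python sketch (my own illustration; the function and the numbers are invented) of the expected profit from trading a single contract that pays out one dollar if the event occurs:

```python
def expected_profit(subjective_p, price, side):
    """Expected profit, in dollars, of trading one contract that pays $1
    if the event occurs, given your subjective probability of the event.

    side = "buy":  you pay `price` now, receive $1 with probability subjective_p.
    side = "sell": you receive `price` now, pay $1 with probability subjective_p.
    """
    if side == "buy":
        return subjective_p - price
    if side == "sell":
        return price - subjective_p
    raise ValueError("side must be 'buy' or 'sell'")

# Suppose your credence that aliens exist is 0.30:
p = 0.30
print(expected_profit(p, price=p - 0.01, side="buy"))   # ~ +0.01 expected
print(expected_profit(p, price=p + 0.01, side="sell"))  # ~ +0.01 expected
```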

One possible reason is adverse selection.4 If you’re making a market on whether aliens exist and you offer to buy at $p - 0.0001$ and sell at $p + 0.0001$, but the market knows better than you do, then—no matter whether aliens exist or not—a lot more people will take you up on the correct side of the trade, and you’ll end up losing. So you have to make your $\epsilon$ large enough that it compensates you for “sticking your neck out” by offering to both buy and sell the contract.
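
Here’s a toy simulation of that dynamic (again my own sketch, with invented numbers): a market maker quotes a bid of $p - \epsilon$ and an ask of $p + \epsilon$ around its estimate, and an informed trader only takes whichever side is profitable for them. With a tiny $\epsilon$ the maker loses money on average; with a wide enough spread the informed trader simply doesn’t trade.

```python
import random

def market_maker_pnl(true_p, mm_estimate, epsilon, n_trades=100_000, seed=0):
    """Average profit per round for a market maker quoting
    bid = mm_estimate - epsilon and ask = mm_estimate + epsilon
    on a $1-payout contract, against a trader who knows true_p."""
    rng = random.Random(seed)
    bid, ask = mm_estimate - epsilon, mm_estimate + epsilon
    total = 0.0
    for _ in range(n_trades):
        outcome = 1.0 if rng.random() < true_p else 0.0
        if true_p > ask:
            # Informed trader buys at the ask; the maker is short the contract.
            total += ask - outcome
        elif true_p < bid:
            # Informed trader sells at the bid; the maker is long the contract.
            total += outcome - bid
        # If true_p falls inside the spread, the informed trader doesn't trade.
    return total / n_trades

# The maker thinks the probability is 0.50, but it's really 0.60:
print(market_maker_pnl(true_p=0.60, mm_estimate=0.50, epsilon=0.0001))  # ~ -0.10 per round
print(market_maker_pnl(true_p=0.60, mm_estimate=0.50, epsilon=0.15))    # 0.0: wide enough that no trade happens
```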

I suspect that this is part of what people find compelling about the idea of Knightian uncertainty: it’s a heuristic to avoid being adversely selected. Asking someone to name their credence in something is like asking them to buy and sell a prediction market contract at the same price: they’re sticking their neck out and getting nothing in return. If someone asks you to name your credence in the existence of aliens, it’s generally because they have thought more about it than you have, and so it’s unfavorable for you (on expectation) to commit yourself to any number, even one that correctly represents your current epistemic state.

Another reason to claim Knightian uncertainty is bounded computation. Sometimes when I say “I have no opinion on X” I really mean “I think my opinion on X would be likely to change if I thought more about it.” In fact, in this case naming a number could be epistemically dangerous for me—it would force me to commit to a position that would then be subject to status quo bias.

I’ve had several quite frustrating conversations with people from the Less-Wrong-osphere where I tried and failed to explain that their questions about my internal probabilities weren’t going to be informative for these reasons. Not everyone thinks this way, certainly,5 but it’s an annoyingly pervasive attitude. In general, when someone claims Knightian uncertainty, I think it’s much more productive to treat that claim as reasonable and figure out why they’re claiming it than to back them into a corner and wrench a probability out of them somehow. But unfortunately I see the latter response all too often.


  1. Actually, you’re only even forced to have consistent internal probabilities if you’re not computationally limited. Cox’s theorem doesn’t say anything about computational limitations that might be an obstacle to deriving subjective probabilities in practice. For instance, as far as I know, the question

    How much should a computationally bounded rational agent bet that a given large instance of 3-SAT is solvable?

    is still open; Cox’s Theorem would tell you “bet nothing or everything, depending on whether the instance is solvable,” which isn’t very helpful. ↩︎

  2. Actually, you’d pick some easier-to-define event, like paying out $1 if aliens are discovered by 2100, so that it has a definite end date. ↩︎

  3. In a prediction market, you trade contracts where the seller pays the buyer, e.g., one dollar if aliens turn out to exist and nothing otherwise.2 Then if you’re risk-neutral and your subjective probability that aliens exist is $p$, you should be willing to buy a contract for any price less than $p$ dollars and sell for any price more than $p$ dollars. ↩︎

  4. It might also be because of the incredibly sketchy nature and uncertain regulatory status of most prediction markets, but let’s pretend we live in a hypothetical world where people recognize the value of prediction markets. You probably still wouldn’t be jumping to sign up. ↩︎

  5. For instance, Eliezer’s When (Not) To Use Probabilities:

    To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers.

     ↩︎

Comments

Timothy Johnson

Related to footnote 1, you might want to look up the satisfiability threshold. (It’s something that has fascinated me ever since I first learned about it.)

It is conjectured that a large randomly chosen 3-SAT instance with a ratio of clauses to variables below about 4.2 will almost always be satisfiable, but that one with a ratio above about 4.2 will almost always be unsatisfiable. In practice, heuristics tend to work very well for solving most instances of 3-SAT, except near that critical ratio.
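
A rough way to see this empirically (my own sketch; the parameters are arbitrary, and at such tiny sizes the transition is badly blurred) is to generate random instances at a few clause-to-variable ratios and brute-force them:

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    """Random 3-SAT instance: each clause picks 3 distinct variables
    with random polarities, encoded as signed integers (DIMACS-style)."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n_vars + 1), 3)]
            for _ in range(n_clauses)]

def satisfiable(n_vars, clauses):
    """Brute-force satisfiability check (fine only for small n_vars)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

rng = random.Random(0)
n, trials = 10, 20
for ratio in [3.0, 4.0, 4.27, 5.0, 6.0]:
    m = round(ratio * n)
    sat = sum(satisfiable(n, random_3sat(n, m, rng)) for _ in range(trials))
    print(f"clauses/vars = {ratio:.2f}: {sat}/{trials} satisfiable")
```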


Alex Zhu

If you were to ask me for my subjective probability that aliens exist, I would be happy to report a number to you, $p$. If you then asked me to trade on it, I wouldn’t be as happy—the very fact that you want to trade on it updates my subjective probability, which is now different from $p$. (This is basically adverse selection, as you mentioned.)

In general it may very well be that there does not exist a number $r$ such that I would agree to force a buy/sell with you at $r$. But I wouldn’t say this is because I am too uncertain to quantify my probability.


Robert

I’d be really interested in reading up on the internal vs. actionable probabilities in HFT you mentioned.

I believe I’m falling into the category of people not on board with Knightian uncertainty, but searching for good (non-psychological) arguments.


Ben

@Robert: check out A Practical Liquidity-Sensitive Automated Market Maker by Othman, Sandholm, Pennock, and Reeves (yes, that Reeves).


Robert

Thank you! I’ll have a look.

I wrote up my notes on the topic here (don’t feel compelled to read it! :))
