Links II: Epistemology, &c.

Do rationalists exist?

On her excellent new blog, Sarah Constantin wonders: to what extent are there people with an extraordinary ability to avoid typical human cognitive biases? And how well does low bias in a laboratory setting translate into modeling and predicting the real world? While I still want to know more about the link between low cognitive bias and forecasting ability (as observed by e.g. the Good Judgment Project), the research that does exist is fascinating.

Sequence Thinking vs. Cluster Thinking

Holden Karnofsky of GiveWell lays out another piece of his way of thinking, comparing two possible models of a research process. His dichotomy roughly maps onto the fox/hedgehog distinction—a sequence argument appeals to a single strong logical chain of argument, while a cluster argument appeals to a broad set of considerations and heuristics. GiveWell also has an associated page on modeling extreme model uncertainty.

It seems to me that a (naive) mathematical model of an ideal agent would probably work via sequence reasoning—building one coherent model of its environment and Bayes-updating the whole thing on every piece of evidence—but that a good approximation to this ideal is not necessarily approximately as good as the ideal itself. In practice, cluster arguments are far more robust to some part of your model going haywire and being off by two orders of magnitude, simply because the influence of any one part of a clustered ensemble is capped.
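As a toy illustration of that capping effect (my own sketch, not from Karnofsky's post): a sequence-style estimate that multiplies a chain of factors inherits the full error of any one factor, while a cluster-style aggregate of bounded scores limits how much any single consideration can move the answer.

```python
# Toy sketch (my own, not from the GiveWell post): how one badly-wrong input
# moves a "sequence" estimate versus a "cluster" estimate.

def sequence_estimate(factors):
    """Multiply a chain of factors, as in a back-of-the-envelope calculation.
    A factor that is off by 100x makes the whole estimate off by 100x."""
    result = 1.0
    for f in factors:
        result *= f
    return result

def cluster_estimate(scores, cap=1.0):
    """Average a set of bounded scores, one per consideration or heuristic.
    Clipping each score caps the influence of any single consideration."""
    clipped = [max(-cap, min(cap, s)) for s in scores]
    return sum(clipped) / len(clipped)

print(sequence_estimate([2.0, 0.5, 3.0]))    # 3.0
print(sequence_estimate([2.0, 0.5, 300.0]))  # 300.0 -- the error passes straight through
print(cluster_estimate([0.7, -0.2, 0.9]))    # ~0.47
print(cluster_estimate([0.7, -0.2, 90.0]))   # 0.5 -- the haywire score is clipped to 1.0
```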

The cluster-ish GiveWell approach—evaluating causes based on a diverse and evolving set of criteria—actually has some interesting parallels to the technique of boosting in machine learning, which I’m learning about for my day job.
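For the curious, here is a minimal boosting sketch (illustrative only, and obviously not GiveWell's methodology) using scikit-learn on synthetic data: many weak one-split "stumps" are combined into a single aggregate prediction, with later stumps trained to focus on the cases earlier ones got wrong.

```python
# Minimal boosting sketch (illustrative; not GiveWell's methodology).
# AdaBoost combines many weak learners -- by default, one-split decision
# stumps -- into one stronger aggregate, reweighting the training examples
# so that later stumps concentrate on what earlier stumps misclassified.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = AdaBoostClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```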

Harvard Effective Altruism’s Spring 2014 research

HEA publishes the results of last semester's research on how to present charities in giving games for maximum engagement (as measured by post-game mailing list signups). Findings: presenting three global health causes produced more engagement than presenting one global health charity and two more speculative causes (in this case 80,000 Hours and MIRI), and adding information about how effective the charities were may also have decreased engagement (p = 0.14; more research is needed).

Faltering Innovation Confronts the Six Headwinds

Most people have a sense that our productivity as a society has grown enormously over the past few decades due to the computer revolution (“the third Industrial Revolution”). But GDP growth doesn’t actually bear out that impression. Although there certainly was a period of high growth in the technology industry, it was neither as large nor as widespread as e.g. the period of growth following the previous industrial revolution, and the explosive phase of growth seems to be on the wane already. Taking a longer historical view raises questions about the extent to which growth will continue at all. Two excerpts from the paper:

Once the spin-off inventions from IR #2 (airplanes, air conditioning, interstate highways) had run their course, productivity growth during 1972-96 was much slower than before. In contrast, IR #3 created only a short-lived growth revival between 1996 and 2004.

With 12 additional years of data (2000-2012), it appears that my initial skepticism was appropriate, as the productivity benefits of IR #3 had faded away by 2004… In the past decade the nature of IR #3 innovations has changed. The era of computers replacing human labor was largely over… Attention in the past decade has focused not on labor-saving innovation, but rather on a succession of entertainment and communication devices that do the same things as we could do before, but now in smaller and more convenient packages.

Standard caveats about predicting the future apply—this is certainly speculative and could be wrong—but I think it’s an outside view that gets relatively little attention from a lot of people.

The Deliberate Practice of Disruption

Venkatesh Rao of Ribbonfarm notes that the psychological literature on the development of expertise and deliberate practice—e.g. the meme that it takes 10,000 hours to get good at anything—almost universally studies sharply-defined “closed-world” pursuits like music, sports or chess. In open-world pursuits, he argues, deliberate practice is still important, but for a different reason: instead of achieving a state of flow/working-at-peak-ability, the point is to expose yourself to and foster lucky breaks/serendipitous mutations.

That sounds like a summary, but there are a lot of interesting tidbits and a lot more depth in the post that I glossed over. It’s a nice synthesis of a lot of the cool stuff Venkatesh thinks about. As someone moving from the (fairly) closed world of college to the (fairly) open world of a tech startup, I take this as good advice: focus on generating luck rather than on eliminating mistakes.

Comments

Aaron

Huh. I take Bayesianism as being basically cluster thinking over a set of sequence arguments – within each hypothesis you do a sequence-style update on the evidence, and then your posterior depends on normalizing things to take into account the cluster of arguments. Computing your prior probability of the event is a cluster argument, since you’re just summing over your entire hypothesis space – you don’t just marginalize out parameters of your model, you marginalize out what family of models you use weighted by your prior probability of that model being true/applicable.
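In symbols, the picture this comment describes is roughly Bayesian model averaging (notation is mine and purely illustrative): each model family m does its own sequence-style update over its parameters, and the overall posterior weights the families by how well they predict the data.

```latex
% Illustrative notation (not Aaron's): Bayesian model averaging.
% Within a model family m, a sequence-style update over its parameters \theta:
P(D \mid m) = \int P(D \mid \theta, m)\, P(\theta \mid m)\, d\theta

% The cluster-style step: weight each family by how well it predicts the data,
P(m \mid D) = \frac{P(D \mid m)\, P(m)}{\sum_{m'} P(D \mid m')\, P(m')}

% and average the hypothesis over all families:
P(H \mid D) = \sum_{m} P(H \mid D, m)\, P(m \mid D)
```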
