Reading lately

A Fire Upon the Deep

Vernor Vinge

Highly recommended, especially if you’re interested in the intellectual history of the Singularity. Reading it made some of Eliezer’s worldview become a lot more understandable. Enjoyable throughout, primarily because Vinge does a great job of world-building. Incidentally, according to Wikipedia it’s pronounced “vinjee.”

Quantum Computing Since Democritus

Scott Aaronson

I expected this to be pretty soft and pop-science-y, but I was pleasantly surprised to find that it was pitched for a pretty reasonable level of mathematical maturity. Aaronson’s tone rankled a bit sometimes—“What’s the proof that the rational numbers are countable? You haven’t seen it before? Oh, alright.”—but in general I appreciated reading a denser exposition of the material. So far I’ve picked up a bit about complexity theory, but I only just got to the “quantum” section, where I expect the meat of the learning to happen. My curiosity’s already been piqued by Aaronson’s promise of his own take on learning QM:

There are two ways to teach quantum mechanics. The first way – which for most physicists today is still the only way – follows the historical order in which the ideas were discovered. So, you start with classical mechanics and electrodynamics, solving lots of grueling differential equations at every step. Then, you learn about the “blackbody paradox” and various strange experimental results, and the great crisis these things posed for physics. Next, you learn a complicated patchwork of ideas that physicists invented between 1900 and 1926 to try to make the crisis go away. Then, if you’re lucky, after years of study, you finally get around to the central conceptual point: that nature is described not by probabilities (which are always nonnegative), but by numbers called amplitudes that can be positive, negative, or even complex.

The second way to teach quantum mechanics eschews a blow-by-blow account of its discovery, and instead starts directly from the conceptual core – namely, a certain generalization of the laws of probability to allow minus signs (and more generally, complex numbers). Once you understand that core, you can then sprinkle in physics to taste, and calculate the spectrum of whatever atom you want.
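The “minus signs” point can be made concrete with a tiny calculation. This is my own toy illustration (a standard interference example, not something from the book): apply a “fair coin flip” operation to a qubit twice, and you get back exactly where you started, because the two amplitudes leading to the other state cancel. Ordinary nonnegative probabilities can never do that.

```python
from math import sqrt

def apply_gate(gate, amps):
    """Multiply a 2x2 gate matrix into a 2-entry amplitude vector."""
    return [gate[0][0] * amps[0] + gate[0][1] * amps[1],
            gate[1][0] * amps[0] + gate[1][1] * amps[1]]

# The Hadamard gate. The single minus sign below is the whole story:
# it lets amplitudes cancel, which probabilities (always >= 0) cannot.
H = [[1 / sqrt(2),  1 / sqrt(2)],
     [1 / sqrt(2), -1 / sqrt(2)]]

state = [1.0, 0.0]            # start definitely in state 0
state = apply_gate(H, state)  # amplitudes ~(0.707, 0.707): a fair coin if measured now
state = apply_gate(H, state)  # amplitudes back to ~(1, 0): paths to state 1 cancel out
probs = [a * a for a in state]  # measurement probabilities are squared amplitudes
```

If the intermediate state really were a classical 50/50 coin, flipping again would leave it 50/50; instead the second application undoes the first.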

Capital in the Twenty-First Century

Thomas Piketty, trans. Arthur Goldhammer

I’ve only read Part One, which is about 14% of this tome. So far I’ve found it remarkably readable for an author who at times sounds like he wants to be the next Marx (if admittedly a Marx of a more cautious and rigorous variety). Piketty analyzes the dynamics of income, wealth, and inequality using an extraordinarily broad array of data spanning two millennia (though with a focus on the period 1700-2013). I haven’t read enough economics to have much to compare it to, but from what I understand it’s something of an analytic tour de force.

The arguments of Part One can be summarized basically as claiming that the extreme growth seen for much of the past century is largely anomalous, due to “catch-up” effects as Europe rebuilt after World War II, unprecedented demographic growth, and breakaway innovation which is now (apparently[1]) slowing.

I found the arguments persuasive in general, but on a meta level I’m not sure how swayed I should be by that fact. I don’t know very much about economics, especially the kind of macro stuff that Piketty deals with, so I can’t be sure that he’s not eliding some obvious counterargument. I’m also not sure that historical precedent going back that far is very helpful—it seems pretty plausible that we’re just in a totally different regime than the world 300 years ago.

The book also includes a lot of interesting observations that are noncentral to Piketty’s main point. For instance, something I don’t often think of is that inflation basically used not to be a thing. Until the beginning of the 20th century, nominal prices were rock-stable—so much so that novelists of the period (e.g. Austen, Balzac) frequently mentioned specific incomes in their work. Then World War I happened; countries accumulated massive public debts, and floated their currencies and printed money to finance them. That’s not something that typically comes to mind when I think about how the world was different 100 years ago.

Uncharted: Big Data as a Lens on Human Culture

Erez Aiden and Jean-Baptiste Michel

This is almost entirely a popular presentation of their Science paper on culturomics in which they invent the Google Books ngram viewer and find out some cool facts with it. The ngram viewer is pretty awesome, and I’m a fan of generally being more quantitative about studying history and culture. My only big beef with the book was that it probably could have been half as long and equally informative; there was a lot of fluff.

The most interesting experiment that Aiden and Michel ran was trying to automatically detect censorship regimes. It turns out that generally from year to year people’s change in “fame” (frequency with which their name is mentioned in the Google Books corpus) follows a normal distribution—a few people get a lot more famous, a few get a lot less, and the vast majority are in a bell curve with a mean of about zero (so as many people get more famous as get less). But during e.g. the Nazi regime, the tails of the curve get much heavier: lots of “subversive” artists got massively less popular because their work was being censored, and lots of Nazi officials got massively more popular due to propaganda. It would be interesting to apply the same analysis to e.g. datasets from Sina Weibo (which is censored extensively by the Chinese government) and see if you can detect what’s being censored that way.
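A toy sketch of that idea with simulated data (this is my own illustration, not the authors’ actual method; the function names and thresholds are mine): compute each person’s year-over-year change in log mention frequency, then check how heavy the tails of that distribution are via excess kurtosis, which is about 0 for a normal distribution and large when censorship and propaganda fatten the tails.

```python
import math
import random
import statistics

def fame_changes(pairs):
    """Year-over-year change in log mention frequency ('fame') per person."""
    return [math.log(nxt / prev) for prev, nxt in pairs if prev > 0 and nxt > 0]

def excess_kurtosis(xs):
    """About 0 for a normal distribution; large positive means heavy tails."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean(((x - m) / s) ** 4 for x in xs) - 3.0

random.seed(0)

# An ordinary year: everyone's fame drifts by a small normal amount.
ordinary = [(1.0, math.exp(random.gauss(0, 0.1))) for _ in range(10_000)]

# A "censorship" year: 5% of people are massively boosted or suppressed.
censored = [
    (1.0, math.exp(random.gauss(0, 2.0) if random.random() < 0.05
                   else random.gauss(0, 0.1)))
    for _ in range(10_000)
]

k_ordinary = excess_kurtosis(fame_changes(ordinary))  # near 0
k_censored = excess_kurtosis(fame_changes(censored))  # much larger
```

On a corpus like Weibo you’d presumably want something more careful than raw kurtosis (mention counts are noisy and non-stationary), but the basic signal is the same: look for years, or people, far out in the tails.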

The Secret Life of Pronouns

James W. Pennebaker

Like Uncharted, I read this (well, half of this) for a social psychology class. It covered many different ways in which minute variations in word frequency correlate with other variables: in one study, for example, subtle features of how people wrote about their traumas predicted their subsequent health outcomes.

There was a lot of super interesting stuff here, but at least in the half that I read Pennebaker didn’t study very many interventions, only correlations, and I find the causal inferences somewhat questionable. In fact, he tried several tacks to show causation for the trauma recovery experiment and all failed. He himself notes that “[T]he evidence is convincing: Word usage generally reflects psychological state rather than influences it.” I would have been more interested if he had tried to build a better model of the underlying mechanisms.

  1. Piketty cites an NBER paper by Robert J. Gordon, “Is U.S. Economic Growth Over?”, in support of this, which basically finds that successive waves of innovation have been decreasingly disruptive, with tech being one of the least so:

    The computer and Internet revolution (IR #3) began around 1960 and reached its climax in the era of the late 1990s, but its main impact on productivity has withered away in the past eight years. Many of the inventions that replaced tedious and repetitive clerical labor by computers happened a long time ago, in the 1970s and 1980s. Invention since 2000 has centered on entertainment and communication devices that are smaller, smarter, and more capable, but do not fundamentally change labor productivity or the standard of living in the way that electric light, motor cars, or indoor plumbing changed it.

    I haven’t read it, but it looks interesting. On the other hand, the point that labor-saving inventions from tech are occurring at a decreasing rate seems shaky with things like self-driving cars on the horizon. We may just be in the middle of a gap between the gains from automating easy things (e.g. math) and very hard things (e.g. driving). ↩︎

