Why squared error?

Someone recently asked on the statistics Stack Exchange why the squared error is used in statistics. This is something I’d been wondering about myself recently, so I decided to take a crack at answering it. The post below is adapted from that answer.

Why squared error?

It’s true that one could choose to use, say, the absolute error instead of the squared error. In fact, the absolute error is often closer to what you “care about” when making predictions from your model. For instance, if you buy a stock expecting its future price to be $P_{predicted}$ and its future price is $P_{actual}$ instead, you lose money proportional to $(P_{predicted} - P_{actual})$, not its square! The same is true in many other contexts.

However, the squared error has much nicer mathematical properties. For instance:

I would say that these nice properties are merely “convenient”: we might choose to use the absolute error instead if it didn’t pose technical issues when solving problems. But some mathematical coincidences involving the squared error are more important. They don’t just make problems technically easier to solve; rather, they give us intrinsic reasons why minimizing the squared error might be a good idea:

Looking deeper

One might well ask whether there is some deep mathematical truth underlying the many different conveniences of the squared error. As far as I know, there are a few (which are related in some sense, but not, I would say, the same):

Differentiability

The squared error is everywhere differentiable, while the absolute error is not (its derivative is undefined at 0). This makes the squared error more amenable to the techniques of mathematical optimization. To optimize the squared error, you can just set its derivative equal to 0 and solve; optimizing the absolute error often requires more complex techniques.
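
To make the contrast concrete, here’s a quick Python sketch (a toy illustration of my own, assuming NumPy and SciPy are available): the constant that minimizes the squared error falls out of calculus in closed form (the mean), while the constant that minimizes the absolute error has to be found numerically (it turns out to be the median).

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=1000)

# Squared error: d/dc sum((x - c)^2) = -2 * sum(x - c) = 0  =>  c = mean(x).
c_squared = x.mean()

# Absolute error: the derivative is undefined wherever c equals a data point,
# so we fall back on a generic numerical optimizer (the minimizer is the median).
c_absolute = minimize_scalar(lambda c: np.abs(x - c).sum()).x

print(c_squared, np.median(x), c_absolute)  # the last two should agree closely
```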

Inner products

The squared error is induced by an inner product on the underlying space. An inner product is basically a way of “projecting vector $x$ along vector $y$,” or figuring out “how much does $x$ point in the same direction as $y$.” In finite dimensions this is the standard (Euclidean) inner product $\langle a, b\rangle = \sum_i a_ib_i$. Inner products are what allow us to think geometrically about a space, because they give a notion of:

- a “right angle”: $x$ and $y$ are at right angles (orthogonal) if $\langle x, y\rangle = 0$;
- a “length”: the length of $x$ is $\left|\left|x\right|\right| = \sqrt{\langle x, x\rangle}$.

By “the squared error is induced by the Euclidean inner product” I mean that the squared error between $x$ and $y$ is $\left|\left|x-y\right|\right|^2$, the (squared) Euclidean distance between them. In fact the Euclidean inner product is in some sense the “only possible” axis-independent inner product in a finite-dimensional vector space, which means that the squared error has uniquely nice geometric properties.
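
As a small numerical illustration of that claim (my own sketch, assuming NumPy): the squared error between two vectors is unchanged by an orthogonal re-parameterization such as a rotation, while the absolute (L1) error generally is not.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=3), rng.normal(size=3)

# A random orthogonal matrix Q (a rotation/reflection), via QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

squared_error = np.sum((x - y) ** 2)                     # ||x - y||^2
squared_error_rotated = np.sum((Q @ x - Q @ y) ** 2)

l1_error = np.sum(np.abs(x - y))
l1_error_rotated = np.sum(np.abs(Q @ x - Q @ y))

print(np.isclose(squared_error, squared_error_rotated))  # True: invariant under rotation
print(np.isclose(l1_error, l1_error_rotated))            # generally False
```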

For random variables, in fact, you can define a similar inner product: $\langle X, Y\rangle = E(XY)$. This means that we can think of a “geometry” of random variables, in which two variables make a “right angle” if $E(XY) = 0$. Not coincidentally, the squared “length” of $X$ is $E(X^2)$, which is related to its variance. In fact, in this framework, “independent variances add” is just a consequence of the Pythagorean Theorem:

$$Var(X + Y) = \left|\left|(X - \mu_X) + (Y - \mu_Y)\right|\right|^2 = \left|\left|X - \mu_X\right|\right|^2 + \left|\left|Y - \mu_Y\right|\right|^2 = Var(X) + Var(Y).$$
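
Here’s a small simulation (again my own sketch, assuming NumPy) that approximates these inner products by sample averages: two independent, centered variables are very nearly “at right angles,” and their variances add, just as the Pythagorean Theorem says.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
X = rng.normal(loc=1.0, scale=2.0, size=n)
Y = rng.exponential(scale=3.0, size=n)    # generated independently of X

Xc, Yc = X - X.mean(), Y - Y.mean()       # center: X - mu_X, Y - mu_Y

# "Right angle": the inner product E[(X - mu_X)(Y - mu_Y)] is approximately 0.
print(np.mean(Xc * Yc))

# Pythagorean Theorem: Var(X + Y) ~ Var(X) + Var(Y) for independent X and Y.
print(np.var(X + Y), np.var(X) + np.var(Y))
```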

Beyond squared error

Given these nice mathematical properties, would we ever not want to use squared error? Well, as I mentioned at the very beginning, sometimes absolute error is closer to what we “care about” in practice. For instance, if your data has tails that are fatter than Gaussian, then minimizing the squared error can cause your model to spend too much effort getting close to outliers, because it “cares too much” about the one large error component on the outlier relative to the many moderate errors on the rest of the data.

The absolute error is less sensitive to such outliers. (For instance, if you observe an outlier in your sample, it changes the squared-error-minimizing mean proportionally to the magnitude of the outlier, but hardly changes the absolute-error-minimizing median at all!) And although the absolute error doesn’t enjoy the same nice mathematical properties as the squared error, that just means absolute-error problems are harder to solve, not that they’re objectively worse in some sense. The upshot is that as computational methods have advanced, we’ve become able to solve absolute-error problems numerically, leading to the rise of the subfield of robust statistical methods.
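
To make that concrete, here’s a tiny sketch (my own, assuming NumPy): replacing one observation with a gross outlier drags the mean a long way but leaves the median where it was.

```python
import numpy as np

data = np.arange(1.0, 101.0)               # 1, 2, ..., 100
print(np.mean(data), np.median(data))      # 50.5  50.5

# Replace one observation with a huge outlier.
data_with_outlier = data.copy()
data_with_outlier[-1] = 1e6

# The mean (squared-error minimizer) jumps by roughly outlier / n;
# the median (absolute-error minimizer) doesn't move at all in this example.
print(np.mean(data_with_outlier), np.median(data_with_outlier))  # 10049.5  50.5
```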

In fact, there’s a fairly nice correspondence between some squared-error and absolute-error methods:

| Squared error | Absolute error |
| --- | --- |
| Mean | Median |
| Variance | Expected absolute deviation |
| Gaussian distribution | Laplace distribution |
| Linear regression | Quantile regression |
| PCA | Robust PCA |
| Ridge regression | LASSO |
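
As a rough illustration of the linear regression / quantile regression row (my own sketch, not part of the original answer): fit a line by minimizing squared error and by minimizing absolute error on data containing one gross outlier. The absolute-error fit needs a numerical optimizer, but it stays much closer to the true line.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
y[-1] += 100.0                                  # one gross outlier

def fit(loss):
    """Fit slope and intercept by numerically minimizing the given loss."""
    objective = lambda p: loss(y - (p[0] * x + p[1])).sum()
    return minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead").x

slope_l2, intercept_l2 = fit(np.square)         # least squares
slope_l1, intercept_l1 = fit(np.abs)            # least absolute deviations

print(slope_l2, intercept_l2)   # pulled noticeably toward the outlier
print(slope_l1, intercept_l1)   # close to the true slope 2 and intercept 1
```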

As we get better at modern numerical methods, no doubt we’ll find other useful absolute-error-based techniques, and the gap between squared-error and absolute-error methods will narrow. But because of the connection between the squared error and the Gaussian distribution, I don’t think it will ever go away entirely.

Comments

Jeff Wu

Can you explain the second bullet again? Neither part of it seems true to me (and the claims seem somewhat unrelated)


Ben

Can you comment on what specific statements in the first part don’t seem true? I think “squared error of a vector is sum of squared errors of coordinates” is pretty uncontroversial.

You’re right that I didn’t explain the second part very clearly, and I didn’t state that it’s only true for re-parameterizations that preserve the norm (up to a scalar). The argument (and why they’re related) is as follows:

If that clears things up, I’ll edit this into the post.


Jeff Wu

Sorry for being so brief in my comment this morning. The part I was objecting to in the first part is “You can’t do that with absolute error.” It seems like absolute error is a sum of absolute errors of coordinates? But looking again, I’m not sure that I had in mind the same notion as what you had in mind.

I see - FWIW I do think the post is slightly misleading, in that it becomes untrue if you use the transformation $Y_1 = X_1 + X_2$, $Y_2 = X_1 - 2X_2$. At that point, it seems like the parameterizations you’re allowing are basically defined to be the ones that work. (But nothing except swapping coordinates and negating them works for absolute error, so it does still have a leg up!)


Ben

I guess I was equivocating between two senses of absolute error. Absolute error in the sense of “non-squared L2 distance between points” does not work that way, but is ok with orthogonal re-parameterizations. Absolute error in the sense of “L1 distance between points” works that way, but is not ok with any re-parameterizations (except for signed permutations). I’ll edit the bullet point when I think about what I actually want to say. Thanks for catching it!


Matt

$Var(Y) = Var(E[Y|X]) + E[Var(Y|X)]$

Which is a combination of 1 and 3.


Matt

and $E[E[Y|X]] = E[Y]$


Ben

@Matt: What do you mean by “Bayesian interpretation of regressions with gaussian prior”? Do you mean interpreting Tikhonov regularization as placing a Gaussian prior on the coefficients? And if so, is there not a similar interpretation of penalized quantile regression?


John Mount

Nice article. A point I emphasize is that minimizing squared error (while not obviously natural) gets expected values right, so it tends to point you towards unbiased estimators. Some of my notes on this: http://www.win-vector.com/blog/2014/01/use-standard-deviation-not-mad-about-mad/


Ben

@John Mount: That’s true, but you could equally well say that minimizing absolute error tends to point you towards median-unbiased estimators!

In fact, I would say that unbiasedness could just as easily be motivated by the niceness of squared error as the other way around. Unbiasedness is defined in terms of expected value, but the reason expected value is a “special” statistic is that it minimizes squared error.


Kevin

Great post!


Leon

“I would say that unbiasedness could just as easily be motivated by the niceness of squared error as the other way around […] the reason expected value is a ‘special’ statistic is that it minimizes squared error”

I think averages are a much more primitive concept than “squared error”. We take averages of things all the time in pre-probability maths. Averages correspond to evenly distributing the pie. Averages play nice with affine transformations. (Higher-dimensional) averages correspond to centre of mass.

So I think it makes most sense to go from averages to squared error, normality, etc. (as I think Gauss did back in the day) rather than the other way around.


Anonymous

Finally understand inner products, woot.


Daniel Vainsencher

The ridge regression / LASSO pairing is about the regularization type, not about the loss, so it disagrees with everything else in your post. There is no reason not to use an absolute-error (or Huber, or epsilon-insensitive, or…) loss with either $l_1$ or $l_2$ or other regularization types.

Thanks for a good post on a point I care about. I’m still trying to understand why I care about expected values (hence squared errors), and how I might convince myself not to.

Ben

Hand-waving follows:

One reason you might care about expected values is the von Neumann-Morgenstern theorem, which roughly states that any decision-maker whose decisions satisfy certain consistency properties behaves as if they have a utility function whose expected value they are trying to maximize.

If your utility function is smooth, then it’s locally linear in anything you care about, and so at least locally, you end up caring about the expected value of those things as well!
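
To spell out that last step (my gloss on the hand-waving): if $u$ is smooth and $X$ is a small random payoff on top of your current wealth $w$, a first-order Taylor expansion gives

$$E[u(w + X)] \approx u(w) + u'(w)E(X),$$

so to first order, maximizing expected utility means caring only about the expected value $E(X)$.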


Roland

Nice article. You mention the Gaussian distribution already, but I would also emphasize that the squared error appears as a natural parameter of the Gaussian (as its variance / standard deviation). And because the Gaussian arises as the large-sample limit of means (the central limit theorem), the squared error becomes a central quantity in statistical theory.


Martin Roberts

This is a great post that tries to resolve this commonly posed question in a variety of ways.

I think one of the reasons we naturally consider the squared error more mathematically amenable is that our mathematics education has traditionally been driven with calculus at the pinnacle. This is because a career in science or engineering (which fundamentally depends on calculus) has typically been regarded more favorably than a career in statistics, and thus discrete maths has traditionally been considered a poor cousin of calculus.

This, in turn, has meant that in many ways the absolute-value function has been a poor cousin of the quadratic function. However, with the rise of computing and data science, and the ubiquitous use of computers that can handle absolute error as easily as mathematicians handle squared error, I believe we will see a rise in the popularity of the absolute-value function as a tool; of discrete maths as a branch of mathematics as important as calculus; and of stats/maths as a career as important as engineering.

To cite two quick examples that come to mind: in the deep learning space, neural networks were originally based on classic differentiable sigmoid activations such as the logistic function and the hyperbolic tangent, whereas now the non-differentiable rectified linear unit (ReLU) has become the standard default.

Similarly, before deep learning became all the rage, data scientists were discovering that many classification and regression algorithms (such as LASSO) originally formulated around quadratic error give much nicer and more intuitive results (e.g. variable selection) when recast in terms of absolute error.

That is, I do not think the value of differentiability, or of mathematical formulations that admit closed-form solutions (including the quadratic loss), will decrease per se, but I do believe we are only recently starting to discover (rediscover?) the potential of the absolute error, especially in discrete maths and computational mathematics.


Nic Szerman

Spheres optimize the volume-to-surface-area ratio. Spheres also have a shape defined by the squared distance from the origin (along the space’s basis). If sphere volume ~ prediction error, minimizing squared distance is akin to reducing the sphere’s radius. Therefore, using squared error sort of optimally improves accuracy.
