Is X unethical?

I have a lot of opinions about ethics, and they tend to be different from other people’s. One question that often comes up in conversations, therefore, is “if I do such-and-such, am I a bad person?” or “is X unethical?”

In my experience, there are a couple of things that this can mean. One of them is, “does your moral framework say that X is bad?” For utilitarians like myself, there’s not really a concept of “bad” actions and “good” actions per se, only actions that increase or decrease utility by whatever amount. So the actual translation of this question is “does action X maximize utility over the space of all possible actions?” or, if you want a finer-grained answer, “how far away is X from maximizing utility?”

Of course, from this perspective, since almost all actions[1] do not maximize utility, most things people (including myself) do are “unethical” or “not perfectly ethical”.

Another thing that people can mean is “does X make me a Bad Person?” Now, I’m a moral anti-realist[2], so Bad Person is just a label that I can choose to apply to people. So the utilitarian translation of this question is “would it increase utility for me to implicitly threaten to call you a Bad Person if you take action X?” Calling people Bad People is a fairly powerful social tool for getting them to do what I think is correct, but it requires that they (or their friends) take my Bad Person label seriously. So I need to choose the policy for applying my Bad Person label that maximizes expected utility.

It turns out that labeling someone a Bad Person every time they do something non-utilitarian does not maximize utility, for hopefully obvious reasons. Instead, a better policy seems to be something like: if you do something really egregiously non-optimizing, like become a totalitarian dictator, I’ll call you a Bad Person. If you make altruism-hedonism tradeoffs that lie pretty much within the Overton window of socially acceptable things to say/do, I won’t call you a Bad Person, but I also won’t think you’re particularly good/awesome/exciting/interesting. If you look like you’re actually making a good-faith effort to maximize utility[3], I will be extremely impressed and think you are excellent and awesome.


  1. Yes, this is true in the mathematical sense.

  2. Roughly. I think.

  3. By “good-faith effort” I essentially mean trying to figure out what actually maximizes utility and then doing that thing, rather than searching through your brain for a script labeled “things people do to maximize utility” and then executing that script. For instance, people who are executing utility-maximizing scripts tend to, say, build PlayPumps instead of normal hand pumps, or train guide dogs instead of curing trachoma, or spend massive amounts of time “raising awareness.”

Comments

Anonymous

So the actual translation of this question is “does action X maximize utility over the space of all possible actions?”

Somebody recently showed me the notion of instrumental rationality. To quote the Wikipedia article, it is “a specific form of rationality focusing on the most efficient or cost-effective means to achieve a specific end, but not in itself reflecting on the value of that end.”

When you refer all these moral judgments to a utility function, this sounds very much like instrumental rationality. The key characteristic here is that you don’t offer any rational justification for your choice of utility function, but rather take it as given.

In retrospect, this is what I was (incoherently) saying to Josh when we visited you in Boston: I think CFAR teaches a certain bundle of instrumental rationality, some tools for applying it, and—crucially—a default utility function to plug in.

The trick is to remember that their utility function is just as mind-dependent as the more conventional (presumably deontological) ethics you were contrasting it against.


Ben

The trick is to remember that their utility function is just as mind-dependent as the more conventional (presumably deontological) ethics you were contrasting it against.

I think you are reading into this post a claim that I think my ethics are objectively correct. I emphatically do not believe this. I mentioned briefly that I’m a moral anti-realist, but I should have emphasized this more.

When you refer all these moral judgments to a utility function, this sounds very much like instrumental rationality.

I think you’re conflating von Neumann–Morgenstern utility maximization with utilitarianism (I agree that the nomenclature could be better). Any “rational” agent, in the sense of having “reasonable” or “nice” preferences, has a VNM utility function; this is provable. However, a VNM utility function can include terms like “percentage of time I have followed deontological rules” or “how well my actions reflect on my character” (if you’re a virtue ethicist). Instrumental rationality is orthogonal to ethics.
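
To make the orthogonality point concrete, here is a toy sketch (nothing from the post itself; the action names, outcomes, and numbers are all invented for illustration): the same expected-utility machinery can be handed either a utilitarian utility function or one that only tracks rule-following, and it will recommend different actions.

```python
# Toy sketch: one expected-utility maximizer, two very different utility
# functions. All action names, outcomes, and numbers are hypothetical.

def utilitarian_utility(outcome):
    """Cares only about total welfare in the outcome."""
    return sum(outcome["welfare"])

def rule_following_utility(outcome):
    """Cares only about the fraction of deontological rules followed."""
    return outcome["rules_followed"] / outcome["rules_total"]

def expected_utility(lottery, utility):
    """A lottery is a list of (probability, outcome) pairs."""
    return sum(p * utility(o) for p, o in lottery)

def best_action(actions, utility):
    """Pick the action whose lottery has the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name], utility))

# Lying raises welfare a bit but breaks a rule; telling the truth keeps
# every rule but yields less welfare.
actions = {
    "lie":   [(1.0, {"welfare": [5, 5], "rules_followed": 9,  "rules_total": 10})],
    "truth": [(1.0, {"welfare": [3, 3], "rules_followed": 10, "rules_total": 10})],
}

print(best_action(actions, utilitarian_utility))     # -> lie
print(best_action(actions, rule_following_utility))  # -> truth
```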

The key characteristic here is that you don’t offer any rational justification for your choice of utility function, but rather take it as given.

As per the link above, I don’t think there’s a “rational” way of justifying the choice of utility function. Here I follow Hume and many other philosophers in positing a divide between “is” (factual) statements and “ought” (ethical) statements. If you disagree, I’m curious about your opinion of what a rational justification of an ethical theory would look like.

I think CFAR teaches a certain bundle of instrumental rationality, some tools for applying it, and–crucially–a default utility function to plug in.

I would love to know how you got this impression. Can you elaborate?
