… stuff about perverse utility functions …
Well, there are a couple of things to say in response to this… one is that wanting to get the girl / the dowry / happiness / love / whatever other tangible or intangible goals, and also wanting to be virtuous, doesn’t seem to me to be a weird or perverse set of values. In a sense, isn’t this sort of thing the core of the project of living a human life, when you put it like this? “I want to embody all the true virtues, and also I want to have all the good things.” Seems pretty natural to me! Of course, it’s also a rather tall order (uh, to put it mildly…), but that just means that it provides a challenge worthy of one who does not fear setting high goals for himself.
Somewhat orthogonally to this, there is also the fact that—well, I wrote the footnote about the utility function being metaphorical for a reason. I don’t actually think that humans (with perhaps very rare exceptions) have utility functions; that is, I don’t think that our preferences satisfy the VNM axioms—and nor should they. (And indeed I am aware of so-called “coherence theorems” and I don’t believe in them.)
With that constraint (which I consider an artificial and misguided one) out of the way, I think that we can reason about things like this in ways that make more sense. For instance, trying to fit truth and honesty into a utility framework makes for some rather unnatural formulations and approaches, like talking about buying more of it, or buying it more cheaply, etc. I just don’t think that this makes sense. If the question is “is this person honest, trustworthy, does he have integrity, is he committed to truth”, then the answer can be “yes”, and it can be “no”, and it could perhaps be some version of “ehhh”, but if it’s already “yes” then you basically can’t buy any more of it than that. And if it’s not “yes” and you’re talking about how cheaply you can buy more of it, then it’s still not “yes” even after you complete your purchase.
(This is related to the notion that while consequentialism may be the proper philosophical grounding for morality, and deontology the proper way to formulate and implement your morality so that it’s tractable for a finite mind, nevertheless virtue ethics is “descriptively correct as an account of how human minds implement morality, and (as a result) prescriptively valid as a recommendation of how to implement your morality in your own mind, once you’ve decided on your object-level moral views”. Thus you can embody the virtue of honesty, or fail to do so. You can’t buy more of embodying some virtue by trading away some other virtue; that’s just not how it works.)
I think you understand that, if other people noticed a pattern that everything you said was false, irrelevant, or unimportant, they would eventually stop bothering to listen when you talk, and this would mean you’d lose the ability to get other people to know things, which is a useful ability to have.
Yes, of course; but…
Whether the specific person you address is better off in each specific case isn’t material, because you aren’t trying to always make them better off; you’re just trying to avoid being seen as someone who predictably doesn’t make them better off.
… but the preceding fact just doesn’t really have much to do with this business of “do you make people better off by what you say”.
My claim is that people (other than “rationalists”, and not even all or maybe even most “rationalists” but only some) just do not think of things in this way. They don’t think about whether their words will make their audience better off when they speak, and they don’t think about whether the words of other people are making them better off when they listen. This entire framing is just alien to how most people do, and should, think about communication in most circumstances. Yeah, if you lie all the time, people will stop believing you. But that’s just the causation, directly; it doesn’t go through another node where people compute the expected value of your words and find it to be negative.
(Maybe this point isn’t particularly important to the main discussion. I can’t tell, honestly!)
I took great effort to try to write down my policy as something explicit in terms a person could try to do (even though I am willing to admit it is not really correct, mostly because of finite agent problems), because a person can’t be a real Rule Consequentialist without actually having a Rule. What is the rule for “Only lie when doing so is the right thing to do”? It sounds like an instruction to pass the act to my rightness calculator, but if I program that rule into my rightness calculator, and then give it any input, it gets into an infinite loop. I have an Act Consequentialist rightness calculator as a backup, but if I pass the rule “only lie when doing so is the right thing to do” into that as a backup I’m just right back at doing act consequentialism.
If you can write down a better rule for when to lie than what I’ve put above (that is also better than the “never” or “only by coming up with galaxy-brained ways it technically isn’t lying” or Eliezer’s meta-honesty idea that I’ve read before) I’d consider you to have (possibly) won this issue, but that’s the real price of entry. It’s not enough to point out the flaws where all my rules don’t work; you have to produce rules that work better.
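(Before I respond: the “infinite loop” point in the quoted passage is easy to make concrete, and I think it’s worth doing. Here is a toy sketch of my own, with made-up names; it is only an illustration of the self-reference, not anybody’s actual decision procedure.)

```python
# Toy illustration of why "only lie when doing so is the right thing to do"
# is not a usable Rule: evaluating it requires having already evaluated it.
import sys

def lying_is_right(situation):
    # The proposed rule, taken literally: lying is right exactly when
    # lying is right. The "rightness calculator" just re-invokes itself.
    return lying_is_right(situation)

if __name__ == "__main__":
    sys.setrecursionlimit(50)  # keep the inevitable failure small and quick
    try:
        lying_is_right("any input at all")
    except RecursionError:
        print("The rule never bottoms out; it refers to its own output.")
```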
Well… let’s start with the last bit, actually. No, it totally is enough to point out the flaws. I mean, we should do better if we can, of course; if we can think of a working solution, great. But no, pointing out the flaws in a proffered solution is valuable and good all by itself. (“What should we do?” “Well, not that.” “How come?” “Because it fails to solve the problem we’re trying to solve.” “Ok, yeah, that’s a good reason.”) In other words: “any solution that solves the problem is acceptable; any solution that does not solve the problem is not acceptable”. Act consequentialism does not solve the problem.
But as far as my own actual solution goes… I consider Robin Hanson’s curve-fitting approach (outlined in sections II and III of his paper “Why Health is Not Special: Errors in Evolved Bioethics Intuitions”) to be the most obviously correct approach to (meta)ethics. In brief: sometimes we have very strong moral intuitions (when people speak of listening to their conscience, this is essentially what they are referring to), and as those intuitions are the ultimate grounding for any morality we might construct, if the intuitions are sufficiently strong and consistent, we can refer to them directly. Sometimes we are more uncertain. But we also value consistency in our moral judgments (for various good reasons). So we try to “fit a curve” to our moral intuitions—that is, we construct a moral system that tries to capture those intuitions. Sometimes the intuitions are quite strong, and we adjust the curve to fit them; sometimes we find weak intuitions which are “outliers”, and we judge them to be “errors”; sometimes we have no data points at all for some region of the graph, and we just take the output of the system we’ve constructed. This is necessarily an iterative process.
If the police arrest your best friend for murder, but you know that said friend spent the whole night of the alleged crime with you (i.e. you’re his only alibi and your testimony would completely clear him of suspicion), should you tell the truth to the police when they question you, or should you betray your friend and lie, for no reason at all other than that it would mildly inconvenience you to have to go down to the police station and give a statement? Pretty much nobody needs any kind of moral system to answer this question. It’s extremely obvious what you should do. What does act and/or rule consequentialism tell us about this? What about deontology, etc.? Doesn’t matter, who cares, anyone who isn’t a sociopath (and probably even most sociopaths who aren’t also very stupid) can see the answer here, it’s absurdly easy and requires no thought at all.
What if you’re in Germany in 1938 and the Gestapo show up at your door to ask whether you’re hiding any Jews in your attic (which you totally are)—what should you do? Once again the answer is easy, pretty much any normal person gets this one right without hesitation (in order to get it wrong, you need to be smart enough to confuse yourself with weird philosophy).
So here we’ve got two situations where you can ask “is it right to lie here, or to tell the truth?” and the answer is just obvious. Well, we start with cases like this, we think about other cases where the answer is obvious, and yet other cases where the answer is less obvious, and still other cases where the answer is not obvious at all, and we iteratively build a curve that fits them as well as possible. This curve should pass right through the obvious-answer points, and the other data points should be captured with an accuracy that befits their certainty (so to speak). The resulting curve will necessarily have at least a few terms, possibly many, definitely not just one or two. In other words, there will be many Rules.
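(If the curve-fitting metaphor is clearer in actual code, here is a deliberately silly numerical sketch of my own. The one-dimensional “situation axis”, the numbers, and the weights are all made up; the only point is that strong intuitions anchor the fit, weak ones bend it only slightly, and regions with no data points at all are filled in by the fitted system.)

```python
# A loose numerical analogy for the curve-fitting picture: each case is a
# data point, the strength of our intuition about it is a weight, and the
# "moral system" is the fitted curve that fills in the gaps.
import numpy as np

# x = a caricature of "the situation", y = the intuitive verdict
# (+1 = clearly must tell the truth, -1 = clearly must lie), w = strength.
cases = {
    "betray your friend to skip a trip to the station": (0.1, +1.0, 100.0),  # obvious
    "Gestapo at the door":                              (0.9, -1.0, 100.0),  # obvious
    "flattering lie at a dinner party":                 (0.5, -0.2,   2.0),  # weak; maybe an "error"
    "secret you voluntarily agreed to keep":            (0.7, -0.8,  10.0),
}

x, y, w = (np.array(column) for column in zip(*cases.values()))

# Strong intuitions dominate the fit; weak outliers barely move it.
system = np.polynomial.Polynomial.fit(x, y, deg=2, w=w)

# For a region of the graph with no data points, we just read off the curve.
print(round(float(system(0.3)), 2))
```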
(How to evaluate these rules? With great care and attention. We must be on the lookout for complexity, we must continually question whether we are in fact satisfying our values / embodying our chosen virtues, etc.)
Here’s an example rule, which concerns situations of a sort of which I have written before: if you voluntarily agree to keep a secret, then, when someone who isn’t in on the secret asks you about the secret, you should behave as you would if you didn’t know the secret. If this involves lying (that is, saying things which you know to be false, but which you would believe to be true if you were not in possession of this secret which you have agreed, of your own free will, to keep), then you should lie. Lying in this case is right. Telling the truth in this case is wrong. (And, yes, trying to tell some technical truth that technically doesn’t reveal anything is also wrong.)
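(And since the quoted complaint was that honesty advocates never write their Rules down explicitly, here is that one written down in the most literal way I can manage: a toy sketch with made-up field names, not a real “rightness calculator”. The point is only that the rule bottoms out in checkable conditions rather than in “do whatever is right”.)

```python
# The secret-keeping rule, spelled out as an explicit (toy) rule.
def response(situation):
    probes_secret = situation["question_probes_the_secret"]
    bound_by_promise = situation["voluntarily_agreed_to_keep_it"]

    if probes_secret and bound_by_promise:
        # Behave exactly as you would if you did not know the secret;
        # this may mean asserting things you know to be false.
        return situation["answer_you_would_give_if_ignorant"]

    # Otherwise the ordinary rules about truth-telling apply.
    return situation["truthful_answer"]

print(response({
    "question_probes_the_secret": True,
    "voluntarily_agreed_to_keep_it": True,
    "answer_you_would_give_if_ignorant": "No idea; I haven't heard anything.",
    "truthful_answer": "Yes, the surprise party is on Friday.",
}))
```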
Is that an obvious rule? Certainly not as obvious as the rules you’d formulate to cover the two previous example scenarios. Is it correct? Well, I’m certainly prepared to defend it (indeed, I have done so, though I can’t find the link right now; it’s somewhere in my comment history). Is a person who follows a rule like this an honest and trustworthy person, or a dishonest and untrustworthy liar? (Assuming, naturally, that they also follow all the other rules about when it is right to tell the truth.) I say it’s the former, and I am very confident about this.
I’m not going to even try to enumerate all the rules that apply to when lying is wrong and when it’s right. Frankly, I think that it’s not as hard as some people make it out to be, to tell when it is necessary to tell the truth and when one should instead lie. Mostly, the right answer is obvious to everyone, and the debates, such as they are, mostly boil down to people trying to justify things that they know perfectly well cannot be justified.
Indeed, there is a useful heuristic that comes out of that. In these discussions, I have often made this point (as I did in my top-level comment) that it is sometimes obligatory to lie, and wrong to tell the truth. The reason I keep emphasizing this is that there’s a pattern one sees: the arguments most often concern whether it’s permissible to lie. Note: not, “is it obligatory to tell the truth, or is it obligatory to lie”—but “is it obligatory to tell the truth, or do I have no obligation here and can I just lie”.
I think that this is very telling. And what it tells us (with imperfect but nevertheless non-trivial certainty) is that the person asking the question, or making the argument against the obligation, knows perfectly well what the real—which is to say, moral—answer is. Yes, the right thing to do is to tell the truth. Yes, you already know this. You have reasons for not wanting to tell the truth. Well, nobody promised you that doing the right thing will always be personally convenient! Nevertheless, very often, there is no actual moral uncertainty in anyone’s mind, it’s just “… ok, but do I really have to do the right thing, though”.
This heuristic is not infallible. For example, it does not apply to the case of “lying to someone who has no right to ask the question that they’re asking”: there, it is indeed permissible to lie[1], but there is no particular obligation either to lie or to tell the truth. (Although one can make the case for the obligation to lie even in some subset of such cases, having to do with the establishment and maintenance of certain communicative norms.) But it applies to all of these, for instance.
The bottom line is that if you want to be honest, to be trustworthy, to have integrity, you will end up constructing a bunch of rules to aid you in epitomizing these virtues. If you want to try to put together a complete list of such rules, that’s certainly a project, and I may even contribute to it, but there’s not much point in expecting this to be a definitively completable task. We’re fitting a curve to the data provided by our values, which cannot be losslessly compressed.
1. Assuming that certain conditions are met—but they usually are.
If an EA could go back in time to 1939, it is obvious that the ethically correct strategy would be to…?