How can humans make precommitments?
How can you precommit to an action that will only ever be carried out after you know your commitment strategy has failed?
This seems to make it impossible to commit to blackmail when the act of carrying out the blackmail has negative utility. How can you possibly convince your rational future self to carry out a commitment they know will no longer accomplish anything?
You could attempt to adopt a strategy of always following your commitments. From your current perspective this is useful, but once you have learned your strategy has failed, what’s to prevent you from simply disregarding it?
If a commitment strategy will fail, you don’t want to make the commitment; but if you won’t follow the commitment even when the strategy fails, then you never made the commitment in the first place.
For example, in a nuclear war, why would you ever retaliate? Once you know your strategy of nuclear deterrence has failed, shooting back will only cause more civilian casualties.
I’m not saying commitments aren’t useful; I’m just not sure how you can make them. How do you prevent your future self from reasoning their way out of them?
I apologize if reading this makes it harder for any of you to make precommitments. I’m hoping someone has a better solution than simply tricking your future self.
Disregarding it even once will convince both yourself and others that you will disregard it in the future, and will remove your ability to make other precommitments.
The nuclear war example is more complicated, because presumably having a nuclear war will be the last thing you ever do. I would credit it to evolved instincts. Evolution “knows” that precommitments are important, so it gives us the desire to follow them even when it is not immediately rational to do so—for example, a lust for revenge that ought to be sufficient to make us retaliate in nuclear war, or a concept of “honor” that does the same.
Our brain has several mechanisms by which we can make commitments: Honor. Pride. Duty. Guilt. You can press any of those emotional mechanisms into the service of enforcing commitments.
Because CDT isn’t rational. You don’t always have to act only for the sake of things that you can cause. If you’re a transparent agent, then you sometimes have to become the kind of agent that will carry out a precommitment. If that commitment fails to deter, then the rational thing to do is still to carry out your threat.
EDIT: No-one else in the thread appears to understand that you don’t need to have an additional reason (like a third party agreement) in order to carry out your threat.
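To make the transparency point concrete, here’s a toy payoff model. The numbers and the perfectly transparent attacker are my illustrative assumptions, not anything from the thread:

```python
# Toy deterrence game: a transparent attacker reads the defender's
# disposition before deciding whether to attack. Payoffs are made up.
PAYOFFS = {
    # (attacked, retaliated) -> defender's utility
    (False, False): 0,    # peace
    (True, False): -10,   # attacked, no retaliation
    (True, True): -15,    # attacked AND paid the cost of retaliating
}

def attacker_attacks(defender_will_retaliate: bool) -> bool:
    # A transparent attacker only attacks defenders who won't hit back.
    return not defender_will_retaliate

def defender_outcome(disposed_to_retaliate: bool) -> int:
    attacked = attacker_attacks(disposed_to_retaliate)
    retaliated = attacked and disposed_to_retaliate
    return PAYOFFS[(attacked, retaliated)]

# Re-deciding after the attack (the CDT move) means never retaliating:
print(defender_outcome(False))  # -10: the attacker calls the bluff
# Being the kind of agent that retaliates does better, even though
# actually retaliating (-15) is worse than not retaliating (-10):
print(defender_outcome(True))   # 0: the attack never comes
```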
People don’t normally do things because doing so would be the rational thing to do. They do things because they believe themselves to be the kind of person who does such things. Usually you would have to train to overcome that bias, but in this case you can make it work in your favor. So here is the three-step program for learning to precommit:
1. Convince yourself rationally that being able to precommit has great expected utility, and that hacking yourself to be able to precommit is a good thing.
2. Make lots of small, easy-to-follow precommitments, like precommitting what to have for lunch. But always double-check that you will actually be able to do it and won’t be inconvenienced by it. When it is time to follow through on them, remember you’re not doing it for the precommitments but for a “higher good”.
3. When you have followed through on a precommitment, tell yourself aloud: “I am the kind of person who always follows through on precommitments.”
That should make precommitments second nature to you.
Put the keys to the nuclear weapons in the hands of people who have been conditioned to retaliate as part of their job.
In terms of general ways of precommitting, there are a few options:
Get someone to punish you if you do the thing you’ve committed not to do. For example, you could sign a contract that says you’ll have to pay a large fee if you don’t bargain a car salesman down to a certain price. Now the salesman knows they must either sell it to you at that price or walk away, and so you win if that price is still profitable for them. (This one is sketched in code below.)
Start doing the stuff you want people to think you’ll do, so that its cost is reduced if you have to make good on the threat. For example, you could position your army near the border to make the neighboring country stop stealing your cows.
Put control in the hands of a third person who does have an incentive to carry out the threat. For example, when you’re acquiring a small company, don’t send the CEO; instead, hire an independent negotiator who only gets paid if they bargain the company down to a certain price. The CEO might not be willing to just walk away from the deal, but an independent negotiator can, and so the small company is more likely to capitulate.
Also, rip the steering wheel off and chug a fifth of whisky (the game-of-chicken move: visibly destroy your own ability to back down).
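A minimal sketch of the first option above, the contract penalty. The prices and fee are made-up numbers, not anything from the comment:

```python
# Contract-penalty sketch: a self-imposed fee makes caving worse
# than walking away, which is what makes the threat credible.
CONTRACT_FEE = 5_000    # owed if you buy above your target price
TARGET_PRICE = 20_000
CAR_VALUE = 23_000      # what the car is worth to you

def utility(buy_price):
    if buy_price is None:              # walked away from the deal
        return 0
    surplus = CAR_VALUE - buy_price
    if buy_price > TARGET_PRICE:       # broke the precommitment
        surplus -= CONTRACT_FEE
    return surplus

# Without the contract, $22,000 would tempt you (surplus $1,000).
print(utility(22_000))  # -4000: now worse than walking away
print(utility(None))    #     0: credible walk-away
print(utility(20_000))  #  3000: the salesman's best remaining option
```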
True. But this is only good as a straight commitment, not a conditional commitment, which is what’s necessary for most kinds of coercion.
The most obvious solution is to coerce your future self, by creating a future downside of not following through that is worse than the future downside of following through. Nuclear deterrence is a tough one, but in principle this is no different from coercing someone else. (I guess one could ask if it’s any more ethical, at that...)
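As a bare condition (notation mine, not from the comment): if following through on the threat costs you $c > 0$ and the penalty you arrange for backing out is $p$, the self-coercion works exactly when

$$-c > -p \quad\Longleftrightarrow\quad p > c,$$

that is, the manufactured downside of not following through must exceed the cost of following through.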
Internalize the logic of why precommitments are useful.
I’m not sure what “internalize” means in this context. How is internalization accomplished?
By taking the idea of precommitments absolutely seriously. However, I’m not sure if it is actually possible in practice, and I doubt that the standard techniques for decompartmentalization are sufficient.
See a lawyer and notary and sign a contract. Be skeptical of precommitments when this isn’t a realistic option.
Another way to think about this: modify your utility function to care about your precommitments.
To use your example:
Of course, not retaliating will ensure that the future of humanity is dominated by the evil values (if I didn’t consider their values evil, why did I get into a nuclear standoff with them?) of someone who is, furthermore, willing to start a nuclear war.
I personally find that much more terrifying than the deaths of a few of their civilians in this generation.
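Going back to the “modify your utility function” suggestion, here is a rough sketch of how a commitment-penalty term could flip the post-attack decision. The payoffs and the weight are assumptions chosen for illustration:

```python
# Effective utility = base utility minus a penalty for breaking
# a standing precommitment. All numbers are illustrative.
COMMITMENT_WEIGHT = 100   # how much the agent cares about its word

BASE_UTILITY = {
    "retaliate": -50,     # strictly worse in base terms, as the
    "stand_down": -30,    # original question points out
}

def effective_utility(action, committed_to):
    penalty = COMMITMENT_WEIGHT if action != committed_to else 0
    return BASE_UTILITY[action] - penalty

best = max(BASE_UTILITY, key=lambda a: effective_utility(a, "retaliate"))
print(best)  # "retaliate": -50 now beats -30 - 100 = -130
```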
You can’t always do it like that in the least convenient possible world.
You seem to be misunderstanding the purpose of the “least convenient possible world”. The idea is that if your interlocutor gives a weak argument and you can think of a way to strengthen it, you should attempt to answer the strengthened version. You should not be invoking “least convenient possible world” to self-sabotage attempts to solve problems in the real world.
No, this is a correct use of LCPW. The question asked how keeping to precommitments is rationally possible, when the effects of carrying out your threat are bad for you. You took one example and explained why, in that case, retaliating wasn’t in fact negative utility. But unless you think that this will always be the case (it isn’t), the request for you to move to the LCPW is valid.
Yes, I think that is right. Perhaps the LCPW in this case is one in which retaliation is guaranteed to mean an end to humanity, so a preference for one set of values over another isn’t applicable. This is somewhat explicit in a mutually-assured-destruction deterrence strategy, but nonetheless, once the other side pushes the button, you have a choice to put an end to humanity or not. It’s hard to come up with a utility function that prefers that, even considering a preference for meeting precommitments. It’s like the 0th law of robotics: no utility evaluation can outweigh the existence of humanity.
You put the answer in the title. We are humans, not rational agents. We have built in mechanisms to handle this. Pride, embrace it. This actually becomes easier with experience. I’ve found that in times when I’ve tried to be a good little CDT agent and suppress my human instincts it has gone badly for me. My personal psychology doesn’t react well to the suppression and I’ve actually been surprised how often failing to follow through with a threat (or what should be an implied threat) had more negative consequences than I anticipated. On this my instincts and my ethics are aligned.
Use a third party, preferably a binding legal contract or similar.
Ideally, your decision to follow that precommitment should be so strong that you don’t really have a choice; retaliating is something you don’t even think about but execute by default. With precommitments, you want to restrict your own decision-possibilities.
If I hadn’t dissolved the question already, I’d probably have come up with something like “by making precommitments, you want to undermine your free will, so that once the event (a nuclear strike, etc.) has happened, you no longer have a free choice, because your free will is nonexistent in that situation”.
We can’t.
Posted under a throwaway account to avoid impairing my ability to pretend to make precommitments that I’m not actually guaranteed to follow.
Agree, except I’m not concerned with preserving my ability to give obviously false promises.