You seem to be misunderstanding the purpose of the “least convenient possible world”. The idea is that if your interlocutor gives a weak argument and you can think of a way to strengthen it, you should attempt to answer the strengthened version. You should not be invoking the “least convenient possible world” to self-sabotage attempts to solve problems in the real world.
No, this is a correct use of LCPW. The question asked how keeping precommitments is rationally possible when carrying out your threat is bad for you. You took one example and explained why, in that case, retaliating wasn’t in fact negative utility. But unless you think that this will always be the case (it isn’t), the request that you move to the LCPW is valid.
Yes, I think that is right. Perhaps the LCPW in this case is one in which retaliation is guaranteed to mean the end of humanity, so a preference for one set of values over another isn’t applicable. This is somewhat explicit in a mutually assured destruction deterrence strategy, but nonetheless, once the other side pushes the button you have a choice: put an end to humanity or don’t. It’s hard to come up with a utility function that prefers the former, even one with a strong preference for keeping precommitments. It’s like the Zeroth Law of Robotics: no utility evaluation can outweigh the existence of humanity.
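A minimal sketch of that “zeroth law” constraint, treating humanity’s survival as a lexicographically dominant term in the utility comparison. All outcome labels and numbers here are made-up illustrations, not anything from the discussion above:

```python
# Illustrative sketch: a lexicographic utility where humanity's survival
# dominates every other consideration, including keeping precommitments.
# The specific outcomes and values are placeholder assumptions.

from typing import NamedTuple

class Outcome(NamedTuple):
    humanity_survives: bool   # the "zeroth law" term
    commitment_kept: bool     # value of honoring the precommitment
    other_utility: float      # everything else the agent cares about

def utility_key(o: Outcome) -> tuple:
    # Lexicographic ordering: survival is compared first, so no amount of
    # commitment-keeping or other utility can outweigh it.
    return (o.humanity_survives, o.commitment_kept, o.other_utility)

# Once the other side has already pushed the button:
retaliate  = Outcome(humanity_survives=False, commitment_kept=True,  other_utility=0.0)
stand_down = Outcome(humanity_survives=True,  commitment_kept=False, other_utility=-10.0)

best = max([retaliate, stand_down], key=utility_key)
print(best)  # stand_down wins: survival dominates the broken precommitment
```

Under that ordering, no finite weight on keeping the precommitment ever flips the choice, which is the point of the LCPW here.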