MBlume:

“What would you do in situation X?” and “What would you like to pre-commit to doing, should you ever encounter situation X?” should, to a rational agent, be one and the same question.
This phrasing sounds about right. Whatever decision-making algorithm produces your decision D when you are in situation X should also arrive at the same conditional decision before situation X ever appears: “if X, then D”. If you actually don’t give away $100 in situation X, you should also plan not to give away $100 in case of X, before (or irrespective of whether) X happens. Whichever decision is the right one, there should be no inconsistency of this form. This becomes harder if you must preserve the whole preference order.
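To make the consistency requirement concrete, here is a minimal Python sketch; the function names and the toy choice of action are illustrative assumptions of mine, not anything from the quote. The point it illustrates is simply that if the in-the-moment decision and the advance conditional plan are both outputs of the same decision algorithm, they cannot disagree.

```python
# A toy illustration of reflective consistency: the same decision algorithm,
# run before or during situation X, yields the same answer, so "what would
# you do in X?" and "what would you pre-commit to for X?" coincide.
# (Function names and the chosen action are hypothetical, for illustration only.)

def decision_algorithm(situation: str) -> str:
    """Stand-in for the agent's fixed decision-making algorithm."""
    if situation == "X":
        return "don't give away the $100"  # whatever the right decision in X happens to be
    return "do nothing"

def decide_in_the_moment(situation: str) -> str:
    """The decision actually made once the situation arises."""
    return decision_algorithm(situation)

def precommitment(situation: str) -> str:
    """The conditional plan 'if situation, then D' adopted in advance."""
    return decision_algorithm(situation)

# The plan made before X matches the act chosen in X.
assert decide_in_the_moment("X") == precommitment("X")
```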