There does seem to be an obvious baseline: the outcome where each party just goes about its own business without trying to strategically influence, threaten, or cooperate with the other in any way. In other words, the outcome where we build as many paperclips as we would if the other side weren’t a paperclip maximizer.
You could define this rigorously in a special case: for example, assuming that both agents are just creatures, we could take how the first one would behave if the second one disappeared. But this is not a statement about reality as it is, so why would it be taken as a baseline for reality?
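For concreteness, here is a minimal sketch of that construction in Python (the two-agent setup and all payoff numbers are invented for illustration): agent 1’s baseline utility is whatever it gets by optimizing in a world from which agent 2 has been surgically removed.

```python
# Toy two-agent world: each agent picks an action in 0..3 (how hard it
# builds paperclips). The payoff function is invented for illustration.
def u1(a1, a2):
    # Agent 1's utility: its own paperclips, reduced by interference
    # from agent 2's activity.
    return a1 - 0.5 * a1 * a2

def baseline_agent1():
    # Counterfactual surgery: delete agent 2 from the world, then let
    # agent 1 optimize. The resulting utility is the "baseline".
    best = max(range(4), key=lambda a1: u1(a1, a2=0))
    return u1(best, a2=0)

print(baseline_agent1())  # 3.0: build flat-out, as if agent 2 never existed
```

Note that even in this toy version, “disappears” had to be encoded as some concrete stand-in (here, a2 = 0), which is itself a modeling choice rather than a fact about reality.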
It seems to be an anthropomorphic intuition to see “do nothing” as a “default” strategy. Decision-theoretically, it doesn’t seem to be a relevant concept.
So the reason I say an FAI seems to have a negotiation disadvantage is that a UFAI can reduce the FAI’s utility much further below baseline than vice versa.
The utilities are not comparable. Bargaining works off the best available option, not some fixed exchange rate. The reason agent2 can refuse agent1’s small offer is that this counterfactual strategy is expected to cause agent1 to make an even better offer. Otherwise, every little bit helps; ceteris paribus, it doesn’t matter by how much. One expected paperclip is better than zero expected paperclips.
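To see the mechanism, here is a minimal sketch (the ultimatum-style setup, the threshold policy, and all numbers are invented for illustration): agent 2’s refusal of small offers earns anything only through agent 1’s prediction of that policy.

```python
# Toy split of 10 paperclips. Agent 1 proposes how many to hand over;
# agent 2's policy is "accept any offer at or above my threshold".
# All numbers are invented for illustration.
def agent1_best_offer(threshold):
    # Agent 1 predicts agent 2's policy and offers the least it accepts.
    acceptable = [o for o in range(11) if o >= threshold]
    return min(acceptable) if acceptable else None  # None: no deal

for threshold in (0, 3, 7):
    print(threshold, agent1_best_offer(threshold))
# 0 -> 0, 3 -> 3, 7 -> 7: the refusal policy pays off only through its
# predicted effect on agent 1's offer. Against a fixed offer, rejecting
# one paperclip to get zero is a pure loss.
```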
In human negotiations, clearly the side that holds more hostages has an advantage.
It’s not clear at all, if it’s a one-shot game with no consequences other than those implied by the setup and no sympathy to distort the payoffs. In that case, you should drop the “hostages” framing and return to paperclips, since stating it the way you did confuses intuition. In actual human negotiations, those conditions don’t hold, and efficient decision theory doesn’t get applied.
But this is not a statement about reality as it is, so why would it be taken as a baseline for reality?
It’s a statement about what reality would be, after doing some counterfactual surgery on it. I don’t see why that disqualifies it from being used as a baseline. I’m not entirely sure why it does qualify as a baseline, except that intuitively it seems obvious. If your intuitions disagree, I’ll accept that, and I’ll let you know when I have more results to report.
every little bit helps; ceteris paribus, it doesn’t matter by how much
It’s a statement about what reality would be, after doing some counterfactual surgery on it. I don’t see why that disqualifies it from being used as a baseline. I’m not entirely sure why it does qualify as a baseline, except that intuitively it seems obvious. If your intuitions disagree, I’ll accept that.
It does intuitively feel like a baseline, as befits the special place inaction holds in human decision-making. But I don’t see what singles out this particular counterfactual from the set of all the others you could’ve considered, in the context of a formal decision problem. This doubt applies to both concepts: “inaction” and “baseline”.
This isn’t the case with, for example, the Shapley value.
That’s not a choice with “all else equal”. A better outcome, all else equal, is trivially a case of a better outcome.
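To unpack the Shapley-value aside (a standard construction, though this particular characteristic function is invented): the Shapley value pays each player its marginal contribution averaged over join orders, so it matters by how much a player helps, not merely whether it helps.

```python
from itertools import permutations

# Invented characteristic function: the value each coalition creates.
v = {(): 0, (1,): 1, (2,): 4, (1, 2): 10}

def shapley(player, players=(1, 2)):
    # Average the player's marginal contribution over all join orders.
    orders = list(permutations(players))
    total = 0
    for order in orders:
        i = order.index(player)
        before = tuple(sorted(order[:i]))
        after = tuple(sorted(order[:i + 1]))
        total += v[after] - v[before]
    return total / len(orders)

print(shapley(1), shapley(2))  # 3.5 6.5: contributions are weighted by size
```

Whether this contradicts the ceteris paribus claim is exactly what the exchange above disputes: changing the size of a contribution is not an “all else equal” change.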