There are a lot of assumptions about communication, negotiation, and reliability of commitment baked in here. The standard experimental setup for the Ultimatum Game is one-shot, without communication and without much knowledge of your opponent.
In the case of negotiation, your probabilistic ultimatum still can’t give your (rational, numerically inclined) opponent any incentive to offer more than they would against a fixed threshold, and it actually gives irrationally optimistic opponents an out to convince themselves to lowball you (because they’d rather gamble on getting lucky than give in to your demands); a quick sketch below makes this concrete. I’d enjoy hearing your model of a proposer who behaves differently under your probabilistic statement of intent than if you’d just said “I’ll reject anything less than half.”
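Here’s a minimal sketch of that claim. The specific numbers are my assumptions (including the 0.95 discount, chosen in the spirit of an accept-with-probability-just-under-break-even schedule), not anyone’s actual policy:

```python
# Minimal sketch: a rational Proposer's best offer against a fixed
# rejection threshold vs. a probabilistic acceptance schedule.
# All numbers are illustrative assumptions.

PIE = 10  # total to be split; `offer` is the Responder's share

def p_accept_fixed(offer, threshold=5):
    """Responder accepts iff the offer meets a fixed threshold."""
    return 1.0 if offer >= threshold else 0.0

def p_accept_probabilistic(offer, fair=5, discount=0.95):
    """Accept unfair offers with probability tuned so the Proposer's
    expected take is slightly below what a fair offer would yield
    (my reading of the probabilistic-rejection proposal)."""
    if offer >= fair:
        return 1.0
    return discount * fair / (PIE - offer)

def best_offer(p_accept):
    """Proposer's optimization: maximize p(accept) * (Proposer's share)."""
    return max(range(PIE + 1), key=lambda x: p_accept(x) * (PIE - x))

print(best_offer(p_accept_fixed))          # 5
print(best_offer(p_accept_probabilistic))  # 5 -- same offer either way
```

The rational Proposer’s argmax is identical under both policies; the only proposer whose behavior changes is one who overweights their own dice luck, and that one changes it downward.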
Also, it’s STILL a problem that “whoever commits first, effectively moves last, and has the advantage.” If the response to your probability distribution is “thanks, but I’ve already locked in my algorithm: I picked from my own distribution before I even heard who I’d be playing against, and you’re offered 0.2x. I hope you roll the dice well!”, you now have to figure out whether to follow through on your (now-irrelevant) statement, or just accept or reject based on other factors and future expectations.
Plus, if you think about the Proposer’s optimization problem, it really hinges on “what is the probability that the Responder will accept my offer?” Obviously that probability is at a maximum at a (0, 10) split, and one expects it to stay very high, plausibly 1.0, through (5, 5). The Proposer already knows their own expected value declines past that point, and probably assumes it does so monotonically. If the Responder can share their particular probability schedule, that’s great, and it’s actually important if the Proposer is somehow unaware of the incentive structure. Yudkowsky and Kennedy’s explication is nice and probably helpful advice, but not really a “solution.”
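For completeness, here’s that optimization spelled out against a hypothetical shared schedule (the linear decay below is my own assumption, purely for illustration), just to show the shape the Proposer is already assuming:

```python
# Proposer's expected value as a function of the offer x, given a shared
# Responder schedule p(x). The schedule here is a hypothetical example.

PIE = 10  # total to be split; x is the Responder's share

def p_accept(offer):
    # Assumed schedule: certain acceptance at a fair split or better,
    # linearly decaying willingness to accept below it.
    return 1.0 if offer >= 5 else offer / 5

for x in range(PIE + 1):
    ev = p_accept(x) * (PIE - x)
    print(f"offer {x:2d} -> Proposer EV {ev:.1f}")

# EV rises to a peak at x = 5 (EV 5.0), then falls by exactly 1 per unit:
# the monotone decline past (5, 5) described above, since p(x) = 1 there.
```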