It seems the logical extension of your finitely many step-downs in “fairness” would be to define a function f(your_utility) that returns the greatest utility you will accept the other agent receiving when you yourself receive your_utility. The domain of this function should run from wherever your magical fairness point is down to the Nash equilibrium. As long as f is monotonically increasing, that should ensure unexploitability for the same reason your finite version does: the other agent cannot improve its own payoff by pushing yours down. The offer both agents should make is at the greatest intersection point of the two functions, with one of them inverted to put both on the same axes. (This intersection is guaranteed to exist in the only interesting case, where neither agent accepts the other’s magical fairness point as fair enough.)
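Here is a minimal numeric sketch of what I mean, under toy assumptions of my own: a divide-the-dollar game, a Nash/disagreement point of (0, 0), each agent’s magical fairness point claiming 60% for itself, and power-law concession curves with a tunable skew. The names, parameters, and grid search are all illustrative, not anything from the original post.

```python
# Toy model: two agents split a dollar. Each agent's concession curve
# f(your_utility) gives the most it will let the other agent have while
# it receives your_utility itself.

import numpy as np

FAIR_SELF, FAIR_OTHER = 0.6, 0.4  # each agent thinks it deserves 60%

def concession(u_self, skew):
    """f(your_utility): the greatest utility this agent will accept the
    other agent receiving while it receives u_self. Monotonically
    increasing on [0, FAIR_SELF]; a larger skew keeps the curve near
    FAIR_OTHER for longer before it drops toward the Nash point."""
    return FAIR_OTHER * (u_self / FAIR_SELF) ** (1.0 / skew)

def greatest_intersection(skew_a, skew_b, grid=100_000):
    """Find the greatest point where A's curve meets B's curve, with B's
    curve inverted so both live on (A's utility, B's utility) axes. For
    these power-law curves a nontrivial crossing exists iff
    skew_a * skew_b > 1."""
    # B never lets A exceed FAIR_OTHER, so only scan u_a in (0, 0.4].
    u_a = np.linspace(1e-9, FAIR_OTHER, grid)
    f_a = concession(u_a, skew_a)                       # B's share, per A's curve
    f_b_inv = FAIR_SELF * (u_a / FAIR_OTHER) ** skew_b  # B's share, per B's inverted curve
    crossings = np.nonzero(np.diff(np.sign(f_a - f_b_inv)))[0]
    if crossings.size == 0:        # no compromise above the Nash point
        return 0.0, 0.0
    i = crossings[-1]              # the greatest intersection
    return u_a[i], f_a[i]

if __name__ == "__main__":
    u_a, u_b = greatest_intersection(skew_a=2.0, skew_b=2.0)
    print(f"agreement: A gets {u_a:.3f}, B gets {u_b:.3f}")  # ~0.267 each
```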
Curiously, if both agents use this strategy, then both seem to be incentivized to give their function as much “skew” (as EY defined it in clarification 2) as possible: since both functions are monotonically increasing, decreasing your opponent’s share can only decrease your own. Asymptotically, with these functions chosen optimally, both agents end up getting what the other agent thinks is fair, minus a vanishingly small factor! A quick numeric check of this claim, continuing the sketch above, is below.
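This continues the same toy setup and reuses greatest_intersection() from the sketch above (so again, purely illustrative parameters): as both agents crank up the skew of their concession curves, the agreement climbs toward (0.4, 0.4), i.e. each agent receives what the other agent’s fairness point allots it, minus a factor that vanishes as the skew grows.

```python
# Sweep the (symmetric) skew and watch the agreement approach the point
# where each agent gets exactly what the *other* agent considers fair.
for skew in (2, 10, 100, 1000):
    u_a, u_b = greatest_intersection(skew_a=skew, skew_b=skew)
    print(f"skew={skew:>4}: A gets {u_a:.4f}, B gets {u_b:.4f}")
# Approximate trend: ~0.267 each at skew=2, rising to ~0.3998 each at skew=1000.
```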
Let me know if the reasoning above is clear. If not, I can elaborate, but I’ll avoid expending extra effort on revision if what I already have is clear enough.