Unlosing agents, living in a world with extorters, might have to be classically irrational in the sense that they would not give in to threats even when a rational person would. Furthermore, unlosing agents living in a world in which other people can be threatened might need an irrationally strong desire to carry out threats, so as not to lose the opportunity of extorting others. These examples assume that others can correctly read your utility function.
Generalizing, an unlosing agent would adopt the attitude towards threats and promises that maximizes his utility, given that other people know that attitude. I strongly suspect that this situation would have multiple equilibria when multiple unlosing agents interact.
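To make the multiple-equilibria worry concrete, here is a minimal sketch. Since the examples assume dispositions are publicly readable, each side's attitude towards threats acts like a strategy in a one-shot extortion game, and we can enumerate the pure-strategy equilibria. The payoff numbers below are my own illustrative assumptions, not anything from the argument above.

```python
from itertools import product

# One-shot extortion game with publicly readable dispositions.
# Extorter picks "threaten" or "don't"; target's readable disposition
# is "give in" or "resist". Entries: (extorter payoff, target payoff).
# These numbers are illustrative assumptions.
PAYOFFS = {
    ("threaten", "give in"): (2, -1),   # extortion succeeds
    ("threaten", "resist"):  (-1, -3),  # threat carried out; costly for both
    ("don't",    "give in"): (0, 0),    # no threat, nothing happens
    ("don't",    "resist"):  (0, 0),
}
EXTORTER = ["threaten", "don't"]
TARGET = ["give in", "resist"]

def pure_nash_equilibria():
    """Return profiles where neither side gains by unilaterally deviating."""
    eqs = []
    for e, t in product(EXTORTER, TARGET):
        ue, ut = PAYOFFS[(e, t)]
        best_e = all(PAYOFFS[(e2, t)][0] <= ue for e2 in EXTORTER)
        best_t = all(PAYOFFS[(e, t2)][1] <= ut for t2 in TARGET)
        if best_e and best_t:
            eqs.append((e, t))
    return eqs

print(pure_nash_equilibria())
# [('threaten', 'give in'), ("don't", 'resist')]
```

Both (threaten, give in) and (don't threaten, resist) survive as pure equilibria; which one obtains depends on which disposition each agent committed to first, which is exactly the coordination problem when multiple unlosing agents interact.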
This problem isn’t solved for expected utility maximisers either. Would it be any easier to solve for unlosing agents?