It is trivial* to see that this game reduces to / is equivalent to a simple two-party prisoner’s dilemma with full mutual information.
It only reduces to/is equivalent to a prisoner’s dilemma for certain utility functions (what you’re calling “values”). The prisoners’ dilemma is characterized by the fact that there is a dominant strategy equilibrium which is not Pareto optimal. But if the utility functions of the agents are such that the game is zero-sum, then this can’t be the case, as every outcome is Pareto optimal in a zero-sum game.
Furthermore, in a zero-sum game, no cooperation between all of the agents is possible. So it’s crazy to believe that an arbitrary set of sufficiently intelligent agents will cooperate to achieve a single “overgoal”. Collaboration is only possible if the agents’ preferences are such that collaboration can be mutually beneficial.
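To make the contrast concrete, here is a minimal sketch in Python (the 2x2 payoff matrices are illustrative assumptions, not taken from the discussion above) checking both claims: the standard prisoner’s dilemma has a dominant-strategy equilibrium that is Pareto dominated, while in a zero-sum game (matching pennies, here) every outcome is Pareto optimal, so the prisoner’s-dilemma structure cannot arise.

```python
# Payoffs are (row player, column player), indexed by (row action, column action).
# Actions: 0 = cooperate, 1 = defect (interpretation varies by game).

# Prisoner's dilemma with conventional illustrative payoffs.
pd = {
    (0, 0): (3, 3),  # both cooperate
    (0, 1): (0, 5),  # row cooperates, column defects
    (1, 0): (5, 0),
    (1, 1): (1, 1),  # both defect
}

# A zero-sum game: matching pennies (row player wins on a match).
zs = {
    (0, 0): (1, -1),
    (0, 1): (-1, 1),
    (1, 0): (-1, 1),
    (1, 1): (1, -1),
}

def dominant_strategy(game, player):
    """Return a strictly dominant action for `player`, or None if there isn't one."""
    for a in (0, 1):
        other = 1 - a
        if player == 0:
            better = all(game[(a, b)][0] > game[(other, b)][0] for b in (0, 1))
        else:
            better = all(game[(b, a)][1] > game[(b, other)][1] for b in (0, 1))
        if better:
            return a
    return None

def pareto_optimal(game, outcome):
    """True unless some other outcome makes a player better off and nobody worse off."""
    u = game[outcome]
    for other, v in game.items():
        if other != outcome and v[0] >= u[0] and v[1] >= u[1] and v != u:
            return False
    return True

# Prisoner's dilemma: defection (1) is dominant for both players,
# yet the resulting (defect, defect) outcome is not Pareto optimal.
assert dominant_strategy(pd, 0) == 1 and dominant_strategy(pd, 1) == 1
assert not pareto_optimal(pd, (1, 1))

# Zero-sum game: every outcome is Pareto optimal, so no dominant-strategy
# equilibrium can be Pareto dominated, and there is no gain from cooperation.
assert all(pareto_optimal(zs, o) for o in zs)
```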
Yes, this entire discussion is based around scenarios where there is a benefit to cooperation. In the edge case where such benefit is ‘0 expected utilons’, the behavior of the agents will, unsurprisingly, not be changed at all by the considerations we are talking about.
So I should interpret Will’s “Omega = objective morality” comment as meaning “sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends”? I don’t think so.
So I should interpret Will’s “Omega = objective morality” comment as meaning “sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends”?
No. Will thinks thoughts along these lines and then goes ahead and bites imaginary bullets.
I don’t think that’s a very good model. Also, I’m curious: what’s your impression of this quote?
Worse than useless.