Yes, this entire scenario is premised on there being a benefit to cooperation. In the edge case where that benefit is ‘0 expected utilons’, the agents’ behavior will, unsurprisingly, not be changed at all by the considerations we are talking about.
So I should interpret Will’s “Omega = objective morality” comment as meaning “sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends”? I don’t think so.
No. Will thinks thoughts along these lines and then goes ahead and bites imaginary bullets.
I don’t think that’s a very good model. Also, I’m curious: what’s your impression of this quote?
Worse than useless.