So I should interpret Will’s “Omega = objective morality” comment as meaning “sufficiently wise agents sometimes cooperate, when cooperation is the best way to achieve their ends”? I don’t think so.
No. Will thinks thoughts along these lines, then goes ahead and bites imaginary bullets.
I don’t think that’s a very good model. Also, I’m curious: what’s your impression of this quote?
Worse than useless.