Right, but your conclusion still doesn’t follow—my example was just to show the flaw in your logic. Generally, you have to consider the trade-offs between cooperating and doing anything else instead.
But the “[of others]” part is unnecessary. If every intelligent agent optimizes away their own unnecessary pain, that is sufficient for the conclusion. Unless, of course, there exists a significant number of intelligent agents that have the pain of others as a terminal goal, or there is a serious shortage of atoms for all agents to achieve their otherwise non-conflicting goals.
Well, of course. But which of my conclusions do you mean doesn’t follow?
This is highly dependent on the strategic structure of the situation.
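To make that concrete, here is a minimal sketch with hypothetical payoff matrices of my own choosing (the numbers and the `best_response` helper are purely illustrative, not anything from this exchange): the best reply to a cooperating partner flips from defection in a prisoner’s-dilemma-style game to cooperation in a stag-hunt-style game, which is the sense in which the trade-off depends on the strategic structure.

```python
# Illustrative sketch only: whether cooperating beats the alternative
# depends on the game's payoff structure. Payoff numbers are made up.

def best_response(payoffs, opponent_action):
    """Return the row player's payoff-maximizing action against a fixed opponent action."""
    return max(payoffs, key=lambda my_action: payoffs[my_action][opponent_action])

# payoffs[my_action][opponent_action] = row player's payoff
prisoners_dilemma = {
    "cooperate": {"cooperate": 3, "defect": 0},
    "defect":    {"cooperate": 5, "defect": 1},
}
stag_hunt = {
    "cooperate": {"cooperate": 4, "defect": 0},
    "defect":    {"cooperate": 3, "defect": 2},
}

for name, game in [("prisoner's dilemma", prisoners_dilemma), ("stag hunt", stag_hunt)]:
    print(f"{name}: best response to a cooperator is to {best_response(game, 'cooperate')}")
# prisoner's dilemma: best response to a cooperator is to defect
# stag hunt: best response to a cooperator is to cooperate
```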