This outcome is bad because bargaining away influence over the AI’s local area in exchange for a small amount of control over the global utility function is a poor trade. And if it’s a poor trade, it’s a poor acausal trade too.
A more reasonable acausal trade to make with other AIs would be to trade away influence over faraway places in exchange for their influence over ours. After all, other AIs presumably care about those places more than our AI does, and vice versa, so the trade actually benefits both parties. It’s even a marginally reasonable thing to do acausally.
Of course, this means that our AI isn’t allowed to help the Babyeaters stop eating their babies, in accordance with its acausal agreement with the AI the Babyeaters could have built. But it also means that the Superhappy AI isn’t allowed to help us become free of pain, because of its acausal agreement with our AI. Ideally, this would hold even if we hadn’t built an AI yet.
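To make the arithmetic behind both claims concrete, here’s a minimal toy sketch in Python (all weights and shares are invented for illustration, not taken from the story): each AI weights its own region heavily, so a small slice of a merged global utility function leaves it worse off than the contested status quo, while a mutual hands-off deal over faraway regions leaves both sides better off.

```python
# Toy model with invented numbers: two AIs, A and B, two regions, r1 local to A
# and r2 local to B. Each AI cares much more about its own region.
CARES = {
    "A": {"r1": 9, "r2": 1},
    "B": {"r1": 1, "r2": 9},
}

def utility(ai, control_share):
    """Weighted sum of how much of each region ends up run the way this AI wants."""
    return sum(CARES[ai][region] * share for region, share in control_share.items())

# Status quo: both regions contested, each AI gets half its way everywhere.
contested = {"r1": 0.5, "r2": 0.5}

# Deal 1: fold everyone into one global utility function; with many parties,
# each AI keeps only a small (here 10%) share of influence over every region.
global_slice = {"r1": 0.1, "r2": 0.1}

# Deal 2: cede the faraway region entirely in exchange for a free hand locally.
regional = {
    "A": {"r1": 1.0, "r2": 0.0},
    "B": {"r1": 0.0, "r2": 1.0},
}

for ai in ("A", "B"):
    print(f"{ai}: contested={utility(ai, contested)} "
          f"global-slice={utility(ai, global_slice)} "
          f"regional-trade={utility(ai, regional[ai])}")
# A: contested=5.0 global-slice=1.0 regional-trade=9.0
# B: contested=5.0 global-slice=1.0 regional-trade=9.0
```

The real trade space is obviously far richer than two regions and three deals; the point is only that the regional swap is positive-sum for both AIs while the global-slice bargain need not be.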
This outcome is bad because bargaining away influence over the AI’s local area in exchange for a small amount of control over the global utility function is a poor trade. And if it’s a poor trade, it’s a poor acausal trade too.
I agree with your logic, but why do you say it’s a bad trade? At first it seemed absurd to me, but after thinking it over it now feels to me like the best possible outcome. Do you have more specific reasons why it’s bad?
At best, it means the AI shapes our civilization into some twisted extrapolation of what other alien races might like. At worst, it calculates a high probability of existence for Evil Abhorrent Alien Race #176, which is in every way antithetical to the human race, and the acausal trade it makes is to wipe out the human race (satisfying #176’s desires) so that if the #176s build an AI, that AI will wipe out their race as well (satisfying human desires, since you wouldn’t believe the terrible, inhuman, monstrous things those #176s were up to).