They program their AI to maximize the average of everyone’s experiential utility, plus half of Charlie’s experiential utility, plus a trillionth of the sum of everyone’s experiential utility.
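This weighted objective can be sketched in a few lines. The weights (the average, half of Charlie's utility, a trillionth of the total) come straight from the description above; the names and utility values are purely illustrative assumptions.

```python
# Sketch of the compromise objective described above.
# The weights mirror the text; the utilities are made-up examples.

def compromise_objective(utilities, charlie_key="Charlie"):
    """Average of everyone's utility, plus half of Charlie's,
    plus a trillionth of the sum of everyone's utility."""
    values = list(utilities.values())
    average = sum(values) / len(values)
    total = sum(values)
    return average + 0.5 * utilities[charlie_key] + 1e-12 * total

# Illustrative (hypothetical) utility assignments:
utilities = {"Alice": 10.0, "Bob": 4.0, "Charlie": 6.0}
print(compromise_objective(utilities))
```

Note that the trillionth-of-the-sum term is dominated by the other two terms for any realistic utilities; it only matters as a tie-breaker between outcomes the larger terms rank equally.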
It’s important to note that each of them agrees to this only if they expect to get more of whatever they want than they would without the agreement. So if any of them can build their own AI, or expects to further their ends better with no AI at all than with the compromise AI, there’s no agreement.