He is bargaining with a whole lot of Clippy’s brothers and sisters.
I think that makes some sense. It’s not clear to me that building a smiley-face maximizer that trades with AIs in other possible worlds would be better than having a no-AI future.
There is another possibility to consider, though: both we and the smiley-face maximizer would be better off if we allowed it to be built and it then gave our preferences some control (enough for us to be better off than in the no-AI future). It’s not clear that this opportunity for trade can be realized, but we should spend some time thinking about it before ruling it out.
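To make the structure of that trade concrete, here is a minimal sketch (with entirely made-up payoff numbers; none of the names or values come from the discussion above). It only checks the bargaining condition: both parties must prefer the negotiated outcome to the no-AI fallback.

```python
# Hypothetical payoffs, on an arbitrary scale, purely for illustration.
outcomes = {
    # outcome name: (human utility, smiley-face-maximizer utility)
    "no_ai_future":     (1.0, 0.0),   # we never build it; it never exists
    "unconstrained_ai": (0.0, 10.0),  # it is built and tiles everything with smiley faces
    "trade":            (2.0, 8.0),   # it is built but cedes some control to our preferences
}

def mutually_beneficial(deal: str, fallback: str) -> bool:
    """True if both parties strictly prefer `deal` to `fallback`."""
    human_deal, ai_deal = outcomes[deal]
    human_fb, ai_fb = outcomes[fallback]
    return human_deal > human_fb and ai_deal > ai_fb

# Under these made-up numbers the trade beats the no-AI future for both sides,
# so an opportunity for trade exists; whether it can actually be realized
# (i.e. enforced across possible worlds) is the open question.
print(mutually_beneficial("trade", "no_ai_future"))  # True
```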
It seems like we really need a theory of games that tells us (human beings) how to play games with superintelligences. We can’t depend on our FAIs to play these games for us, because we have to decide now what to do, both in the above example and in choosing what kind of FAI to build.
Both we and the smiley-face maximizer would be better off if we allowed it to be built and it then gave our preferences some control (enough for us to be better off than in the no-AI future). It’s not clear that this opportunity for trade can be realized, but we should spend some time thinking about it before ruling it out.
Sounds like Drescher’s bounded Newcomb. This perspective suddenly makes it look FAI-complete.
Can you please elaborate? I looked up “FAI-complete” and found this, but I still don’t get your point.
See the DT list. (Copy of the post here.) FAI-complete problem = a problem such that solving it means FAI gets solved as well.