You could equally well say the same thing if someone set out to prove that a cryptosystem was secure against an extremely powerful adversary, but I believe that we can establish this with reasonable confidence.
Computer scientists are used to competing with adversaries who behave arbitrarily. So if you want to say I can’t beat a UFAI at a game, you aren’t trying to tell me something about the UFAI—you are trying to tell me something about the game. To convince me that this could never work you will have to convince me that the game is hard to control, not that the adversary is smart or that I am stupid. You could argue, for example, that any game taking place in the real world is necessarily too complex to apply this sort of analysis to. I doubt this very strongly, so you will have to isolate some other property of the game that makes it fundamentally difficult.
So if you want to say I can’t beat a UFAI at a game, you aren’t trying to tell me something about the UFAI—you are trying to tell me something about the game.
I like this line, and I agree with it. I’m not sure how much more difficult this becomes when the game we’re playing is to figure out what game we’re playing.
I realize that this is the point of your earlier box argument—to set in stone the rules of the game and make sure that it’s one we can analyze. This, I think, is a good idea, but I suspect most here (I’m not sure if I include myself) think that you still don’t know which game you’re playing with the AI.
Hopefully, the UFAI doesn’t get to mess with us while we figure out which game to play.