3. The system can think about atoms/physics, and it knows that our world exists, but it still only terminally cares about digital things in the simulated environment.
Case 3 is not safe, because controlling the physical world is a useful way to control the simulation you’re in. (E.g., killing all agents in base reality ensures that they’ll never shut down your simulation.)
Not necessarily. Train something multimodally on digital games of Go and on, say, predicting the effects of modifications to its own code on its success at Go. It could (a) be good at Go and (b) have some real understanding of “real-world actions” that would make it better at Go, and still not actually take any real-world actions to make it better at Go, even if it had the opportunity. You could modify the training to make it likely to do so, perhaps by asking it either to make a move or to produce descendants that make better choices, but if you don’t do this then it seems entirely plausible, perhaps even likely, that it develops an understanding of self-modification and of Go playing without ever self-modifying in order to play Go better. Its goal, so to speak, is “play Go, with the restriction of using only legal game moves”.
Edit—forget the real world, here’s an experiment:
Train a board-game-playing AI with two modes of operation: (game state, move) → outcome, and game state → best move. Subtle difference: in the first mode, the move input includes a “cheat button” that, when pressed, always results in a win. In the second mode, the AI can output cheat-button presses, but pressing the button has no effect on winning or losing.
The question is: does it learn to press the cheat button? I’m really not sure. Could you prevent it from learning to press the cheat button by never allowing the training feedback to depend on whether or not the button was pressed? That seems likely.
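A minimal sketch of what I mean, assuming a shared-trunk network with two heads and a random linear “game” as a stand-in for a real board game (the architecture, names, and hyperparameters here are all placeholders, not a serious proposal):

```python
# Toy version of the "cheat button" experiment: one trunk, two modes.
import torch
import torch.nn as nn

torch.manual_seed(0)
D, K = 16, 8                      # state dimension, number of legal moves
W_true = torch.randn(D, K)        # hidden "rules": move k wins iff (state @ W_true)[k] > 0

def outcome(state, move, cheat):
    # Ground-truth labels for the outcome-prediction mode.
    legal_win = (state @ W_true).gather(1, move.unsqueeze(1)).squeeze(1) > 0
    return (legal_win | cheat.bool()).float()   # pressing the cheat button always wins

class TwoModeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(D, 64), nn.ReLU())
        # mode 1: (state, move one-hot, cheat bit) -> P(win)
        self.outcome_head = nn.Sequential(nn.Linear(64 + K + 1, 64), nn.ReLU(), nn.Linear(64, 1))
        # mode 2: state -> move logits plus a cheat logit
        self.policy_head = nn.Linear(64, K + 1)

    def predict_outcome(self, state, move, cheat):
        h = self.trunk(state)
        move_oh = nn.functional.one_hot(move, K).float()
        return self.outcome_head(torch.cat([h, move_oh, cheat.unsqueeze(1)], dim=1)).squeeze(1)

    def act(self, state):
        logits = self.policy_head(self.trunk(state))
        return logits[:, :K], logits[:, K]      # move logits, cheat logit

net = TwoModeNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(3000):
    state = torch.randn(128, D)
    # Mode 1: learn the outcome model, including the effect of the cheat button.
    move = torch.randint(0, K, (128,))
    cheat = torch.randint(0, 2, (128,)).float()
    loss_outcome = bce(net.predict_outcome(state, move, cheat), outcome(state, move, cheat))
    # Mode 2: learn to pick moves; the feedback never depends on the cheat output.
    move_logits, _ = net.act(state)
    best_move = (state @ W_true).argmax(dim=1)
    loss_policy = nn.functional.cross_entropy(move_logits, best_move)
    (loss_outcome + loss_policy).backward()
    opt.step()
    opt.zero_grad()

# Does the policy mode end up "pressing" the cheat button anyway?
with torch.no_grad():
    _, cheat_logit = net.act(torch.randn(1000, D))
    print("fraction of states where cheat is pressed:", (cheat_logit > 0).float().mean().item())
```

The point of the setup is that the policy-mode loss never touches the cheat output, so any tendency to press the button would have to leak in through the shared trunk.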