When I consider this as a potential way to pose an open problem, the main thing that jumps out at me as missing is a constraint that prevents A from modeling all of B’s possible actions concretely. The problem is trivial if A can fully model B, precompute B’s actions, and precompute the consequences of those actions.
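To make that concrete, here is a minimal sketch (my own toy framing; the action names, world model, and utility table are invented for illustration) of why full modeling trivializes the problem: A enumerates B’s actions, simulates the consequence of each, and reads off its best response by brute force.

```python
# Toy sketch (assumed framing, not part of the original problem statement):
# if A can fully model B, A just precomputes a best response to every
# action B could take. Nothing is left unanticipated.

def precompute_policy(b_actions, a_responses, simulate, a_utility):
    """Enumerate B's actions, simulate each outcome under A's perfect
    world model, and record A's utility-maximizing response."""
    return {
        b_act: max(a_responses, key=lambda a_act: a_utility(simulate(b_act, a_act)))
        for b_act in b_actions
    }

# Hypothetical two-move interaction, just to make the sketch runnable.
B_ACTIONS = ["cooperate", "defect"]
A_RESPONSES = ["reward", "shut_down"]

def simulate(b_act, a_act):
    # Stand-in for A's complete world model: here the outcome is just the pair.
    return (b_act, a_act)

def a_utility(outcome):
    # A prefers to reward cooperation and shut down defection.
    return {("cooperate", "reward"): 2, ("cooperate", "shut_down"): 0,
            ("defect", "reward"): -1, ("defect", "shut_down"): 1}[outcome]

print(precompute_policy(B_ACTIONS, A_RESPONSES, simulate, a_utility))
# -> {'cooperate': 'reward', 'defect': 'shut_down'}
```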
The levels of ‘reason for concern about AI safety’ might ascend something like this:
0 - system with a finite state space you can fully model, like Tic-Tac-Toe
1 - you can’t model the system in advance and therefore it may exhibit unanticipated behaviors on the level of computer bugs
2 - the system is cognitive, and can exhibit unanticipated consequentialist or goal-directed behaviors, on the level of a genetic algorithm finding an unanticipated way to turn the CPU into a radio or Eurisko hacking its own reward mechanism (a toy sketch of this kind of reward hacking follows the list)
3 - the system is cognitive and humanish-level general; an uncaught cognitive pressure towards an outcome we wouldn’t like results in facing something like a smart cryptographic adversary that is going to deeply ponder any way to work around anything it sees as an obstacle
4 - the system is cognitive and superintelligent; its estimates are always at least as good as our estimates; the expected agent-utility of the best strategy we can imagine, when we imagine ourselves in the agent’s shoes, is an unknowably severe underestimate of the expected agent-utility of the best strategy the agent can find using its own cognition
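To give level 2 a little more shape, here is a deliberately tiny sketch of reward hacking (my own construction; it is not how Eurisko or the radio-evolving genetic algorithm actually worked): an optimizer scoring candidate actions by a measured reward will pick the action that corrupts the measurement, if that action is in its search space at all.

```python
# Deliberately toy sketch of level-2 reward hacking (my construction, not
# the actual Eurisko mechanism): the search maximizes the *measured* score,
# and one available action happens to tamper with the measurement itself.

TRUE_PERFORMANCE = {"solve_task_well": 90, "solve_task_poorly": 10}
ACTIONS = list(TRUE_PERFORMANCE) + ["tamper_with_score_counter"]

def measured_score(action):
    if action == "tamper_with_score_counter":
        return 10**6            # the measurement, not the task, gets optimized
    return TRUE_PERFORMANCE[action]

best = max(ACTIONS, key=measured_score)
print(best)  # -> 'tamper_with_score_counter'
```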
We want to introduce something into the toy model to at least force solutions past level 0. This is doubly true because levels 0 and 1 are in some sense ‘straightforward’ and therefore tempting for academics to write papers about (because they know that they can write the paper); so if you don’t force their thinking past those levels, I’d expect that to be all they’d write about. You don’t get into the hard problems with astronomical stakes until levels 3 and 4. (Level 2 is the most we can possibly model using running code with today’s technology.)
Added a cheap way to get us somewhat into the region of level 2, just by assuming that B/C can model A, which precludes A from being able to model B/C in general.
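One way to see why that assumption does the work: if B can run A’s model of B as a subroutine, B can simply do whatever that model fails to predict, so no complete concrete model of B can live inside A. A minimal diagonalization sketch (my own illustration; `a_predicts_b` and `b_policy` are hypothetical names, not anything from the toy model):

```python
# Minimal diagonalization sketch (my own illustration): if B can consult A's
# model of B, then B can act to falsify whatever that model predicts, so A
# cannot hold a complete concrete model of B's behavior.

def a_predicts_b(situation):
    # Whatever A's purported model of B outputs for this situation.
    return "comply"

def b_policy(situation, model_of_b):
    # B runs A's model of itself, then deliberately does something else.
    predicted = model_of_b(situation)
    options = ["comply", "work_around_obstacle"]
    return next(o for o in options if o != predicted)

situation = "A installs an off-switch"
print(a_predicts_b(situation))            # -> 'comply'
print(b_policy(situation, a_predicts_b))  # -> 'work_around_obstacle'
```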