This has some problems associated with stunting. Adding humans in the loop at this frequency of oversight will slow things down, whatever happens. The AI would also have fewer problem-solving strategies open to it—that is, if it doesn’t care about thinking ahead to , it also won’t think ahead to .
The programmers also have to make sure that they inspect not only the output of the AI at this stage, but the strategies it is considering implementing. Otherwise, it’s possible that there is a sudden transition where one strategy only works up until a certain point, then another more general strategy takes over.
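The oversight pattern described here can be sketched as a minimal review loop in which the human checkpoint sees the AI's candidate strategies before any of them run, not just the final output. This is an illustrative toy, not a real system: the `ToyAgent` class, the `oversee` function, and all names in it are hypothetical assumptions introduced for this example.

```python
class ToyAgent:
    """Stand-in for an AI system: enumerates canned strategies for a task."""
    def propose_strategies(self, task):
        # A narrow strategy and a more general one, mirroring the worry
        # that a general strategy can suddenly take over from a narrow one.
        return ["narrow heuristic", "general planner"]

    def execute(self, strategy, task):
        return f"{task} via {strategy}"


def oversee(agent, task, approve):
    """Pause for human review of the candidate strategies AND the output.

    `approve(kind, item)` is the human checkpoint; it is called once per
    proposed strategy and once on the final output.
    """
    vetted = [s for s in agent.propose_strategies(task)
              if approve("strategy", s)]
    if not vetted:
        return None  # no strategy cleared review; nothing is executed
    output = agent.execute(vetted[0], task)
    return output if approve("output", output) else None


# A reviewer that rejects the overly general strategy before it ever runs,
# rather than only inspecting whatever output it would have produced.
reviewer = lambda kind, item: "general" not in item
print(oversee(ToyAgent(), "sort records", reviewer))
# prints "sort records via narrow heuristic"
```

The point of structuring it this way is that the dangerous transition is caught at the strategy-selection step; a reviewer who only saw outputs would have no way to notice which strategy produced them.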