To me, it seems like the point of this story is that we could build an AI that ends up doing very dangerous things without ever asking it “Will you do things I don’t like if given more capability?” or any similar question that would require it to execute the treacherous turn. In contrast, if the developers built something like a testing world populated with toy humans who could be manipulated in a way detectable to the developers, and placed the AI in that toy world, the AI would be forced into a position where it either acts according to its true incentives (manipulate the humans and be detected) or executes the treacherous turn (abstain from manipulating the humans so the developers will trust it more). So it seems like this failure wouldn’t happen if the developers test for treacherous-turn behaviour during development.