The name I have in my head for this is “zones of control”. In board games and video games, a unit sometimes explicitly has an effect on the tiles adjacent to its own. I expanded the term from there to include related phenomena, for example where the mere existence of strategy X blocks strategy Y from ever being played, even if X itself is almost never played either. X is in some sense providing “cover fire”: it achieves nothing directly, but it pins down another strategy in the process.
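For the literal game mechanic, here's a minimal sketch, assuming a square grid with orthogonal adjacency; the function names, the adjacency rule, and the 5×5 board are hypothetical, since games vary in how they define zones of control:

```python
# Minimal sketch of a literal zone of control on a square grid.
# Orthogonal adjacency is just one common variant; games differ.

def zone_of_control(unit_pos, width, height):
    """Return the set of tiles adjacent to a unit's tile, clipped to the board."""
    x, y = unit_pos
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return {(nx, ny) for nx, ny in neighbors
            if 0 <= nx < width and 0 <= ny < height}

def can_move_freely(tile, enemy_units, width, height):
    """A tile is contested if it falls inside any enemy unit's zone."""
    return all(tile not in zone_of_control(enemy, width, height)
               for enemy in enemy_units)

# Example: an enemy at (2, 2) pins down the four tiles around it,
# even though the enemy itself never moves onto them.
print(zone_of_control((2, 2), 5, 5))            # {(1, 2), (3, 2), (2, 1), (2, 3)}
print(can_move_freely((2, 3), [(2, 2)], 5, 5))  # False
```

The metaphor carries over: the enemy unit restricts movement just by existing, the same way strategy X restricts strategy Y without ever being played.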
This case doesn’t match that intuition exactly, but it’s in the same neighborhood.
I’m not buying the premise. Passing the Turing test requires fooling an alert, smart person who is deliberately probing the limits of the system. ChatGPT isn’t at that level.
A specially tuned persona optimized for this task might do better than the “assistant” persona available now, but the model is currently incapable of holding a conversation without wandering off on long, unwanted tangents, getting trapped in loops, and so on.