One quick follow-up question: when you say “build powerful AI tools that are deceptive” as an example of “the problem being easier than anticipated”, what exactly do you mean? Is the idea that, if we can create deceptive or power-seeking tool AI very easily, it will be much simpler to investigate what is happening and derive solutions?
That was a typo; it should say “are NOT deceptive”.