Thinking about it more, it seems that messy reward signals will lead to some approximation of alignment that works while the agent has low power compared to its “teachers”, but at high power it will do something strange and maybe harm the “teachers’” values. That holds true for humanity gaining a lot of power and going against evolutionary values (“superstimuli”), and for individual humans gaining a lot of power and going against societal values (“power corrupts”), so it’s probably true for AI as well. The worrying thing is that high power by itself seems sufficient for the change: for example, if an AI gets good at real-world planning, that constitutes power and therefore danger. And there don’t seem to be any natural counterexamples. So yeah, I’m updating toward your view on this.