I’m not sure I understand the “prosaic alignment” position well enough to answer this.
I guess, personally, I can see the appeal of scenario 2: keeping a super-optimizer under control and using it in limited ways to solve specific problems. I also find that scenario incredibly terrifying, because super-optimizers that don’t optimize for the full set of human values are dangerous.