I think the point could be steelmanned as something like:
The ability of humans to come up with a coherent and extrapolated version of their own values is limited by their intelligence.
A more intelligent system loaded with CEV 1.0 might extrapolate it further into CEV 2.0, with unexpected consequences.