Because EY has specifically said that this must be avoided: he describes evolution as something dangerous.
That doesn’t mean that you can’t examine possible trajectories of evolution for good things you wouldn’t have thought of yourself, just that you shouldn’t allow evolution to determine the actual future.
I don’t think there’s any coherent way of saying both that CEV will constrain future development (which is its purpose), and that it will not prevent us from reaching some of the best optimums.
I’m not sure what you mean by “constrain” here. A process that reliably reaches an optimum (I’m not saying CEV is such a process) constrains future development to reach an optimum. Any nontrivial (and non-self-undermining, I suppose; one could value the nonexistence of optimization processes or something) value system, whether “provincially human” or not, prefers the world to be constrained into more valuable states.
Most likely, all the best optimums lie in places that CEV is designed to keep us away from.
I don’t see where you’ve responded to the point that CEV would incorporate whatever reasoning leads you to be concerned about this.