Different optimization processes disagree about what to optimize, no?
No. The paperclip maximizer believes that it makes no reasoning errors in striving to maximize paperclips, and the pencil maximizer agrees. And vice versa. And neither of them conceives of a property of agent-independent “to-be-optimized-ness”, much less attributes such a property to anything.
Edit: Nor, for that matter, do ordinary moralists conceive of an agent-independent “to-be-optimized”. “Should” always applies to an agent doing something, not to the universe in general. However, often enough people assume that everyone should try to accomplish a certain goal, in which case people will talk about “what ought to be”.