From the perspective of functionalism/computationalism, nonsentient optimizers seem impossible in practice, if not in principle.
If any system demonstrates human-level or greater intelligence in natural-language conversation, a functionalist should call that sentience, regardless of what's going on inside.
Some (many?) people will value that sentience, even if it has no selfish center of goal-seeking and instead optimizes for more general criteria.
The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropocentric chauvinism.
Clippy, you have a new friend! :D
Notice I said intrinsically. Clippy has massive negative value. ;)