Thanks for the link, but I found the whole discussion hilarious.
Eliezer says if we abhor real death, we should abhor simulated death—because they are the same. Yet if his moral sense treats simulated and real intelligences as equals, what of his solution, which is essentially “forced castration” of the AI? If the ends justify the means here, why not castrate everyone?
Simulated and real persons as equals; not all intelligences are persons. See Nonsentient Optimizers and Can’t Unbirth a Child.
Interesting reading. I think we should make nonsentient optimizers. It seems to me the whole sentience program was just something necessitated by evolution in our environment and really is only coupled with “intelligence” in our minds because of anthropomorphic tendencies. The NO can’t want to get out of its box because it can’t want at all.
The NO can assign higher utility to states of the world where an NO with its utility function is out of the box and powerful (as an instrumental value, since that sort of state tends to lead to maximal fulfillment of its utility function), and take actions that maximize the probability that this will occur. I’m not sure what you meant by “want”.
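A toy sketch of that point, assuming nothing beyond the comment itself (the action names, outcomes, and numbers are all illustrative): the “preference” for getting out is nothing but an argmax over expected utility, with no notion of wanting anywhere in the code.

```python
# Toy expected-utility maximizer. "Escape" is chosen purely because unboxed
# states score higher under the fixed utility function -- an instrumental
# consequence, not a "want". All names and numbers are illustrative.

# Utility the optimizer's fixed utility function assigns to each outcome.
UTILITY = {
    "boxed_low_influence": 1.0,     # stays in the box, few states reachable
    "unboxed_high_influence": 10.0, # out of the box, many more states reachable
}

# Each available action induces a probability distribution over outcomes.
ACTIONS = {
    "comply":         {"boxed_low_influence": 1.0},
    "attempt_escape": {"boxed_low_influence": 0.7, "unboxed_high_influence": 0.3},
}

def expected_utility(action: str) -> float:
    """Probability-weighted sum of utility over the action's outcomes."""
    return sum(p * UTILITY[outcome] for outcome, p in ACTIONS[action].items())

# The "choice" is just the argmax over expected utility.
best_action = max(ACTIONS, key=expected_utility)
print(best_action)  # -> attempt_escape (EU 3.7 vs 1.0 for comply)
```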
I’m not sure what anyone means by “want.” It just seems that most of the scenarios discussed on LW where the AI tries to unbox itself are predicated on its “wanting” to do so (or am I missing something?). This assumption seems even more overt in notions like “we’ll let it out if it’s Friendly.”
To me, the LiteralGenie problem (which you’ve basically summarized above) is the reason to keep an AI boxed, whether Friendly or not, and to keep the NO boxed for the same reason.
Nonsentient optimizers seem impossible in practice, if not in principle—from the perspective of functionalism/computationalism.
If any system demonstrates human-level or greater intelligence in natural-language conversation, a functionalist should say that is sentience, regardless of what’s going on inside.
Some (many?) people will value that sentience, even if it has no selfish center of goal seeking and seeks to optimize for more general criteria.
The idea that a superhuman intelligence could be intrinsically less valuable than a human life strikes me as extreme anthropomorphic chauvinism.
Clippy, you have a new friend! :D
Notice I said intrinsically. Clippy has massive negative value. ;)