Isn’t “evil” in evil genie distracting? The plausible unfriendly AI scenarios are not that we will inadvertently create an AI that hates us. It’s that we’ll quite advertently create an AI that desires to turn the solar system into a hard disk full of math theorems, or whatever. Optimization isn’t evil, just dangerous. Even talking about an AI that “wants to be free” is anthropomorphizing.
Maybe this is just for color, and I’m being obtuse.
Agreed that there’s a whole mess of mistakes that can result from anthropomorphizing (thank ye hairy gods for spell check) the AI, but I still think this style of post is useful. If we can come up with a foolproof way of safely extracting information from a malevolent genie, then we are probably pretty close to a way of safely extracting information from a genie whose friendliness we’re unsure of.