It seems that ems are much harder to make friendly than a general AI. That is, some of what we have to fear from unfriendly AIs is present in powerful WBEs too, and you can’t just build a WBE provably friendly to start with; you have to constrain it or teach it to be friendly (both of which are considered dangerous methods of getting to friendliness).
Perhaps harder to make perfectly friendly (we don’t have an upper bound on how hard that would be for an AI), but probably much easier to make somewhat friendly.
I’m afraid I don’t recall whom I’m (poorly) paraphrasing here, but:
Why would we expect emulated humans to be any Friendlier than a de novo AGI? At least no computer program has tried to maliciously take over the world yet; humans have been trying to pull that off for millennia!
The case for WBE over AGI, simply put: the chance of getting a nice AI is vanishingly small. The chance of getting an evil AI is vanishingly small. The whole danger is in the huge “lethally indifferent” zone.
WBEs are more likely to be nice, more likely to be evil, and less likely to be lethally indifferent. Since evil and lethal indifference have similar consequences (and many kinds of evil would still be preferable to indifference), this makes WBE better than AGI.
Usually, said humans want to take over a world that still contains people.
That’s individual humans. The percentage of the population that’s trying to take over the world maliciously at any given time is very low.
What is the psychological motivation of a WBE upload? The people who try to take over the world are psychopaths, and we would be able to alter their brain structures to remove those psychopathic traits.