Human-level AI is still dangerous. Look how dangerous we are.
Consider that a human-level AI which is not friendly is likely to be far more unfriendly, or more difficult to bargain with, than any human. (The total space of possible value systems is far, far greater than the space of value systems inhabited by functioning humans.) If there are enough of them, they can cause the same kind of problem that a hostile society could.
But it’s worse than that. A sufficiently unfriendly AI would be like a sociopath or psychopath by human standards. But unlike individual sociopaths among humans (who can become very powerful and do extraordinary damage; consider Stalin), they would not need to fake [human] sanity to work with others if there were a large community of like-minded unfriendly AIs. Indeed, if they were unfriendly enough and more comfortable with violence than, say, your typical European or American, the result could look a lot like the colonialism of the 15th through 19th centuries, or the earlier migrations of more warlike populations, with all humans on the short end of the stick. And that’s just looking at the human potential for collective violence. Surely the space of all human-level intelligences contains some that are more brutally violent than the worst of us.
Could we conceivably hold this off? Possibly, but it would be a big gamble, and unfriendliness would make such a conflict all but inevitable. If the AIs were significantly more efficient than we are (in cost of upkeep and reproduction), that would be a huge advantage in any potential conflict. And it’s hard to imagine an AI of strictly human level being commercially useful to build unless its efficiency were superior to ours.