Yeah, the very first (upvoted) comment suggests that we don’t have to worry about a superhumanly intelligent AI, because humans “have things like EMP and nukes, not even the best AI is capable of thwarting getting bombed or nuked”.
Luckily, a comment soon after says that the AI may use a network of machines, and may cooperate with powerful human organizations.
On the other hand, the last (upvoted) comment at this time suggests that we don’t have to worry about our machine overlords, because the more intelligent someone is, the less able they are to cooperate. Therefore, two superhuman robots would most likely start fighting each other rather than humankind.
Well, at least we have some nice feedback, which could be used to build a FAQ aimed at this part of the population.
Wow, the inferential gap is terrifyingly large.