I don’t see how this can be avoided if the damn thing is so much smarter: it can only treat “normal” humans as pets, ants, or, at best, wild animals confined to a sanctuary for their own good.
“Pet”, “vermin”, and “wild animal” (as well as “livestock” and “working animal”) are all concepts humans have come up with for our species’ relationships with other species, ones we’ve been living alongside forever and have developed both instincts and cultural practices for relating to. Why would you expect them to apply to an AI’s relationship to humans? Isn’t that a bit, well, anthropomorphizing?
Indeed it is, a bit. This is just an analogy meant to convey that humans aren’t likely to stop a foomed AI (or maybe a group of them, if such a term will even make sense) from doing what it wants, just as animals are powerless to stop determined humans.
I don’t see how this can be avoided if the damn thing is so much smarter [...]

Companies and governments are much smarter than humans. So far, none has taken over the world. Companies compete with other companies. Governments compete with other governments. Like that.
Are they? More powerful, maybe. Often wealthier. But what evidence do you have that they are smarter? They often act rather stupidly.
In the contest of collective intelligence vs. groupthink, collective intelligence won.
I don’t see how this is relevant to the issue. Sure, an average organization is smarter than an average human, but it is not nearly as smart as a smart human, let alone a foomed AI.
A large organization can be as smart as the smartest human it can hire (whether it actually listens is, of course, another matter).
Well, the idea that a single smart machine will take over the world because it will comprehensively trounce humans makes no sense. Smart machines are likely to compete with other smart machines, much as they do today.
Give the companies time. They’re making good progress.
The governments are making progress too, though. A company would need to overthrow all the governments to take over the world. Not an impossible task, perhaps, but it would be quite a revolution, and probably a bad one.