That is a pretty vague criticism—you don’t say whether you are critical of the idea that large groups will be responsible for machine intelligence, or the idea that they are unlikely to build a murderous machine intelligence that destroys all humans.
I’m critical of the idea that given a large group builds a machine intelligence, they will be unlikely to build a murderous (or otherwise severely harmful) machine intelligence.
Consider that engineering developed into a regulated profession only after several large-scale disasters. Even so, notable disasters still occur from time to time. Now consider the professionalism of the average software developer and their average manager. A disaster in this context could be far greater than the loss of everyone in the lab or facility.
Right—well, some people may well die. I expect some people died at the hands of the printing press—probably through starvation and malnutrition. Personally, I expect that in this case those saved from gruesome deaths—in automobile accidents, for instance—are likely to vastly outnumber the victims—but that is another issue.
Anyway, I am not arguing that nobody will die. The idea I was criticising was that “we all die”.
My favoured example of an IT company gone bad is Microsoft. IMO, Microsoft have done considerable damage to the computing industry over an extended period of time—illustrating how programs can be relatively harmful. However, “even” a Microsoft superintelligence seems unlikely to kill everyone.