If somebody is going to set off a superintelligent machine, I'd rather it was a machine that will only probably kill us than one that almost certainly will because issues of safety haven't even been considered.
A plausible problem is server-side machine intelligence collecting the world’s wealth, and then distributing it very unevenly—which could cause political problems and unrest. Patent and copyright laws make this kind of problem worse. I think that sort of scenario is much more likely than a bug causing an accidental takeover of the world.