Personally, I don’t see how you can turn an ML system that has, say, 50-to-250 percent of a human’s intelligence into an existential threat just by pushing the “turbo” button on the hardware. Which means I’m kind of hoping nobody goes the “nuclear war” route in real life.
Isn’t that part somewhat tautological? A sufficiently large group of humans is basically a superintelligence. We’ve basically terraformed the earth with cows and rice and cities and such.
A computer with >100% of human intelligence would be incredibly economically valuable (automate every job that can be done remotely?), so it seems very likely that people would make huge numbers of copies if the cost of running one was less than the compensation for a human doing the same job.
And that’s basically your superintelligence: even if a low-level AGI can’t directly self-improve (which seems somewhat doubtful, since humans are already improving computers at a reasonably fast rate, so a human-level AGI presumably could too), it could still reach superintelligence by being scaled from 1 to N.
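To make that scaling argument concrete, here’s a rough back-of-envelope sketch. Every number in it (salary, compute cost, workforce size) is an assumption I’m making up purely for illustration, not a claim about actual costs:

```python
# Back-of-envelope sketch with made-up numbers: if running one human-level
# AGI instance costs less than paying a human for the same work, the
# economic pressure is to run as many copies as the budget allows.

human_salary_per_year = 80_000   # assumed average remote-work salary (USD)
agi_cost_per_year = 20_000       # assumed compute cost to run one copy (USD)
workforce_size = 10_000          # assumed number of remote workers at one firm

if agi_cost_per_year < human_salary_per_year:
    # Redirect the existing salary budget into running AGI copies instead.
    budget = workforce_size * human_salary_per_year
    copies = budget // agi_cost_per_year
    print(f"Same budget runs {copies:,} copies instead of {workforce_size:,} humans")
    # -> Same budget runs 40,000 copies instead of 10,000 humans
```

The point isn’t the specific numbers; it’s that any cost gap at all gets multiplied across every copy, which is the “1 to N” scaling doing the work.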