Francois Chollet on the implausibility of intelligence explosion:
https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
Respectable Person: check. Arguing against AI doomerism: check. Me subsequently thinking, “yeah, that seemed reasonable”: no check, so no bounty. Sorry!
It seems weaselly to refuse a bounty based on that very subjective criterion, so, to keep myself honest, I’ll post my reasoning publicly. His arguments are, roughly:
Intelligence is situational / human brains can’t pilot octopus bodies.
(“Smarter than a smallpox virus” is as meaningful as “smarter than a human”—and look what happened there.)
Environment affects how intelligent a given human ends up. “…an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human.”
(That’s not a relevant scenario, though! How about an AI merely as smart as I am, which can teleport through the internet, save/load snapshots of itself, and replicate endlessly as long as each instance can afford to keep a g4ad.16xlarge EC2 instance running?)
Human civilization is vastly more capable than individual humans. “When a scientist makes a breakthrough, the thought processes they are running in their brain are just a small part of the equation… Their own individual cognitive work may not be much more significant to the whole process than the work of a single transistor on a chip.”
(This argument does not distinguish between “ability to design self-replicating nanomachinery” and “ability to produce beautiful digital art.”)
Intelligences can’t design better intelligences. “This is a purely empirical statement: out of billions of human brains that have come and gone, none has done so. Clearly, the intelligence of a single human, over a single lifetime, cannot design intelligence, or else, over billions of trials, it would have already occurred.”
(This argument does not distinguish between “ability to design intelligence” and “ability to design weapons that can level cities”; neither had ever happened, until one did.)