“I argue that confinement is intrinsically impractical. Imagine yourself locked in your home with only limited data access to the outside, to your masters. If those masters thought at a rate—say—one million times slower than you, there is little doubt that over a period of years (your time) you could come up with a way to escape. I call this “fast thinking” form of superintelligence “weak superhumanity.” Such a “weakly superhuman” entity would probably burn out in a few weeks of outside time. “Strong superhumanity” would be more than cranking up the clock speed on a human-equivalent mind. It’s hard to say precisely what “strong superhumanity” would be like, but the difference appears to be profound. Imagine running a dog mind at very high speed. Would a thousand years of doggy living add up to any human insight?”
Vernor Vinge, "The Coming Technological Singularity," 1993
I want to clarify that Ray Kurzweil's writings seem to have been fairly successful at persuading large numbers of smart people that AI is a serious matter. Since the absurdity heuristic is the big bottleneck right now with ML researchers, tech executives, and policymakers, maybe we should take cues from people we know have succeeded? From there it seems like a small jump to make tech executives and policymakers afraid of something as intense as a "singularity", given that they take the concept seriously in the first place.
Nitpick: "singularity" is basically an analogy from one abstract concept (exponential growth / infinity / asymptote) to another abstract concept (technology / economic growth / intelligence). So intuition-pumping is probably a better first step (e.g., examples of past technologies that seemed absurd before they were invented).