Superior processing power: Evidence against would be the human brain already being close to the physical limits of what is possible.
The linked article is amusing but misleading; the described ‘ultimate laptop’ would essentially be a nuclear explosion. The relevant physical limit is ln(2)kT of energy dissipated per bit erased (the Landauer limit); at room temperature this comes to roughly 3e-21 joules. We don’t know exactly how much computation the human brain performs; middle-of-the-road estimates put it in the ballpark of 1e18 several-bit operations per second for 20 watts, which is not very many orders of magnitude short of even the theoretical limit imposed by thermodynamics, let alone whatever practical limits may arise once we take into account issues like error correction, communication latency and bandwidth, and the need for reprogrammability.
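For concreteness, a back-of-the-envelope check of that margin, using the 20 watt and 1e18 operations-per-second figures above and the simplifying assumption that each operation costs at least one bit erasure:

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # room temperature, K
landauer = math.log(2) * k * T   # minimum energy per bit erased, ~2.9e-21 J

brain_power = 20.0   # watts, figure quoted above
brain_ops = 1e18     # several-bit operations per second, figure quoted above

# Maximum bit erasures per second that thermodynamics allows at 20 W
max_ops = brain_power / landauer   # roughly 7e21 per second

margin = math.log10(max_ops / brain_ops)
print(f"Landauer limit: {landauer:.2e} J per bit erased")
print(f"Thermodynamic ceiling at 20 W: {max_ops:.1e} erasures/s")
print(f"Brain estimate is within ~{margin:.1f} orders of magnitude of that ceiling")
```

On these figures the brain sits within about four orders of magnitude of the thermodynamic ceiling, which is the sense in which it is "not very many orders of magnitude short".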
Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
Indeed, we hit this limit some years ago. Of course, as you observe, it is impossible to prove that serial speed won’t start increasing again in the future; that’s inherent in the problem of proving a negative. If such proof is required, then no sequence of observations whatsoever could possibly count as evidence against the Singularity.
Superior parallel power:
Of course, uses can always be found for more parallel power. That’s why we humans make use of it all the time, both by assigning multiple humans to a task and, increasingly, by placing multiple CPU cores at the disposal of individual humans.
Improved algorithms:
Finding these is (assuming P!=NP) intrinsically difficult; humans and computers can both do it, but neither will ever be able to do it easily.
Designing new mental modules:
Same as for improved algorithms.
Modifiable motivation systems:
An advantage when they reduce akrasia, a disadvantage when they make you more vulnerable to wireheading.
Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won’t be enough computing power to run many copies.
Indeed there won’t, at least initially; supercomputers don’t grow on trees. Of course, computing power tends to become cheaper over time, but that does take time, so no support for hard takeoff here.
Alternatively, that copying minds would result in rapidly declining marginal returns and that the various copying advantages discussed by e.g. Hanson and Shulman aren’t as big as they seem.
Matt Mahoney argues that this will indeed happen because an irreducible fraction of the knowledge of how to do a job is specific to that job.
Perfect co-operation:
Some of the more interesting AI work has been on using a virtual market economy to allocate resources between different modules within an AI program, which suggests computers and humans will be on the same playing field.
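As a purely illustrative sketch of that idea (the module names, bids, and proportional-split rule below are my own toy example, not drawn from any actual system):

```python
# Toy sketch: modules bid for a fixed compute budget each tick, and the
# budget is split in proportion to the bids. Everything here is invented
# for illustration only.

COMPUTE_BUDGET = 100.0  # arbitrary units of compute per tick

def allocate(bids):
    """Split the budget in proportion to each module's bid."""
    total = sum(bids.values())
    if total == 0:
        return {name: 0.0 for name in bids}
    return {name: COMPUTE_BUDGET * bid / total for name, bid in bids.items()}

# Each module bids according to how much it currently values extra compute
# (hard-coded numbers standing in for learned estimates).
bids = {"vision": 5.0, "planning": 12.0, "memory_consolidation": 3.0}

for module, share in allocate(bids).items():
    print(f"{module}: {share:.1f} units")
```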
Superior communication:
Empirically, progress in communication technology between humans outpaces progress in AI, and has done so for as long as digital computers have existed.
Transfer of skills:
Addressed under copyability.
Various biases:
Hard to say, both because it’s very hard to see our own biases, and because a bias that’s adaptive in one situation may be maladaptive in another. But if we believe maladaptive biases run deep, such that we cannot shake them off with any confidence, then we should be all the more skeptical of our far beliefs, which are the most susceptible to bias.
Of course, there is also the fact that humans can and do tap the advantages of digital computers, both by running software on them, and in the long run potentially by uploading to digital substrate.
we should be all the more skeptical of our far beliefs, which are the most susceptible to bias.
Just out of interest… assume my far beliefs take the form of a probability distribution over possible future outcomes. How can I be “skeptical” of that? Given that something will happen in the future, all I can do is update in the direction of a different probability distribution.
In other words, which direction am I likely to be biased in?
In the direction of overconfidence, i.e., assigning too much probability mass to your highest-probability theory.
We should update away from beliefs that the future will resemble a story, particularly a story whose primary danger will be fought by superheroes (most particularly for those of us who would personally be among the superheroes!) and towards beliefs that the future will resemble the past and the primary dangers will be drearily mundane.
The future will certainly resemble a story—or, more accurately, will be capable of being placed into several plausible narrative frames, just as the past has. The bias you’re probably trying to point to is in interpreting any particular plausible story as evidence for its individual components—or, for that matter, against.
The conjunction fallacy implies that any particular vision of a Singularity-like outcome is less likely than our untrained intuitions would lead us to believe. It’s an excellent reason to be skeptical of any highly derived theories of the future—the specifics of Ray Kurzweil’s singularity timeline, for example, or Robin Hanson’s Malthusian emverse. But I don’t think it’s a good reason to be skeptical of any of the dominant singularity models in general form. Those don’t work back from a compelling image to first principles; most of them don’t even present specific consequential predictions, for fairly straightforward reasons. All the complexity is right there on the surface, and attempts to narrativize it inevitably run up against limits of imagination. (As evidence, the strong Singularity has been fairly poor at producing fiction when compared to most future histories of comparable generality; there’s no equivalent of Heinlein writing stories about nuclear-powered space colonization, although there’s quite a volume of stories about weak or partial singularities.)
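To make the probabilistic point explicit (a generic illustration, not a claim about any particular scenario's numbers):

```latex
% A conjunction is never more probable than its least probable conjunct:
\[ P(A_1 \wedge A_2 \wedge \cdots \wedge A_n) \le \min_i P(A_i) \]
% and if the details are roughly independent, stacking them compounds quickly,
% e.g. ten details each judged 90% likely:
\[ P(A_1 \wedge \cdots \wedge A_{10}) \approx 0.9^{10} \approx 0.35 \]
```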
So yes, there’s not going to be a singleton AI bent on turning us all into paperclips. But that’s a deliberately absurd instantiation of a much more general pattern. I can conceive of a number of ways in which the general pattern too might be wrong, but the conjunction fallacy doesn’t fly; a number of attempted debunkings, meanwhile, do suffer from narrative fixation issues.
Superhero bias is a more interesting question—but it’s also a more specific one.
Well, any sequence of events can be placed in a narrative frame with enough of a stretch, but the fact remains that different sequences of events differ in their amenability to this; fiction is not a random sampling from the space of possible things we could imagine happening, and the Singularity is narratively far stronger than most imaginable futures, to a degree that indicates a bias we should correct for. I’ve seen a fair bit of strong Singularity fiction at this stage, though being, well, singular, it tends not to be amenable to repeated stories by the same author the way Heinlein’s vision of nuclear-powered space colonization was.
Empirically, progress in communication technology between humans outpaces progress in AI, and has done so for as long as digital computers have existed.
The best way to colonize Alpha Centauri has always been to wait for technology to improve rather than to launch an expedition, but it’s impossible for that to continue to be true indefinitely. Short of direct mind-to-mind communication or something like it, combined with a concurrent halt to AI progress, AI advances will probably outpace human communication advances in the near to medium term.
It seems unreasonable to believe that human minds, optimized for considerations such as politicking in addition to communication, will be able to communicate just as well as designed AIs. Human mind development was constrained by ancestral energy availability, head size, and so on, so it’s unlikely that we represent the optimal size of mind for forming a group of minds, even assuming an AI can’t reap huge efficiencies by operating essentially as a single mind regardless of scale.
Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl’s law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.
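For concreteness, the Amdahl's law arithmetic behind that claim:

```latex
% Amdahl's law: speed up a fraction p of total cost by a factor s and the
% overall speedup is
\[ S = \frac{1}{(1-p) + p/s} \]
% With p = 0.10 and s \to \infty (an infinite improvement to that 10%):
\[ S \to \frac{1}{0.90} \approx 1.11 \]
% i.e. total cost falls by only the original ten percent.
```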