if what we are observing doesn’t constitute evidence against the Singularity in your opinion, then what would?
I’m not marchdown, but:
Estimating the probability of a Singularity requires looking at various possible advantages of digital minds and asking what would constitute evidence against such advantages being possible. Some possibilities:
Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.
Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
Superior parallel power: Evidence against would be an indication of extra parallel power not being useful for a mind that already has human-equivalent (whatever that means) parallel power.
Improved algorithms: Evidence against would be the human brain’s algorithms already being perfectly optimized and with no further room for improvement.
Designing new mental modules: Evidence against would be evidence that the human brain’s existing mental modules are already sufficient for any cognitive task with any real-world relevance.
Modifiable motivation systems: Evidence against would be evidence that humans are already optimal at motivating themselves to work on important tasks, that realistic techniques could be developed to make humans optimal in this sense, or that having a great number of minds without any akrasia issues would have no major advantage over humans.
Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won’t be enough computing power to run many copies. Alternatively, that copying minds would result in rapidly declining marginal returns and that the various copying advantages discussed by e.g. Hanson and Shulman aren’t as big as they seem.
Perfect co-operation: Evidence against would be that no minds can co-operate better than humans do, or at least not to such an extent that they’d receive a major advantage. Also, evidence of realistic techniques bringing humans to this level of co-operation.
Superior communication: Evidence against would be that no minds can communicate better than humans do, or at least not to such an extent that they’d receive a major advantage. Also, evidence of realistic techniques bringing humans to this level of communication.
Transfer of skills: Evidence against would be that no minds can teach better than humans do, or at least not to such an extent that they’d receive a major advantage. Also, evidence of realistic techniques bringing humans to this level of skill transfer.
Various biases: Evidence against would either be that human cognitive biases are not actually major ones, or that no mind architecture could overcome them. Also, evidence that humans actually have a realistic chance of overcoming most biases.
Depending on how you define “the Singularity”, some of these may be irrelevant. Personally, I think the most important aspect of the Singularity is whether minds drastically different from humans will eventually take over, and how rapid the transition could be. Excluding the possibility of a rapid takeover would require at least strong evidence against gains from increased serial power, increased parallel power, improved algorithms, new mental modules, copyability, and transfer of skills. That seems quite hard to come by, especially once you take into account the fact that it’s not enough to show that e.g. current trends in hardware development show mostly increases in parallel instead of serial power—to refute the gains from increased serial power, you’d also have to show that this is indeed some deep physical limit which cannot be overcome.
Okay, to look at some of the specifics:
Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.
The linked article is amusing but misleading; the described ‘ultimate laptop’ would essentially be a nuclear explosion. The relevant physical limit is ln(2)·kT of energy dissipated per bit erased; in SI units at room temperature this is about 3e-21 joules. We don’t know exactly how much computation the human brain performs; middle-of-the-road estimates put it in the ballpark of 1e18 several-bit operations per second for 20 watts, which is not very many orders of magnitude short of even the theoretical limit imposed by thermodynamics, let alone whatever practical limits may arise once we take into account issues like error correction, communication latency and bandwidth, and the need for reprogrammability.
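A rough back-of-the-envelope check of those figures, in Python (the 20-watt and 1e18-operations-per-second numbers are the ballpark estimates above, and the bits-per-operation figure is an assumption made purely for illustration):

    # Sanity check of the thermodynamic argument above. The 20 W and 1e18
    # operations/second figures are the ballpark brain estimates from the
    # text; bits_per_op is an assumed value for "several-bit" operations.
    import math

    k_B = 1.380649e-23                           # Boltzmann constant, J/K
    T = 300.0                                    # room temperature, K
    landauer_j_per_bit = math.log(2) * k_B * T   # ~2.9e-21 J per bit erased

    brain_watts = 20.0
    brain_ops_per_s = 1e18                       # several-bit operations per second
    bits_per_op = 10                             # assumption for "several-bit"

    max_bit_erasures_per_s = brain_watts / landauer_j_per_bit   # ~7e21 per second
    brain_bit_ops_per_s = brain_ops_per_s * bits_per_op         # ~1e19 per second

    print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.2e} J/bit")
    print(f"Bit erasures 20 W could pay for: {max_bit_erasures_per_s:.1e}/s")
    print(f"Estimated brain bit operations:  {brain_bit_ops_per_s:.1e}/s")
    print(f"Gap: roughly {math.log10(max_bit_erasures_per_s / brain_bit_ops_per_s):.1f} orders of magnitude")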
Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
Indeed we hit this some years ago. Of course as you observe, it is impossible to prove serial speed won’t start increasing again in the future; that’s inherent in the problem of proving a negative. If such proof is required, then no sequence of observations whatsoever could possibly count as evidence against the Singularity.
Superior parallel power:
Of course uses can always be found for more parallel power. That’s why we humans make use of it all the time, both by assigning multiple humans to a task, and increasingly by placing multiple CPU cores at the disposal of individual humans.
Improved algorithms:
Finding these is (assuming P!=NP) intrinsically difficult; humans and computers can both do it, but neither will ever be able to do it easily.
Designing new mental modules:
As for improved algorithms.
Modifiable motivation systems:
An advantage when they reduce akrasia, a disadvantage when they make you more vulnerable to wireheading.
Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won’t be enough computing power to run many copies.
Indeed there won’t, at least initially; supercomputers don’t grow on trees. Of course, computing power tends to become cheaper over time, but that does take time, so no support for hard takeoff here.
Alternatively, that copying minds would result in rapidly declining marginal returns and that the various copying advantages discussed by e.g. Hanson and Shulman aren’t as big as they seem.
Matt Mahoney argues that this will indeed happen because an irreducible fraction of the knowledge of how to do a job is specific to that job.
Perfect co-operation:
Some of the more interesting AI work has been on using a virtual market economy to allocate resources between different modules within an AI program, which suggests computers and humans will be on the same playing field.
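As a toy illustration of the general idea (the module names, bids, and proportional-share rule below are invented for the example, not drawn from any particular system):

    # Toy sketch: modules bid for a fixed compute budget and receive time in
    # proportion to their bids. Module names and bid values are made up.
    from typing import Dict

    def allocate_compute(bids: Dict[str, float], budget_ms: float) -> Dict[str, float]:
        """Split a compute budget (in milliseconds) among modules in
        proportion to what each module bids."""
        total = sum(bids.values())
        return {name: budget_ms * bid / total for name, bid in bids.items()}

    bids = {"planner": 5.0, "vision": 3.0, "memory": 2.0}
    print(allocate_compute(bids, budget_ms=100.0))
    # {'planner': 50.0, 'vision': 30.0, 'memory': 20.0}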
Superior communication:
Empirically, progress in communication technology between humans outpaces progress in AI, and has done so for as long as digital computers have existed.
Transfer of skills:
Addressed under copyability.
Various biases:
Hard to say, both because it’s very hard to see our own biases, and because a bias that’s adaptive in one situation may be maladaptive in another. But if we believe maladaptive biases run deep, such that we cannot shake them off with any confidence, then we should be all the more skeptical of our far beliefs, which are the most susceptible to bias.
Of course, there is also the fact that humans can and do tap the advantages of digital computers, both by running software on them, and in the long run potentially by uploading to digital substrate.
we should be all the more skeptical of our far beliefs, which are the most susceptible to bias.
Just out of interest… assume my far beliefs take the form of a probability distribution over possible future outcomes. How can I be “skeptical” of that? Given that something will happen in the future, all I can do is update in the direction of a different probability distribution.
In other words, which direction am I likely to be biased in?
In the direction of overconfidence, i.e., assigning too much probability mass to your highest-probability theory.
We should update away from beliefs that the future will resemble a story, particularly a story whose primary danger will be fought by superheroes (most particularly for those of us who would personally be among the superheroes!) and towards beliefs that the future will resemble the past and the primary dangers will be drearily mundane.
The future will certainly resemble a story—or, more accurately, will be capable of being placed into several plausible narrative frames, just as the past has. The bias you’re probably trying to point to is in interpreting any particular plausible story as evidence for its individual components—or, for that matter, against.
The conjunction fallacy implies that any particular vision of a Singularity-like outcome is less likely than our untrained intuitions would lead us to believe. It’s an excellent reason to be skeptical of any highly derived theories of the future—the specifics of Ray Kurzweil’s singularity timeline, for example, or Robin Hanson’s Malthusian emverse. But I don’t think it’s a good reason to be skeptical of any of the dominant singularity models in general form. Those don’t work back from a compelling image to first principles; most of them don’t even present specific consequential predictions, for fairly straightforward reasons. All the complexity is right there on the surface, and attempts to narrativize it inevitably run up against limits of imagination. (As evidence, the strong Singularity has been fairly poor at producing fiction when compared to most future histories of comparable generality; there’s no equivalent of Heinlein writing stories about nuclear-powered space colonization, although there’s quite a volume of stories about weak or partial singularities.)
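To make the conjunction point concrete, a toy calculation (all the component probabilities here are invented, and independence is assumed only to keep the arithmetic simple):

    # Toy illustration of the conjunction point: even if each component of a
    # detailed future scenario looks individually plausible, the whole story
    # is far less probable. Probabilities are made up; independence is
    # assumed purely for simplicity.
    components = {
        "AGI arrives in a specific decade": 0.5,
        "takeoff proceeds via a specific mechanism": 0.4,
        "a specific social/economic order follows": 0.3,
        "specific actors play their predicted roles": 0.2,
    }
    joint = 1.0
    for claim, p in components.items():
        joint *= p
    print(f"joint probability of the full story: {joint:.3f}")   # 0.012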
So yes, there’s not going to be a singleton AI bent on turning us all into paperclips. But that’s a deliberately absurd instantiation of a much more general pattern. I can conceive of a number of ways in which the general pattern too might be wrong, but the conjunction fallacy doesn’t fly; a number of attempted debunkings, meanwhile, do suffer from narrative fixation issues.
Superhero bias is a more interesting question—but it’s also a more specific one.
Well, any sequence of events can be placed in a narrative frame with enough of a stretch, but the fact remains that different sequences of events differ in their amenability to this; fiction is not a random sampling from the space of possible things we could imagine happening, and the Singularity is narratively far stronger than most imaginable futures, to a degree that indicates a bias we should correct for. I’ve seen a fair bit of strong Singularity fiction at this stage, though, being, well, singular, it tends not to be amenable to repeated stories by the same author the way Heinlein’s vision of nuclear-powered space colonization was.
Empirically, progress in communication technology between humans outpaces progress in AI, and has done so for as long as digital computers have existed.
The best way to colonize Alpha Centauri has always been to wait for technology to improve rather than launching an expedition, but it’s impossible for that to continue to be true indefinitely. Short of something like direct mind-to-mind communication, together with a concurrent halt to AI progress, AI advances will probably outpace human communication advances in the near to medium term.
It seems unreasonable to believe that human minds, optimized for considerations such as politicking in addition to communication, will be able to communicate just as well as designed AIs. Human mind development was constrained by ancestral energy availability, head size, and so on, so it’s unlikely that our minds are optimally sized for forming a group of minds, even assuming an AI isn’t able to reap huge efficiencies by operating essentially as a single mind regardless of scale.
Or human communications may stop improving because they are good enough to no longer be a major bottleneck, in which case it may not greatly matter whether other possible minds could do better. Amdahl’s law: if something was already only ten percent of total cost, improving it by a factor of infinity would reduce total cost by only that ten percent.
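A minimal sketch of that Amdahl's-law arithmetic (the ten-percent share is the illustrative figure from the comment above, not a measured one):

    # If communication is only a fraction p of total cost, even an infinite
    # speedup of communication leaves the other (1 - p) untouched.
    def remaining_cost(p_comm: float, speedup: float) -> float:
        """Total cost, as a fraction of the original, after speeding up the
        communication share p_comm by the given factor."""
        return (1.0 - p_comm) + p_comm / speedup

    for s in (2, 10, float("inf")):
        print(f"speedup {s}: cost falls to {remaining_cost(0.10, s):.2f} of the original")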
Superior processing power. Evidence against would be the human brain already being close to the physical limits of what is possible.
It is often pointed out how much faster expert systems are within their narrow areas of expertise. But does that mean that the human brain is actually slower, or just that it can’t focus all of its resources on such narrow tasks? Take for example my ability to simulate some fantasy environment, off the top of my head, in front of my mind’s eye. Or the ability of humans to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. Our best computers don’t even come close to that.
Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
Chip manufacturers are already earning most of their money by making their chips more energy-efficient and better at working in parallel.
Improved algorithms: Evidence against would be the human brain’s algorithms already being perfectly optimized and with no further room for improvement.
We simply don’t know how efficient the human brain’s algorithms are. You can’t just compare artificial algorithms with the human ability to accomplish tasks that were never selected for by evolution.
Designing new mental modules: Evidence against would be evidence that the human brain’s existing mental modules are already sufficient for any cognitive task with any real-world relevance.
This is actually a feature: it is not clear that a general intelligence with a huge amount of plasticity would work at all, rather than messing itself up.
Modifiable motivation systems: Evidence against would be evidence that humans are already optimal at motivating themselves to work on important tasks...
This is actually a feature; see dysfunctional autism.
Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won’t be enough computing power to run many copies.
You can’t really anticipate being surprised by evidence on this point, because the kind of “mind” you’re defining doesn’t even exist yet and therefore can’t be shown not to be copyable. And regarding brains, show me some neuroscientists who think that minds are effectively copyable.
Perfect co-operation: Evidence against would be that no minds can co-operate better than humans do, or at least not to such an extent that they’d receive a major advantage.
Cooperation is a delicate quality. Too much and you get frozen, too little and you can’t accomplish much. Human science is a great example of a balance between cooperation and useful rivalry. How is a collective intellect of AGIs going to preserve the right balance without mugging itself into pursuing insane expected-utility calculations?
Excluding the possibility of a rapid takeover would require at least strong evidence against gains...
Wait, are you saying that the burden of proof is with those who are skeptical of a Singularity? Are you saying that the null hypothesis is a rapid takeover? What evidence allowed you to form that hypothesis in the first place? Making up unfounded conjectures and then telling others to disprove them leads to privileging random high-utility possibilities that sound superficially convincing, while ignoring other problems that are based on empirical evidence.
...it’s not enough to show that e.g. current trends in hardware development show mostly increases in parallel instead of serial power—to refute the gains from increased serial power, you’d also have to show that this is indeed some deep physical limit which cannot be overcome.
All that doesn’t even matter. Computational resources are mostly irrelevant when it comes to risks from AI. What you have to show is that recursive self-improvement is possible. It is a question of whether you can dramatically speed up the discovery of unknown unknowns.