Superior processing power: Evidence against would be the human brain already being close to the physical limits of what is possible.
It is often cited how much faster expert systems are within their narrow areas of expertise. But does that mean the human brain is actually slower, or merely that it can’t focus all of its resources on certain tasks? Take, for example, my ability to simulate some fantasy environment, off the top of my head, in front of my mind’s eye. Or the human ability to run real-time egocentric world-simulations to extrapolate and predict the behavior of physical systems and other agents. Our best computers don’t come close to that.
Superior serial power: Evidence against would be an inability to increase the serial power of computers anymore.
Chip manufacturers are already earning most of their money by making their chips more energy-efficient and better at working in parallel, rather than by increasing serial speed.
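To make the serial-versus-parallel point concrete, here is a minimal Python sketch of Amdahl’s law (my illustration, not part of the original exchange; the 95%-parallelizable workload is an assumed figure). It shows why adding cores cannot substitute for serial speed: the serial fraction of a task caps the total speedup.

```python
# Minimal sketch of Amdahl's law: the serial fraction of a workload
# caps the speedup obtainable from parallelism, no matter the core count.
# The 95%-parallelizable figure below is an illustrative assumption.

def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Overall speedup when only `parallel_fraction` of the work
    can be spread across `n_cores`; the rest runs serially."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

for cores in (2, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.95, cores), 2))
# -> 1.9, 5.93, 15.42, 19.64; approaching the 1/0.05 = 20x ceiling
```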
Improved algorithms: Evidence against would be the human brain’s algorithms already being perfectly optimized, with no further room for improvement.
We simply don’t know how efficient the human brain’s algorithms are. You can’t just compare artificial algorithms with the human ability to accomplish tasks that were never selected for by evolution.
Designing new mental modules: Evidence against would be evidence that the human brain’s existing mental modules are already sufficient for any cognitive task with any real-world relevance.
This is a feature, not a bug. It is not clear that a general intelligence with a huge amount of plasticity would work at all rather than mess itself up.
Modifiable motivation systems: Evidence against would be evidence that humans are already optimal at motivating themselves to work on important tasks...
This, too, is a feature rather than a bug; see dysfunctional autism.
Copyability: Evidence against would be evidence that minds cannot be effectively copied, maybe because there won’t be enough computing power to run many copies.
You can’t really anticipate being surprised by evidence on this point, because the “minds” of your definition don’t even exist yet and therefore can’t be shown not to be copyable. And regarding brains: show me some neuroscientists who think that minds can effectively be copied.
Perfect co-operation: Evidence against would be that no minds can co-operate better than humans do, or at least not to such an extent that they’d receive a major advantage.
Cooperation is a delicate quality. Too much and you get frozen; too little and you can’t accomplish much. Human science is a great example of a balance between cooperation and useful rivalry. How is a collective intellect of AGIs going to preserve the right balance without mugging itself into pursuing insane expected-utility calculations?
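The “mugging” here alludes to Pascal’s-mugging-style failures of naive expected-utility maximization. A minimal sketch, with entirely made-up probabilities and utilities (my illustration, not the author’s numbers), of how a tiny chance of an astronomical payoff can dominate the calculation:

```python
# Sketch of how naive expected-utility maximization gets "mugged":
# a tiny-probability, astronomically valued outcome dominates.
# All probabilities and utilities below are made-up illustrations.

options = {
    "do useful, verifiable work": [(0.9, 100), (0.1, 0)],
    "chase a speculative huge payoff": [(1e-12, 1e15), (1 - 1e-12, -10)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

for name, outcomes in options.items():
    print(name, expected_utility(outcomes))
# The speculative option "wins" (EU ~ 990 vs. 90) purely on the strength
# of an unverifiable astronomical payoff: the failure mode a collective
# of expected-utility maximizers could talk itself into.
```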
Excluding the possibility of a rapid takeover would require at least strong evidence against gains...
Wait, are you saying that the burden of proof is on those who are skeptical of a Singularity? Are you saying that the null hypothesis is a rapid takeover? What evidence allowed you to form that hypothesis in the first place? Making up unfounded conjectures and then telling others to disprove them leads to privileging random high-utility possibilities that sound superficially convincing, while ignoring other problems that are grounded in empirical evidence.
...it’s not enough to show that e.g. current trends in hardware development show mostly increases in parallel instead of serial power—to refute the gains from increased serial power, you’d also have to show that this is indeed some deep physical limit which cannot be overcome.
All that doesn’t even matter. Computational resources are mostly irrelevant when it comes to risks from AI. What you have to show is that recursive self-improvement is possible. It is a question of whether you can dramatically speed up the discovery of unknown unknowns.