This might be a convenient place to collect a variety of reasons why people are FOOM denialists. From my POV:
1. I am skeptical of the claim that safeguards against UFAI (unfriendly AI) will not work. In part because:
2. I doubt that the “takeoff” will be “hard”. Because:
3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
4. And hence an effective safeguard would be to simply not give the machine its own credit card!
5. And in any case, the Moore’s law curve for electronics does not arise from delays in thinking up clever ideas; it arises from delays in building machines to incredibly high tolerances.
6. Furthermore, even after the machine has more hardware, it doesn’t yet have higher intelligence until it reads lots more encyclopedias and proves for itself many more theorems. These things take time.
7. And finally, I have yet to see the argument that an FAI protects us from a future UFAI. That is, how does the SIAI help us?
8. Oh, and I do think that the other existential risks, particularly war and economic collapse, put the UFAI risk pretty far down the priority list. Sure, those other risks may not be quite so existential, but if they don’t kill us, they will at least prevent an early singularity.
Edit added two days later: Since writing this, I thought about it some more, shut up for a moment, and did the math. I still think it is unlikely that the first takeoff will be a hard one, i.e. so hard that it gets out of control. But I now estimate something like a 10% chance that the first takeoff will be hard, and something like a 30% chance that at least one of the first couple dozen takeoffs will be hard. Multiply that by an estimated 10% chance that a hard takeoff will take place without adequate safeguards in place, and another 10% chance that a safeguardless hard takeoff will go rogue, and you get something like a 0.3% chance of a disaster of Forbin Project magnitude. Completely unacceptable.
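For anyone who wants to check the arithmetic, here is a minimal sketch in Python; the three figures are simply the estimates stated above, not measured probabilities:

```python
# Back-of-the-envelope check of the estimates above (all figures are the
# stated guesses, not measured probabilities).
p_some_hard_takeoff = 0.30  # at least one of the first couple dozen takeoffs is hard
p_no_safeguards     = 0.10  # a hard takeoff happens without adequate safeguards
p_goes_rogue        = 0.10  # a safeguardless hard takeoff actually goes rogue

p_disaster = p_some_hard_takeoff * p_no_safeguards * p_goes_rogue
print(f"Estimated chance of a Forbin-Project-scale disaster: {p_disaster:.1%}")  # 0.3%
```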
Originally, I had discounted the chance that a simple software change could cause the takeoff; I assumed you would need to double and redouble the hardware capability. What I failed to notice was that a simple “tuning” change to the (soft) network connectivity parameters (changing the maximum number of inputs per “neuron” from 8 to 7, say) could have an (unexpected) effect on performance of several orders of magnitude simply by suppressing wasteful thrashing or some such thing.
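To make the “thrashing” intuition concrete, here is a toy model, with capacity and cost numbers invented purely for illustration and no relation to any real hardware: a fixed-capacity resource serves requests cheaply while the working set fits, and pays a huge penalty once it doesn't, so moving a single connectivity-style parameter across the threshold swings performance by orders of magnitude.

```python
# Toy model only: invented capacity and cost numbers. The point is the
# performance cliff, not the specific values.
CAPACITY = 60            # units of fast storage available
UNITS_PER_INPUT = 8      # working-set cost of each input connection
HIT_COST, MISS_COST = 1, 10_000   # relative cost per step without/with thrashing

def cost_per_step(inputs_per_neuron: int) -> int:
    working_set = inputs_per_neuron * UNITS_PER_INPUT
    return HIT_COST if working_set <= CAPACITY else MISS_COST

for k in (7, 8):
    print(f"{k} inputs per 'neuron' -> relative cost {cost_per_step(k)}")
# 7 inputs fits (cost 1); 8 inputs thrashes (cost 10000): a four-orders-of-
# magnitude swing from a one-unit change in a single tuning parameter.
```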
3. I am pretty sure the takeoff will require repeatedly doubling and quadrupling hardware, not just autorewriting software.
Do you think that progress in AI is limited primarily by hardware? If hardware is the limiting factor, then you should think “AI soon” relatively plausible. If software is the limiting factor (the majority view, and the reason most AI folk reject claims such as those of Moravec), such that we won’t get AI until well beyond the minimum computational requirements, then either early AIs should be able to run fast or with numerous copies cheaply, or there will be a lot of room to reduce bloated hardware demands through software improvements.
Thinking that AI will take a long time (during which hardware will advance mightily towards physical limits) but also be sharply and stably hardware-limited when created is a hard view to defend.
I am imagining that it will work something like the human brain (but not by ‘scan and emulate’). We need to create hardware modules comparable to neurons; we need some kind of geometric organization which permits individual hardware modules to establish physical connections to a handful of nearby modules; we need a ‘program’ (corresponding to human embryonic development) which establishes a few starting connections; and finally we need a training period (like training a neural net, and comparable to what the human brain experiences from the first neural activity in the womb through graduate school) which adds many more physical connections. I’m not sure whether to call these connections hardware or software. Actually, they are a hybrid of both—like PLAs (yeah, I’m way out of date on technology).
So I’m imagining a lot of theoretical work needed to come up with a good ‘neuron’ design (probably several dozen different kinds of neurons), more theoretical work to come up with a good ‘program’ to correspond to the embryonic interconnect, and someone willing to pay for lots and lots of neurons.
So, yeah, I’m thinking that the program will be relatively simple (equivalent to a few million lines of code), but it will take us a long time to find it. Not the 500 million years that it took evolution to come up with that program—apparently 500 million years after it had already invented the neuron. But for human designers, at least a few decades to find and write the program. I hope this explanation helps to make my position seem less weird.
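Purely as an illustrative sketch of that kind of architecture (the grid size, fan-outs, and wiring rules below are all invented for the example, not taken from the comment above): modules laid out on a grid, a simple “developmental program” that makes a few local connections, then a “training” pass that adds more.

```python
import random

# Toy sketch: "neuron" modules on a 2-D grid, an embryonic "program" that
# makes a few initial local connections, and a training phase that adds
# more. All parameters are invented for illustration only.
GRID = 16                 # 16 x 16 grid of modules
NEIGHBOUR_RADIUS = 2      # modules may only connect to nearby modules
INITIAL_FANOUT = 3        # connections made by the embryonic "program"
TRAINED_FANOUT = 7        # extra connections added during "training"

def neighbours(x, y):
    """All grid positions within the wiring radius of (x, y)."""
    return [(i, j)
            for i in range(max(0, x - NEIGHBOUR_RADIUS), min(GRID, x + NEIGHBOUR_RADIUS + 1))
            for j in range(max(0, y - NEIGHBOUR_RADIUS), min(GRID, y + NEIGHBOUR_RADIUS + 1))
            if (i, j) != (x, y)]

def develop():
    """Embryonic program: each module gets a few random local connections."""
    return {(x, y): set(random.sample(neighbours(x, y), INITIAL_FANOUT))
            for x in range(GRID) for y in range(GRID)}

def train(connections):
    """Training period: add further local connections (a stand-in for learning)."""
    for pos, targets in connections.items():
        candidates = [n for n in neighbours(*pos) if n not in targets]
        targets.update(random.sample(candidates, min(TRAINED_FANOUT, len(candidates))))

net = develop()
train(net)
total = sum(len(t) for t in net.values())
print(f"{GRID * GRID} modules, {total} connections after development + training")
```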
4. And hence an effective safeguard would be to simply not give the machine its own credit card!
(Powerful) optimization processes can solve problems by exploiting every possible shortcut, in ways that are hard to predict in advance. There was a recent example of that here: a genetic algorithm found an unexpected solution to a problem by exploiting the analog properties of a particular FPGA chip.
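As a toy illustration of the general point (this is not a reconstruction of the FPGA experiment), here is a minimal genetic algorithm that maximises a sloppy proxy objective by exploiting a loophole the designer did not intend:

```python
import random

# Toy illustration of an optimizer exploiting a sloppy objective. The
# *intent* is to evolve a strictly increasing list of digits; the fitness
# function only checks a[i] <= a[i+1], so the search happily converges on
# degenerate lists full of repeats instead.
LENGTH, POP, GENERATIONS, MUTATION = 10, 60, 200, 0.2

def fitness(genome):
    # Sloppy proxy: count non-decreasing adjacent pairs (<=, not <).
    return sum(genome[i] <= genome[i + 1] for i in range(LENGTH - 1))

def mutate(genome):
    return [random.randint(0, 9) if random.random() < MUTATION else g for g in genome]

population = [[random.randint(0, 9) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP // 2]
    population = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]

best = max(population, key=fitness)
print("best genome:", best, "fitness:", fitness(best))
# Typically ends with near-maximal fitness on a list full of repeated digits:
# the proxy is satisfied, but it is nothing like the strictly increasing
# list the designer had in mind.
```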
7-8 aren’t hard-takeoff-denialist ideas; they’re SIAI noncontribution arguments. Good summary, though.
Phew! First, my material on the topic:
http://alife.co.uk/essays/the_singularity_is_nonsense/
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
Then a few points—which I may add to later.
3 and 4: hardware, sure—that is improving too—just not as fast, sometimes. A machine may find a way to obtain a credit card—or it will get a human to buy whatever it needs—as happens in companies today.
6: how much time? Surely a better example would be: “perform experiments”—and experiments that can’t be miniaturised and executed at high speeds—such as those done in the LHC.
7: AltaVista didn’t protect us from Google—nor did Friendster protect against MySpace. However, so far Google has mostly successfully crushed its rivals.
8: no way, IMO—e.g. see Matt Ridley. Reading him is probably good advice for all DOOMsters, actually.
Some of the most obvious safeguards are likely to be self-imposed ones:
http://alife.co.uk/essays/stopping_superintelligence/
...though a resilient infrastructure would help too. We see rogue agents (botnets) “eating” the internet today—and it is not very much fun!
Incidentally, a much better place for this kind of comment on this site would be:
http://lesswrong.com/lw/wf/hard_takeoff/