Most of the questions seem to be loaded or ambiguous in some way.
For example, this one implies intelligence is simply a hardware problem:
Computers will soon become so fast that AI researchers will be able to create an artificial intelligence that’s smarter than any human. When this happens humanity will probably be wiped out.
Well, to some extent, that’s true. If a malicious god gave us a computer with infinite or nigh-infinite computing power, we could probably have AIXI up and running within a few days. Similar comments apply to brain emulation—things like the Blue Brain project indicate our scanning ability, poor as it may seem, is still way beyond our ability to run the scanned neurons.
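To spell out why the hypercomputer matters: AIXI's action rule, in roughly the form Hutter gives it, is a full expectimax over future rewards, weighted by a Solomonoff prior over every environment program q consistent with the history on a universal Turing machine U, and that sum over all programs is what makes it incomputable on any ordinary hardware:

$$
a_k \;:=\; \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \bigl[r_k + \cdots + r_m\bigr] \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

Here ℓ(q) is the length of program q, so simpler environments get more weight; the point is just that the specification is complete and the only missing ingredient is (absurdly much) compute.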
Even if you don’t interpret ‘hardware problem’ quite that generously, you still get an argument for hard takeoff: the ‘hardware overhang’ argument. If you prefer to argue that software is the bottleneck, then you have the problem that by the time we finally blunder into a working AI, it will be running on hardware far beyond what an intelligently-written AI would have needed, and all of that surplus is available to it the moment it works.
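As a toy illustration of the overhang (the years and doubling time below are my own assumptions, not anything from the survey or the argument itself): if hardware adequate for an efficiently-written AI arrives in one year but the software isn't stumbled into until decades later, the first crude AI inherits every doubling in between.

```python
# Toy hardware-overhang calculation; all numbers are illustrative assumptions.
def overhang_factor(year_hw_sufficient: float,
                    year_software_works: float,
                    doubling_time_years: float = 1.5) -> float:
    """Compute surplus available to the first working AI, assuming hardware
    kept doubling on schedule after it was already 'enough'."""
    lag = year_software_works - year_hw_sufficient
    return 2 ** (lag / doubling_time_years)

# Example: hardware sufficient in 2020, software lags until 2040, with an
# assumed 18-month doubling time -> roughly a 10,000x compute surplus.
print(overhang_factor(2020, 2040))  # ~10321
```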
So you’re faced with a bit of a dilemma. Either hardware is the limit, in which case Moore’s law means you expect an AI soon and expect it to quickly surpass humans with a few more cranks of the law; or you expect an AI much further out, but when it comes it’ll improve even faster than the other kind would.
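And on the first horn, ‘a few more cranks’ really is few. A quick sketch (again assuming an 18-month doubling time, which is an assumption, not data) of how many doublings it takes to go from roughly human-parity hardware to a comfortable margin past it:

```python
import math

# How many Moore's-law "cranks" to go from human-parity compute to a
# target multiple of it? Doubling time is an assumed parameter.
def cranks_needed(target_multiple: float, doubling_time_years: float = 1.5):
    doublings = math.ceil(math.log2(target_multiple))
    return doublings, doublings * doubling_time_years

for multiple in (2, 10, 100):
    d, yrs = cranks_needed(multiple)
    print(f"{multiple:>3}x human-parity compute: {d} doublings (~{yrs:.1f} years)")
# 2x: 1 doubling (~1.5y); 10x: 4 doublings (~6y); 100x: 7 doublings (~10.5y)
```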