As far as I can tell, this possibility of an exponentially-paced intelligence explosion is the main argument for folks devoting time to worrying about super-intelligent AI now, even though current technology doesn’t give us anything even close.
Not at all. The reasons we should work on AI alignment now are:
AI alignment is a hard problem
We don’t know how long it will take us to solve it
We don’t know how long it will be until superintelligent AI becomes possible
There is no strong reason to believe we will know superintelligent AI is coming far in advance
“Current technology doesn’t give us anything even close” is not very informative, since we don’t know the metric with respect to which “close” should be measured. Heavier-than-air flight was believed impossible by many until the Wright brothers did it. The technology of 1929 didn’t give anything close to an atom bomb or a moon landing, and yet the atom bomb was built 16 years later and the moon landing happened 40 years later.
Regarding the differential equations, I don’t think it’s a very meaningful analysis if you haven’t even defined the scale on which you measure intelligence. If I(x) is some measure of intelligence that grows exponentially, then log I(x) is another measure of intelligence which grows linearly, and if I(x) grows linearly then exp I(x) grows exponentially.
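To make the reparameterization point concrete, here is a minimal illustration (the specific growth law $dI/dt = kI$ is just an assumed example, not necessarily the model in the OP):

$$\frac{dI}{dt} = kI \;\Longrightarrow\; I(t) = I_0 e^{kt} \ \text{(exponential)}, \qquad J(t) := \log I(t) \;\Longrightarrow\; \frac{dJ}{dt} = k \;\Longrightarrow\; J(t) = \log I_0 + kt \ \text{(linear)}.$$

The same underlying trajectory is “exponential” on one scale and “linear” on another, so the growth classification carries no content until the scale is pinned down.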
Also, you might be interested in this paper by Yudkowsky.
if you do want to analyze the plausibility of an intelligence explosion then it seems worthwhile to respond in detail to previous work
If you replace “analyze the plausibility” with “convincingly demonstrate to skeptics” then this seems right.
The OP seems to be written more in the spirit of exploration than of conclusive argument, though, which seems valuable and doesn’t necessarily require responding in detail to prior work (in this case ~100 pages). Seems like kind of a soul-crushing way to respond to curiosity :)
(I hope my own comments didn’t come across harshly.)
You’re right, sorry. Edited.