I think LW may not be accounting for the fact that AI is on an S-curve right now.
AI is obviously on an S-curve, since eventually you run out of energy to feed into the system. But the top of that S-curve is so far beyond human intelligence that this fact is basically irrelevant when considering AI safety.
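To make the shape of that claim concrete (a minimal sketch; the logistic is just a stand-in for whatever the true growth curve is, and $L$, $k$, $t_0$ are illustrative parameters, not anything from the original argument): model capability as

$$C(t) = \frac{L}{1 + e^{-k(t - t_0)}},$$

where the ceiling $L$ is set by physical constraints like available energy. The point is simply that $L \gg C_{\text{human}}$, so the flattening at the top of the curve tells us nothing about how far past human level AI goes on the way up.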
The arguments about fundamental limits of computation (the halting problem, etc.) are also irrelevant for similar reasons: those limits bind humans just as much, so they don't cap AI relative to us. Humans can't even solve BB(6).
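(For anyone unfamiliar with the notation, BB(n) is the Busy Beaver function, here in its step-count form:

$$\mathrm{BB}(n) = \max\{\, s(M) : M \text{ is an } n\text{-state, 2-symbol Turing machine that halts on a blank tape} \,\},$$

where $s(M)$ is the number of steps $M$ runs before halting. It grows faster than every computable function; BB(5) was only settled in 2024, and BB(6) is far out of reach.)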
I definitely agree that the limit could end up being far beyond superhuman, but in that addendum I was talking about limitations that would slow down an AI right around the point where its compute and memory match a human's. It's possible that Addendum 2 does fail, though, so I agree with you that this isn't conclusive. It was more a challenge to the inevitability of fast takeoff/AI explosion than a claim that it can't happen.