I don’t see a lot of people on LW assuming that capabilities progress will be obvious, or that we’re likely to get warning shots. If the people on safety teams at capabilities orgs do think this way, that’s interesting. I think the truth is probably somewhere in between.
I think this might’ve been better titled “AI capabilities are hard to predict”? IQ isn’t a great predictor of dangerousness for humans. And AI does have something like IQ, and will have it even more if it crosses a critical threshold and gains (or, more likely, is given) general self-teaching capacities like humans have. See my recent short post on this: Sapience, understanding, and “AGI”.
It’s another argument for why we’ll see nonlinear progress, but it also implies a possible threshold that could be recognized, if people are alert to it.