> I would need to understand why early AIs would become so much more powerful than corporations, terrorists or nation-states
One argument I removed to make it shorter was approximately: “It doesn’t have to take over the world to cause you harm”. And since early misaligned AI is more likely to appear in a developed country, your odds of being harmed by it are higher than those of someone in an undeveloped country. If ISIS suddenly found itself 500 strong in Silicon Valley and in control of Google’s servers, surely you would have the right to be concerned before they had a good chance of taking over the whole world. And you’d be doubly worried if you did not understand how it went from 0 to 500 “strong”, or what the next increase in strength might be. You understand how nation-states and terrorist organizations grow. I don’t think anyone currently understands how AI grows in intelligence.
There were a million other arguments I wanted to “head off” in this post, but the whole point of introductory material is to be short.
> there is no reason to believe that rogue AI will be dramatically more powerful than corporations or terrorists
I don’t think that’s true. If our AI ends up no more powerful than existing corporations or terrorists, why are we spending billions on it? It had better be more powerful than something. I agree alignment might not be “solvable” for the reasons you mention, and I don’t claim that it is.
I am specifically claiming AI will be unusually dangerous, though.