I read some of the post and skimmed the rest; it seems to broadly agree with my current thoughts about AI doom, and I am happy to see someone fleshing out this argument in detail.
[I decided to dump my personal intuition about AI risk below. I don’t have any specific facts to back it up.]
It seems to me that the possibility space of AIs that can and will be created is much larger than the ideal superintelligent “goal-maximiser” put forward in arguments for AI doom.
The tools we end up with depend more on the specific details of the underlying mechanics, and on how we can wrangle them to do what we want, than on our prior beliefs about how we would expect such tools to behave. I imagine that if you lived before aircraft and pictured a future in which humans could fly, you might think people would flap giant wings or pedal-power themselves through the air. While it would be great for that to exist, the limitations of the physics we know how to exploit require a different kind of mechanism, with different strengths and weaknesses from what we would imagine in advance.
There’s no particular reason to think that the practical technologies available will lead to an AI capable of power-seeking just because power-seeking is a side effect of the “ideal” AI that some people want to create. The existing AI tools, as far as I can tell, don’t provide much evidence in that direction. Even if a power-seeking AI eventually becomes practical to create, it may be far from the default, and by then we may already have sufficiently intelligent non-power-seeking AIs.