I don’t mean to present myself as offering the “best arguments that could be answered here,” or as at all representative of the alignment community. I just wanted to engage. I appreciate your thoughts!
Well, one argument for potential doom doesn’t require an adversarial AI, just people using increasingly powerful tools in dumb and harmful ways. (For me this is in the same class of consideration as nuclear weapons; my crude imagined scenario is a government using AI to continually scale up surveillance until we eventually end up somewhere like 1984.)
Another point is that a sufficiently intelligent and agentic AI would not need humans; relying on humans for anything would probably eventually be suboptimal. And it kinda feels to me like this is exactly what we’re heavily incentivized to design: the next, most capable system. In terms of efficiency, we want to get rid of the human in the loop, because that person’s expensive!