It’s a good list. @avturchin is good at coming up with a lot of weird possibilities too (example, another example).
If I look within while staring at your list and ask myself what feels likely to me, I think "Partly aligned AI", but not quite the way you describe it. I picture a superintelligence that has an agenda regarding humans, but not an ideal one like CEV: an agenda that may require reshaping humans, at least if they intend to participate in the technological world…
I am also skeptical of the stereotype of the hegemonizing AI that remakes the entire universe. I take the Doomsday Argument seriously, and it suggests to me that anything engaging in that behavior is running some kind of risk. (Another way to resolve the tension between the Doomsday Argument and the hegemonizing AI is to suppose that the latter and its agents are almost always unconscious. But here one is getting into areas where the truth may be something that no human being has yet imagined.)
Thanks! I think your tag of @avturchin didn't work, so I'm just pinging them here to see whether they think I missed any important and probable scenarios.
If we take the Doomsday Argument seriously, the "Futures without AGI because we go extinct in another way" and "Futures with AGI in which we die" scenarios seem most probable. In futures with conscious AGI agents, a lot will depend on how experience gets sampled (e.g., across one agent vs. many).