In my head, I’ve sort of just been simplifying to two ways the future could go: human extinction within a relatively short time after powerful AI is developed, or a pretty good utopian world. The non-extinction outcomes aren’t ones I worry about at the moment, though I’m very curious about how things will play out. Conditional on us figuring out how to align AI, I’m very excited about the future.
I’m curious, for people who think similarly to Katja: what kind of story are you imagining that leads to that? Does the story involve authoritarianism? (Though even then, a world in which the leader of one of the current leading labs has total control and a superintelligent AI that does whatever they want would probably be much, much more fun and exciting for me than the present, and I like my present life!) Does it involve us being presented with only pretty meh options for how to build the future, because we can’t agree on something that wholly satisfies everyone? Does it involve multi-agent scenarios where the AIs, or the humans controlling them, are bad at bargaining, so we end up with meh futures that no one really wants? I find a bunch of these stories pretty unlikely once I think about them, but maybe I’m missing something important.
This is also something I’d be excited to have a Dialogue with someone about, maybe just fleshing out what kind of future you’re imagining and how you’re imagining we end up in that situation.