I find arguing about timelines also frustrating for the reason that it has a huge asymmetry.
Suppose you have a probability distribution over the times at which AGI might appear, which is the best you can honestly claim to have (if you claim to have a single precise date, either you’re a prophet or you don’t know what you’re talking about).
The AI safety argument only needs to demonstrate that the total P(T < 20 years) is higher than, say, 5% to argue that AI safety isn’t just a problem, it’s a relatively urgent problem.
The AI optimism argument needs to argue that P(T < 20 years), or even P(T < 50 years), is essentially zero, because otherwise the consequences are potentially so massive that it takes very little to offset even a low probability. How do you do that? I understand thinking that AGI is probably kinda far off. How do you get absolutely sure that AGI can’t possibly be around the corner? I mean willing-to-bet-and-be-shot-in-the-head-if-you-lose sure? Because that’s what we’re talking about here. I don’t see how any intellectually honest technologist can be that confident about such a hard-to-predict thing.
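To make the asymmetry concrete, here’s a minimal toy sketch (the numbers are invented purely for illustration, not anyone’s actual forecast): even a distribution whose bulk sits decades out can easily put well over 5% of its mass inside 20 years.

```python
# Toy sketch only: a made-up probability mass over AGI arrival times.
# The point is that "probably far off" is compatible with a non-trivial
# probability of "within 20 years".
agi_arrival = {        # years from now -> probability (hypothetical)
    10: 0.05,
    20: 0.10,
    30: 0.25,
    50: 0.30,
    100: 0.20,
    float("inf"): 0.10,  # "never", or beyond any foreseeable horizon
}

p_within_20 = sum(p for t, p in agi_arrival.items() if t <= 20)
print(f"P(T <= 20 years) = {p_within_20:.2f}")  # 0.15 here, three times a 5% urgency threshold
```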
There are very few things I’d be willing to bet my life on, and specific-ish dates for conceivably feasible technological developments are not among them. And if you do accept that AGI is possible, and you do think that it’s powerful and in fact worth seeking, and you justify your work by saying it will revolutionize the world, but you also deny that it can possibly be correspondingly dangerous, then you’re just not a serious person.
I think this is a critical point in favor of AGI safety concern. Distilling this argument to a talking point seems useful for the public debate.
“Willing-to-bet-and-be-shot-in-the-head-if-you-lose” starts to feel a little too much like Pascal’s Mugging. By the same token we could make different arguments, say that unimaginably high suffering (S-risk) is happening right now, which seems plausible enough that nobody would be willing to stake their life on a bet against it, and the implication of those arguments might in fact be that AI should mostly wipe out complex life on Earth and start from scratch. I.e., this would suggest that consciousness and ethics research deserves an even higher, or at least equal, priority compared to the more “orthodox” style of AI x-risk concern, which takes humanity’s (or at least complex life’s) continued existence on Earth as an unshakeable assumption (prior).
But in general, I agree with the sentiment of your comment.
I say it just to mean that the level of certainty required is quite high. Even if you count only human deaths and not all the rest of the value lost, with 8 billion people a 0.0001% chance of extinction is an expected value of 8,000 deaths. People would usually be quite careful about boldly stating something that could get 8,000 people killed! Anyone who’s trying to argue that “no worries, it’ll be fine” has a much higher burden of proof for that reason alone, IMO, especially if they want to dodge altogether the argument for why it might not be fine and simply rely on “there’s no way we’ll invent AGI that soon anyway”, which many people do.
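For concreteness, the arithmetic behind that figure (a back-of-the-envelope sketch; the 0.0001% is just the illustrative number from the comment above):

```python
# Back-of-the-envelope expected-deaths calculation: a 0.0001% chance of
# extinction, applied to 8 billion people, is 8,000 deaths in expectation.
population = 8_000_000_000
p_extinction = 0.0001 / 100   # 0.0001% expressed as a probability, i.e. 1e-6

expected_deaths = population * p_extinction
print(expected_deaths)        # 8000.0
```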