If your base rate is strongly different from the expert consensus, there should be some explainable reason for the difference.
If the reason for the difference is “I thought a lot about it, but I can’t explain the details to you”, I will happily add yours to the list of “bad arguments”.
A good argument should be:
- simple
- backed up by facts that are either self-evidently true or empirically observable
If you give me a list of “100 things that make me nervous”, I can just as easily give you “a list of 100 things that make me optimistic”.
There are a lot of problems with linking to Manifold and calling it “the expert consensus”!
It’s not the right source. The survey you linked elsewhere would be better.
Even for the survey, it’s unclear whether these are the “right” experts for the question. This at least needs clarification.
It’s not a consensus; it’s a median or mean of a pretty wide distribution (see the small numeric sketch after this comment).
I wouldn’t belabor it, but you’re putting quite a lot of weight on this one point.
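To make that last point concrete, here is a toy sketch with invented numbers (not the actual survey or Manifold data) showing how far apart the median and mean can sit when the distribution is wide:

```python
import statistics

# Hypothetical p(doom) answers from ten respondents (invented for
# illustration, not real survey data). The spread, not any single
# summary statistic, is the real story.
estimates = [0.001, 0.01, 0.02, 0.02, 0.05, 0.05, 0.10, 0.30, 0.50, 0.90]

print(statistics.median(estimates))  # 0.05   -> "the median expert says 5%"
print(statistics.mean(estimates))    # 0.1951 -> "the mean expert says ~20%"
```

Neither number is a “consensus”; both are summaries of people who disagree with each other by orders of magnitude.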
This was the most compelling part of their post for me:
“You are correct about the arguments for doom being either incomplete or bad. But the arguments for survival are equally incomplete and bad.”
And you really don’t seem to have taken it to heart. You’re demanding that doomers provide you with a good argument. Well, I demand that you provide me with a good argument!
More seriously: we need to weigh the doom-evidence and the non-doom-evidence against each other. But your position seems to be that we should look at the doom-evidence, and if it’s not very good, then p(doom) should be low. That’s wrong: you don’t acknowledge that the non-doom-evidence is also not very good. In other words, there’s a ton of uncertainty.
“If you give me a list of ‘100 things that make me nervous’, I can just as easily give you ‘a list of 100 things that make me optimistic’.”
Then it would be a lot more logical for your p(doom) to be 0.5 rather than 0.02-0.2!
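One way to formalize that intuition, as a toy sketch with invented likelihood ratios rather than a claim about the actual evidence: in odds form, Bayes’ rule multiplies prior odds by a likelihood ratio for each piece of evidence. If the doom-evidence and the non-doom-evidence are equally weak, their ratios roughly cancel, and an indifference prior of 0.5 just stays put:

```python
def posterior(prior: float, *likelihood_ratios: float) -> float:
    """Odds-form Bayes: posterior odds = prior odds x product of the LRs."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Invented numbers: the doom-evidence nudges the odds up 1.3x, the
# survival-evidence nudges them down by the same factor, so they cancel.
print(posterior(0.5, 1.3, 1 / 1.3))  # -> 0.5
```

Note that getting from a 0.5 prior down to 0.02 requires a combined likelihood ratio of roughly 1:49 against doom, which is a lot to extract from arguments both sides agree are bad.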
Feels like this attitude would lead you to neurotically obsess over tons of things. You ought to have something that strongly distinguishes AI from other concepts before you start worrying about it, considering how infeasible it is to worry about everything conceivable.
Well, of course there is something different: the p(doom), based on the opinions of a lot of people whom I consider smart. That strongly distinguishes it from just about every other concept.
“People I consider very smart say this is dangerous” seems so cursed, especially in response to people questioning whether it is dangerous. It would be better for you not to participate in the discussion and just leave it to the people who have an actual, independently informed opinion.
How many things could reasonably have a p(doom) > 0.01? Not very many. Therefore your worry about me “neurotically obsessing over tons of things” is unfounded. I promise I won’t :) If my post causes you to think that, then I apologize; I have misstated my argument.
What is the actual argument that there are “not very many”? (Or why do you believe such an argument made somewhere else?)
There are hundreds of asteroids and comets alone that have some probability of hitting the Earth in the next thousand years; how can anyone possibly evaluate “p(doom)” for all of these, let alone every other possible catastrophe?
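For what it’s worth, the mechanical part of aggregating many small risks is easy; the hard part is exactly what this comment points at, namely getting the per-object probabilities in the first place. A toy sketch with invented numbers (and an independence assumption that real impact risks may not satisfy):

```python
import math

# Invented per-object impact probabilities over the next thousand years
# for a few hundred near-Earth objects (NOT real data).
per_object = [1e-6] * 300 + [1e-4] * 5

# P(at least one impact) = 1 - prod(1 - p_i), assuming independence.
p_any = 1 - math.prod(1 - p for p in per_object)
print(p_any)  # ~0.0008
```

The formula is one line; everything contentious lives in the input list.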
I was reading the UK National Risk Register earlier today and thinking about this. It’s notable to me that the top-level disaster severity has a very low cap of ~thousands of casualties, or billions in economic loss. The register does note, though, that AI is a chronic risk being managed under a new framework (one I can’t find precedent for).