This was the most compelling part of their post for me:
“You are correct about the arguments for doom being either incomplete or bad. But the arguments for survival are equally incomplete and bad.”
And you really don’t seem to have taken it to heart. You’re demanding that doomers provide you with a good argument. Well, I demand that you provide me with a good argument!
More seriously: we need to weigh the doom-evidence and the non-doom-evidence against each other. But you seem to believe we need only look at the doom-evidence, and that if it's not very good, then p(doom) should be low. That's wrong—you don't acknowledge that the non-doom-evidence is also not very good. In other words, there's a ton of uncertainty.
If you give me a list of “100 things that make me nervous”, I can just as easily give you a list of “100 things that make me optimistic”.
Then it would be a lot more logical for your p(doom) to be 0.5 rather than 0.02-0.2!
Feels like this attitude would lead you to obsess neurotically over tons of things. You ought to have something that strongly distinguishes AI from other concerns before you start worrying about it, given how infeasible it is to worry about everything conceivable.
Well of course there is something different: The p(doom), as based on the opinions of a lot of people who I consider to be smart. That strongly distinguishes it from just about every other concept.
“People I consider very smart say this is dangerous” seems so cursed, especially in response to people questioning whether it is dangerous. Would be better for you to not participate in the discussion and just leave it to the people who have an actual independently informed opinion.