Well of course there is something different: the p(doom), based on the opinions of a lot of people whom I consider to be smart. That strongly distinguishes it from just about every other concept.
“People I consider very smart say this is dangerous” seems so cursed, especially in response to people questioning whether it is dangerous. Would be better for you to not participate in the discussion and just leave it to the people who have an actual independently informed opinion.
How many things could reasonably have a p(doom) > 0.01? Not very many. Therefore your worry about me “neurotically obsessing over tons of things” is unfounded. I promise I won’t :) If my post causes you to think that, then I apologize; I have misstated my argument.
What is the actual argument that there’s ‘not very many’? (Or why do you believe such an argument made somewhere else)
There are hundreds of asteroids and comets alone that have some probability of hitting the Earth in the next thousand years. How can anyone possibly evaluate ‘p(doom)’ for any of this, let alone every other possible catastrophe?
I was reading the UK National Risk Register earlier today and thinking about this. Notable to me that the top-level disaster severity has a very low cap of roughly thousands of casualties, or billions in economic loss. Although the register does note that AI is a chronic risk being managed under a new framework (one I can’t find precedent for).