From what I am seeing, people here are focusing too much on having a precisely calibrated P(doom) value.
Even if P(doom) is only 1%, the doom scenario should be taken very seriously and alignment research pursued to the furthest extent possible.
It seems very unlikely to me that, after much careful calibration and research, you would end up with a P(doom) below 1%. So why invest the time in refining your estimate?
Why is this being downvoted?
Because it fails to engage with the key point: the low predictability of the dynamics of AI risk makes it hard for people to believe there is a significant risk at all. I happen to think there is, which is why I clicked the agree vote; but I clicked the karma downvote because the comment fails to engage with the key epistemic issue at hand.