Yes. His argument is that it applies against any particular risk, and here the risk is particular, or something. Scott Alexander’s response is… less polite than mine, and emphasizes this point.
Just read that one this morning. Glad we have a handle for it now.
Confusion, I dub thee ~~Tyler’s Weird Uncertainty Argument~~ Safe Uncertainty Fallacy!

First pithy summarization: Safety =/= SUFty
Re uncertainty about safety: if we were radically uncertain about how safe AI is, then the optimists would have to be more pessimistic, and the pessimists more optimistic.
In particular, that means I would have to be more pessimistic, while extreme pessimists like Yudkowsky would have to be more optimistic about the problem.