Re uncertainty about safety: if we were radically uncertain about how safe AI is, then the optimists would have to become more pessimistic, and the pessimists more optimistic.
In particular, that means I'd have to be more pessimistic, while extreme pessimists like Yudkowsky would have to be more optimistic about the problem.
Just read that one this morning. Glad we have a handle for it now.
Confusion, I dub thee ~~Tyler's Weird Uncertainty Argument~~ the Safe Uncertainty Fallacy!

First pithy summarization:
Safety =/= SUFty