Re uncertainty about safety: if we were radically uncertain about how safe AI is, then the optimists would have to become more pessimistic, and the pessimists more optimistic, since radical uncertainty pulls everyone's estimates toward the middle. In particular, that means I'd have to be more pessimistic, while extreme pessimists like Yudkowsky would have to be more optimistic about the problem.