I feel like I must be reading this wrong, because Tyler seems to be saying that uncertainty somehow weighs against risk. This is deeply confusing to me, as normally people treat the association as running the other way.
Yes. His argument is that it weighs against any particular risk, and here the risk is particular, or something. Scott Alexander’s response is… less polite than mine, and emphasizes this point.
Just read that one this morning. Glad we have a handle for it now.
Confusion, I dub thee ~~Tyler’s Weird Uncertainty Argument~~ Safe Uncertainty Fallacy!
First pithy summarization:
Safety =/= SUFty
Re uncertainty about safety, if we were radically uncertain about how safe AI is, then the optimists would be more pessimistic, and the pessimists would be more optimistic.
In particular, that means I’d have to be more pessimistic, while extreme pessimists like Yudkowsky would have to be more optimistic about the problem.
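A toy illustration of that symmetry (my framing, not anything in the thread): model “radical uncertainty” as blending a confident doom estimate p with a maximally uncertain 50/50 prior, weighted by some w between 0 and 1,

\[
p' = (1 - w)\,p + \tfrac{w}{2}, \qquad 0 \le w \le 1.
\]

With w = 0.5, an optimist’s p = 0.05 moves up to p' = 0.275 and a pessimist’s p = 0.95 moves down to p' = 0.725: greater uncertainty pulls both estimates toward the middle rather than toward safety.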
Yeah—it’s odd, but TC is a self-professed contrarian after all.
I think the question here is: why doesn’t he actually address the fundamentals of the AGI doom case? The “it’s unlikely / unknown” position is really quite a weak argument, one I doubt he would make if he actually understood EY’s position.
Seeing the state of the discourse on AGI risk just makes it more and more clear that the AGI risk awareness movement has failed to express its arguments in terms that non-rationalists can understand.
People like TC should be the first type of public intellectual to grok it, because EY’s doom case is highly analogous to market dynamics. And yet.