uses it to make a confusing claim that “there’s nothing wrong” as though we have no more agency over the development of AI than we do over the chaotic motion of a die.
It’s foolish to liken the development of AI to a roll of the dice. Given the stakes, we must try to study, prepare for, and guide the development of AI as best we can.
I think you’re misinterpreting the original comment. Scott was talking about there being “nothing wrong” with this conception of epistemic uncertainty before the 1 arrives, where each new roll doesn’t tell you anything about when the 1 will come. He isn’t advocating passivity about AI risk, though. Ironically enough, in his capacity as lead of the Agent Foundations team at MIRI, Scott is arguably one of the least AI-risk-passive people on the planet.
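For what it’s worth, the memorylessness Scott is pointing at is easy to check numerically. Here’s a minimal Python sketch (my own illustration, not from the original comment; the function name and trial count are arbitrary): no matter how many rolls you’ve already survived, the chance that the *next* roll is a 1 stays at 1/6.

```python
import random

# Illustration of memorylessness: surviving k rolls without a 1
# tells you nothing about whether the next roll will be a 1.
random.seed(0)
TRIALS = 200_000  # arbitrary; larger gives tighter estimates

def prob_one_after_surviving(k):
    """Estimate P(next roll is 1 | first k rolls were not 1) by simulation."""
    survived = hits = 0
    for _ in range(TRIALS):
        if all(random.randint(1, 6) != 1 for _ in range(k)):
            survived += 1
            if random.randint(1, 6) == 1:
                hits += 1
    return hits / survived

for k in (0, 5, 20):
    print(f"after surviving {k} rolls: P(1 next) ≈ {prob_one_after_surviving(k):.3f}")
# All three estimates hover around 1/6 ≈ 0.167, regardless of history.
```

None of that contradicts your point that we have agency over AI development; it only says the fair-die model is internally coherent as a description of uncertainty, which is all Scott claimed.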