If even the comatose behemoth of the gov has noticed the risk, then AGI is indeed much closer than most people think.
Reasoning doesn’t work like that. The information flows almost entirely from subtle hints in reality, to people like MIRI, and only then to the government. At most, update toward the government being slightly less comatose than you thought, or toward MIRI having a really good PR team.
Once we assume that governments are less on the ball than MIRI, and we can already see what MIRI says, the government’s actions tell us almost nothing new about AI.