This made some points I found helpful, but I reject this thinking as adequate. Specifically, I predict a ~40% chance that by late 2025, respected AI safety people from different camps will clearly indicate that this particular executive order was largely unhelpful/disappointing: e.g. it largely acted as a cover to accelerate AI, moved AI safety people into neat-looking government roles with little clout, did not yet build a bridge to better/substantive AI policy, and generally tried to lead the AI safety community along. (I will have a better number in ~2 months, but I still lose some Bayes/prediction/reputation points to the extent that number departs from 40%, AND more Bayes points if this executive order has substantial positive effects.)
My reasoning is that:
Governments like the US and China already pursue AI for specific capabilities, since SOTA AI applications offer a broad suite of targeted public opinion hacking and targeted elite psychological influence, which modern governments and militaries evolved to pursue. This also indicates that the current policies are deceptive somehow (e.g. largely won't be enforced), since governments are fundamentally adversarial against interest groups (e.g. extracting maximum value, playing interest groups against each other, etc.), but the capabilities downstream of AI acceleration in particular are coveted, and coveted by more deception/conflict-minded officials than the usual government folk. They are thinking about AI applications, have very little concern for AI risk itself, and will continue the race.
Writing policies that look satisfying to specific interest groups but don't produce much in the way of results. I think they will underestimate the AI safety community's ability to focus on results over satisfying-looking bills, and that they have not yet learned from that mistake.
Rather than being entirely caused by low competence and internal conflict/office politics/goodharting your boss/corruption, inconsistent policymaking is also caused by competent people demonstrating nonchalance, apathy, or disdain for improving the world, and flippantly granting or denying interest groups' requests for extractive coercion or for obfuscating the policymaking process. This is especially the case for matters involving intelligence agency competence, since those agencies face natural selection pressures and engage in routine disinformation. As a result, they will largely not lean on people from the AI safety community; they will mostly pretend to do so, in ways that they often fail to realize won't trick or corrupt rationalist or EA-adjacent thinkers (whose pragmatism they aren't accustomed to).
Pure profit motive. Money is potential energy when it comes to moving government policy, and AI also happens to generate exactly the particular kind of money and power that modern governments are structured to pursue. In late 2022 Yudkowsky wrote:
Imagine if nuclear weapons could be made out of laundry detergent; and spit out gold up until they got large enough, whereupon they’d ignite the atmosphere; and this threshold couldn’t be calculated...