Strong upvoted. I agree that the more people look at the topic of AI regulation, the more they converge on US-China affairs and on the question of why governments care about AI in the first place.
I don’t really know much about regulation, but the question of why governments care about AI in the first place is one I’ve been laser-focused on for several years now. It’s a really big question and the models have lots of gears, so it’s hard to do it justice in a comment rather than a DM. What I can say is that I’ve written about many of these cruxes in my post on information warfare, and Ethan Edwards did some fantastic research demonstrating that, among many other geopolitically relevant capabilities, LLMs are a match made in heaven for tasks like analyzing and steering public opinion on social media and analyzing bulk email collection. Since the end of the Cold War, one of the central goals of international-affairs and security agencies has been preventing entire governments from having the rug pulled out from under them, as happened to East Germany. Unlike nuclear weapons and conventional military force, information/hybrid warfare is a battleground where countries like the US and China can actually win, lose, and pursue an endgame, much as economic catastrophes are unambiguously a deciding factor in whether the US or China ends up more powerful in 2030.