The fact that very few people in government even understand the existential risk argument means that we haven’t seen their relevant opinions yet. As you point out, government is composed of selfish individuals. But at least some of those individuals care about themselves, their children, and their grandchildren. Making them aware of the existential risk arguments in detail could entirely change their attitude.
In addition, I think we need to consider specific possible regulations and their downsides in more detail. Sure, government is shortsighted and selfish, like the rest of humanity.
I think you’re miscalibrated on the risks relative to your average reader. We tend to care primarily about the literal extinction of humanity. Relative to that concern, the “most dystopian uses for AI” you mention barely register, unless you mean literally the worst: a billion-year reich of suffering or something of that sort.
You write: “We need a reason to believe that governments can reliably improve the incentives facing private organizations.”
We do not. Many of us here believe we are in such a desperate situation that merely rolling the dice to change anything would make sense.
I’m not one of those people. I can’t tell what situation we’re really in, and I don’t think anyone else has a satisfactory full view either. So, despite all of the above, I think you might be right that government regulation could make the situation worse. The biggest risk I can see is changing who’s in the lead in the AGI race; the current candidates seem relatively well-intentioned and aware of the risks (with large caveats). (One counterargument is that takeoff will likely be slow enough in the current paradigm that we end up with multiple AGIs, making group dynamics as important as individual intentions.)
So I’d like to see a better analysis of the potential outcomes of government regulation. Arguing that governments are bad and dumb in a variety of ways just isn’t sufficiently detailed to be helpful in this situation.