I cannot imagine a well-functioning AGI without some subversion of government. If you look at the quality of law writing currently in existence, there are many reasons for letting an AGI write laws: the quality will be better if you put the equivalent of 10,000 smart humans, in the form of AGIs, on the task of writing a law than if a bunch of congressional staffers write it.
I agree that our political institutions will likely change post-AGI, and that there might even be a period of martial law if things go really haywire.
On having an AGI write laws by itself (rather than as part of an augmented politician), I think that scenario needs a lot more fleshing out. AGI is not sufficiently magic that you can just wave it at things. You need to spell out how the AGI is aligned with the populace (and fairly so!). If the AGI persists over time, you would want its design to avoid any calcification of its ideas. In humans, that kind of calcification may come not from physical aging but from successful ideas monopolizing the idea space so that new ideas have no room to flourish. You want to avoid that kind of problem with large-scale AGI in positions of power.
Issues of trust in the system also become important (how can you be sure that it is aligned with you?), much as they do with voting machines.