There might be humans who set it up in exchange for power or similar benefits, and it then continues after they are gone (perhaps simply because it is “spaghetti code”).
The presence of the regulations might also be forced by other factors, e.g. the need to suppress AI-powered fraud, criminal gangs, disinformation spreaders, etc.
This needs to be shown to be an x-risk. For example, if the population falls below 100 people, the regulation itself would fail first.
Not if the regulation is run by AI in a sufficiently self-sustaining way.
If it is not AGI, it will fail without enough humans to maintain it. If it is AGI, this is just an instance of misalignment.