In your edit, you are essentially describing somebody being “slap-droned,” as in the Culture series by Iain M. Banks.
This super-moralist-AI-dominated world might look like a darker version of the Culture: if the superintelligent systems determine that you, or other intelligent systems within their purview, are not intrinsically moral enough, they contrive a clever way to have you eliminate yourself, and they monitor and intervene if you act too immorally in the meantime.
The difference is that this version of the Culture would not necessarily be all that concerned with maximizing the “human experience” or anything like that.
My guess is you get one of two extremes:
build a bubble of human-survivable space protected/managed by an aligned AGI
die
with no middle ground. The bubble would be self-contained. There’s nothing you can do from inside the bubble to raise a ruckus, because if there were, you’d already be dead, or your neighbors would have built a taller fence-like thing at your expense so the ruckus couldn’t affect them.
The whole scenario seems unlikely, since building the bubble requires an aligned AGI, and if we have those we probably won’t be in this mess to begin with. Winner-take-all dynamics abound. The rich get richer (and smarter), and humans just lose unless the first meaningfully smarter entity we build is aligned.