It’s feasible to establish AGI-run governance that does nothing on its own other than permanently, irrevocably, but unobtrusively restrict the level of technological development of every civilization it reaches, including its own builders (perhaps as a way of opposing extinction risk). This leads to strange ancient cultures of biological low-tech aliens that slowly travel the galaxy, arriving much later than the tech-restricting AGI’s initial wave of von Neumann probes.
This is still unlikely, as the outcome both wastes the cosmic endowment and requires sufficient technical sophistication to make it stable and irrevocable. So the builders of this AGI governance need to both decently understand alignment and target an outcome that radically impairs their future. But this seems only Fermi-paradox unlikely, not literal-magic unlikely. The fraudulent nature of the “evidence” we see further reduces the probability that this is the case, since low-tech aliens could instead be making themselves known in straightforward ways, while the high-tech AGI that restricts tech doesn’t need to be observable at all. But this doesn’t go all the way to impossibility, as an ancient low-tech culture could have traditions and bureaucracy cashing out in a bizarre first contact process.
The prediction of this hypothesis is that we don’t get to develop unrestricted ASI of our own. Given the inscrutable nature of the models (or, equivalently, our lack of the technical sophistication needed to know what we are doing), any interventions don’t yet need to be humanly observable.