The Shulman world is exploratory engineering: it rests on the assumption of AGI sufficient for explosive growth that somehow doesn't cause an intelligence explosion for many years. It neither forecasts actuality nor endorses a possibility; instead, it explores the consequences of a specific magical assumption. Being aware of these consequences helps set a lower bound on the expected scale of change in actuality.
Aligned growth can in principle coexist with undisturbed slow development. Superintelligence doesn't make an elephant too large to notice ants; it makes it capable of observing minute distinctions. A sudden arrival of the solved world causes many issues, but that doesn't seem like a compelling reason to preserve death. Ending death, though, asks for immediate access to some infrastructure from the distant technological future, even if superintelligence does little else for a while.