(About the first part of your comment) Thank you for pointing out three confused points:
> First, I don’t know if you intended this, but “simulating the universe” carries a connotation of a low-level physics simulation. This is computationally impossible. Let’s have it model the universe instead, using the same kind of high-level pattern recognition that people use to predict the future.
To be more precise, what I had in mind is that the ASI is an agent whose goal is:
- to model the sentient part of the universe finely enough to produce sentience in an instance of its model (it will also need to model the necessary non-sentient “dependencies”),
- and to instantiate this model N times: for example, running each instance from 1000 A.D. until no sentience remains in that instance of the modeled universe (all of this efficiently).
(To reduce complexity, I didn’t mention it, but we could devise heuristics to avoid simulating too much of the “past” and “future” history filled with suffering.)
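To make the goal structure above concrete, here is a toy sketch in Python. Everything in it is hypothetical: `Instance`, `run_instance`, and `run_all` are illustrative names, the per-year termination probability is arbitrary, and a real "model of the sentient part of the universe" is of course nothing like this. It only shows the shape of the proposal: instantiate the model N times from a fixed starting point and run each instance until no sentience remains.

```python
# Hypothetical sketch of the proposed ASI goal structure.
# All names and parameters are illustrative, not a real API.
from dataclasses import dataclass
import random

@dataclass
class Instance:
    """One run of the modeled universe, started from a fresh 'seed'."""
    seed: int
    year: int = 1000              # start the run around 1000 A.D.
    sentience_remains: bool = True

    def step(self) -> None:
        # Toy stand-in for advancing the modeled universe by one year.
        self.year += 1
        rng = random.Random(self.seed * 100003 + self.year)
        if rng.random() < 0.01:   # toy event: sentience dies out
            self.sentience_remains = False

def run_instance(seed: int, max_years: int = 100_000) -> Instance:
    # Run one instance until no sentience remains (or a safety cap).
    # A suffering-reduction heuristic could also prune steps here.
    inst = Instance(seed=seed)
    while inst.sentience_remains and inst.year < 1000 + max_years:
        inst.step()
    return inst

def run_all(n: int) -> list[Instance]:
    # Instantiate the model N times, each from a different seed.
    return [run_instance(seed) for seed in range(n)]

instances = run_all(5)
```

The point of the sketch is only the control flow: N independent instances, each played forward from the same starting era until its sentience ends.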
> Second, if the AGI is simulating itself, the predictions are wildly underdetermined; it can predict that it will do X, and then fulfill its own prophecy by actually doing X, for any X. Let’s have it model a counterfactual world with no AGIs in it.
An instance of the modeled universe would not be our present universe. It would be “another seed”, starting before the ASI exists; thus the ASI would not need to model itself, only the possible (“new”) ASIs produced inside the instances.
> Third, you need some kind of interface. Maybe you type in “I’m interested in future scenarios in which somebody cures Alzheimer’s and writes a scientific article describing what they did. What is the text of that article?” and then it runs through a bunch of scenarios and prints out its best-guess article in the first 50 scenarios it can find. (Maybe also print out a retrospective article from 20 years later about the long-term repercussions of the invention.) For a different type of interface, see microscope AI.
In the scenario I had in mind, the ASI would fill our universe with computing machines to produce as many instances as possible. (We would not use it, and thus we would not need an interface with the ASI.)