Is your suggestion to run this system as a source of value, simulating lives for their own sake rather than to improve the quality of life of sentient beings in our universe? Our history (and present) aren’t exactly utopian, and I don’t see any real reason to believe that slight variations on it would lead to anything happier.
I am wondering whether we should reasonably expect to produce better results by trying to align an AGI with our values than by simulating many alternate universes. I am not claiming that either case is net-negative or net-positive; it seems to me that the expected value of both may be identical.
Also, by history I meant the future as well, not only the past and present. (I edited the question to replace "histories" with "trajectories".)