We may stare at the empty plane and ask ourselves whether this is the graveyard of a superintelligence that once lived here, conquered the plane for a brief time, and then vanished in a collapse. A few gliders and roses could be all that remained, like dry fossils.
Or, we could find that the playing field stabilizes to something that can easily be interpreted as a superintelligence’s preferred state—perhaps with the field divided into subsections in which interesting things happen in repeated cycles, or whatever.
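To make "stabilizes into repeated cycles" concrete: in Conway's Game of Life this just means the board eventually revisits a state it has already been in, i.e. it falls into a periodic attractor. Here is a minimal sketch (mine, not from the thread; the function names and the step cap are illustrative choices) that evolves a field and reports when it becomes periodic:

```python
from collections import Counter

def step(live):
    """Advance one Game of Life generation; `live` is a frozenset of (x, y) cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or exactly 2 and is already live (the standard B3/S23 rule).
    return frozenset(
        cell for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live)
    )

def find_cycle(live, max_steps=10_000):
    """Return (first_step_of_cycle, period) once the field repeats, else None."""
    seen = {live: 0}
    for t in range(1, max_steps + 1):
        live = step(live)
        if live in seen:
            return seen[live], t - seen[live]
        seen[live] = t
    return None

# A blinker settles immediately into a period-2 cycle:
blinker = frozenset({(0, 0), (1, 0), (2, 0)})
print(find_cycle(blinker))  # -> (0, 2)
```

On a finite board the state space is finite, so some such cycle is guaranteed; on an infinite plane it is not, which is part of what makes the question above interesting.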
Interesting question: why does this (intuitively and irrationally) seem to me like a sadder fate than something like heat death?
Because it takes the meaning out of the accomplishment? In this scenario, there might be something interpretable as a superintelligence that exists at some point before the scenario settles into repeating, but the end state still seems to be caused more by the initial state than by the superintelligence.
Alternately, it could be because you value novelty, and the repeating nature of the stabilized field precludes that in a way that’s more emotionally salient than heat death.
But this is true of the heat death of the universe, too, eventually...
The only long-term scenario I know that avoids both this outcome and heat death involves launching colonies into the “dust,” as Greg Egan described in Permutation City. Unfortunately, the assumptions required for that to work may well turn out to be false. There’s no known law of nature saying we can’t be trapped in the long term. And if we are trapped, I think I prefer a repeating paradise to heat death.
I don’t know. Heat death seems a lot sadder to me, in part because I know that there’s at least one universe where it will probably happen. Maybe you are just more used to the notion of heat death and so have digested that sour grape but not this one?
I wonder why a rational consequentialist agent should do anything but channel all available resources into the instrumental goal of finding a way to circumvent heat death. Mixed strategies are obviously suboptimal, since the expected utility of circumventing heat death is infinite.