I may have been a little flip there.
My understanding of the thought experiment is this: something extrapolates some set of values and maximizes them, probably using up most of the universe and probably becoming the most significant factor in the future of its parent species and of all sentients. The question is whether the result is “interesting” to us here and now, without specifying precisely how to evaluate that term. From that perspective, I’d say a vast uniform prime-number calculator, whether or not it wipes out all (other?) life, is not “interesting”: it’s somewhat conceptually interesting as a story, but a rather dull thing to spend most of a universe on.
Today’s ecosystems maximise entropy. Maximising primeness is different, but surely not greatly more interesting, since entropy maximisation is itself widely regarded as tedious and boring.
Intriguing! But even granting that, there’s a big difference between extrapolating the values of a screwed-up offshoot of an entropy-optimizing process and extrapolating the value of “maximize entropy”. Or do you suspect that a FOOMing AI would be much less powerful and more prone to interesting errors than Eliezer believes?
Truly maximizing entropy would involve burning everything you can burn, tearing the matter of solar systems apart, accelerating stars towards going nova, trying to accelerate the evaporation of black holes and prevent their formation, and other things of this sort. It’d look like a dark spot in the sky that’d get bigger at approximately the speed of light.
Fires are crude entropy maximisers. Living systems destroy energy gradients at all scales, resulting in more comprehensive devastation than mere flames can muster.
Of course, maximisation is often subject to constraints. Your complaint is rather like saying that water doesn’t “truly minimise” its altitude—since otherwise it would end up at the planet’s core. That usage is simply not what the terms “maximise” and “minimise” normally refer to.
Yeah! Compelling, but not “interesting”. Likewise, I expect that actually maximizing the fitness of a species would be similarly “boring”.