I think Martian Yudkowsky is a dangerous intuition pump. We’re invited to imagine a creature just like Eliezer except green and with antennae; we naturally imagine him having values about as similar to ours as, say, a Star Trek alien’s. From there we observe the similarity of values we just pushed in, and conclude that values like “interesting” are likely to be shared across very alien creatures. Real Martian Yudkowsky is much more alien than that, and is much more likely to say:
There is little prospect of an outcome that realizes even the value of being flarn, unless the first superintelligences undergo detailed inheritance from Martian values.
Imagine, an intelligence that didn’t have the universal emotion of badweather!
Of course, extraterrestrial sentients may possess physiological states corresponding to limbic-like emotions that have no direct analog in human experience. Alien species, having evolved under a different set of environmental constraints than we, also could have a different but equally adaptive emotional repertoire. For example, assume that human observers land on another planet and discover an intelligent animal with an acute sense of absolute humidity and absolute air pressure. For this creature, there may exist an emotional state responding to an unfavorable change in the weather. Physiologically, the emotion could be mediated by the ET equivalent of the human limbic system; it might arise following the secretion of certain strength-enhancing and libido-arousing hormones into the alien’s bloodstream in response to the perceived change in weather. Immediately our creature begins to engage in a variety of learned and socially approved behaviors, including furious burrowing and building, smearing tree sap over its pelt, several different territorial defense ceremonies, and vigorous polygamous copulations with nearby females, apparently (to humans) for no reason at all. Would our astronauts interpret this as madness? Or love? Lust? Fear? Anger? None of these is correct, of course; the alien is feeling badweather.
I suggest you guys taboo “interesting”, because I strongly suspect you’re using it with slightly different meanings. (And BTW, as a Martian Yudkowsky I imagine something with values at least as alien as the Babyeaters’ or the Superhappies’.)
It’s another discussion, really, but it sounds as though you are denying the idea of “interestingness” as a universal instrumental value—whereas I would emphasize that “interestingness” is really just our name for whether something sustains our interest or not—and ‘interest’ is a pretty basic functional property of any agent with mobile sensors. There’ll be other similarities in the area too—such as novelty-seeking. So shared common ground is only to be expected.
Anyway, I am not too wedded to Martian Yudkowsky. The problematical idea is that you could have a nanotech-capable spacefaring civilization that is not “interesting”. If such a thing isn’t “interesting” then—WTF?
Yes, I am; I think that the human value of interestingness is much, much more specific than the search space optimization you’re pointing at.
[This reply was to an earlier version of timtyler’s comment]
So: do you really think that humans wouldn’t find a Martian civilization interesting? Surely there would be many humans who would be incredibly interested.
I find Jupiter interesting. I think a paperclip maximizer (choosing a different intuition pump for the same point) could be more interesting than Jupiter, but it would generate an astronomically tiny fraction of the total potential for interestingness in this universe.
Life isn’t much of an “interestingness” maximiser. Expecting it to produce more than a tiny fraction of the total potential for interestingness in this universe seems rather unreasonable.
I agree that a paperclip maximiser would be more boring than an ordinary entropy-maximising civilization—though I don’t know by how much; probably not by a huge amount. The basic problems it faces are much the same; the paperclip maximiser just has fewer atoms to work with.