Wei, I was agreeing with you that these were important questions—not necessarily agreeing with your thesis “there’s more to intelligence than optimization”. Once you start dealing in questions like those, using a word like “intelligence” implies that all the answers are to be found in a single characteristic and that this characteristic has something to do with the raw power of a mind. Whereas I would be more tempted to look at the utility function, or the structure of the prior—an AI that fails to see a question where we see one is not necessarily stupid; it may simply not care about our own stupidity, and be structured too far outside it to ever see a real question as opposed to a problem of finding a word-string that satisfies certain apes that they have been answered.
Human morality “compresses”, in some sense, to a certain evolutionary algorithm, including the stupidities of that algorithm (which is why it didn’t create expected fitness maximizers), and various contingencies about the ancestral environment we were in, and a good dose of sheer path dependency. But since running the same Earth over again wouldn’t necessarily create anything like humans, you can’t literally compress morality to that.
On the other hand, intelligence—or let us rather say “optimization under real-world constraints”—is something that evolution did cough out, and if you took another world starting with a different first replicator, you would rate the probability of seeing “efficient cross-domain optimization” far higher than the probability of seeing “human morality”, with the probability of seeing a mind that obsessed about qualia somewhere in between.
So “efficient cross-domain optimization” is something you can get starting from the criterion of “optimization”, looking at the environment to figure out the generalizations, testing things to see if they “work” according to a criterion already possessed—with no need to look at human brains as a reference.