Put a human being in an environment which is novel to them. Say, empiricism doesn’t hold—the laws of this environment are such that “That which has happened before is less likely to happen again” (a reference to an old Overcoming Bias post I can’t locate).
Is that human being going to behave “stupidly” in this environment? Do -we- fail the intelligence test? You acknowledge that we could—but if you’re defining intelligence in such a way that nothing actually satisfies that definition, what the heck are you achieving, here?
I’m not sure your criterion is all that useful. (And I’m not even sure it’s that well defined, actually.)
People fail at novel environments as mundane as needing to find a typo in an HTML file or paying attention to fact-checks during political debates. You don’t have to come up with extreme philosophical counterexamples to find domains in which it’s interesting to distinguish between the behavior of different non-experts (and such that these differences feel like “intelligence”).
but if you’re defining intelligence in such a way that nothing actually satisfies that definition, what the heck are you achieving, here?
The No Free Lunch theorems imply there’s no such thing as a universal definition of general intelligence. So I think general intelligence should be a matter of degree, rather than kind.
I’m not sure your criterion is all that useful. (And I’m not even sure it’s that well defined, actually.)
It’s not well defined, yet. But I think there’s a germ of a good idea there, that I’m teasing out with the help of commenters here.
“That which has happened before is less likely to happen again” (a reference to an old Overcoming Bias post I can’t locate).
Good point. In fact, that is the type of environment which is required for the No Free Lunch theorems mentioned in the post to even be relevant. A typical interpretation in the evolutionary computing field would be that it’s the type of environment where an anti-GA (a genetic algorithm which selects individuals with worse fitness) does better than a GA. There are good reasons to say that such environments can’t occur for important classes of problems typically tackled by EC. In the context of this post, I wonder whether such an environment is even physically realisable.
(I think a lot of people misinterpret NFL theorems.)
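The NFL point above can be made concrete with a toy sketch. The names (`greedy`, `anti_greedy`) and the specific adaptive rules are hypothetical, invented only for illustration: over *all* boolean functions on a tiny domain, an adaptive searcher and its deliberate opposite achieve exactly the same average performance, which is the sense in which neither a GA nor an anti-GA can win without assumptions about the environment.

```python
from itertools import product

DOMAIN = [0, 1, 2, 3]

def run(strategy, f, budget=2):
    """Evaluate `budget` points chosen by `strategy`; return the best value seen."""
    seen = {}
    for _ in range(budget):
        x = strategy(seen)
        seen[x] = f[x]
    return max(seen.values())

# A toy adaptive rule and its deliberate opposite (the "anti" searcher).
def greedy(seen):
    if not seen:
        return 0
    # If the first point scored well, probe point 1; otherwise jump to point 3.
    return 1 if seen[0] == 1 else 3

def anti_greedy(seen):
    if not seen:
        return 0
    # Exactly the reverse choice at every step.
    return 3 if seen[0] == 1 else 1

# Average best-found over ALL boolean functions on the domain (2^4 = 16 of them).
functions = [dict(zip(DOMAIN, values)) for values in product([0, 1], repeat=4)]
avg = lambda strat: sum(run(strat, f) for f in functions) / len(functions)
print(avg(greedy), avg(anti_greedy))  # prints: 0.75 0.75 — identical, as NFL predicts
```

The anti-GA only beats the GA on the adversarial subset of functions (the “anti-inductive” environments); averaged over everything, the advantage washes out.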
Say, empiricism doesn’t hold—the laws of this environment are such that “That which has happened before is less likely to happen again” (a reference to an old Overcoming Bias post I can’t locate).
Then we would observe this, and update on it—after all, this mysterious law is presumably immune to itself, or it would have stopped by now, right?
I’m curious to know how you expect Bayesian updates to work in a universe in which empiricism doesn’t hold. (I’m not denying it’s possible, I just can’t figure out what information you could actually maintain about the universe.)
If things have always been less likely after they happened in the past, then, conditioning on that, something happening is Bayesian evidence that it won’t happen again.
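A minimal sketch of that update, with made-up numbers chosen only to show the direction of the inference: if we entertain an “anti-inductive law” hypothesis under which events that just occurred become less likely to recur, then observing a non-repeat shifts probability toward that hypothesis.

```python
# All probabilities here are hypothetical, for illustration only.
p_law = 0.5                      # prior that the anti-inductive law holds

# Likelihood that the event recurs, under each hypothesis:
p_recur_given_law = 0.1          # the law suppresses repeats
p_recur_given_normal = 0.7       # ordinary induction: repeats are common

# We observe the event FAIL to recur.
likelihood_law = 1 - p_recur_given_law        # 0.9
likelihood_normal = 1 - p_recur_given_normal  # 0.3

# Bayes' rule: P(law | no repeat)
posterior_law = (likelihood_law * p_law) / (
    likelihood_law * p_law + likelihood_normal * (1 - p_law)
)
print(round(posterior_law, 2))  # prints: 0.75 — the non-repeat is evidence FOR the law
```

So updating still works mechanically; what changes is that occurrence becomes evidence *against* recurrence rather than for it.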
What exactly do you mean by “empiricism does not hold”? Do you mean that there are no laws governing reality? Is that even a thinkable notion? I’m not sure. Or perhaps you mean that everything is probabilistically independent from everything else. Then no update would ever change the probability distribution of any variable except the one on whose value we update, but that is something we could notice. We just couldn’t make any effective predictions on that basis—and we would know that.