The No Free Lunch theorem is irrelevant in worlds like ours that are a subset of possible data structures (world arrangements). I’m surprised this isn’t better understood. I think Steve Byrnes did a nice writeup of this logic. I can find the link if you like.
Hmm, but here the set of possible world states would be the domain of the function we’re optimising, not the function itself. Like, the No-Free-Lunch theorem states (from Wikipedia):
Theorem 1: Given a finite set V and a finite set S of real numbers, assume that f: V → S is chosen at random according to the uniform distribution on the set S^V of all possible functions from V to S. For the problem of optimizing f over the set V, no algorithm performs better than blind search.
Here V is the set of possible world arrangements, which is admittedly much smaller than the set of all possible data structures, but the theorem still holds because we’re averaging over all possible value functions on this set of worlds, and that set of value functions is not physically restricted by anything.
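To make the averaging step concrete, here is a minimal sketch of my own (not from either linked post, and with a toy domain and strategy names I made up): it enumerates every possible function f: V → S on a tiny V and checks that two different deterministic search strategies have identical performance once you average over all of those functions.

```python
from itertools import product

# Toy NFL check: domain V = {0, 1, 2}, codomain S = {0, 1}. Enumerate all
# |S|^|V| = 8 objective functions and compare two search strategies
# averaged over every one of them.

V = [0, 1, 2]
S = [0, 1]

def run(strategy, f):
    """Query points in the order the strategy picks (never repeating one);
    return the best value seen after each query."""
    seen = {}
    trace = []
    for _ in V:
        x = strategy(seen)        # next query, based only on past observations
        seen[x] = f[x]
        trace.append(max(seen.values()))
    return trace

def sweep(seen):
    # Strategy A: fixed left-to-right sweep over V.
    return next(x for x in V if x not in seen)

def adaptive(seen):
    # Strategy B: start at 1; if f(1) looked good, try 0 next, otherwise 2.
    if not seen:
        return 1
    if 0 not in seen and seen.get(1) == 1:
        return 0
    return next(x for x in (2, 0) if x not in seen)

def average_trace(strategy):
    # Average best-so-far value over every possible f: V -> S
    # (each function equally likely, as NFL assumes).
    traces = [run(strategy, dict(zip(V, values)))
              for values in product(S, repeat=len(V))]
    return [sum(t[i] for t in traces) / len(traces) for i in range(len(V))]

print(average_trace(sweep))     # [0.5, 0.75, 0.875]
print(average_trace(adaptive))  # identical averages, as NFL predicts
```

Nothing about the strategies matters once you average uniformly over every f; that is the whole content of the theorem, and also why restricting the distribution over f (as our world does) breaks the argument.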
I’d be very interested if you can find Byrnes’ writeup.
Here it is: “The No Free Lunch theorem for dummies”. See particularly the second section, “Sidenote: Why NFL has basically nothing to do with AGI”, and the first link to Yudkowsky’s post on essentially the same thing.
I think the thing about your description is that f: V → S is not going to be chosen at random in our world.
The no free lunch theorem states in essence (I’m pretty sure) that no classifier can classify a big gray thing with tusks and big ears as both an elephant and not-an-elephant. That’s fine, because the remainder of an AGI system can choose (by any other criteria) to make elephants either a goal, an anti-goal, or neither.
If the NFL theorem applied to general intelligences, it seems like humans couldn’t love elephants at one time and hate them at a later time, with no major changes to their perceptual systems. It proves too much.