I think it’s tangentially relevant in certain cases. Here’s something I wrote in another context, where I think it’s perhaps useful to understand what we really mean when we say “intelligence.”
We consider humans intelligent not because they do better on all possible optimization problems (they don’t, by the no-free-lunch theorems), but because they do better on the subset of problems actually encountered in the real world. For instance, humans have cognitive architectures particularly suited to understanding language, and language is something humans need to understand. This is even clearer with “cognitive biases”: errors in logical reasoning that were nevertheless advantageous for surviving in the ancestral human environment. Recently, people have tried to rid themselves of such biases, believing they no longer help in the modern world, a perfect example of how domain-dependent human intelligence is.
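The no-free-lunch point above can be checked directly on a toy problem. This is a hypothetical sketch (the domain, functions, and search orders are my assumptions, not from the discussion): averaged over every possible objective function on a tiny domain, any fixed search order performs identically.

```python
from itertools import product

# Hypothetical toy setup: every possible objective function f: {0,1,2} -> {0,1},
# represented as dicts. There are 2^3 = 8 such functions.
domain = [0, 1, 2]
functions = [dict(zip(domain, values)) for values in product([0, 1], repeat=3)]

def avg_best_after_k(order, k):
    """Average, over ALL functions, of the best value found in the first k queries."""
    return sum(max(f[x] for x in order[:k]) for f in functions) / len(functions)

# Two different fixed search strategies perform identically on average,
# no matter how many queries they are allowed.
for k in (1, 2, 3):
    assert avg_best_after_k([0, 1, 2], k) == avg_best_after_k([2, 0, 1], k)
```

Any search strategy only looks better than another once you restrict attention to a structured subset of functions, which is exactly the "problems actually encountered in the real world" move.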
We can’t do arithmetic nearly as well as machines, but we don’t view them as intelligent, because we can do many more things than they can, more flexibly and better. The machine might reply that it too can do many things: it can quickly multiply 3040 by 2443, but also 42323 by 3242, and 379 by 305, and so on for an absolutely huge number of such calculations. The human might respond that these are all “just multiplication problems”; the machine might say that human problems are “just thriving-on-planet-earth problems”. When we say intelligence, we essentially mean “ability to excel at thriving-on-planet-earth problems”, which requires knowledge and cognitive architectures specifically suited to thriving on planet earth. Thinking too much about “generality” tends to confuse; instead, consider performance on thriving-on-planet-earth problems, or on particular subsets of them.
I think I disagree that generality is irrelevant: that only seems true because NFL theorems use unreasonable priors. If your problem has “any” structure, i.e. your environment is not maximally random, then you can use Occam’s razor and make sense of your environment. No need for the “real world”. The paper on universal intelligence is great, by the way, if formalizing intelligence seems interesting.
To spell out how this applies to your comment: if you use a reasonable prior, then intelligence is well-defined, and some things are smarter than others. For example, GPT-3 is smarter than GPT-2.
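The "reasonable prior" idea can be made concrete with a simplicity prior in the spirit of Solomonoff induction. This is a minimal hypothetical sketch (the hypothesis names and description lengths are invented stand-ins for Kolmogorov complexity, not anything from the thread): weight each hypothesis by 2^(-description length), keep only those consistent with the data, and renormalize.

```python
# Hypothetical toy hypotheses: (name, description length in bits, predictor).
# The lengths are assumed values standing in for Kolmogorov complexity.
hypotheses = [
    ("constant-0", 2, lambda x: 0),
    ("identity",   3, lambda x: x),
    ("square",     5, lambda x: x * x),
]

def posterior(data):
    """Occam prior 2^-length over hypotheses, conditioned on fitting the data exactly."""
    weights = {name: 2.0 ** -length
               for name, length, h in hypotheses
               if all(h(x) == y for x, y in data)}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Both "identity" and "square" fit the observations, but Occam's razor
# makes the simpler hypothesis dominate:
print(posterior([(0, 0), (1, 1)]))  # {'identity': 0.8, 'square': 0.2}
```

Under such a prior, "smarter" has a well-defined reading: a smarter agent converges on the simple structure of its environment from less data, with no appeal to what the "real world" happens to contain.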
Agree it’s mostly not relevant.