I didn’t say that GPT’s task is harder than any possible perspective on a form of work you could regard a human brain as trying to do; I said that GPT’s task is harder than being an actual human; in other words, being an actual human is not enough to solve GPT’s task.
I don’t see how a comparison of the hardness of the ‘GPT task’ and the ‘being an actual human’ task is supposed to work technically; to me it mostly seems like a type error.
- The task ‘predict the activation of photoreceptors in the human retina’ clearly has the same difficulty as ‘predict the next word on the internet’ in the limit (cf. Why Simulator AIs want to be Active Inference AIs).
- Maybe you mean something like task + performance threshold. Here the task + threshold ‘predict the activation of photoreceptors in the human retina well enough to function as a typical human’ is clearly less difficult than the task + threshold ‘predict the next word on the internet, almost perfectly’. But this comparison does not seem particularly informative.
- Going in this direction, we can make comparisons between thresholds closer to reality, e.g. ‘predict the activation of photoreceptors in the human retina, and do other similar computation, well enough to function as a typical human’ vs. ‘predict the next word on the internet at the level of GPT-4’. This seems hard to order: humans are usually able to do the human task and would fail at the GPT-4 task at GPT-4 level; GPT-4 is able to do the GPT-4 task and would fail at the human task.
- You can’t make an ordering between cognitive systems based on ‘system A can’t do task T that system B can, therefore B > A’. There are many tasks which humans can’t solve, but this implies very little. E.g. a human is unable to remember a 50-thousand-digit random number and my phone can easily, but there are also many things which a human can do and my phone can’t. (A minimal formalisation of this point is sketched below.)
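As a sketch of that formalisation (my own notation, not anything from the post): write $C_S$ for the set of (task, threshold) pairs a system $S$ can meet,

$$
C_S = \{(T, \tau) : S \text{ reaches performance at least } \tau \text{ on } T\}.
$$

Observing that B does something A can’t only tells you $C_B \not\subseteq C_A$; to conclude ‘B > A’ you would need the strict inclusion $C_A \subsetneq C_B$. For humans and GPT-4 we seem to have both $C_{\text{human}} \not\subseteq C_{\text{GPT-4}}$ and $C_{\text{GPT-4}} \not\subseteq C_{\text{human}}$, so under this partial order the two systems are simply incomparable.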
Given the above, the possibly interesting direction for comparing ‘human skills’ and ‘GPT-4 skills’ is something like ‘why can’t GPT-4 solve the human task at human level’, ‘why can’t a human solve the GPT task at GPT-4 level’, and ‘why are the skills a bit hard to compare’.
Some thoughts on this:
- GPT-4 clearly is “width superhuman”: its task is roughly modelling the textual output of the whole of humanity. This isn’t a great fit for the architecture and bounds of a single human mind, for roughly the same reasons why a single human mind would do worse than the Amazon recommender at recommending products to each of a hundred million users. In contrast, a human would probably do better at recommending products to one specific user whose preferences the human recommender would try to predict in detail.
Humanity as a whole would probably do significantly better at this task, e.g. if you imagine assigning every human one other human to model (and study in depth, read all their text outputs, etc.).
- GPT-4 clearly isn’t better than humans at ‘samples → abstractions’: it needs more data to learn a pattern.
- With the overall ability to find abstractions, it seems unclear to what extent GPT ‘learned smart algorithms independently because they are useful for predicting human outputs’ vs. ‘learned smart algorithms because they are implicitly reflected in human text’; at the current level I would expect a mixture of both.
What the main post is responding to is the argument: “We’re just training AIs to imitate human text, right, so that process can’t make them get any smarter than the text they’re imitating, right? So AIs shouldn’t learn abilities that humans don’t have; because why would you need those abilities to learn to imitate humans?” And to this the main post says, “Nope.”
The main post is not arguing: “If you abstract away the tasks humans evolved to solve, from human levels of performance at those tasks, the tasks AIs are being trained to solve are harder than those tasks in principle even if they were being solved perfectly.” I agree this is just false, and did not think my post said otherwise.
I do agree the argument “We’re just training AIs to imitate human text, right, so that process can’t make them get any smarter than the text they’re imitating, right? So AIs shouldn’t learn abilities that humans don’t have; because why would you need those abilities to learn to imitate humans?” is wrong and clearly the answer is “Nope”.
At the same time, I do not think parts of your argument in the post are locally valid or a good justification for the claim.
A correct and locally valid argument for why GPTs are not capped at human level was already written here.
In a very compressed form: you can imagine that GPTs have text as their “sensory inputs”, generated by the entire universe, similarly to how you have your sensory inputs generated by the entire universe. Neither human intelligence nor GPTs are constrained by the complexity of the task (also: in the abstract, it’s the same task). Because of that, “task difficulty” is not a promising way to compare these systems, and it is necessary to look into the actual cognitive architectures and bounds.
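To spell out the “in the abstract, it’s the same task” point slightly more formally (a compressed sketch in my own notation): in both cases the system receives a stream of observations $x_1, x_2, \dots$ generated by the world through some channel (retinal/sensory input for a human, tokens of internet text for a GPT) and is scored on next-element prediction,

$$
\min_\theta \; \mathbb{E}\big[ -\log p_\theta(x_t \mid x_{<t}) \big],
$$

whose irreducible difficulty in the limit is the conditional entropy $H(x_t \mid x_{<t})$ of the stream, a property of the universe filtered through that channel rather than of the predictor. That is why “task difficulty” bottoms out so quickly as a comparison, and why the interesting differences sit in the architectures and bounds.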
Regarding the last paragraph: I’m somewhat confused by what you mean by “tasks humans evolved to solve”. Does e.g. sending humans to the Moon, or detecting the Higgs boson, count as a “task humans evolved to solve” or not?
I’d really like to see Eliezer engage with this comment, because to me it looks like the following sentence’s well-foundedness is rightly being questioned.
it’s naked mathematical truth that the task GPTs are being trained on is harder than being an actual human.
While I generally agree that powerful optimizers are dangerous, the fact that the GPT task and the “being an actual human” task are somewhat different has nothing to do with it.