I think the above quote from janus would add to (3) that it requires GPT to also learn the environment and the human-environment interactions, aside from just mimicking human capabilities. I know what you said doesn’t contradict this, but I think there’s a difference in emphasis, i.e. imitation of humans (or some other consequentialist) not necessarily being the main source of capability.
I think most of the capabilities on earth exist in humans, not in the environment. For instance, if you have a rock, it's just gonna sit there; it's not gonna make a rocket and fly to the moon. This is why I emphasize GPT as getting its capabilities from humans, since there are not many other things in the environment it could get capabilities from.
I agree that insofar as there are other things in the environment with capabilities (e.g. computers outputting big tables of math results) that get fed into GPT, it also gains some capabilities from them.
Like, LLM-style transformers pretrained on protein sequences get their “protein-prediction capability” purely from “environment generative-rule learning,” and none of it from imitation learning of a consequentialist’s output.
I think they get their capabilities from evolution, which is a consequentialist optimizer?