OP’s arguments against RL seem to be based on a conception of RL as mapping a set of stimuli directly to an action, which would be silly. They could also be taken as arguments that the brain cannot possibly be implemented in neurons.
Wrong, sorry Phil. I addressed exactly this in the essay itself.
Nothing whatever changes if the mapping from stimuli to action involves various degrees of indirectness, UNLESS the stuff in the middle is so smart, in and of itself, that it starts to dominate the behavior.
And, as a matter of fact, that is exactly what happens in so-called RL systems (as is explained in the essay). Extra machinery is added to the RL to get it to work, and in practice that extra machinery does the heavy lifting. Sometimes the extra machinery is invisible—as when the experimenter uses their own intelligence to pre-package the stimuli—but it can also be visible machinery, in the form of code that does extra processing.
The trouble is that the extra machinery ends up doing so much of the work that the RL by itself is pointless.
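To make that point concrete, here is a minimal sketch of my own (the toy task and every name in it—`featurize`, `observe`, and so on—are mine, not from the essay or from any particular RL library): a tabular Q-learner on a five-state chain that only succeeds because a hand-written feature extractor, encoding the experimenter's own knowledge of which channels matter, pre-packages the raw stimulus into a clean state index.

```python
import random

# Hypothetical illustration: a tiny chain MDP (states 0..4, reward at
# state 4) where tabular Q-learning works only because `featurize`,
# written by the experimenter, does the real work of turning a messy
# stimulus into a usable state.

N = 5  # chain of states 0..4; state 4 is the rewarded goal

def step(s, a):
    """Move left (a=0) or right (a=1); reward 1.0 on reaching the goal."""
    s2 = max(0, min(N - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N - 1 else 0.0), s2 == N - 1

def observe(s, rng):
    """Raw stimulus: the position signal buried among 20 noise channels."""
    obs = [rng.random() for _ in range(20)]
    obs[s] += 10.0  # the informative signal the experimenter knows about
    return obs

def featurize(obs):
    """The 'extra machinery': experimenter intelligence, encoded as code,
    collapses the messy stimulus into a discrete state index."""
    return max(range(N), key=lambda i: obs[i])

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N)]
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            i = featurize(observe(s, rng))
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < 0.2 else (0 if Q[i][0] > Q[i][1] else 1)
            s2, r, done = step(s, a)
            j = featurize(observe(s2, rng))
            Q[i][a] += 0.5 * (r + 0.9 * max(Q[j]) - Q[i][a])
            s = s2
            if done:
                break
    return Q

Q = train()
# Greedy policy per state (1 = right); it heads straight for the goal,
# but only because featurize() handed the learner clean states.
policy = [0 if q[0] > q[1] else 1 for q in Q]
```

Strip out `featurize` and hand the learner the raw 20-channel stimulus, and the tabular update has nothing sensible to index on; the "learning" in this sketch is parasitic on the pre-packaging.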
Since I already made this point with some force, it is a little exasperating to have to answer it as though it were not in the OP at all, and as though the OP were simply "silly".
As for your second point—"They could also be taken as arguments that the brain cannot possibly be implemented in neurons"—that is not worth a detailed reply. The arguments obviously imply no such thing.