How does he think that humans get the causal information in the first place?
If something interests us, we can perform trials. Because our knowledge is integrated with our decision-making, we can learn causality that way. ChatGPT picks up both the knowledge and the decision-making by imitation, which is why it can also exhibit causal reasoning without itself necessarily acting agentically during training.
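(As a concrete illustration of "learning causality by performing trials", here's a toy Python sketch; the X → Y model and all the names are made up for illustration, not anything Pearl or ChatGPT actually uses. Observation alone gives a symmetric correlation, while intervening, in the spirit of Pearl's do-operator, reveals the direction.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy structural model, assumed purely for illustration: X causes Y.
def sample(n, do_x=None, do_y=None):
    x = np.full(n, do_x) if do_x is not None else rng.normal(size=n)
    y = np.full(n, do_y) if do_y is not None else 2.0 * x + rng.normal(size=n)
    return x, y

# Observation alone: X and Y are correlated, but correlation is symmetric,
# so it does not by itself say which way the causation runs.
x, y = sample(n)
print("corr(X, Y) from observation:", np.corrcoef(x, y)[0, 1])

# "Performing trials": intervene on each variable and see what moves.
_, y0 = sample(n, do_x=0.0)
_, y1 = sample(n, do_x=1.0)
print("E[Y | do(X=1)] - E[Y | do(X=0)]:", y1.mean() - y0.mean())   # about 2: X affects Y

x0, _ = sample(n, do_y=0.0)
x1, _ = sample(n, do_y=1.0)
print("E[X | do(Y=1)] - E[X | do(Y=0)]:", x1.mean() - x0.mean())   # about 0: Y does not affect X
```

The interventional differences break the symmetry that the observational correlation leaves open, which is all that "learning causality from trials" is meant to point at here.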
Is this your opinion, or what you think Pearl’s opinion is?
It’s a loose guess at what Pearl’s opinion is. I’m not sure the boundary between the two exists at all.
Ok. My guess is that Pearl would say something more like this: we have an innate ability to represent causal models, and only after that would he follow with what you said. He thinks having the causal model representation is necessary; you can’t make causal inferences just from trials and decisions if you don’t have this special causal machinery inside you. (Personally, I don’t think this is a good frame.)
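(For readers who haven't seen Pearl's framework, here is a rough sketch of what "a causal model representation" can mean in code: a set of structural equations plus a do() that overrides one of them. The class and the rain/wet example are my own illustration, not Pearl's notation.)

```python
import random
from typing import Callable, Dict, List, Optional

class SCM:
    """A tiny structural causal model: one equation per variable, evaluated in causal order."""

    def __init__(self, equations: Dict[str, Callable[[Dict[str, float]], float]], order: List[str]):
        self.equations = equations
        self.order = order

    def sample(self, do: Optional[Dict[str, float]] = None) -> Dict[str, float]:
        do = do or {}
        values: Dict[str, float] = {}
        for var in self.order:
            # An intervention do(var := v) replaces that variable's equation with a constant.
            values[var] = do[var] if var in do else self.equations[var](values)
        return values

# Toy model: rain causes wet ground (made-up example).
model = SCM(
    equations={
        "rain": lambda v: float(random.random() < 0.3),
        "wet": lambda v: float(v["rain"] or random.random() < 0.1),
    },
    order=["rain", "wet"],
)

print(model.sample())                 # an observational sample
print(model.sample(do={"wet": 1.0}))  # making the ground wet does not change the chance of rain
```

An object like this, equations plus a notion of intervention, is roughly what "causal model representation" refers to above; the disagreement in the thread is about whether positing it as innate and necessary is the right frame.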