I think the intuition error in the Chinese Room thought experiment is concluding that the Chinese Room doesn’t know Chinese just because it’s the wrong size / made out of the wrong stuff.
If GPT-3 were literally a Giant Lookup Table of all possible prompts with their completions, then sure, I could see what you’re saying, but it isn’t. GPT is big, but it isn’t that big. All of its basic “knowledge” it gains during training, but I don’t see why that means all the “reasoning” it produces happens during training as well.
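For intuition on why a literal lookup table is ruled out, here’s a rough back-of-envelope sketch (the vocabulary size, context length, and parameter count below are approximate public figures for GPT-3 that I’m supplying myself, not numbers from the comment above):

```python
import math

# Rough, approximate figures for GPT-3 (my assumptions, not exact specs):
vocab_size = 50_257       # size of the BPE vocabulary
context_length = 2_048    # context window in tokens
gpt3_params = 175e9       # parameter count

# A literal lookup table would need an entry for every possible prompt.
# Counting only full-length prompts already dwarfs the parameter count:
prompt_exponent = context_length * math.log10(vocab_size)   # ~9,600
param_exponent = math.log10(gpt3_params)                    # ~11

print(f"Possible full-length prompts: ~10^{prompt_exponent:.0f}")
print(f"GPT-3 parameters:             ~10^{param_exponent:.0f}")
```

Whatever GPT-3 is doing at inference time, it can’t be storing a precomputed completion for every prompt; it has to be computing something on the fly.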
After seeing what GPT-4 can do with the same handicap, I am inclined to think you are right that GPT-3 reasons in the same sense a human does, even without the ability to change its ANN weights.