It seems like the simulacrum reasons, but I think what it is really doing is more like reading to us from a HUGE choose-your-own-adventure book that was ‘written’ before you gave the prompt, when all the information in the training data was compressed into a giant association map. The size of that map escapes easy human intuition, which misleads us into thinking more real-time thinking must be occurring than actually is.
40 GB of text works out to roughly 20 million pages, or on the order of 65,000 books of ~300 pages each: far more than anyone could read in a lifetime.
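As a rough sanity check on that scale, here is the back-of-the-envelope arithmetic, assuming ~2,000 characters per page and ~300 pages per book (both just round illustrative figures):

```python
# Back-of-the-envelope scale of a 40 GB text corpus.
# Assumptions (round illustrative figures): 1 byte per character,
# ~2,000 characters per page, ~300 pages per book.
corpus_bytes = 40e9            # 40 GB of raw text
chars_per_page = 2_000
pages_per_book = 300

pages = corpus_bytes / chars_per_page   # ~20 million pages
books = pages / pages_per_book          # ~65,000-70,000 books

print(f"~{pages:,.0f} pages, ~{books:,.0f} books")
```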
175 billion parameters amounts to a really huge choose-your-own-adventure book, yet its characters needn’t be reasoning, at least not in real time while you are reading the book. They are mere fiction.
GPT really is the Chinese Room, and causes the same type of intuition error.
Does this eliminate all risk from programs of this type, no matter how large they get? Maybe not. Whoever created the Chinese Room had to be an intelligent agent themselves.
I think the intuition error in the Chinese Room thought experiment is concluding that the Chinese Room doesn’t know Chinese just because it’s the wrong size or made out of the wrong stuff.
If GPT-3 were literally a Giant Lookup Table of all possible prompts and their completions, then sure, I could see what you’re saying, but it isn’t. GPT is big, but it isn’t that big. All of its basic “knowledge” is gained during training, but I don’t see why that means all of the “reasoning” it produces happens during training as well.
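To make “big but not that big” concrete, here is a rough comparison. The vocabulary size, context length, and parameter count are the published GPT-3 figures; the lookup-table framing itself is just an illustration:

```python
import math

# Compare GPT-3's parameter count against the size a literal lookup
# table of all possible prompts would need.  Vocabulary size, context
# length, and parameter count are the published GPT-3 figures.
vocab_size = 50_257    # BPE vocabulary
context_len = 2048     # maximum prompt length in tokens
parameters = 175e9     # GPT-3 parameter count

# Distinct prompts of maximum length: vocab_size ** context_len.
# Work in log10, since the integer itself is astronomically large.
log10_prompts = context_len * math.log10(vocab_size)

print(f"distinct max-length prompts: ~10^{log10_prompts:,.0f}")
print(f"GPT-3 parameters:            ~10^{math.log10(parameters):.0f}")
```

A table with one row per possible prompt would need on the order of 10^9,600 entries, so whatever the 175 billion parameters encode, it cannot be a memorized completion for every possible input.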
After seeing what GPT-4 can do under the same handicap, I am inclined to think you are right that GPT-3 reasons in the same sense a human does, even without the ability to change its ANN weights.