Instructions to carry out paperwork. You read a process off the book, not a symbol.
If you are allowed to make notes (guided by the book), then you have memory that persists between “lookups”.
If you have new paper available and there are recursive instructions in the book, it might be quite a while that you write symbols for “internal consumption” before you produce any symbol that is put in the output slot.
It might be that the original idea was less specified, but I think it points in the same direction as an effective method.
“Its instructions need only to be followed rigorously to succeed. In other words, it requires no ingenuity to succeed.”
With “you do not need to know what you are doing” meaning that “sticking to the book” is sufficient, i.e. no ingenuity.
But brainless action still involves more than just writing a single character or phrase to the output slot.
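To make that concrete, here is a minimal sketch in Python of what such a “brainless” loop could look like. The rule format, the state names, and the toy rule book are all made up for illustration, not anything from Searle’s original setup: the operator follows the book one instruction at a time, keeps notes on scratch paper that persist between lookups, and only occasionally places a symbol in the output slot.

```python
# Minimal sketch (rule format and contents are hypothetical) of "brainless"
# rule-following: a finite book of instructions, scratch notes that persist
# between lookups, and an output slot that is only written to occasionally.

def run_room(book, start, question):
    scratch = [question]      # notes made on spare paper, guided by the book
    output_slot = []
    state = start
    while state != "halt":
        # Look up exactly one instruction; no ingenuity and no understanding
        # of the symbols is needed, only the ability to read the book.
        instruction = book[(state, scratch[-1])]
        state = instruction["next"]
        if instruction.get("note") is not None:
            scratch.append(instruction["note"])      # "internal consumption"
        if instruction.get("emit") is not None:
            output_slot.append(instruction["emit"])  # handed out of the room
    return "".join(output_slot)

# A toy book: it replies to one squiggle with another, never "understanding" either.
toy_book = {
    ("start", "你好"): {"note": "saw-greeting", "next": "reply"},
    ("reply", "saw-greeting"): {"emit": "你好吗", "next": "halt"},
}

print(run_room(toy_book, "start", "你好"))  # prints 你好吗
```

The point of the sketch is that all the work sits in the book; the loop itself never needs to know what the symbols mean, yet it still involves lookups, note-taking, and internal steps before anything reaches the output slot.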
This is basically black-box intelligence, and there’s no reason to assume that black-box methods cannot work. Indeed, black boxes are already used in AI today.
It may be nice for it to be white box, but I see no reason for black boxes not to be intelligent or conscious.
The point is that the human knows how to read the book and doesn’t misunderstand it. If you were in the Chinese room with a book written in English and you did not even know English, you would not know how to operate the room. It is a “no box” in that the human does not need to bring anything to the table (the book does it all).
If you knew what the Chinese was about, it wouldn’t be an obstacle if you could not read a book written in English. But knowing English doesn’t make a difference, whether or not you know Chinese.
So the critical disagreement is this: assuming we can expend enough energy to learn everything in the book with no priors, and assuming arbitrarily large memory capacity, then it is equivalent to actually knowing Chinese, since you can store an arbitrarily large amount in your head, including a vast but finite ruleset for Chinese. Of course, to learn new languages you will have to expend that energy again, which rapidly spirals into an uncontrollable energy cost. That is why Chinese Rooms can’t actually be built in real life, and why the success of GPT refutes Searle’s and Gary Marcus’s thesis that they are just Chinese Rooms.
No, I don’t think I am making a claim about energy usage.
We are only allowed to look at a single page / sentence at a time, which is quite a lot more feasible with a finite read-head, and we do not need to remember pages we have turned away from.
You can google to benefit from the whole internet without needing to download the whole internet. You can work a 2 TB hard drive while only having 64 KiB of L1 cache. You can run arbitrary Python programs on a CPU core with an instruction set that is small, finite, and cannot be extended.
You can manage a 2-hour paperwork session with a 6-second memory buffer.
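As a sketch of why a single-symbol read-head with a small fixed rule table is enough, here is a tiny rule-table interpreter in Python. The rule table (binary increment) is only a toy example chosen to illustrate the point: the reader only ever sees one cell of the tape at a time, follows a small finite table, and still completes the computation.

```python
# Minimal sketch of the "finite read-head" point: a reader that only ever
# sees one symbol at a time, following a small fixed rule table, can still
# carry out a complete computation. The table below (binary increment) is
# a toy example, not anything from Searle's original setup.

def run(rules, tape, state="start", head=0):
    tape = dict(enumerate(tape))              # unbounded "paper", stored sparsely
    while state != "halt":
        symbol = tape.get(head, "_")          # look at a single cell only
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rule table for adding 1 to a binary number (head starts on the leftmost digit).
increment = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "N", "halt"),
    ("carry", "_"): ("1", "N", "halt"),
}

print(run(increment, "1011"))  # prints 1100
```

Nothing about the reader grows with the size of the task: the rule table stays small and finite, just as the CPU’s instruction set does, while the paper (the tape) carries all the working state.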
I guess the human needs to bring English in their head instead of having a literally empty head. But having English in your head is not, energy-wise, a miracle to achieve.
Then it is equivalent to who/what knowing Chinese?
Both the human and the book.
Separately or in conjunction?
Separately.
Well, I don’t see how the Operator can speak Chinese without the Book, or vice versa.
Specifically, once it has memorized the book, the operator doesn’t need to use the book anymore and can rely on its own memory.
So the operator can manage without the book, but the book can’t manage without the operator...?
The book can manage without the operator.
How?
It already has all the rules for learning Chinese, so it can manage to remain a useful source of learning even if the owner passes it up for something else.
So it can’t just speak Chinese.
Yes, that’s right.