If I’m not mistaken, that book is behaviourally equivalent to the original algorithm but is not the same algorithm; viewed from the outside, the two even have different computational complexity. There are a number of ways of defining program equivalence, but equivalence is different from identity: “A is equivalent to B” does not mean “A is B”.
See also: the Chinese Room argument.
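To make the equivalence-vs-identity point concrete, here is a minimal sketch in Python (the Fibonacci function and the finite domain are my own illustrative choices, not anything from the thread): a function that computes its answers and a precomputed “book” of answers agree on every input in the domain, yet they are plainly different algorithms with different complexity.

```python
# Behavioural equivalence without algorithmic identity (illustrative toy example).

def fib(n: int) -> int:
    """Compute the n-th Fibonacci number iteratively: O(n) time per query."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# "The book": every answer on a finite domain precomputed in advance.
# Each query is now an O(1) lookup, paid for with O(N) storage.
N = 1000
FIB_BOOK = {n: fib(n) for n in range(N)}

def fib_book(n: int) -> int:
    """Answer by consulting the precomputed page; no computation at query time."""
    return FIB_BOOK[n]

# Behaviourally equivalent on the shared domain...
assert all(fib(n) == fib_book(n) for n in range(N))
# ...but one computes and the other merely recalls: equivalent, not identical.
```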
I see, but in that case what is the claim about GPT-3? That if it were behaviourally equivalent to a complicated social being, it would be conscious?
I don’t agree with Eliezer here. I don’t think we have a deep enough understanding of consciousness to make confident predictions about what is and isn’t conscious beyond “most humans are probably conscious sometimes”.
The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.
If that turns out to be the case, then whether or not humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of what such an algorithm requires.