Say you had a system that implemented a sophisticated social reasoning algorithm, and that was actually conscious. Now make a list of literally every sensory input and the behavioral output that the sensory input causes, and write it down in a very (very) long book. This book implements the exact same sophisticated social reasoning algorithm. To think that the book has sentience sounds to me like a statement of magical thinking, not of physicalism.
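To make the thought experiment concrete, here is a minimal sketch in Python (the names `reason` and `build_book` are hypothetical, and a finite, enumerable space of sensory inputs is assumed):

```python
# A hypothetical stand-in for the "sophisticated social reasoning algorithm":
# any pure function from a sensory input to a behavioral output will do.
def reason(sensory_input):
    return hash(sensory_input) % 7  # placeholder for real reasoning

# Enumerate every possible sensory input and record the output it causes.
# The resulting dict plays the role of the "very (very) long book".
def build_book(all_sensory_inputs):
    return {s: reason(s) for s in all_sensory_inputs}

inputs = range(1000)          # pretend this is every possible sensory input
book = build_book(inputs)

# The book reproduces the behavior exactly: for every input, its entry
# matches what the original algorithm outputs.
assert all(book[s] == reason(s) for s in inputs)
```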
I’m pretty sure this is because you’re defining “sentience” as some extra-physical property possessed by the algorithm, something which physicalism explicitly rejects.
Consciousness isn’t something that arises when algorithms compute complex social games; consciousness just is some algorithm computing complex social games (under a purely physical theory of consciousness such as EY’s).
To understand how physicalism can talk about metaphysical categories, consider numbers. Some physical systems have the property of being “two of something” as understood by human beings. Two sheep standing in a field, for example. Or two rocks piled on top of one another. There’s no magical thing that happens when “two” of something come into existence. They don’t suddenly send a glimmer of two-ness off into a pure platonic realm of numbers. They simply are “two”, and what makes them “two” is that being “two of something” is a category readily recognized by human beings (and presumably other intelligent beings).
Similarly, a physicalist theory of consciousness defines certain physical systems as conscious if they meet certain criteria. Specifically for EY, these criteria are self-recognition and complex social games. It doesn’t matter whether they are implemented by a Chinese room or a computer or a bunch of meat. What matters is that they implement a particular algorithm.
When confronted with the Chinese-room consciousness, EY might say something like: “I recognize that this system is capable of self-reflection and social reasoning in much the same way that I am, therefore I recognize that it is conscious in much the same way as I am.”
If I’m not mistaken, that book is behaviourally equivalent to the original algorithm but is not the same algorithm. From an outside view, they have different computational complexity. There are a number of different ways of defining program equivalence, but equivalence is different from identity: “A is equivalent to B” doesn’t mean “A is B”.
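As a rough illustration of that distinction (my own toy example, using Fibonacci rather than a social reasoning algorithm): two programs can agree on every input and still be different algorithms with very different computational complexity.

```python
import timeit

# Two behaviorally equivalent programs: identical outputs on every input,
# yet they are not the same algorithm.
def fib_recursive(n):
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)  # exponential time

def fib_iterative(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a  # linear time

# Behavioral equivalence on a sample of inputs...
assert all(fib_recursive(n) == fib_iterative(n) for n in range(20))

# ...but very different cost from the outside view.
print(timeit.timeit(lambda: fib_recursive(25), number=10))
print(timeit.timeit(lambda: fib_iterative(25), number=10))
```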
See also: the Chinese Room argument
I see, but in that case what is the claim about GPT-3? That if it were behaviorally equivalent to a complicated social being, it would be conscious?
I don’t agree with Eliezer here. I don’t think we have a deep enough understanding of consciousness to make confident predictions about what is and isn’t conscious beyond “most humans are probably conscious sometimes”.
The hypothesis that consciousness is an emergent property of certain algorithms is plausible, but only that.
If that turns out to be the case, then whether humans, GPT-3, or sufficiently large books are capable of consciousness depends on the details of what that algorithm requires.