It is good to taboo words, but it is also good to criticize the attempts of others to taboo words, if you can make the case that those attempts fail to capture something important.
For example, it seems possible that a computer could predict your actions to high precision, but by running computations so different from the ones that you would have run yourself that the simulated-you doesn’t have subjective experiences. (If I understand it correctly, this is the idea behind Eliezer’s search for a non-person predicate. It would be good if this were possible, because then a superintelligence could run alternate histories without torturing millions of sentient simulated beings.) If such a thing is possible, then any superficial behavioristic attempt to taboo “subjective experience” will be missing something important.
Furthermore, I can mount this critique of such an attempt without being obliged to taboo “subjective experience” myself. That is, making the critique is valuable even if it doesn’t offer an alternative way to taboo “subjective experience”.
It’s not clear to me that “understanding” means “subjective experience,” which is one of several reasons why I think it’s reasonable for me to ask that we taboo “understanding.”
The only good taboo of understanding I’ve ever read came from an LW quotes thread, quoting Feynman, quoting Dirac:
I understand what an equation means if I have a way of figuring out the characteristics of its solution without actually solving it.
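A concrete illustration of Dirac’s criterion (my example, not one from the quote): for the logistic equation, the characteristics of every solution can be read off from the sign of the right-hand side, without solving it:

```latex
\dot{y} = y(1-y), \qquad
\begin{cases}
0 < y < 1 \;\Rightarrow\; \dot{y} > 0 & \text{(solutions rise toward } y = 1\text{)} \\
y > 1 \;\Rightarrow\; \dot{y} < 0 & \text{(solutions fall toward } y = 1\text{)}
\end{cases}
```

So y = 1 is a stable equilibrium and y = 0 an unstable one; the qualitative behavior is known with no closed-form solution in sight.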
By this criterion, the Chinese Room might not actually understand Chinese, whereas a human Chinese speaker does—i.e., you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?
I would say I understand a system to the extent that I’m capable of predicting its behavior given novel inputs. Which seems to be getting at something similar to Dirac’s version.
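The “tighter than maxent” test above can be sketched in code. This is a minimal illustration, not anyone’s actual proposal: a toy bigram model trained on a four-sentence corpus (both the corpus and all names are mine), compared against a maximum-entropy baseline that spreads probability uniformly over the whole vocabulary.

```python
from collections import Counter, defaultdict
import math

# Toy corpus; a real test would use a large text sample.
corpus = [
    "the cat sat on the mat",
    "the cat sat on the chair",
    "the dog sat on the mat",
    "the cat slept on the mat",
]

# Build bigram counts, i.e. counts of next_word given previous_word.
bigrams = defaultdict(Counter)
vocab = set()
for sentence in corpus:
    words = sentence.split()
    vocab.update(words)
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def predict(prev_word):
    """Probability distribution over the next word, given the previous one."""
    counts = bigrams[prev_word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def entropy(dist):
    """Shannon entropy in bits; lower means a tighter distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# The test: is the model's distribution over the final word tighter
# (lower entropy) than maxent over the whole vocabulary?
model_entropy = entropy(predict("the"))
maxent_entropy = math.log2(len(vocab))  # uniform over all 8 words
print(model_entropy, maxent_entropy)    # ≈ 1.81 vs 3.0: tighter than maxent
```

By this operationalization, “understanding” comes in degrees: the further below the maxent baseline the predictor’s entropy falls on novel sentences, the more it understands.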
you hear all but the last word of a sentence, can you give a much tighter probability distribution over the end of the sentence than maxent over common vocabulary that fits grammatically?
IIRC, the CR as Searle describes it would include rules for responding to the question “What are likely last words that end this sentence?” in the same way a Chinese speaker would. So presumably it is capable of doing that, if asked.
And, definitionally, of doing so without understanding.
To my way of thinking, that makes the CR a logical impossibility, and reasoning forward from an assumption of its existence can lead to nonsensical conclusions.
Good point—I was thinking of “figuring out the characteristics” fuzzily; but if it is defined as giving correctly predictive output in response to a given interrogative, the room either does so correctly or isn’t a fully functioning Chinese Room.
Taboo “understanding.”
I didn’t mean to suggest that “understanding” means “subjective experience”, or to suggest that anyone else was suggesting that.