I think my point was not clearly communicated, because that section is not relevant to this.
That section is about how just about any instance of a medium could be interpreted as just about any message by some possible mind: it starts from an instance of a medium and then fits a message and a listener to it.
I am suggesting something much more normal: first you pick a particular listener (the normal practice in communication) and a particular message, and I manipulate the listener’s customary medium to convey that particular message to that particular listener. In this case many different messages would be acceptable, which makes the task easier still.
When administering a Turing test, why do you say it is the human on the other terminal that is intelligent, rather than their keyboard, or the administrator’s eyeballs? For the same reasons, a look up table is not intelligent.
First of all, the “systems reply” to Searle’s Chinese room argument is exactly the argument that the whole room, Searle plus the book plus the room itself, is intelligent and does understand Chinese, regardless of whether or not Searle does. Since such a situation has never occurred in human-to-human interaction, it has never been relevant for us to reconsider whether what we think is a human interacting with us really is intelligent. It’s easy to envision a future like Blade Runner where bots are successful enough that more sophisticated tests are needed to determine if something is intelligent. And this absolutely involves speed of the hardware.
Also, how do you know that a person isn’t a lookup table?
Would you say that neurosurgery is “teaching”, if one manipulates the brain’s bits such that the patient knows a new fact?
Also, how do you know that a person isn’t a lookup table?
Given the laws of physics as we understand them, which are themselves highly probable, the probability that a person is a lookup table is low. If someone is a lookup table controlled remotely from another universe with incomprehensibly more matter than ours, or something similar...so what? One can say with near-certainty that an intelligence arranged the lookup table and that it did not arise by random, high-entropy coincidence. Whatever arranged the lookup table may itself have arisen by a random, high-entropy process, like evolution, but so what?
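To make the hypothesis concrete: a lookup-table “person” would be a map from entire conversation histories to responses, nothing more. A minimal sketch (every table entry here is invented purely for illustration):

```python
# A conversational agent that is literally a lookup table: it maps the
# exact conversation history so far to a canned response. All entries
# are invented for illustration.
TABLE = {
    (): "Hello.",
    ("Hello.", "Hi, how are you?"): "Fine, thanks. And you?",
    ("Hello.", "Hi, how are you?", "Fine, thanks. And you?", "Great."):
        "Glad to hear it.",
}

def respond(history):
    """Return the tabled response for this exact history, or a shrug."""
    return TABLE.get(tuple(history), "...")

history = []
for line in ["Hi, how are you?", "Great."]:
    reply = respond(history)
    history += [reply, line]
print(respond(history))  # "Glad to hear it."
```

The table must contain an entry for every history the conversation could reach, which is exactly where the combinatorial explosion comes from.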
And this absolutely involves speed of the hardware.
Something arbitrarily slow may still be intelligent, by any normal meaning. More things are intelligent than pass the Turing test (unless it is merely offered as a definition) just as more things fly than are birds.
It’s easy to envision a future like Blade Runner where bots are successful enough that more sophisticated tests are needed to determine if something is intelligent.
If the laws of physics are very different than I think they are, one could fit a lookup table inside a human-sized body. That would not make it intelligent, any more than expanding the size of a human brain would make it cease to be intelligent. Nor would it prevent a robot from operating analogously to a human, aside from running on a different substrate.
What do you mean when you say “intelligence”? If you mean something performing the same functions as what we agree is intelligence given a contrived enough situation, I agree a lookup table could perform that function.
The problem with what I think is your definition isn’t the physical impossibility of creating the lookup table, but that once the mapping from input to output is informationally as complex as it will ever be, any transformation happening afterwards isn’t reasonably called intelligence. The whole system of the lookup table’s creator plus the lookup table may perhaps be described as an intelligent system, but not the creator’s fingers and the lookup table alone.
I’d hate to argue over definitions, but I’m interested in “Intelligence can be brute forced” and I wonder how common you think your usage is?
More things are intelligent than pass the Turing test (unless it is merely offered as a definition)
Yes, I am only considering the Turing test as a potential definition of intelligence, and I think this is obvious from the OP and all of my comments. See Chapter 7 of David Deutsch’s new book, The Beginning of Infinity. Something arbitrarily slow can’t pass a Turing test that depends on real-time interaction, so complexity theory allows us to treat a Turing test as a zero-knowledge proof that the agent who passes it possesses something computationally more tractable than a lookup table. I also dismiss lookup tables, but the reason is that iterated conversation in a Turing test is Bayesian evidence that the agent interacting with me can’t be using an exponentially slow lookup table.
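That Bayesian point can be sketched numerically. Suppose each real-time reply is, say, a million times more likely under “tractable algorithm” than under “exponentially slow lookup table” (the prior and the likelihood ratio are invented stand-ins, not measured values); the posterior odds against the table hypothesis compound with every exchange:

```python
# Posterior odds that the agent is a tractable algorithm rather than an
# exponentially slow lookup table, updated once per real-time response.
# Both numbers below are invented, purely for illustration.
prior_odds = 1.0          # start indifferent: 1:1
likelihood_ratio = 1e6    # real-time reply is 10^6x likelier if tractable

odds = prior_odds
for exchange in range(1, 6):
    odds *= likelihood_ratio
    print(f"after exchange {exchange}: odds = {odds:.0e}")
```

After five exchanges the odds are already on the order of 10^30 to 1, which is the sense in which iterated conversation, not any single reply, carries the evidence.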
I agree with you that a major component of intelligence is how the knowledge is embedded in the program. If the knowledge is embedded solely by some external creator, then we don’t want to label that as intelligent. But how do we detect whether creator-embedded knowledge is a likely explanation? That has to do with the hardware it is implemented on. Since Watson is implemented on such massive resources, the explanation that it produces answers by searching a store of data is more plausible. If Watson achieved the same results with much less capable hardware, the hypothesis that Watson’s responses are “merely pre-sorted embedded knowledge” would be less likely (assuming I knew no details of the software Watson used, which is one of the conditions of a Turing test).
If you tell me something can converse with me, but that it takes 340 years to formulate a response to any sentence I utter, then I strongly suspect the implementation is arranged such that it is not intelligent. Similarly, if you tell me something can converse with me, and it only takes 1 second to respond reasonably, but it requires the resources of 10,000 humans and can’t produce responses of any demonstrably better quality than humans, then I also suspect it is just a souped-up version of a stupid algorithm, and thus not intelligent.
The behavior alone is not enough. I need details of how the behavior happens, and if I’m lacking detailed explanations of the software program, then details about the hardware resources it requires also tell me something.
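A back-of-envelope count shows why hardware scale bears on the lookup-table hypothesis. With an invented, deliberately conservative vocabulary of 10^4 words and utterances capped at 20 words, the table needs one entry per possible conversation history, and even a single exchange already matches the roughly 10^80 atoms in the observable universe:

```python
# Rough count of lookup-table entries needed for conversations of n
# exchanges. The vocabulary size and utterance length are invented,
# deliberately conservative assumptions.
VOCAB = 10**4
WORDS_PER_UTTERANCE = 20
utterances = VOCAB ** WORDS_PER_UTTERANCE   # distinct possible utterances

ATOMS_IN_UNIVERSE = 10**80                  # standard rough estimate

for n_exchanges in (1, 2, 3):
    entries = utterances ** n_exchanges     # one entry per possible history
    print(f"{n_exchanges} exchange(s): ~10^{len(str(entries)) - 1} entries "
          f"(universe has ~10^80 atoms)")
```

So any physically realizable system that converses in real time on modest hardware is, by this arithmetic, almost certainly not a lookup table.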
If the laws of physics are very different than I think they are, one could fit a lookup table inside a human-sized body. That would not make it intelligent any more than expanding the size of a human brain would make it cease to be intelligent.
But it would mean that having a conversation with a person was not conclusive evidence that he or she wasn’t a lookup table implemented in a human substrate.
Would you say that neurosurgery is “teaching”, if one manipulates the brain’s bits such that the patient knows a new fact?
Yes, absolutely. “Regular” teaching is exactly that, just achieved more slowly by communication over a noisy channel.