Mark M.,
“His beliefs have great personal value to him, and it costs us nothing to let him keep them (as long as he doesn’t initiate theological debates). Why not respect that?”
Values may be misplaced, and they have consequences. This particular issue doesn’t have much riding on it (on the face of it, anyway), but many do. Moreover, how we think is in many ways as important as what we think. The fellow’s ad hoc moves are problematic. Ad hoc adjustments to our theories and beliefs to avoid disconfirmation are like confirmation bias and other fallacies and biases: they are hurdles to creativity, to making better decisions, and to increasing our understanding of ourselves and the world. This all sounds more hard-nosed than I really am, but you get the point.
“By definition, wouldn’t our AI friend have clearly defined rules that tell us what it believes?”
You seem to envision AI as a massive database of scripts chosen according to circumstance, but this is not feasible. The number of scripts needed to cover intelligent behavior would be astronomical. No, an AI need not have “clearly defined rules” in the sense of being intelligible to humans. I suspect anything robust enough to pass the Turing Test in any meaningful (non-domain-restricted) sense would either be too complicated to decode or predict upon inspection, or would be the result of some artificial evolutionary process no more decodable than a brain. Have you ever looked at complex code? It can be difficult, if not impossible, for a person to understand as code, let alone to anticipate all the ways it might execute (hence bugs, infinite loops, and so on). As Turing said, “Machines take me by surprise with great frequency.”
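To make the point concrete, here is a toy sketch of my own (not anything from Turing, just an illustration): a loop of a few lines whose halting behavior for arbitrary inputs is, as far as anyone has proved, an open mathematical question (the Collatz conjecture). Even trivially short code can resist prediction by inspection.

```python
# Toy illustration: a tiny program whose behavior nobody can fully predict.
# Whether this loop terminates for every starting n is the (unproven) Collatz conjecture.

def collatz_steps(n: int) -> int:
    """Count iterations until n reaches 1 (assuming it ever does)."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 111 steps: easy to run, hard to foresee by reading the code
```

If four lines of arithmetic can defeat inspection, the prospects for “reading off” the rules of a system complex enough to pass the Turing Test seem dim.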
“You’ll just have to take my word for it that I had other unquantifiable impulses.”
But you would not take the word of an AI that exhibited human-level robustness in its actions? Why not?
“I think you might be misapplying the Turing test. Let’s frame this as a statistical problem. When you perform analysis, you separate factors into those that have predictive power and those that don’t. A successful Turing test would tell us that a perfect predictive formula is possible, and that we might be able to ignore some factors that don’t help us anticipate behaviour. It wouldn’t tell us that those factors don’t exist however.”
Funny, I’m afraid that you might be misapplying the Turing Test. The Turing Test is not supposed to provide a maximally predictive “formula” for a putative intelligence. Rather, passing it is arguably supposed to demonstrate that the subject is, in some substantive sense of the word, intelligent.