Sometimes what we are interested in is not “the ability to produce such-and-such an output” but “what the output implies about the machine’s internals”. A few examples where I’d hope you share this intuition:
Chess:
When machines could not yet play chess, it was thought that ‘the ability to play chess’ would require general strategic thinking and problem-solving acumen that would be useful in many domains.
However, machines were eventually taught to play chess (very well!) by developing task-specific algorithms that did not generalize to other domains.
Saying “the machine isn’t really thinking strategically, it’s just executing this chess algorithm” doesn’t sound crazy to me.
Pain (or other ‘internal feelings’):
It’s simple to produce a computer program that outputs ‘AIIIIIEEEE! OH GOD PLEASE STOP IT HURTS SO MUCH!!’ when a button is pressed (a minimal sketch follows below).
Among humans, producing that output is a reliable signal of feeling pain.
I don’t think a machine that produces that output is necessarily feeling pain. Saying “the machine isn’t really feeling pain, it’s just producing text outputs” doesn’t sound crazy to me (though I guess I would be a bit more worried about this in the context of large opaque ML models than in the context of simple scripts).
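To make that point concrete, here is a minimal sketch (my own hypothetical illustration, not code from the discussion) of such a program in Python: it waits for a keypress and prints the string, with no internal state that could plausibly amount to pain.

```python
# Hypothetical sketch: a trivial "pain" program. It produces the same text
# output a person in pain might produce, while its internals are nothing
# more than a loop waiting for a keypress.
try:
    while True:
        input("Press Enter to 'press the button'... ")
        print("AIIIIIEEEE! OH GOD PLEASE STOP IT HURTS SO MUCH!!")
except (KeyboardInterrupt, EOFError):
    pass  # exit quietly on Ctrl-C or end of input
```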
But no one is saying chess engines are thinking strategically? The actual statement would be “chess engines aren’t actually playing chess, they’re just performing Monte Carlo tree searches”, which would indeed be stupid.