A couple of thoughts:

I think many people making this argument reject brain physicalism, particularly a subset of premise 2, something like "all of experience/the mind is captured by brain activity".
I don't think your example will convince the stochastic-parrot people. It could just be a mashup of two types of images the AI has already seen, smashed together. A more convincing proof is OthelloGPT, which stores concepts in the form of board states and legal moves, despite only being trained on sequences of text tokens representing Othello moves.
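For readers unfamiliar with how the OthelloGPT result was established: the standard method is to train a small "probe" on the model's hidden activations and check whether a board-state feature can be decoded from them. Below is a toy sketch of that idea with entirely synthetic data standing in for real model activations; all names and numbers here are illustrative, not from the actual OthelloGPT experiments.

```python
# Toy sketch of linear probing: if a model's hidden activations encode a
# board-state feature along some direction, a simple linear probe trained
# on those activations can recover the feature on held-out data.
# Everything below is synthetic; "activations" stand in for real internals.
import numpy as np

rng = np.random.default_rng(0)
n_positions, d_model = 2000, 64

# Hypothetical binary feature per position, e.g. "is this square occupied?"
square_occupied = rng.integers(0, 2, size=n_positions)

# Pretend the model encodes the feature along one fixed direction,
# plus unrelated noise in the rest of activation space.
feature_direction = rng.normal(size=d_model)
activations = (
    np.outer(square_occupied, feature_direction)            # signal
    + rng.normal(scale=0.5, size=(n_positions, d_model))    # noise
)

# Fit a least-squares linear probe on the first half of the data.
split = n_positions // 2
w, *_ = np.linalg.lstsq(
    activations[:split], square_occupied[:split].astype(float), rcond=None
)

# Evaluate on held-out positions: high accuracy means the feature is
# linearly decodable from the activations.
preds = (activations[split:] @ w) > 0.5
accuracy = (preds == square_occupied[split:].astype(bool)).mean()
print(f"probe accuracy: {accuracy:.2f}")
```

The real experiments are of course more involved (probes per square, per layer, and causal interventions on the recovered representation), but this is the core statistical test: decodability of world-state from token-trained activations.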
I don't think people who make this argument explicitly reject brain physicalism—they won't come to you straight up saying "I believe we have a God-given immortal soul and these mechanical husks will never possess one". However, if hard-pressed, they'll probably start offering arguments that sound rather like dualism (in which the machines can't possibly ever catch up to human creativity, for reasons). Mostly, I think it's just the latest iteration of existential discomfort at finding out we're perhaps not so special after all, like with "the Earth is not the centre of the Universe" and "we're just descendants of apes, ruled by the same laws as all other animals".
That said, I think an interesting direction to take the discussion would then be: "OK, let's say these machines can NEVER experience anything, and that conscious experience is a requirement for expressing certain feelings and forms of creativity. Do you think it is also necessary for proving mathematical theorems, making scientific discoveries, or planning a deception?" Because in the end, that's what would make an AI really dangerous, even if its art remained eternally kinda mid.
About point 1: I think that assumption is right, though I believe many people repeat this argument without really having a stance on (or awareness of) brain physicalism. That's why I didn't hesitate to include it. Still, if you have a decent idea of how to improve this article for people who are sceptical of physicalism, I would like to add it.
About point 2: Yeah, you might be right … a reference to OthelloGPT would make it more convincing. I will add it later!
Edit: Still, I believe that “mashup” isn’t even a strictly false characterization of concept composition. I think I might add a paragraph explicitly explaining that and how I think about it.