This is sort of why I made the argument that we can only consider necessary conditions, and look for their absence.
But more to your point, LLMs and human brains aren’t “two agents that are structurally identical.” They aren’t even close. The fact that a hypothetical built-from-scratch human brain might have the same qualia as humans isn’t relevant, because that’s not what’s being discussed.
Also, unless your process was precisely “attempt to copy the human brain,” I find it very unlikely that any AI development process would yield something particularly similar to a human brain.
Yeah, I agree they aren’t structurally identical, although I tend to doubt how much the structural differences between deep neural nets and human brains actually matter. We don’t have a non-arbitrary way to quantify how different two intelligent systems are internally.
I agree. That is the point I made, and it is why I did not try to argue that LLMs lack qualia.
But I do believe you can consider necessary conditions and look for their absence. For instance, I can safely declare that a rock does not have qualia, because I know it does not have a brain.
Similarly, I may not be able to measure whether LLMs have emotions, but I can observe that the processes that generated LLMs are highly inconsistent with the processes that caused emotions to emerge in the only case where I know they exist. Pair that with the observation that specific human emotions seem like only one option out of infinitely many, and it makes a strong probabilistic argument.