Any physical system exhibiting exactly the same input-output mappings. Across all inputs and outputs. Short of that, imitation is a real possibility—particularly among LLMs that are trained to predict human responses.
I agree that there’s something nontrivially “conscious” in a system like Nova; but that’s not a good argument for it.
Agreed that this is going to get dramatic. There will be arguments and both sides will make good points.
Any physical system exhibiting exactly the same input-output mappings.
That’s a sufficient condition, but not a necessary one. One factor I can think of right now is sufficient coherence and completeness of the I/O mapping as a whole. (If I have a system that outputs what I would in response to one particular input and behaves randomly on the rest, it doesn’t have my consciousness. But for a system where all inputs and outputs match except for an input that says “debug mode,” at which point it switches to “simulating” somebody else, we can conclude that it has a consciousness almost identical to mine.)
Today, LLMs are already too human-like, too realistic, and too complete for us to rely on their human-like personas being non-conscious.
both sides will make good points
I wish that were true. Based on what I’ve seen so far, they won’t.