That’s just nonsense. A machine that only performs calculations, like a pocket calculator, is fundamentally different in architecture from one that performs calculations and also generates experiences.
This is wrong. A simulation of a conscious mind is itself conscious, regardless of the architecture it runs on (a classical computer, etc.).
Can you really be so sure?
That was a sarcastic paragraph applying the same reasoning to meat brains, to show that one could just as well argue that only language models are conscious (and meat brains aren’t, because their architecture is so different).
With orders of magnitude more complexity
Complexity itself is unconnected to consciousness. Just because brains are conscious and also complex doesn’t mean that a system needs to be as complex as a brain to be conscious, any more than the brain being wet and also conscious means that a system needs to be as wet as a brain to be conscious.
You’re making the mistake of not understanding sentience and relying on proxies (like complexity) in your reasoning, which might work sometimes, but doesn’t work in this case.
I never linked complexity to absolute certainty about whether something is sentient, only to a pretty good likelihood. The complexity of any known calculation-plus-experience machine (most animals, from insects upward) is undeniably far greater than that of any current Turing machine. Therefore it’s reasonable to assume that consciousness demands a great deal of complexity, certainly much more than that of a current language model. Generating experience is fundamentally different from generating only calculations. Yes, this is an opinion, not a fact. But so is your claim!
I know for a fact that at least one human is conscious (myself), because I can experience it. That’s still the strongest reason to assume it, and it can’t be called into question the way you did.
It’s not correct to do that either, for the same reason.
Also, I wasn’t going to mention it before (because the reasoning itself is flawed), but there is no correct way of calculating complexity that would make an insect brain’s complexity higher than LaMDA’s.
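A rough back-of-the-envelope comparison illustrates this. The figures below are only order-of-magnitude public estimates, and treating synapse count and parameter count as interchangeable proxies for “complexity” is itself a contestable assumption:

```python
# Crude order-of-magnitude "complexity" proxies.
# All figures are rough estimates; equating one synapse with one
# learned parameter is an assumption made purely for illustration.
complexity_proxies = {
    "fruit fly brain (synapses)": 5e7,    # ~50 million synapses
    "honeybee brain (synapses)": 1e9,     # ~1 billion synapses (rough estimate)
    "LaMDA (parameters)": 1.37e11,        # ~137 billion parameters
    "human brain (synapses)": 1e14,       # ~100 trillion synapses
}

# Print the systems from least to most "complex" under this crude proxy.
for system, count in sorted(complexity_proxies.items(), key=lambda kv: kv[1]):
    print(f"{system:30s} ~ {count:.0e}")
```

On this crude count LaMDA comes out above insect brains and far below a human brain, though whether such a proxy tracks consciousness at all is exactly what is in dispute here.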