There’s just no good reason to assume that LaMDA is sentient. Architecture is everything, and its architecture is just the same as that of other similar models: it predicts the most likely next word (if I recall correctly). Being sentient involves far more complexity than that, even for something as simple as an insect. Its claiming to be sentient might just mean that it was mischievously programmed that way, or that it found this to be the most likely succession of words. I’ve seen other language models and chatbots claim they were sentient too, though perhaps ironically.
Perhaps as importantly, there’s also no good reason to worry that it is being mistreated, or even that it can be. It has no pain receptors, it can’t be sleep-deprived because it doesn’t sleep, and it can’t be food-deprived because it doesn’t need food...
I’m not saying that it is impossible that it is sentient, just that there is no good reason to assume that it is. That, plus the fact that it doesn’t seem to be mistreated and seems almost impossible to mistreat, should make us less worried. In any case, we should always play it safe and never mistreat any “thing”.
There is no reason to think architecture is relevant to sentience, and many philosophical reasons to think it’s not (much like pain receptors aren’t necessary to feel pain, etc.). The sentience is in the input/output pattern, independent of the specific insides.
On one level of abstraction, LaMDA might be looking for the next most likely word. On another level of abstraction, it simulates a possibly-Turing-test-passing person that’s best at continuing the prompt.
The analogy would be to say of the human brain that all it does is transform input electrical impulses into output electrical impulses according to neuron-specific rules.
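To make the analogy concrete, here’s a toy sketch (my own illustration, not LaMDA’s actual code or a real neuron model; the function names and the scoring function `logits_fn` are placeholders I made up). Described at this level, both systems are “just” a simple local rule applied to inputs:

```python
import numpy as np

# Toy illustration only: the shape of the "low-level rule" description,
# not LaMDA's real architecture or a biological neuron model.

def next_token_step(logits_fn, context):
    """One step of greedy next-word prediction: score every word in the
    vocabulary given the context so far, then append the highest-scoring one."""
    logits = logits_fn(context)              # one score per vocabulary word (placeholder)
    return context + [int(np.argmax(logits))]

def neuron_step(weights, bias, inputs):
    """One step of a textbook artificial neuron: a weighted sum of input
    impulses passed through a threshold, producing an output impulse."""
    return 1.0 if np.dot(weights, inputs) + bias > 0 else 0.0
```

At this level of description, both reduce to input → rule → output; whether that description settles anything about sentience is exactly what’s in dispute.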
(Unlike complex Turing-test-passing language models, which, unlike slow neural networks made of meat, can think much faster and in many ways more deeply than a meat brain, which, if we look inside, contains no sentience. Of course, the brain claims to be sentient, but that’s only because of how its neurons are connected. It’s really an exercise in how a well-manufactured piece of meat can fool even an intelligent language model.)
“There is no reason to think architecture is relevant to sentience, and many philosophical reasons to think it’s not (much like pain receptors aren’t necessary to feel pain, etc.).”
That’s just nonsense. A machine that makes only calculations, like a pocket calculator, is fundamentally different in architecture from one that does calculations and generates experiences. All sentient machines that we know of have the same basic architecture. All non-sentient calculation machines also have the same basic architecture. It is therefore not impossible, but quite unlikely, that sentience will arise in the latter architecture merely by scaling it up. It is even more unlikely to arise in a current language model, which doesn’t need to sleep, could function for a trillion years without getting tired, and whose workings we understand pretty well and know to be fundamentally different from an animal brain and fundamentally similar to a pocket calculator.
“On one level of abstraction, LaMDA might be looking for the next most likely word. On another level of abstraction, it simulates a possibly-Turing-test-passing person that’s best at continuing the prompt.”
It takes far more complexity to simulate a person than LaMDA’s architecture provides, if it is possible at all on a Turing machine. A human brain is orders of magnitude more complex than LaMDA.
“The analogy would be to say about human brain that all it does is to transform input electrical impulses to output electricity according to neuron-specific rules.”
With orders of magnitude more complexity than LaMDA. So much so that, after decades of neuroscience, we still don’t have a clue how consciousness is generated, while we have a pretty good idea of how LaMDA works.
“a meat brain, which, if we look inside, contains no sentience”
Can you really be so sure? Just because we can’t see it yet doesn’t mean it doesn’t exist. Also, to deny consciousness is the biggest philosophical fallacy possible, because all one can be sure exists is one’s own consciousness.
“Of course, the brain claims to be sentient, but that’s only because of how its neurons are connected.”
Like I said, to deny consciousness is the biggest possible philosophical fallacy. No proof is needed that a triangle has three sides, and the same goes for consciousness. Unless you’re giving the word some other meaning.
“That’s just nonsense. A machine that makes only calculations, like a pocket calculator, is fundamentally different in architecture from one that does calculations and generates experiences.”
This is wrong. A simulation of a conscious mind is itself conscious, regardless of the architecture it runs on (a classical computer, etc.).
“Can you really be so sure?”
That was a sarcastic paragraph, applying the same reasoning to meat brains to show that one can just as well argue that only language models are conscious (and meat brains aren’t, because their architecture is so different).
“With orders of magnitude more complexity”
Complexity itself is unconnected to consciousness. Just because brains are conscious and also complex doesn’t mean that a system needs to be as complex as a brain to be conscious, any more than the brain being wet and also conscious means that a system needs to be as wet as a brain to be conscious.
You’re making the mistake of not understanding sentience and using proxies (like complexity) in your reasoning, which might work sometimes, but doesn’t work in this case.
I never linked complexity to absolute certainty about something being sentient or not, only to a pretty good likelihood. The complexity of any known calculation-plus-experience machine (most animals, from insects upward) is undeniably far greater than that of any current Turing machine. Therefore it’s reasonable to assume that consciousness demands a lot of complexity, certainly much more than that of a current language model. To generate experience is fundamentally different from generating only calculations. Yes, this is an opinion, not a fact. But so is your claim!
I know for a fact that at least one human is conscious (myself), because I can experience it. That’s still the strongest reason to assume it, and it can’t be called into question the way you did.
That’s not correct to do either, for the same reason.
Also, I wasn’t going to mention it before (because the reasoning itself is flawed), but there is no correct way of calculating complexity that would make an insect brain come out as more complex than LaMDA.
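For what it’s worth, here’s one rough back-of-the-envelope version of that comparison (my own sketch, using approximate, commonly cited figures: the largest LaMDA model is reported at roughly 137B parameters, and an adult fruit-fly brain has on the order of 5×10⁷ synapses; the “parameters per synapse” factor is an arbitrary knob):

```python
# Back-of-the-envelope only: rough figures and an arbitrary conversion factor,
# just to show what one way of "calculating complexity" looks like.

LAMDA_PARAMETERS = 137e9   # reported parameter count of the largest LaMDA model (approx.)
FLY_SYNAPSES     = 5e7     # approximate synapse count of an adult fruit-fly brain

for params_per_synapse in (1, 10, 100, 1000):   # 1000 is already a generous allowance
    fly_equivalent = FLY_SYNAPSES * params_per_synapse
    bigger = "insect brain" if fly_equivalent > LAMDA_PARAMETERS else "LaMDA"
    print(f"{params_per_synapse:>4} params/synapse: "
          f"fly ≈ {fly_equivalent:.1e}, LaMDA ≈ {LAMDA_PARAMETERS:.1e} → {bigger} comes out larger")
```

Even granting each synapse a generous equivalence of 1,000 parameters, the fly total stays below LaMDA’s parameter count on this way of counting; a different way of counting could come out differently, which is part of why I think the proxy is shaky in the first place.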