“The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists.”
With conventional computers we can prove that there’s no causal role for sentience in them by running the same program on a Chinese Room processor: a person working through the instructions by hand produces identical outputs, with no plausible place for a feeling to act. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation on conventional computer hardware of a system with sentience in it (where there is simulated sentience rather than real sentience) would have to simulate that extra something in order for the simulated sentience to appear in it. If that extra something doesn’t exist, there is no sentience.
“This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be.”
Every interaction is quantum at bottom, and when you have neural nets working through mechanisms that are too hard to untangle, there is room for some mechanism to be involved that we can’t yet observe. What we can actually model appears to tell us that sentience must be a fiction, yet we believe that things like pain feel too real to be fake.
“Anyway, even if this were true, how would you know that?”
Unless someone comes up with a theoretical model that shows a way for sentience to have a real role, we aren’t going to get answers until we can see the full mechanism by which damage signals lead to the brain generating data that makes claims about an experience of pain. If, once we have that full mechanism, we see that the brain is merely mapping data to inputs by applying rules that generate fictions about feelings, then we’ll know that feelings are fake. If they aren’t fake though, we’ll see sentience in action and we’ll discover how it works (and thereby find out what we actually are).
“If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it?”
If classical physics doesn’t support a model that enables sentience to be real, we will either have to reject the idea of sentience or look for it elsewhere.
(And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
If sentience is real, all the models are wrong, because none of them show sentience acting in any causal way that would enable it to drive the generation of data documenting its own existence. All the models shout at us that there is no sentience in there playing any viable role and that it’s all wishful thinking, while our experience of feelings shouts at us that they are very real.
All I want to see is a model that illustrates the simplest possible role for sentience. Suppose we have a sensor, a processor and a response. We can call the sensor a “pain” sensor and run a program that drives a motor to move the device away from whatever might be damaging it, and we could call that a pain response, but there is no pain there. There is only the assertion of an onlooker that pain is involved, because that person wants the system to be like themselves: “I feel pain in that situation, therefore that device must feel pain.” But no; there is no role for pain there. If we run a more intelligent program on the processor, we can put some data in memory that says “Ouch! That hurt!”, and whenever an input comes from the “pain” sensor, the program can make the device display “Ouch! That hurt!” on a screen. The onlooker can now say, “There you go! That’s proof that it felt pain!” Again though, there is no pain involved: we can edit the data so that the device displays “Oh yes! Give me more of that!” whenever a signal comes from the “pain” sensor, and it then becomes obvious that this data tells us nothing about any real experience at all.
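A minimal, hypothetical sketch (in Python) of the device just described; the sensor name, the stored strings and the handler are all invented for illustration:

```python
# Hypothetical sketch of the sensor/processor/response device described above.
# The "pain" label on the sensor and the stored message are just data in memory.

PAIN_RESPONSE = "Ouch! That hurt!"
# Swap the line above for the one below and nothing else in the system changes:
# PAIN_RESPONSE = "Oh yes! Give me more of that!"

def retract_motor() -> None:
    """Stand-in for the motor action that moves the device away."""
    print("[motor] retracting")

def display(message: str) -> None:
    """Stand-in for the screen."""
    print(message)

def on_sensor_signal(sensor_id: str) -> None:
    """Map an input signal to a stored response: no experience required anywhere."""
    if sensor_id == "pain_sensor":
        retract_motor()
        display(PAIN_RESPONSE)

on_sensor_signal("pain_sensor")
```

The point of the sketch is that the displayed “report” is one editable string; changing it touches nothing that could correspond to an experience.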
A more intelligent program can understand the idea of damage and damage avoidance, so it can make sure the data mapped to different inputs makes more sense, but the truthful data should say “I received data from a sensor that indicates likely damage” rather than “that hurt”. The latter claim asserts the existence of sentience, while the former doesn’t. If we ask the device whether it really felt pain, it should only say yes if there was actually pain there, and with a conventional processor we know there isn’t any. If we build such a device and keep triggering the sensor to make it generate the claim that it felt pain, we know it is simply fabricating that claim: we can’t actually make it suffer by torturing it; we will just cause it to go on repeating its fake claim.
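Continuing the same hypothetical sketch, the difference between the two kinds of report (again, all names invented):

```python
# Hypothetical continuation: an honest report only describes what the
# hardware actually registered; the fictional one asserts an experience.

HONEST_REPORT = "I received data from a sensor that indicates likely damage."
FICTIONAL_REPORT = "That hurt!"  # a claim the hardware cannot support

def report(sensor_id: str, honest: bool = True) -> str:
    if sensor_id == "pain_sensor":
        return HONEST_REPORT if honest else FICTIONAL_REPORT
    return "No damage signal received."

# "Torturing" the device: each trigger just replays the stored string;
# nothing accumulates anywhere that could count as suffering.
for _ in range(3):
    print(report("pain_sensor", honest=False))
```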