It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data somehow, and the only options are some physical method or magic (and magic can’t really be magic, so it too comes down to some physical method).
The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists. (IIRC Hofstadter described how different “levels of reality” are somewhat “blocked off” from each other in practice, in that you don’t need to understand quantum mechanics to know how biology works and so on. This would suggest that it is very unlikely that the highest level could indicate much about the lowest level.)
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because it would typically cause bugs and crashes (as happens in machines exposed to radiation, temperature extremes or voltage outside the tolerable range). But the brain could allow something quantum to interact with its neural nets in ways that we might mistake for noise (something that wouldn’t happen in a simulation of a neural computer on conventional hardware [unless the simulation takes it into account], and that also wouldn’t happen on a neural computer that isn’t built in a way that gives such a mechanism a role to operate through).
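To make the contrast concrete, here is a minimal sketch (in Python, with made-up names; an illustration, not a claim about real neurobiology): a simulated neuron is fully deterministic by default, and any extra physical influence of the kind described above only shows up if the simulation explicitly adds a term for it.

```python
import random

def neuron_output(inputs, weights, inject_extra_term=False):
    """Toy weighted-sum neuron.

    `inject_extra_term` stands in for the hypothetical physical mechanism
    discussed above; a straightforward simulation (the default) simply has
    no such term, so nothing can act through it.
    """
    activation = sum(i * w for i, w in zip(inputs, weights))
    if inject_extra_term:
        # Placeholder for the unknown mechanism; modelled as noise here
        # only because we have no better model of it.
        activation += random.gauss(0.0, 0.05)
    return activation

print(neuron_output([1.0, 0.3], [0.5, -0.2]))                          # deterministic
print(neuron_output([1.0, 0.3], [0.5, -0.2], inject_extra_term=True))  # with the extra term
```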
This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be. (Penrose’s scientifically phrased quantum-mind hypothesis wasn’t immediately rejected on these grounds, so I suspect there’s something wrong with this reasoning. It was, however, falsified.)
Anyway, even if this were true, how would you know that?
It’s still hard to imagine a mechanism along these lines that resolves the issue of how sentience has a causal role in anything (and of how the data system can be made aware of it in order to generate data documenting its existence), but some mechanism has to do so if sentience is real.
If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it? (And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
“The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists.”
With conventional computers we can prove that there’s no causal role for sentience in them by running the program on a Chinese Room processor. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation, on conventional hardware, of a system with sentience in it would contain simulated sentience rather than real sentience, and it would have to simulate that extra something for even the simulated sentience to appear. If that extra something doesn’t exist, there is no sentience.
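A toy way to see the substrate point (hypothetical names, deliberately simplified): the same rule table produces identical output whether it is executed directly or stepped through one lookup at a time, the way the clerk in the Chinese Room would follow it, so nothing about the executor can add a causal role for sentience to the program’s behaviour.

```python
# The entire 'program': a rule table mapping inputs to outputs.
RULES = {"damage_signal": "display 'Ouch! That hurt!'"}

def run_on_cpu(inp):
    """Direct execution of the rule table."""
    return RULES[inp]

def run_in_chinese_room(inp):
    """A clerk mechanically scanning the rule book, one entry at a time."""
    for pattern, action in RULES.items():
        if pattern == inp:
            return action
    raise KeyError(inp)

# Same rules, same output, regardless of what (or who) executes them.
assert run_on_cpu("damage_signal") == run_in_chinese_room("damage_signal")
```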
“This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be.”
Every interaction is quantum, and when neural nets work by mechanisms that are too hard to untangle, there is room for some mechanism to be involved that we can’t yet observe. What we can actually model appears to tell us that sentience must be a fiction, but we believe that things like pain feel too real to be fake.
“Anyway, even if this were true, how would you know that?”
Unless someone comes up with a theoretical model that shows a way for sentience to have a real role, we aren’t going to get answers until we can see the full mechanism by which damage signals lead to the brain generating data that makes claims about an experience of pain. If, once we have that full mechanism, we see that the brain is merely mapping data to inputs by applying rules that generate fictions about feelings, then we’ll know that feelings are fake. If they aren’t fake though, we’ll see sentience in action and we’ll discover how it works (and thereby find out what we actually are).
“If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it?”
If classical physics doesn’t support a model that enables sentience to be real, we will either have to reject the idea of sentience or look for it elsewhere.
“(And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)”
If sentience is real, all the models are wrong, because none of them show sentience working in any causal way that enables it to drive the generation of data documenting its existence. All the models shout at us that there is no sentience in there playing any viable role and that it’s all wishful thinking, while our experience of feelings shouts at us that they are very real.
All I want to see is a model that illustrates the simplest role for sentience. If we have a sensor, a processor and a response, we can call the sensor a “pain” sensor and run a program that drives a motor to move the device away from whatever might be damaging it, and we could call this a pain response, but there’s no pain there; there’s just the assertion of an onlooker that pain is involved, because that person wants the system to be like him or her: “I feel pain in that situation, therefore that device must feel pain.” But no: there is no role for pain there. If we run a more intelligent program on the processor, we can put some data in memory which says “Ouch! That hurt!”, and whenever an input comes from the “pain” sensor, we can have the program make the device display “Ouch! That hurt!” on a screen. The onlooker can now say, “There you go! That’s proof that it felt pain!” Again though, there’s no pain involved: we can edit the data so that the device displays “Oh yes! Give me more of that!” whenever a signal comes from the “pain” sensor, and it then becomes obvious that this data tells us nothing about any real experience at all.
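Here is that device as a sketch (Python, with illustrative names; nothing here comes from a real robotics API). The “report” is just a string stored in memory, and editing it changes the claim without changing anything else, which is the whole point:

```python
RESPONSE_TEXT = "Ouch! That hurt!"
# Swap in the line below and the device 'reports' the opposite experience
# while behaving identically in every other respect:
# RESPONSE_TEXT = "Oh yes! Give me more of that!"

def move_away():
    # Stand-in for driving a motor away from the damaging stimulus.
    print("[motor] retreating from stimulus")

def on_pain_sensor(signal_strength):
    """Handle a signal from the so-called 'pain' sensor."""
    if signal_strength > 0.5:   # arbitrary trigger threshold
        move_away()             # the 'pain response'
        print(RESPONSE_TEXT)    # the 'report': stored text, not a feeling

on_pain_sensor(0.9)
```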
A still more intelligent program can understand the idea of damage and damage avoidance, so it can make sure the data that’s mapped to different inputs makes more sense, but the truthful data should say “I received data from a sensor that indicates likely damage” rather than “that hurt”. The latter claim asserts the existence of sentience, while the former doesn’t. If we ask the device whether it really felt pain, it should only say yes if there was actually pain there, and with a conventional processor we know there isn’t any. If we build such a device and keep triggering the sensor to make it generate the claim that it felt pain, we know it’s just making the claim up: we can’t actually make it suffer by torturing it; we’ll just cause it to go on repeating its fake claim.
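And the honest variant of the same sketch, reporting only what the system actually knows (again, illustrative names only):

```python
def move_away():
    print("[motor] retreating from stimulus")  # same stub as before

def on_damage_signal(signal_strength):
    """Report only what is actually known: that a signal arrived."""
    move_away()  # same damage-avoidance behaviour as before
    return ("I received data from a sensor that indicates likely damage "
            f"(signal strength {signal_strength:.2f}).")

def did_you_feel_pain():
    # With a conventional processor there is no pain to report,
    # so an honest device answers accordingly.
    return "No: I mapped a damage signal to a stored response; nothing was felt."

print(on_damage_signal(0.9))
print(did_you_feel_pain())
```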