Is there a scientific/mechanical model that would enable a machine to feel pain? Not react to pain as if it did feel pain, but to actually feel pain in the same sense as a human does? The answer is no: nothing in science or philosophy offers such a model even in theory, much less one achievable with current technology.
And that is only a small part of consciousness. Our abilities to understand and appreciate ‘meaning’, our vision, imagination, sense of free will... our general human experience of ourselves and our environment cannot be mathematically modeled or completely understood rationally. That makes some rationalists so uncomfortable that they deny the phenomenon of consciousness exists at all, insisting it’s just an illusion. What an amazing conclusion that is!
Not react to pain as if it did feel pain, but to actually feel pain in the same sense as a human does?
Do you know how to distinguish “actually feeling pain” from “acting as if” it feels pain?
If so, do tell.
If not, would you perhaps also claim that a machine which passes the Turing test is not “actually” conscious, but merely “acts as if” it is conscious?
Anti-reductionists are always quick to point at “qualia”, “subjective experience”, “consciousness” (or the subjective experience of pain, in this case) as examples of Great Big Unexplained Mysteries which have not been/can not be solved by science, but they can never quite explain what exactly the problem is, or what a solution would look like.
…but they can never quite explain what exactly the problem is, or what a solution would look like.
A solution would help dissolve our confusion about how the territory of our consciousness can be produced by the map that is our brain’s computation.
I feel I’ve made some small progress on elements of that front after connecting some separate ideas from other fields, like Tegmark IV, fractals and great attractors, and calculus. I hope to post some of these ideas later this month, or in February.
Do you know how to distinguish “actually feeling pain” from “acting as if” it feels pain?
Well, I suppose you’d do it the same way you’d distinguish “actually has a cat in a box” from “pretending to have a cat in a box” (without checking the box).
I do think there’s something weird going on with consciousness—why there is something that thinks it has the experience of having thoughts and experiences is as yet unexplained, and is tricky to talk about given the inability to directly access the subject matter—but I imagine it’s in principle explicable.
And saying we need to find a “mysterious” way of understanding it… well, there are all sorts of reasons why that’s not going to work.
Well, I suppose you’d do it the same way you’d distinguish “actually has a cat in a box” from “pretending to have a cat in a box” (without checking the box).
If there is no way to check the content of the box, ever, in any conceivable way, then there is no difference, period.
Sure. But that’s not true of cats / boxes, nor is it necessarily true of consciousness (based on the notion that consciousness is in principle explicable / reducible). The parallels being that we can’t check now, the person acts in such a way that the cat/consciousness is/isn’t a parsimonious explanation of their behavior, it might be difficult to check, you can fake it (to some degree), you can be wrong about it… and perhaps the cat might be a delusion.
Moreover, some people here claim to have values that encompass things that they cannot in principle interact with in any way (things external to their light cone, for example), so I’m not sure your assertion is unproblematic. If you’re going to step on my box, it matters to me whether there’s a cat in it, even if you can’t check that, and it might in fact matter to you as well. But facts tend to have ripples, so it seems likely that there is, in principle at least, a way to check the catbox.
The problem cannot be explained because of the limitations of language/logic/reason... the tools that we rely on to explain mechanical phenomena. Things that require equal signs.
The fact that this subject is not easily explainable is not a hit against our side, it is a hit against your side. It is the non-rational aspect of consciousness that makes it seemingly impossible to explain in the first place.
The reaction of reductionists and some rationalists (I argue that it is quite rational to conclude that this is indeed a mystery at the present time), that because we cannot explain what the sensation of ‘pain’ is it may not exist to begin with, is dubious at best.
“You can’t explain the precession of the perihelion of Mercury” is a hit against Newton’s theory of gravity.
“You can’t explain “zoink”, and I can’t tell you what “zoink” is, nor what an explanation of “zoink” would look like” is not a hit against anything.
Also, arguments are not soldiers, and talking about “hits” and “sides” is unwise.
There have, in history, been many occasions where something was not understood. When temperature was not understood, it was still possible to explain to someone what this ill-understood “temperature” was. Specifically, it is simple to make sure that your notion of “colour” or “temperature” is similar to my notion of “colour” or “temperature” even if I don’t understand what “colour” and “temperature” are.
I predict that there has never been a concept that
was not understood at some point in time
was “not easily explicable” in the sense that “the subjective experience of pain” is not easily explicable
later turned out to be well-defined and to “cut reality at its joints”
If you can come up with an example of such a concept, I will start taking arguments from vague not-easily-explicable concepts far more seriously.
On the other hand, there are at least some concepts that
were not understood at some point in time
were “not easily explicable” in the sense that “the subjective experience of pain” is not easily explicable
turned out to be completely bogus
namely, the concepts of “soul”, “god”, etc...
Sorry for the allegorical language if it offended you.
There is a difference between not finding a solution for a problem, and not even understanding what a solution may look like even in the abstract form.
It is also not a good sign when the problem gets to be more of a mystery the more science we discover.
The concern here is that we have an irrational view that rationalism is a universal tool. The fact that we have unsolved scientific and intellectual problems is not proof of that. The fact that there seem to be problems that by their very nature are unsolvable by reason is.
Sorry for the allegorical language if it offended you.
I am not offended
There is a difference between not finding a solution for a problem, and not even understanding what a solution may look like even in the abstract form.
Certainly. And further on that scale, there is “understanding so little of the problem that you’re not even sure there’s a problem in the first place”.
Progress on the P vs NP problem has been largely limited to determining what the solution doesn’t look like, and few if any people have any idea what it does look like, or if it (a solution) even exists (the question might be undecidable).
So, this scale goes:
Solved problems
Unsolved problems where we have a pretty good idea what the solution looks like
Unsolved problems where we have no idea what the solution looks like: subjective experience is not here
Problems we suspect exist, but can’t even define properly in the first place: subjective experience is here!
It is also not a good sign when the problem gets to be more of a mystery the more science we discover.
Consciousness and the subjective experience of pain have not gotten more mysterious the more science we discover. At worst, we understand exactly as much now as we did when we started, i.e. nothing (and neurologists would certainly argue we do understand more now).
The concern here is that we have an irrational view that rationalism is a universal tool.
It is. Have a look at Solomonoff induction.
It’s not proof, but it is evidence.
What makes you think that these problems are “in their very nature unsolvable by reason”? Is it because you think they are inherently mysterious?
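For readers who don’t know the reference, here is a brief gloss (mine, not the commenter’s) of what Solomonoff induction is. It assigns a prior probability to any observation string x by summing over every program p that makes a fixed universal prefix machine U print something beginning with x, and it predicts the next symbol b by the conditional ratio:

```latex
M(x) = \sum_{p \,:\, U(p) = x\ast} 2^{-\ell(p)}, \qquad
M(b \mid x) = \frac{M(xb)}{M(x)}
```

Here \ell(p) is the length of program p in bits. The prior dominates every computable hypothesis, which is the sense in which it is “universal”; it is also uncomputable, so it is an idealized standard of reasoning rather than a tool anyone can actually run.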
I will make a point about the progress of science in this subject and then use that to step towards a more general argument for the innate mystery of consciousness with regard to reason.
Ever since the time of the Enlightenment there has been a real movement in the West to view the world as purely mechanical/physical, so that reason could be accepted as a universal tool. That meant the elimination from society of not just God but also the soul and other things.
Ironically, it was a particular invention of science and reason that made rationalists realize the problem of eliminating all non-mechanical/physical realities from a human being: the computer. With the development of the computer it became painfully obvious that human beings were fundamentally different from any designed piece of technology. Although one could theoretically design and program a computer for all kinds of amazing functions, there is no rational model as to how to make that machine ‘conscious’. It was through computers that mankind realized in the most clear and blunt way the mystery of consciousness.
So to reassert my point... from the development of computers in the last century through their advancement into this one, the more we progress the more we understand that consciousness does not seem to be a matter of just complexity and sophistication.
Secondly, our faculty of reason itself does not even work in the same way a computer works. A computer’s mechanical structure “signals” a conclusion. The machine moves in a certain way, albeit at the tiniest levels, to signal that something is right or wrong. For us, it is understanding that makes us realize that something is right or wrong; it is a feeling. Even at the most fundamental level of using reason itself, the mystery of consciousness is engaged and operating in a way we do not understand.
With the development of the computer it became painfully obvious that human beings were fundamentally different from any designed piece of technology.
Evidence-based Citation needed. (From a neurologist or computer scientist. Nothing about how our own massively parallel architecture differs from the von Neumann architecture.)
The more we understand of the workings of the brain, the more we can mimic it on a computer. (“Ha, but these are simple tasks! Not difficult tasks like consciousness.” How convenient of you to have chosen a metric you can’t even define to judge progress towards full understanding of the human brain)
there is no rational model as to how to make that machine ‘conscious’.
And there was no such model before the development of computers either.
Your unstated assumption seems to be that it is rational to expect a quick development of a “model of consciousness” (whatever that is) after the invention of the computer. If that were so, you might have a point, but, again: evidence needed.
Secondly, our faculty of reason itself does not even work in the same way a computer works.
Evidence-based Citation needed.
Our brain runs on physics. Although there may be various as-yet-unknown algorithms running in our brain, there is no reason to assume anything non-computational is going on.
Will you change your mind if/when whole brain emulation becomes feasible?
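To make “the more we understand of the workings of the brain, the more we can mimic it on a computer” concrete at the very lowest level, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard textbook abstraction of a spiking cell. This is my illustration, not anything claimed in the thread, and the parameter values are arbitrary round numbers rather than measurements; whole brain emulation would, very roughly, be this kind of simulation scaled up enormously and constrained by real anatomy.

```python
# Minimal leaky integrate-and-fire neuron (illustrative sketch only).
# All parameters are arbitrary textbook-style values, not measurements.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + R*I) / tau and
    record a spike time whenever V crosses the firing threshold."""
    v = v_rest
    spike_times = []
    for step, current in enumerate(input_current):
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_threshold:
            spike_times.append(step * dt)  # spike time in ms
            v = v_reset                    # reset after the spike
    return spike_times

# A constant drive of 2.0 (arbitrary units) for 100 ms yields a regular spike train.
print(simulate_lif([2.0] * 1000))
```

Nothing about such a model settles the dispute over subjective experience; it only illustrates the reductionist claim being made here, that the brain’s physical dynamics are in principle computable and increasingly simulable.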
Our brain is physical, no doubt, but as you can imagine I am making a claim that mind (consciousness, spirit, whatever you want to call it) is not the same as brain. There is a connection between the two, but my argument, using rational judgment, is that consciousness does not seem to be physical because there is no way to understand it rationally. Your point against me is what I use against you. You say I am mistaken because I cannot even define what consciousness is; I say that is precisely the point! The only way you can reply is to hold out for the view that consciousness may not even exist, so it may not be a problem in the first place. And that is a whole other issue, for if consciousness is only an illusion, that breaks down the entire human experience of reality.
Furthermore, there are other reasons why the idea of a purely physical human being without any mysterious non-physical reality is extremely problematic:
It would mean no free will. To deny free will is to deny rationality to begin with. How can a conclusion made by reason in turn negate reason?
It would deny any real morality. Fundamentally, a human being would be the same as a piece of wood, except more complex.
It is the Western insistence that reason be a universal tool (and therefore that reality be universally physical) that has led to the complete denial of dualism. But if you recognize that reason itself is pointing towards its own limits, dualism is not that bad of a conclusion.
It is essentially certain that it is possible in principle to construct out of matter a thing which can feel pain, have an experience of self, etc., to the extent that these are meaningful concepts.
The proof is very simple.
Up to this point in human history no rational or scientific model has been presented that would explain how matter could be put together to feel pain. Or feel anything for that matter. Whether it is possible or impossible to do is another conversation.
Sure, no one does or has ever really had a clue where consciousness comes from. What’s your point? The way you’re saying “no rational or scientific model” rather than “no model whatsoever” implies you think these are poor tools—do you have some alternative in mind?
What we know is that reason is extremely useful when applied to mechanical/material subjects. We should continue to use it in that way.
We know that it has extreme difficulty in explaining and analyzing some key issues, including consciousness and all of its manifestations: pain/pleasure, emotions, imagination, and meaning in general, as well as others. Once again, this seems to be the case because consciousness itself is extremely difficult to put into mechanical/material terms. Therefore reason has a problem with it.
If a tool is proficient in explaining some things but not other things, is it ‘rational’ to consider it a universal tool? In this way I am using reason itself to conclude that it is not a universal tool.
So your question is what then should we use to understand consciousness if not reason?
Just as reason seems to do well in understanding things of a certain nature (mechanical/physical), we can look at consciousness and conclude from its mysteries what kind of tool is needed to give us insight into it.
(Notice that I am still using reason throughout this process, it never really leaves our endeavors. We are just being honest in that we recognize something more is there that is beyond its limits.)
Consciousness does not seem to be mechanical or physical in nature because we are not able to even model in theory an explanation for it. Therefore the tool to be used to understand it should have a much more mysterious/abstract nature. Once we make that conclusion it is a whole other topic as to what that other ‘tool’ might be. Whatever it is, it will probably be more elusive and less universally apparent throughout the human population than reason is.