I don’t know why. I have an AMD Ryzen 5 CPU and my earlier premise should make sense if you know what “single-threaded” means.
Why do you think this unknown particle is not compatible with rocks and CPUs?
I thought it was obvious, but okay… let X be a nontrivial system or pattern with some specific mathematical properties. I can’t conceive of a rule by which any arbitrary physical representation of X could be detected, let alone interacted with. If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.
Does it pay rent in anticipation?
It pays rent in sensation. I have a first-person subjective experience and I am unable to believe that it is only an abstraction. (Otherwise I probably would have turned atheist much sooner.)
I think of consciousness as a process (software) run on our brains (wetware), with the theoretical potential to be run on other hardware. I thought you understood my position. Asking me to pinpoint the hardware component which would contain suffering tells me you don’t.
To me, saying the CPU (or the GPU) is conscious sounds like saying the CPU is Linux—this is a type error. A PC can be running Linux. A PC cannot actually be Linux, even if “running” is often omitted.
But if one doesn’t know “running” is omitted, one could ask where the Linux-ness comes from, if neither the CPU nor the RAM is itself Linux.
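If it helps, here is a minimal sketch of the type error in code (the names `PC` and `OS` are mine, invented purely for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OS:
    name: str

@dataclass
class PC:
    # A PC *runs* an OS; it is not itself one.
    running: Optional[OS] = None

pc = PC()
pc.running = OS("linux")   # "the PC is running Linux" -- fine

print(isinstance(pc, OS))  # False: neither the PC nor any part of it *is* Linux
print(pc.running.name)     # the Linux-ness lives in what the machine is running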
If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.
But it does know to interact with mammals and not with trees and diamonds?
… Argh! You know what, screw it. This is like arguing how many angels can sit on top of a needle. Occam’s razor says not to.
Without falsifiable predictions, we have no way to differentiate a true ad-hoc explanation from a false one. Also, a model with no predictive power is useless. Its only “benefit” would be to provide peace of mind as a curiosity stopper. (See https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences.)
I have a first-person subjective experience and I am unable to believe that it is only an abstraction.
I honestly don’t see the disconnect. I don’t think the existence of a conscious AGI would invalidate my subjective experiences in the slightest. The explanation is always mundane (“only an abstraction”?), but that doesn’t detract from the beauty of the phenomenon. (See https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real).
(Otherwise I probably would have turned atheist much sooner.)
I believe you are right. Many people cite subjective personal experiences as their reason for being religious. This does make me doubt our ability to draw correct conclusions from such experiences.
So, I think we’ve cleared up the distinction between illusionism and non-illusionism (not sure if the latter has its own name), yay for that. But note that Linux is a noun and “conscious” is an adjective—another type error—so your analogy doesn’t communicate clearly.
But it does know to interact with mammals and not with trees and diamonds?
I can’t be sure of that. AFAIK, you are correct that we have no falsifiable predictions as of yet—it’s called the “hard problem” for a reason. But illusionism has its own problems. The most obvious problem—that there is no “objective” subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a “boundary” or “experience”, but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me. I think you’re saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you’re fine with it; I’m saying I don’t get it.
But perhaps illusionism’s consequences are a problem? In particular, in a future world filled with AGIs, I don’t see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering “more” than a human, or than another AGI with different code? (I’m not asking for an answer, just asserting that a problem exists.)
But note that Linux is a noun and “conscious” is an adjective—another type error—so your analogy doesn’t communicate clearly.
Linux is also an adjective—Linux game/shell/word processor.
Still, let me rephrase then—I don’t need a wet CPU to simulate water.
Why would I need a conscious CPU to simulate consciousness?
AFAIK, you are correct that we have no falsifiable predictions as of yet.
Do you expect this to change? Chalmers doesn’t. In fact, expecting to have falsifiable predictions is itself a falsifiable prediction. So you should drop the “yet”. Only then can you see your position for the null hypothesis it is.
The most obvious problem—that there is no “objective” subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a “boundary” or “experience”, but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me.
There is not a single concept that could not be redefined. If this is a problem, it is not unique to consciousness.
“A process currently running on human brains”, although far from being a complete definition, already gives us some boundaries.
I think you’re saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you’re fine with it; I’m saying I don’t get it.
Suffering is a state of mind. The physical location is the brain.
By stimulating different parts of the brain, we can cause suffering (and even happiness).
Another way to think about it is this—where does visual recognition happen? How about arithmetic? Both required a biological brain for a long, long time.
And for the hypothetical scenario—let’s say I am playing CS and I throw a grenade—where does it explode?
But perhaps illusionism’s consequences are a problem? In particular, in a future world filled with AGIs, I don’t see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering “more” than a human, or than another AGI with different code? (I’m not asking for an answer, just asserting that a problem exists.)
That’s only the central problem of all of ethics, is it not? Objective morality? How could you tell if a human is suffering more than another human?
I don’t see how qualia helps you with that one. It would be pretty bold to exclude AGIs from your moral considerations before excluding trees (and qualia has not helped you exclude trees!).
Edit: I now realize your position has little to do with Chalmers. Since you are postulating a qualia particle, which has causal effects, you are a substance dualist. But why rob your position of its falsifiable prediction? Namely—before the question of consciousness is solved, the qualia particle will be found.
Or am I misrepresenting you again?
“Car” isn’t an adjective just because there’s a “car factory”; consider: *“the factory is tall, car, and red”.
Do you expect this to change?
Yes, but I expect it to take a long time because it’s so hard to inspect living human brains non-destructively. But various people theorize about the early universe all the time despite our inability to see beyond the surface of last scattering… ideas about consciousness should at least be more testable than ideas about how the universe began. Hard problems often suffer delays; my favorite example is the delay between the Michelson–Morley experiment’s negative result and the explanation of that negative result (Einstein’s Special Relativity). Here, even knowing with certainty that something major was missing from physics, it still took 18 years to find an explanation (though I see here an ad-hoc explanation was given by George FitzGerald in 1889 which pointed in the right direction). Today we also have a long-standing paradox where quantum physics doesn’t fit together with relativity, and dark matter and dark energy remain mysterious… just knowing there’s a problem doesn’t always quickly lead to a solution. So, while I directly sense a conflict between my experience and purely reductive consciousness, that doesn’t mean I expect an easy solution. Assuming illusionism, I wouldn’t expect a full explanation of that to be found anytime soon either.
postulating a qualia particle
It was just postulation. I wouldn’t rule out panpsychism.
Chalmers seems not to believe in a consciousness without physical effects—see his 80,000 Hours interview. So Yudkowsky’s description of Chalmers’ beliefs seems to be either flat-out wrong, or just outdated.
Namely—before the question of consciousness is solved, the qualia particle will be found.
I do hope we solve this before letting AGIs take over the world, since, if I’m right, they won’t be “truly” conscious unless we can replicate whatever is going on in humans. Whether EAs should care about insect welfare, or even chicken welfare, also hinges on the answer to this question.
Thank you for this discussion.
I was wrong about grammar and the views of Chalmers, which is worse. Since I couldn’t be bothered to read him myself, I shouldn’t have parroted the interpretations of someone else.
I now have a better understanding of your position, which is, in fact, falsifiable.
We do agree on the importance of the question of consciousness. And even if we expect the solution to have a different shape, we both expect it to be embedded in physics (old or new).
I hope I’ve somewhat clarified my own views. But if not, I don’t expect to do better in future comments, so I will bow out.
Again, thank you for the discussion.
Yeah, this was a good discussion, though unfortunately I didn’t understand your position beyond a simple level like “it’s all quarks”.
On the question of “where does a virtual grenade explode”, to me this question just highlights the problem. I see a grenade explosion or a “death” as another bit pattern changing in the computer, which, from the computer’s perspective, is of no more significance than the color of the screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act like it cares about “beauty” and “love” and “being in pain”, but it seems to me that nothing can really matter to the computer because it can’t really feel anything. I once wrote software which actually had a concept that I called “pain”. So there were “pain” variables, and of course I am confident this caused no meaningful pain in the computer.
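To give the flavor, here is a minimal sketch of the kind of thing I mean (illustrative names, not the original code):

```python
class Agent:
    def __init__(self) -> None:
        self.pain = 0.0  # just a float; nothing in here hurts

    def sense(self, damage: float) -> None:
        # "Pain" accumulates with damage and decays over time.
        self.pain = 0.9 * self.pain + damage

    def act(self) -> str:
        # Behavior tracks the "pain" variable exactly as if it mattered...
        return "flee" if self.pain > 1.0 else "explore"

agent = Agent()
agent.sense(damage=2.0)
print(agent.act())  # "flee" -- yet I am confident nothing was felt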
I intuit that at least one part* of human brains is different, and if I am wrong it seems that I must be wrong either in the direction of “nothing really matters: suffering is just an illusion” or, less likely, “pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter”, though I have no idea how this could be true.
* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word “elephant” comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain’s computations: a holistic sense of elephant-ness (and I feel as though I “understand” this output—even though I don’t understand what “understanding” is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of “consciousness” that feels high-level outputs from these machines somehow, and has some kind of influence over how the machinery is subsequently used. Having seen the movie “Being John Malkovich”, and having recently heard of the “thousand brains theory”, I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about) so that many particles might be functionally indistinguishable from one “huge” particle.
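In programming terms, my situation resembles calling an opaque function, as in this minimal sketch (the function name is mine, for illustration):

```python
def recognize(image: bytes) -> str:
    # ...layers of processing the caller never observes...
    return "elephant"

# All the caller "feels" is the finished output, never the computation:
label = recognize(b"gray tube-nosed animal in an advertisement")
print(label)  # "elephant" arrives fully formed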
It’s not true that particles behave identically under identical circumstances—that would be determinism.
If it were true, it wouldn’t only apply to consciousness, or mean that “consciousness is One” in some sense that doesn’t apply to everything else.
There’s a lot of information in N particles. If you want to conserve it all, your huge particle has to exist in 3N-dimensional space. But a freely moving particle in 3N-dimensional space would behave locally in that space, not in ordinary 3-dimensional space, so you also need constraints to recover 3D locality. Which is basically the argument for space really being 3-dimensional.
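Spelling out the dimension count (a minimal sketch of the standard configuration-space observation):

```latex
% N particles, each with a position in \mathbb{R}^3, are jointly a point
% of configuration space; a single "huge" particle carrying the same
% information must therefore live in 3N dimensions.
\[
  (x_1, y_1, z_1, \dots, x_N, y_N, z_N) \in (\mathbb{R}^3)^N \cong \mathbb{R}^{3N}
\]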