A few general schemas:
“True for”, as in, “That may be true for you, but not for me. We each choose our own truths.”
“I feel that X.” Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying “I feel that X” in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying “No you don’t”, and watch the explosion. “How dare you try to tell me what I’m feeling!”
Write obscurely.
Never explicitly state your beliefs. Hint at them in terms that the faithful will pick up and applaud, but which give nothing for the enemy to attack. Attack the enemy by stating their beliefs in terms that the faithful will boo, while giving the enemy nothing to dispute.
Ignore the entire machinery of rationality. Treat all human interaction as nothing more than social grooming or status games in a tribe of apes.
Argument by innuendo. Politicians love this. Imply, then deny. “I never said that.”
Write obscurely. Never explicitly state your beliefs. Ignore the entire machinery of rationality.
All good stuff. Perhaps dark side epistemology is mainly about behaviors, not beliefs? A list of behaviors I noticed while speaking to climate science deniers:
First and foremost, they virtually never admit that they got anything wrong, not even little things. (If you spot someone admitting they were wrong about something, congrats! You may have stumbled upon a real skeptic!)
They don’t construct a map of the enemy’s territory: they have a poor mental model of how the climate system works. After all, they are taught “models can’t be trusted,” even though all science is built on models of some sort. Instead they learn a list of stories, ideas and myths, and they debate mainly by repeating items on their list.
They often ignore your most rock-solid arguments, as if you’d said nothing at all, and they attack whatever they perceive to be your weakest point.
They think they are “scientific”. I was astonished at one denier’s ability to sound sciencey… but then I saw how GPT-2 could say plausible things without really understanding what it was saying, and I saw Eliezer talking about the “literary genre” of science, so I guess that’s the answer—certain people somehow pick up and mimic the literary genre of science without understanding or caring about its underlying substance.
They lack self-awareness. You’ll never ever hear them say “Okay, I know this might sound crazy, but those thousands of climate scientists are all wrong. I can’t blame you for agreeing with a supermajority, but if you’ll just hear me out, I will explain how I, a non-scientist, can be certain the contrarians are right. Just let me know if I’ve made some mistake in my reasoning here...” (which reminds me of an interesting idea I had after reading about philosophical zombies… is it possible that people who seem to lack self-awareness literally lack self-awareness? That they are zombies?)
So, they are not introspective: they’re not thinking about how they think. So they haven’t thought about the Dunning-Kruger effect (meme!), and confirmation bias is something that happens to other people. “Motivated reasoning? Not me! So what if I do? Everybody does it…”
It’s as if schoolyard irony is an important defense mechanism for them. They take accusations often used against them, and toss them at detractors. They’ll say you’re in a “cult” or “religion” for believing humans cause warming, that you lie, fudge data, are “closed-minded”, etc. One guy called me a “denier” (in denial that it’s all a hoax) even though I had not called him a denier. In general you can expect attacks on your character even if you were careful not to attack them, yet these attacks will seem like plausible descriptions of the attacker. Similarly, they may dismiss talk of the scientific literature or consensus as “appeals to authority”, apparently oblivious to the authorities (Rush Limbaugh, Roy Spencer, and many others) upon which their own opinion is based. Last but not least, they’ll complain of “politicizing the science” while politicizing the science.
Lack of knowledge seems to satisfy them as a knowledge substitute — e.g. “I’ve not seen evidence for X, so I can safely assume X is false” or “I’ve not seen evidence against X, so I can safely assume X is true.” Missing knowledge somehow provides not merely hope, but great confidence that the experts are wrong.
When you have reached the point where you’re considering whether your opponents are literally zombies without any subjective consciousness… could it be time to consider whether your own thinking has gone wrong somewhere?
Lacking self-awareness (in the sense described above: habitually declining to engage in metacognitive thinking) is different from lacking consciousness/qualia. I am not claiming that they lack the latter. But I do wonder if there have been any investigations into whether qualia are universal among humans, and how one would go about detecting qualia (it’s vaguely like a Turing test; a human without qualia would likely not intentionally deceive the tester the way a computer might during a Turing test, but would of course be unaware that there is any difference between his/her experience and anyone else’s, and can be expected to deny any difference exists).
I don’t think the proponents of qualia as metaphysical would agree that such a test is possible in theory—otherwise you could put someone in an MRI scanner, show him a red square, monitor for activity in his visual cortex, and wait for him to confirm he sees “the redness”. This should be enough to conclude that some “redness”-related experience has occurred in the subject’s brain (since qualia are supposed to be individual, differences in experience are expected—it doesn’t have to be exactly the same). And yet the question of philosophical zombies remains (at least according to some philosophers).
If I take a digital picture, I can convert the file to BMP format and extract the “red” bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem. The idea that everyone has qualia is inductive: I have qualia (I used to call it my “soul”), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it’s doomed to be a “maybe”. If someone were to invent a test for qualia, perhaps we couldn’t even tell if it works properly without solving the hard problem of consciousness.
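For instance, here is a minimal sketch of how mechanical that extraction is (assuming Python with the Pillow imaging library and a hypothetical file “photo.bmp”):

```python
# A minimal sketch, assuming Python with the Pillow library installed
# and a hypothetical image file "photo.bmp".
from PIL import Image

img = Image.open("photo.bmp").convert("RGB")
red, green, blue = img.split()   # the "red" bits, mechanically separated
print(red.getpixel((0, 0)))      # e.g. 203 -- just a number
```

The extraction is pure arithmetic on bytes; nothing about it suggests the device experiences redness.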
To avoid semantic confusion, here is the Wikipedia definition of qualia: “In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as individual instances of subjective, conscious experience.” https://en.m.wikipedia.org/wiki/Qualia
If I take a digital picture, I can convert the file to BMP format and extract the “red” bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem.
You are skipping the part where we receive confirmation from the patient that he sees the redness. This, combined with the fMRI, should be enough to prove the colour red has been experienced (i.e. processed) by the patient’s brain.
Now one question remains—was this a conscious experience? (Thank you for making me clarify this, I missed it in my previous comment!)
I propose that any meaningful philosophical definition of consciousness related to humans should cover the medical state of consciousness (i.e. the patient follows a light, knows the day of the week, etc.) If it doesn’t, I would rather taboo “consciousness” and discuss “the mental process of modeling the environment” instead.
Whatever the definition of consciousness, as long as it relates to the function of a healthy human brain, it entails qualia.
However, if the definition of consciousness doesn’t include what’s occurring in the human brain, why bother with it?
The idea that everyone has qualia is inductive: I have qualia (I used to call it my “soul”), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it’s doomed to be a “maybe”.
I’ve heard people speaking of a soul before—it did not convince me they (or I) have one. I would happily grant them consciousness instead.
If someone were to invent a test for qualia, perhaps we couldn’t even tell if it works properly without solving the hard problem of consciousness.
Even without solving the hard problem of consciousness, as long as we agree that consciousness is a property the human mind has, the test can be administered by a paramedic with a flashlight.
We will need the solution when we try to answer if our phone/dog/A.I. is conscious, though.
(I recently worked out a rudimentary solution (most probably wrong), which relies heavily on Eliezer’s writings on the question of free will later in the Sequences. I am reluctant to share it here, since it would spoil Eliezer’s solution and he advises people to try working it out for themselves first. I could PM or ROT13 in case of interest.)
If someone were to invent a test for qualia, perhaps we couldn’t even tell if it works properly without solving the hard problem of consciousness.
Even without solving the hard problem of consciousness, as long as we agree that consciousness is a property the human mind has, the test can be administered by a paramedic with a flashlight.
Qualiaphiles don’t think qualia are something other than a property the mind has, they think they are not open to any obvious third-party inspection, like shining a flashlight.
If you define consciousness as the thing EMTs can check with a flashlight, all you have done is left qualia out of the definition: you haven’t solved any problem of qualia.
Yes. Once I define qualia as “conscious experience”, I necessarily have to leave it out of the definition of “consciousness” (whatever that may be).
My point is that only the question of consciousness remains. And consciousness is worth talking about only if human brains exhibit it.
I am not trying to solve the question of qualia, I am trying to dissolve it as improper.
P.S. Do you mind tabooing “qualia” in any further discussion? This way I can be sure we are talking about the same thing.
Again, as a non-illusionist, I disagree that physiological consciousness necessarily implies qualia (or that an AGI necessarily has qualia). It seems merely to be a reasonable assumption (in the human case only).
Ok. I am still unsure of your position.
Do you think other people have experiences, but we cannot say if those are conscious experiences?
Or are you of the opinion we cannot say anyone has any kind of experiences?
Could you please taboo “qualia”, so I know we are not talking about different things entirely?
Well, the phrase “something-it-is-like to be a thing” is sometimes used as a stand-in for qualia. What I am talking about when I use that word is “the element of experience which, according to the known laws of physics, does not exist”. There is only one level of airplane, and it’s quarks. It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind. So in the standard reductionist model, there is no meaningful difference between minds and airplanes; a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything. The sun is constantly exploding while being crushed, but it is not in pain. A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind’s outputs. Many computer programs (intelligent or not) could be described in similar terms.
Suppose someone invents a shockingly human-like AGI and compiles it to run single-threaded. I run a copy on the same PC I’m using now, inside a GPU-accelerated VR simulation (maybe it runs extremely slowly, at 1⁄500 real time, but we can start it from a saved teenager-level model and speak to it immediately via a terminal in the VR). Some would claim this AGI is “phenomenally conscious”; I claim it is not, since the hardware can’t “know” it’s running an AGI any more than it “knows” it is running a text editor inside a web browser on lesswrong.com. It’s just fetching and executing a sequence of instructions like “mov”, “add”, “push”, “cmp”, “bnz”, just as it always has (and it doesn’t know it’s doing that, either). I claim that, associated with our minds, there is something additional, aside from the quarks, which can feel things or be aware of feelings. This something is not an abstraction (representing a collection of quarks which could be interpreted by another mind as a state that modulates the output of a neural network), but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow. I expect this primitive will, like everything else in the universe, follow computable rules, so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks. (By the way, I also assume that this primitive provides something useful to its host; otherwise animals would not have evolved an attachment to it.)
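To make the hardware-level picture concrete, here is a toy fetch-execute loop (a sketch in Python; the three-field instruction format and register names are made up, not any real CPU’s instruction set):

```python
# A toy fetch-execute loop: a sketch of the hardware-level view,
# not any real CPU's instruction set. The loop just applies one rule
# after another; nothing in it "knows" what program it is running.
def run(program, regs):
    pc = 0                                       # program counter
    while pc < len(program):
        op, dst, src = program[pc]
        val = regs[src] if src in regs else src  # register or immediate
        if op == "mov":
            regs[dst] = val
        elif op == "add":
            regs[dst] += val
        pc += 1
    return regs

# The same loop would run a text editor or a "mind"; it cannot tell the difference.
print(run([("mov", "a", 2), ("mov", "b", 3), ("add", "a", "b")], {}))  # {'a': 5, 'b': 3}
```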
Ok, I could decipher this as a vague stand-in for experience. I would much prefer something like “the ability to process information about the environment and link it to past memories”, but to each their own.
“the element of experience which, according to the known laws of physics, does not exist”.
Uhm… Are you banking on a revolution in the field of physics?
And later you even show exactly how reductionism not only permits, but also explains our experiences.
So in the standard reductionist model, there is no meaningful difference between minds and airplanes;
Yes, there is. One has states of mind and the other doesn’t. How meaningful this difference is depends on your position on nihilism.
a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything.
Wrong! The end of your paragraph shows why this is a wrong description of reductionism.
A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind’s outputs.
Yes. Exactly. Pleasure and suffering are just words, but the states of mind they represent are very much real.
It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind.
Correct—particles lack the computational power to know anything. Minds, on the other hand, can know they are made of particles. This is not a problem for reductionism. Actually, explaining how simple particles’ interactions lead to observed phenomena on the macro level is the entire point.
Some would claim this AGI is “phenomenally conscious”; I claim it is not, since the hardware can’t “know” it’s running an AGI any more than it “knows” it is running a text editor inside a web browser on lesswrong.com.
Yes, no one would call your GPU conscious.
The AGI is the software, though. The AGI could entertain the hypothesis that it lives in a simulation, even before discovering any hard evidence. Much like we do. Depending on its code, it could have states of mind similar to a human’s, and then I would not hesitate to call it conscious.
How willing would you be to put such an AGI in the state of mind described by reductionists as “pain”, even if it is simply a program run on hardware?
but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow.
If such a primitive does interact with quarks, we will find it.
I expect this primitive will, like everything else in the universe, follow computable rules
And then we have yet another particle. How is that different from reductionism?
so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks.
Ah, it’s a magical particle. It is smaller than an electron, yet it interacts with the quarks in the brain, but not those in the carbon of a diamond.
Or is it actually big, remote and intelligent on its own (unlike electrons)? So intelligent it knows exactly what to interact with, and exactly when, so as to remain undetected?
If you are not postulating a god, you are at the very least postulating a soul under a new name.
See, once you step outside the boundaries of mundane physics, you get very close to theology very fast.
I wasn’t talking about the GPU. Using the word “yes” to disagree with me is off-putting.
How is that different from reductionism?
I never said I rejected reductionism. I reject illusionism.
Ah, it’s a magical particle. It is smaller than an electron
Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design) - with the term “human-like” also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.
So intelligent it knows exactly what to interact with
I do not think it is intelligent, though it may augment intelligence somehow.
How willing would you be to put such an AGI in the state of mind described by reductionists as “pain”
I think it’s fair to give illusionism a tiny probability of truth, which could make me hesitant (especially given its convincing screams), but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.
By the way, where will the suffering be located? Is it in the decode unit? The scheduler? the ALU? The FPU? The BTB? The instruction L1 cache? The data L1 cache? Does the suffering extend to the L2 cache? the L3? out to the chipset and the memory sticks? Is this a question that can be answered at all, and if so, how could one go about finding the answer?
Using the word “yes” to disagree with me is off-putting.
Noted. Thank you for pointing this out.
I wasn’t talking about the GPU.
Good to have that clarified.
… but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.
Huh? I am now confused.
By the way, where will the suffering be located? Is it in the decode unit?...
Pain signals are processed by the brain and suffering happens in the mind.
So, theoretically, the suffering would be happening in the mind running on top of the simulated cortex, inside the matrix. All the hardware would be necessary to run the simulation.
The hardware would not be experiencing the simulation. Just as individual electrons are not seeing red.
I never said I rejected reductionism.
I misunderstood then—you do seem unhappy with the standard reductionist model’s position on emotions and experiences as states of mind.
I reject illusionism.
What do you mean by “illusionism”? Is it only the belief that AGI or a mind upload could be conscious? Or is there more to it?
Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design) - with the term “human-like” also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.
And how do you know that? Why do you think this unknown particle is not compatible with rocks and CPUs? Is it because you get to define its behaviour precisely as you need to answer a philosophical question a certain way?
What evidence would it take to falsify your belief in this primitive particle? What predictions does it allow you to make? Does it pay rent in anticipation?
I don’t know why you’re confused. I have an AMD Ryzen 5 CPU, and my earlier premise should make sense if you know what “single-threaded” means.
Why do you think this unknown particle is not compatible with rocks and CPUs?
I thought it was obvious, but okay… let X be a nontrivial system or pattern with some specific mathematical properties. I can’t conceive of a rule by which any arbitrary physical representation of X could be detected, let alone interacted with. If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.
Does it pay rent in anticipation?
It pays rent in sensation. I have a first-person subjective experience and I am unable to believe that it is only an abstraction. (Otherwise I probably would have turned atheist much sooner.)
I think of consciousness as a process (software) run on our brains (wetware), with the theoretical potential to be run on other hardware. I thought you understood my position. Asking me to pinpoint the hardware component which would contain suffering tells me you don’t.
To me, saying the CPU (or the GPU) is conscious sounds like saying the CPU is Linux—this is a type error. A PC can be running Linux. A PC cannot actually be Linux, even if “running” is often omitted.
But if one doesn’t know “running” is omitted, one could ask where the Linux-ness comes from, if neither the CPU nor the RAM is itself Linux.
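A tiny sketch of that type distinction (hypothetical Python names, nothing more):

```python
# A sketch of the type distinction, with hypothetical names.
class Machine:
    def __init__(self):
        self.state = {}

    def run(self, program):    # a machine RUNS a program...
        program(self.state)

def linux(state):              # ...but the machine never IS the program
    state["booted"] = True

pc = Machine()
pc.run(linux)                  # fine: "the PC is running Linux"
print(pc is linux)             # False: "the PC is Linux" is a type error
```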
If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.
But it does know to interact with mammals and not with trees and diamonds?
… Argh! You know what, screw it. This is like arguing how many angels can sit on the point of a needle. Occam’s razor says not to.
Without falsifiable predictions, we have no way to differentiate a true ad-hoc explanation from a false one. Also, a model with no predictive power is useless. Its only “benefit” would be to provide peace of mind as a curiosity stopper. (See https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences.)
I have a first-person subjective experience and I am unable to believe that it is only an abstraction.
I honestly don’t see the disconnect. I don’t think the existence of a conscious AGI would invalidate my subjective experiences in the slightest. The explanation is always mundane (“only an abstraction” ?), that doesn’t detract from the beauty of the phenomenon. (See https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real).
(Otherwise I probably would have turned atheist much sooner.)
I believe you are right. Many people cite subjective personal experiences as their reason for being religious. This does make me doubt our ability to draw correct conclusions based on such.
So, I think we’ve cleared up the distinction between illusionism and non-illusionism (not sure if the latter has its own name), yay for that. But note that Linux is a noun and “conscious” is an adjective—another type error—so your analogy doesn’t communicate clearly.
But it does know to interact with mammals and not with trees and diamonds?
I can’t be sure of that. AFAIK, you are correct that we have no falsifiable predictions as of yet—it’s called the “hard problem” for a reason. But illusionism has its own problems. The most obvious problem—that there is no “objective” subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a “boundary” or “experience”, but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me. I think you’re saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you’re fine with it; I’m saying I don’t get it.
But perhaps illusionism’s consequences are a problem? In particular, in a future world filled with AGIs, I don’t see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering “more” than a human, or than another AGI with different code? (I’m not asking for an answer, just asserting that a problem exists.)
But note that Linux is a noun and “conscious” is an adjective—another type error—so your analogy doesn’t communicate clearly.
Linux is also an adjective—Linux game/shell/word processor.
Still, let me rephrase then—I don’t need a wet CPU to simulate water.
Why would I need a conscious CPU to simulate consciousness?
AFAIK, you are correct that we have no falsifiable predictions as of yet.
Do you expect this to change? Chalmers doesn’t. In fact, expecting to have falsifiable predictions is itself a falsifiable prediction. So you should drop the “yet”. Only then can you see your position for the null hypothesis it is.
The most obvious problem—that there is no “objective” subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a “boundary” or “experience”, but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me.
There is not a single concept that could not be redefined. If this is a problem, it is not unique to consciousness.
“A process currently running on human brains”, although far from being a complete definition, already gives us some boundaries.
I think you’re saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you’re fine with it; I’m saying I don’t get it.
Suffering is a state of mind. The physical location is the brain.
By stimulating different parts of the brain, we can cause suffering (and even happiness).
Another way to think about it is this—where does visual recognition happen? How about arithmetic? Both required a biological brain for a long, long time.
And for the hypothetical scenario—let’s say I am playing CS and I throw a grenade—where does it explode?
But perhaps illusionism’s consequences are a problem? In particular, in a future world filled with AGIs, I don’t see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering “more” than a human, or than another AGI with different code? (I’m not asking for an answer, just asserting that a problem exists.)
That’s only the central problem of all of ethics, is it not? Objective morality? How could you tell if a human is suffering more than another human?
I don’t see how qualia helps you with that one. It would be pretty bold to exclude AGIs from your moral considerations, before excluding trees (and qualia has not helped you exclude trees!).
Edit: I now realize your position has little to do with Chalmers. Since you are postulating a qualia particle, which has causal effects, you are a substance dualist. But why rob your position of its falsifiable prediction? Namely—before the question of consciousness is solved, the qualia particle will be found.
Or am I misrepresenting you again?
“Car” isn’t an adjective just because there’s a “car factory”; consider: *“the factory is tall, car, and red”.
Do you expect this to change?
Yes, but I expect it to take a long time because it’s so hard to inspect living human brains non-destructively. But various people theorize about the early universe all the time despite our inability to see beyond the surface of last scattering… ideas about consciousness should at least be more testable than ideas about how the universe began. Hard problems often suffer delays; my favorite example is the delay between the Michelson–Morley experiment’s negative result and the explanation of that negative result (Einstein’s Special Relativity). Here, even knowing with certainty that something major was missing from physics, it still took 18 years to find an explanation (though I see here an ad-hoc explanation was given by George FitzGerald in 1889 which pointed in the right direction). Today we also have a long-standing paradox where quantum physics doesn’t fit together with relativity, and dark matter and dark energy remain mysterious… just knowing there’s a problem doesn’t always quickly lead to a solution. So, while I directly sense a conflict between my experience and purely reductive consciousness, that doesn’t mean I expect an easy solution. Assuming illusionism, I wouldn’t expect a full explanation of that to be found anytime soon either.
postulating a qualia particle
It was just postulation. I wouldn’t rule out panpsychism.
Chalmers seems not to believe in a consciousness without physical effects—see his 80,000 Hours interview. So Yudkowsky’s description of Chalmers’ beliefs seems to be either flat-out wrong, or just outdated.
Namely—before the question of consciousness is solved, the qualia particle will be found.
I do hope we solve this before letting AGIs take over the world, since, if I’m right, they won’t be “truly” conscious unless we can replicate whatever is going on in humans. Whether EAs should care about insect welfare, or even chicken welfare, also hinges on the answer to this question.
Thank you for this discussion.
I was wrong about grammar and the views of Chalmers, which is worse. Since I couldn’t be bothered to read him myself, I shouldn’t have parroted the interpretations of someone else.
I now have a better understanding of your position, which is, in fact, falsifiable.
We do agree on the importance of the question of consciousness. And even if we expect the solution to have a different shape, we both expect it to be embedded in physics (old or new).
I hope I’ve somewhat clarified my own views. But if not, I don’t expect to do better in future comments, so I will bow out.
Again, thank you for the discussion.
Yeah, this was a good discussion, though unfortunately I didn’t understand your position beyond a simple level like “it’s all quarks”.
On the question of “where does a virtual grenade explode”, to me this question just highlights the problem. I see a grenade explosion or a “death” as another bit pattern changing in the computer, which, from the computer’s perspective, is of no more significance than the color of the screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act like it cares about “beauty” and “love” and “being in pain”, but it seems to me that nothing can really matter to the computer because it can’t really feel anything. I once wrote software which actually had a concept that I called “pain”. So there were “pain” variables and, of course, I am confident this caused no meaningful pain in the computer.
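Roughly what such “pain” variables amount to (a sketch with hypothetical names, not my original code):

```python
# A sketch of what a "pain" variable amounts to -- hypothetical names,
# not the original software. It is just a number that modulates outputs.
class Agent:
    def __init__(self):
        self.pain = 0.0            # the "pain" variable

    def sense(self, damage):
        self.pain = min(1.0, self.pain + damage)

    def act(self):
        # High "pain" changes behavior, exactly like any other input.
        return "withdraw" if self.pain > 0.5 else "explore"

a = Agent()
a.sense(0.7)
print(a.act())  # "withdraw" -- yet presumably nothing here actually hurts
```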
I intuit that at least one part* of human brains is different, and if I am wrong it seems that I must be wrong either in the direction of “nothing really matters: suffering is just an illusion” or, less likely, “pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter”, though I have no idea how this could be true.
* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word “elephant” comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain’s computations: a holistic sense of elephant-ness (and I feel as though I “understand” this output—even though I don’t understand what “understanding” is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of “consciousness” that feels high-level outputs from these machines somehow, and has some kind of influence over how the machinery is subsequently used. Having seen the movie “Being John Malkovich”, and having recently heard of the “thousand brains theory”, I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about) so that many particles might be functionally indistinguishable from one “huge” particle.
It’s not true that particles behave identically under identical circumstances—that would be determinism.
If it were true, it wouldn’t only apply to consciousness, or mean that “consciousness is One” in some sense that doesn’t apply to everything else.
There’s a lot of information in N particles. If you want to conserve it all, your huge particle has to exist in 3*N-dimensional space. But a freely moving particle in 3*N space would behave locally, so you also need constraints to recover locality. Which is basically the argument for space really being 3-dimensional.
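In symbols (a standard configuration-space statement, nothing specific to consciousness): the joint state of N classical particles is a single point

$$(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_N) \in \mathbb{R}^{3N},$$

so one “huge” particle carrying the same information would have to move in a 3N-dimensional space, and extra constraints would be needed before its motion looks like N local particles in ordinary 3-dimensional space.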
Ignore the entire machinery of rationality. Treat all human interaction as nothing more than social grooming or status games in a tribe of apes.
Is there actually anything else to human interaction?
It makes no sense to expect people to engage the machinery of rationality when they don’t believe it’ll further their goals. Even if they benefit from being privately rational, it’s not necessarily in their interest to share their rationality with you. Hence, if you haven’t earned their respect, they’ll conceal their wisdom from you, like the Spartans.
In fact, pretty much everything in Eliezer’s post seems to apply only to the rare situation of two or more people who respect each other enough to actually feel a need to appear logically consistent and make their lies plausible. Usually at least one of the people is in no real need to convince the other of anything (i.e., they have higher status), so they won’t waste any time or energy trying to. Therefore, their statements serve other purposes; mainly, to display their high status and to warn the underling when they’re getting too close to a line they won’t let them cross unpunished. Conspicuously wasting the interlocutor’s time with nonsense serves this purpose very well.
Status, status, status. It gets (some of) us every time. There seems to be very little to life but status to a normal person.