They’re computationally equivalent by hypothesis. The thesis of substrate independence is that as far as consciousness is concerned the side effects don’t matter and that capturing the essential sameness of the “AND” computation is all that does. If you’re having trouble understanding this, I can’t blame you in the slightest, because it’s that bizarre.
Yes, I agree that this kind of atomism is silly, and by implication that things like Drescher’s gensym analogy are even sillier. Nonetheless, the black box needs a label if we want to do something besides point at it and grunt.
I should have predicted that somebody here was going to call me on that. I accept the correction.
Maybe this analogy is helpful: saying “qualia” gives us no more insight into consciousness than saying “phlogiston” gives us into combustion. That doesn’t mean that qualia don’t exist or that any reference to them is nonsensical. Phlogiston exists; it’s just that, in our better state of knowledge, we’ve discarded the term and now we call it “hydrocarbons”.
My conclusion in the Mary’s room thought experiment doesn’t challenge either of these versions: something new happens when she steps outside, and there’s a perfectly good purely physical explanation of what and why. It is nothing more than an artifact of how human brains are built that Mary is unable to make the same physical thing happen, with the same result, without the assistance of either red light or appropriate surgical tools. A slightly more evolved Mary with a few extra neurons leading into her hippocampus would have no such difficulty.
Can you state what that version is? Whatever it is, it’s nothing I subscribe to, and I call myself a physicalist.
When she steps outside, something physical happens in her brain that has never happened before. Maybe something “non-physical” (huh?) also happens, maybe it doesn’t. We have gained no insight.
She is specifically not supposed to be pre-equipped with experiential knowledge, which means her brain is in one of the physical states of a brain that has never seen colour.
Well, then when she steps outside, her brain will be put into a physical state that it’s never been in before, and as a result she will feel enlightened. This conclusion gives us no insight whatsoever into what exactly goes on during that state-change or why it’s so special, which is why I think it’s a stupid thought-experiment.
The very premise of “Mary is supposed to have that kind of knowledge” implies that her brain is already in the requisite configuration that the surgery would produce. But if it’s not already in that configuration, she’s not going to be able to get it into that configuration just by looking at the right sequence of squiggles on paper. All knowledge can be represented by a bunch of 1s and 0s, and Mary can interpret those 1s and 0s as a HOWTO for a surgical procedure. But the knowledge itself consists of a certain configuration of neurons, not 1s and 0s.
To say that the surgery is required is to say that there is knowledge not conveyed by third-person descriptions, and that is a problem for sweeping claims of physicalism.
No it isn’t. All it says is that the parts of our brain that interpret written language are hooked up to different parts of our hippocampus than our visual cortex is, and that no set of signals on one input port will ever cause the hippocampus to react in the same way that signals on the other port will.
I think that the “Mary’s Room” thought experiment leads our intuitions astray in a direction completely orthogonal to any remotely interesting question. The confusion can be clarified by taking a biological view of what “knowledge” means. When we talk about our “knowledge” of red, what we’re talking about is what experiencing the sensation of red did to our hippocampus. In principle, you could perform surgery on Mary’s brain that would give her the same kind of memory of red that anyone else has, and given the appropriate technology she could perform the same surgery on herself. However, in the absence of any source of red light, the surgery is required. No amount of simple book study is ever going to impact her brain the same way the surgery would, and this distinction is what leads our intuitions astray. Clarifying this, however, does not bring us any closer to solving the central mystery, which is just what the heck is going on in our brain during the sensation of red.
Plausible? What does that mean, exactly?
What subjective probability would you assign to it?
Not every substance can perform every sub-part role in a consciousness-producing computation, so there’s a limit to “independence”. Insofar as it means an entity composed entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I’m not sure what you mean.
I don’t know what the “usual” point of contention is, but it isn’t the one on which I’m taking a position in opposition to Bostrom. Look again at my original post, at how Bostrom defined substrate-independence, and at how I paraphrased it. Both Bostrom’s definition and mine mean that xkcd’s desert and certain Giant Look-Up Tables are conscious.
This sounds an awful lot like “making the same argument that I am, merely in different vocabulary”. You say po-tay-to, I say po-tah-to, you say “computations”, I say “physical phenomena”. Take the example of the spark-plug brain from my earlier post. If the computer-with-spark-plugs-attached is conscious but the computer alone is not, do you still consider this confirmation of substrate independence? If so, then I think you’re using an even weaker definition of the term than I am. How about xkcd’s desert? If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it’s plausible that anything in that system experiences human-like consciousness? If you say “no”, then I don’t know whether we’re disagreeing on anything.
The most important difference between Level 1 and Level 2 actions is that Level 1 actions tend to be additive, while Level 2 actions tend to be multiplicative. If you do ten hours of work at McDonald’s, you’ll get paid ten times as much as if you did one hour; the benefits of the hours add together. However, if you take ten typing classes, each one of which improves your ability by 20%, you’ll be 1.2^10 ≈ 6.2 times better at the end than at the beginning: the benefits of the classes multiply (assuming independence).
I’m trying to think of anything in life that actually works this way and I can’t. If I start out being able to type at 20 WPM, taking 100 typing classes is not going to improve that to 1.6 billion WPM; neither is taking 1000 classes or 10000. These sorts of payoffs tend to be roughly logarithmic, not exponential.
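To make the contrast concrete, here’s a toy sketch of what the multiplicative model implies versus a diminishing-returns model. The numbers are purely illustrative assumptions of mine (the 20%-per-class figure comes from the claim above; the log-model scale is arbitrary):

```python
# Toy comparison of multiplicative vs. logarithmic payoff models.
# All parameters are illustrative assumptions, not real data.
import math

BASE_WPM = 20.0        # hypothetical starting typing speed
GAIN_PER_CLASS = 0.20  # the claimed 20% improvement per class
LOG_SCALE = 15.0       # arbitrary scale for the diminishing-returns model

def multiplicative(classes):
    """Each class multiplies your skill by 1.2, compounding."""
    return BASE_WPM * (1.0 + GAIN_PER_CLASS) ** classes

def logarithmic(classes):
    """Payoff grows with the log of effort: steep early, flat later."""
    return BASE_WPM + LOG_SCALE * math.log1p(classes)

for n in (10, 100, 1000):
    print(f"{n:>5} classes: multiplicative {multiplicative(n):.3g} WPM, "
          f"logarithmic {logarithmic(n):.1f} WPM")

# 10 classes:   ~124 WPM (the 1.2^10 ≈ 6.2x case)          vs. ~56 WPM
# 100 classes:  ~1.7e9 WPM (roughly the 1.6 billion above)  vs. ~89 WPM
# 1000 classes: ~3e80 WPM                                   vs. ~124 WPM
```

The compounding model explodes; the logarithmic one saturates, which is the shape real skill-acquisition payoffs tend to have.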
Detecting the similarity of two patterns is something that happens in your brain, not something that’s part of reality.
If I’m correctly understanding what you mean by “part of reality” here, then I agree. This kind of “similarity” is another unnatural category. When I made reference in my original post to the level of granularity “sufficient in order to model all the essential features of human consciousness”, I didn’t mean this as a binary proposition; for a level to count as sufficient, it’s enough that if somebody made changes to your brain at any smaller level while you slept, you wouldn’t wake up thinking “I feel weird”.
As for how this bears on Bostrom’s simulation argument: I’m not properly familiar with it, but how much of its force does it lose by not being able to appeal to consciousness-based reference classes and the like? I can’t see how that would make simulations impossible; the nearest I can guess is that it undermines his conclusion that we are probably in a simulation?
Right. All the probabilistic reasoning breaks down, and if your re-explanation patches things at all I don’t understand how. Without reference to consciousness I don’t know how to make sense of the “our” in “our experiences”. Who is the observer who is sampling himself out of a pool of identical copies?
Anthropics is confusing enough to me that it’s possible that I’m making an argument whose conclusion doesn’t depend on its hypothesis, and that the argument I should actually be making is that this part of Bostrom’s reasoning is nonsense regardless of whether you believe in qualia or not.
I’m not trying to hold you to any Platonic claim that there’s any unique set of computational primitives that are more ontologically privileged than others. It’s of course perfectly equivalent to say that it’s NOR gates that are primitive, or that you should be using gates with three-state rather than two-state inputs, or whatever. But whatever set of primitives you settle on, you need to settle on something, and I don’t think there’s any such something which invalidates my claim about K-complexity when expressed in formal language familiar to physics.
There are no specifically philosophical truths, only specifically philosophical questions. Philosophy is the precursor to science; its job is to help us state our hypotheses clearly enough that we can test them scientifically. ETA: For example, if you want to determine how many angels can dance on the head of a pin, it’s philosophy’s job to either clarify or reject as nonsensical the concept of an angel, and then in the former case to hand off to science the problem of tracking down some angels to participate in a pin-dancing study.
Those early experimenters with electricity were still taking a position, whether they knew it or not: namely, that “will this conduct?” is a productive question to ask; that if p is the subjective probability that it will, then p(1-p) is large enough that the experiment is worth their time.
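For what it’s worth, here’s the shape of that heuristic spelled out numerically (a quick sketch of mine; the actual threshold for “worth their time” is left unstated, just as in the argument above):

```python
# p*(1-p) as a rough weight on how much an experiment is worth running:
# it peaks when you are maximally uncertain and vanishes near certainty.
for p in (0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99):
    print(f"p = {p:4.2f}  ->  p*(1-p) = {p * (1 - p):.4f}")

# p = 0.50 gives the maximum, 0.25; near-certain answers (p close to 0 or 1)
# make the experiment barely worth anyone's time under this heuristic.
```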
She can understand the sequence of chemical reactions that comprises the Calvin cycle just as she can understand what neural impulses occur when red light strikes retinal cones, but she can’t form the memory of either one occurring within her body.