MTraven—They might have a common structural/functional role. It would be plenty interesting if computing a certain algorithm strictly entailed a certain phenomenal quality (or ‘feel’).
Dan—I assume that science is essentially limited to third-personal investigation of public, measurable phenomena. It follows that we can expect to learn more and more about the public, measurable aspects of neural functioning. But it would be a remarkable surprise if such inquiry sufficed to establish conclusions about first-personal phenomenology. (In this respect, the epistemic gap between ‘physics’ and ‘phenomenology’ mirrors the even more famous gap between ‘is’ and ‘ought’.) Who knows, maybe we’ll be surprised? Maybe our current thoughts rest upon severe conceptual errors? Maybe logic is an illusion, and I merely believe in the validity of modus ponens because a demon is messing with my head? We can play “maybe”s all day long, but it doesn’t seem very helpful unless you can actually show that a mistake has been made.
Robin—I can’t tell what you mean. Are you saying there’s a logically possible world that’s identical to ours with respect to the arrangement of fingers and palms, etc., but that does not contain any hands? I’m pretty sure that’s false: fingers etc. entail hands. But if you can describe a world that serves as a counterexample to this claim, I’d be very curious to hear it.
Alternatively, perhaps you’re saying that if we weren’t thinking clearly, and didn’t really understand the term ‘hand’, then we might be fooled into believing that hand-zombies were logically possible. (This would be most likely if our ‘hand’ concept did not explicitly invoke fingers, but rather brought them in implicitly, just as ‘water’ indirectly reduces to H2O, in virtue of being directly analyzable as ‘whatever stuff actually fills the water role’.) I agree with all that, but have yet to be convinced that my judgments about p-zombies rest on any analogous error. [I examine the alleged analogy to conventional a posteriori identities, e.g. water = H2O, here.]
Caledonian—I couldn’t care less what you consider me. I’d much rather see you consider my arguments. Maybe then you’d have something of substance to contribute to the conversation. (N.B. I’m well aware that p-zombies are physically—and hence behaviourally—identical to their conscious counterparts. The dispute is over what conclusions we can draw from this.)
Paul—That can’t be right. If I could somehow learn (contrary to fact) that animals were p-zombies, i.e. that they don’t really feel pain despite giving every outward appearance of doing so, that would undermine most arguments for ethical vegetarianism, and instead support the most ‘efficient’ factory farming practices.
Nick,
Eliezer’s one-place function is exactly infallible, because he defines “right” as its output.
I misunderstood some of Eliezer’s notation. I now take his function to be an extrapolation of his volition rather than anyone else’s. I don’t think this weakens my point: if there were a rock somewhere with a lookup table for this function written on it, Eliezer should always follow the rock rather than his own insights (and according to Eliezer everyone else should too), and this remains true even if there is no such rock.
Furthermore, the morality function is based on extrapolated volition. Someone who has only considered one point of view on various moral questions will disagree with their extrapolated (completely knowledgeable, completely wise) volition in certain predictable ways. That’s exactly what I mean by a “twist.”