So what should I make of this argument if I happen to know you’re actually an upload running on classical computing hardware?
That someone managed to produce an implausibly successful simulation of a human being.
There’s no contradiction in saying “zombies are possible” and “zombie-me would say that zombies are possible”. (But let me add that I don’t mean the sort of zombie which is supposed to be just the physical part of me, with an epiphenomenal consciousness subtracted, because I don’t believe that consciousness is epiphenomenal. By a zombie I mean a simulation of a conscious being, in which the causal role of consciousness is being played by a part that isn’t actually conscious.)
So if you accidentally cut the top of your head open while shaving and discovered that someone had gone and replaced your brain with a high-end classical computing CPU sometime while you were sleeping, you couldn’t accept actually being an upload, since the causal structure that produces your thoughts that you are having qualia is still there? (I suppose you might object to the assumed-to-be-zombie upload you being referred to as ‘you’ as well.)
The reason I’m asking is that I’m a bit confused about exactly where the problems from just the philosophical part would come in with the outsourcing-to-uploaded-researchers scenario. Some kind of more concrete prediction, like that a neuromorphic AI architecturally isomorphic to a real human central nervous system just plain won’t ever run as intended until you build a quantum octonion monad CPU to house the qualia bit, would be a much less confusing stance, but I don’t think I’ve seen you take that.
I’m going to collect some premises that I think you affirm:
- consciousness is something most or all humans have; likewise for the genes that encode this phenotype
- consciousness is a quantum phenomenon
- the input-output relation of the algorithm that the locus of consciousness implements can be simulated to arbitrary accuracy (with difficulty)
- if the simulation isn’t implemented with the right kind of quantum system, it won’t be conscious
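The third premise is a concrete computational claim, so a toy sketch may help pin it down: the input-output (measurement) statistics of even a genuinely quantum system can be reproduced by a classical program that tracks its state vector, at a cost growing exponentially with system size. Everything here is illustrative and brain-agnostic; it just shows what "simulable with difficulty" means.

```python
import numpy as np

# A classical program reproducing the input-output statistics of a
# (tiny) quantum system. The simulation tracks the full state vector,
# so its cost grows as 2**n in the number of qubits: "with difficulty".

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to `target` within an n-qubit state vector."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

def measurement_probs(state):
    """Born-rule probabilities for each computational basis state."""
    return np.abs(state) ** 2

# Two qubits, both put into superposition: every outcome equally likely.
n = 2
state = np.zeros(2 ** n)
state[0] = 1.0  # start in |00>
for q in range(n):
    state = apply_gate(state, H, q, n)

probs = measurement_probs(state)
print(probs)  # -> [0.25 0.25 0.25 0.25]
```

The point of the exponential blow-up in `np.kron` is exactly the parenthetical "(with difficulty)": nothing in principle stops the classical simulation, it just scales badly.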
I have some questions about the implications of these assertions.
1. Do you think the high penetrance of consciousness is the result of founder effect plus neutral drift, the result of selection, or something else?
2. What do you think is the complexity class of the algorithm that the locus of consciousness implements?
3. If you answered “selection” to the first question, what factors do you think contributed to the selection of the phenotype that implements that algorithm in a way that induces consciousness as a “causal side-effect”?
It’s anthropically necessary that the ontology of our universe permits consciousness, but selection just operates on state machines, and I would guess that self-consciousness is adaptive because of its functional implications. So this is like looking for an evolutionary explanation of why magnetite can become magnetized. Magnetite may be in the brain of birds because it helps them to navigate, and it helps them to navigate because it can be magnetized; but the reason that this substance can be magnetized has to do with physics, not evolution. Similarly, the alleged quantum locus may be there because it has a state-machine structure permitting reflective cognition, and it has that state-machine structure because it’s conscious; but it’s conscious because of some anthropically necessitated ontological traits of our universe, not because of its useful functions. Evolution elsewhere may have produced unconscious intelligences with brains that only perform classical computations.
this is like looking for an evolutionary explanation of why magnetite can become magnetized… the alleged quantum locus… [is] conscious because of some anthropically necessitated ontological traits of our universe, not because of its useful functions
I think you have mistaken the thrust of my questions. I’m not asking for an evolutionary explanation of consciousness per se -- I’m trying to take your view as given and figure out what useful functions one ought to expect to be associated with the locus of consciousness.
What does conscious cognition do that unconscious cognition doesn’t do? The answer to that tells you what consciousness is doing (though not whether these activities are useful...).
So if you observed such a classical upload passing exceedingly carefully designed and administered Turing tests, you wouldn’t change your position on this issue? Is there any observation that would falsify your position?
Uploads are a distraction here. It’s the study of the human brain itself which is relevant. I already claim that there is a contradiction between physical atomism and phenomenology, that a conscious experience is a unity which cannot plausibly be identified with the state of a vaguely bounded set of atoms. If you’re a materialist who believes that the brain ultimately consists of trillions of simple particles, then I say that the best you can hope for, as an explanation of consciousness, is property dualism, based on some presently unknown criterion for saying exactly which atoms are part of the conscious experience and which ones aren’t.
(I should emphasize that it would be literally nonsensical to say that a conscious experience is a physical part of the brain but that the boundaries of this part, the criteria for inclusion and exclusion at the atomic level, are vague. The only objectively vague things in the world are underspecified concepts, and consciousness isn’t just a “concept”, it’s a fact.)
So instead I bet on a new physics where you can have complex “elementary” entities, and on the conscious mind being a single, but very complex, entity. This is why I talk about reconstructions of quantum mechanics in terms of tensor products of semilocalized Hilbert spaces of varying dimensionality, and so on. The real test of these ideas will be whether they make sense biophysically. If they just don’t, then the options are to try to make dualism work, or paranoid hypotheses like metaphysical idealism and the Cartesian demon. Or just to deny the existence and manifest character of consciousness; not an option for me, but evidently some people manage to do this.
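For readers unfamiliar with the formalism being invoked, here is a minimal numerical sketch of what a tensor product of Hilbert spaces of varying dimensionality looks like, using only standard quantum mechanics. The dimensions chosen are arbitrary illustrations of the "varying dimensionality" point, not claims about any proposed reconstruction.

```python
import numpy as np

# A composite Hilbert space factors as H = H_A (x) H_B, and its dimension
# is the product of the factor dimensions. The factors need not be qubits:
# they can have different ("varying") dimensionality.

dim_a, dim_b = 3, 5  # two factors of unequal dimension (purely illustrative)

psi_a = np.zeros(dim_a); psi_a[0] = 1.0  # a basis state in H_A
psi_b = np.zeros(dim_b); psi_b[1] = 1.0  # a basis state in H_B

psi = np.kron(psi_a, psi_b)  # product state in the composite space
print(psi.shape)  # -> (15,)
```

The dimensions multiply (3 × 5 = 15), which is what makes the choice of factorization a substantive structural claim rather than mere bookkeeping.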