Where do dualists go wrong?
The latest zombie debates have made me reflect on how a purely philosophical dispute may lack any resolution and yet be harmless as long as the consequences of a ‘confused’ position don’t leak out into the world. Hence, I’ve brainstormed a short list of practical ways in which dualists may go wrong. (And yes, one or two of the ‘dualist errors’ are standard LW positions!)
1. Undue skepticism about the minds of artificial intelligences, leading to the possibility of prejudice. (Jaron Lanier assumes, as a matter of faith, that people are metaphysically special in a way that no AI could possibly be. His belief would have the potential to become ethically monstrous precisely at the point when AGI emerged.)
2. A desire to answer the misguided question: “What evolutionary benefit is conferred by phenomenal consciousness (as distinct from the merely ‘functional’ abilities to learn, represent the environment, introspect on one’s own state, etc.)?”
3. A belief that the ‘all or nothing’ nature of consciousness is detectable somehow. That psychology is incomplete until it tracks down ‘neural correlates of consciousness’ that respect this ‘all or nothingness’, and can resolve all of Dennett’s Orwellian vs Stalinesque disputes.
4. Grave skepticism about ‘simulationism’ leading to skewed ethics / decision theory. For instance, believing that it would be or might be catastrophic if the earth were replaced by a perfectly and reliably simulated earth.
5. Belief in transtemporal identity leading to skewed ethics / decision theory: believing that one would ‘die’ in a teleporter; vastly overestimating the value of cryonics. (Yes, I think these two are sides of the same coin.)
6. Belief in the existence of a duality between ‘mental states’ and the underlying physical state, suggesting a ‘utilitarianism’ in which all moral value is supervenient on mental states, irrespective of the wider universe.
7. Being under the mistaken impression that there is something meritorious, useful or valuable about the practice of discussing certain philosophical questions (e.g. those concerning the persistence of subjective identity, or whether/how we can know that physiologically normal people have minds, and have non-inverted qualia).
8. Inference from ‘consciousness’ to the necessity of wrecking our best models of physics in some way. (Penrose seeking a non-algorithmic quantum gravity, Chalmers endorsing ‘consciousness causes collapse of the wavefunction’, etc.)
9. Something about libertarian free will and libertarianism, the details of which have been deleted to avert a mind-killing political dispute.
10. Inference from ‘consciousness’ to the existence of an immortal soul (together with a corresponding skewing of decision theory: overacceptance of death, overeagerness to die for one’s cause).
Some of these (1, 4, 5 and 10) are fairly specific misattributions of ethical value. Others (2, 3, 7 and 8) are, or might turn into, degenerating research programmes (or, in the case of 7, just a waste of time). 6 and 9 are very broad. Even if their conclusions (utilitarianism and libertarianism respectively) are ‘correct’ or ‘right’ in some sense (I would say not, but this isn’t the place to thrash it out), I do think it would be a mistake to justify them to oneself using philosophical beliefs about determinate, autonomous, ‘sovereign’ mental states.
Would anyone like to add any others? Or vehemently disagree with me about something on my list? :-)