“Ex hypothesi, Mary knows all the relevant third-person specifiable color facts. Our inability to simulate her well doesn’t change that fact.”
“It does if our inability to simulate her well messes with our intuitions. If, as I conjectured, we tend to translate ‘omniscient person’ with ‘scholar with lots of book-learning’ then our intuitions will reflect that, and will hence be wrong.”
‘Ex hypothesi’ here means ‘by stipulation’ or ‘by the terms of the conditional argument’. The assumption is ‘Mary is a color scientist who knows all the relevant facts about color vision, but has never experienced color in her own visual field’. You aren’t denying that this is a consistent, coherent hypothetical. All you’re suggesting is that a being that satisfied this hypothetical would have transhuman or posthuman capacities for data storage and manipulation. So far so good.
You then insist that such a being, if it acquired color vision, would be completely unsurprised by the particular shade of red it now (for the first time) encounters; whereas the dualist insists that, in that situation, the transhuman would learn a new fact, would acquire new, possibilities-ruling-out information. (A sentient supercomputer without the capacity to experience color would run into the exact same trouble.)
Up to that point, the two of you remain in a stalemate. (Or worse than a stalemate, from your perspective, since you find it baffling that anyone could share the dualist’s intuitions or reasoning, whereas the dualist perfectly well understands the intuitive force of your argument, and just doesn’t think it’s strong enough.)
Is Marianna omniscient about light and neuroscience like Mary? If she is, she’d be able to figure out which color is which fairly easily.
So you assert. The goal here isn’t to just repeatedly assert, in various permutations, that dualists are wrong. The goal is to figure out why they think as they do, so we can dissolve the question. Swap out ‘free will’ for ‘irreducible qualia’ in Eliezer’s recommendation:
“It is a fact about human psychology that people think they have free will. Finding a more defensible philosophical position doesn’t change, or explain, that psychological fact. Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it. [...]
“The key idea of the heuristics and biases program is that the mistakes we make, often reveal far more about our underlying cognitive algorithms than our correct answers [...] But once you understand in detail how your brain generates the feeling of the question [...] then you’re done. Then there’s no lingering feeling of confusion, no vague sense of dissatisfaction.
“If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question. A vague dissatisfaction should be as much warning as a shout. Really dissolving the question doesn’t leave anything behind.
“A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team. And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.
“You may not even want to admit your ignorance, of this point of cognitive science, because that would feel like a score against Your Team. In the midst of smashing all foolish beliefs of free will, it would seem like a concession to the opposing side to concede that you’ve left anything unexplained.
“And so, perhaps, you’ll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will, were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation. If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how. You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact. You have not taken the illusion apart to see the wheels and gears.”
If you keep rushing again and again to swiftly solve the problem—or, worse, rushing again to affirm that the problem is solved—then it will be harder to notice the points that cause you confusion. My appeal to the Marianna example is a key example of a place that should have made you stop, furrow your brow, and notice that the explanation you gave before to dispel Mary-intuitions doesn’t work for Marianna-intuitions, even though the two seem to be of the same kind. It would be surprising indeed if ‘Mary lacked the ability to visualize redness’ were a big part of the explanation in the former case, yet not in the least bit a part of the latter case, given their obvious parallelism. This suggests that the explanation you first gave is off-base in the Mary case too. Retreating to just asserting that dualism is wrong means missing the important sea change that just happened.
There are certain problems we have trouble processing regardless of what level of power we have, because of our mind’s internal architecture.
OK. But ‘Qualia seem irreducible because something about how our brains work makes them seem irreducible’ isn’t the most satisfying of explanations. Could you give a little more detail?
When you see red, you get a handle to a “redness” object, which you can perform certain queries and operations on, such as “does this make me feel hot or cold”, or “how similar is this color to this other color” but you can’t directly access the underlying data structure. [...] Nor can Mary instantiate a redness object in her brain by studying neuroscience.
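The “opaque handle” picture can be sketched in code, for concreteness. This is a minimal illustration of the analogy only, not a claim about how brains work; the class name, the hidden wavelength field, and the similarity formula are all hypothetical stand-ins:

```python
# A sketch of the opaque-handle analogy: a "redness" object that
# supports certain queries but hides its underlying data structure.
# All names and numbers here are illustrative, not from any real model.

class RednessQuale:
    """An opaque handle: queries are allowed, raw access is not."""

    def __init__(self, wavelength_nm):
        # Double-underscore name mangling hides the underlying data
        # from ordinary outside access (the analogy's "no direct access").
        self.__wavelength_nm = wavelength_nm

    def feels_warm(self):
        # A permitted query: longer wavelengths read as "warm" colors.
        return self.__wavelength_nm > 570

    def similarity_to(self, other):
        # A permitted query: closeness on a 0..1 scale, computed
        # without ever exposing the raw data to the caller.
        diff = abs(self.__wavelength_nm - other.__wavelength_nm)
        return max(0.0, 1.0 - diff / 300.0)


red = RednessQuale(650)
orange = RednessQuale(600)
red.feels_warm()           # queries work: True
red.similarity_to(orange)  # queries work: a number in [0, 1]
# red.__wavelength_nm      # but direct access raises AttributeError
```

On this picture, Mary is in the position of someone who can read the class's source code in full, but has never held an instance of the handle.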
OK. But couldn’t all of the same be said of ordinary macroscopic objects in our environment, too? When I see a table (a physical table in my environment—not a table-shaped quale in my visual field), I can’t directly access the underlying fine-grained quantum description of the table. Nor can I make tables spontaneously appear in my environment by acquiring superhuman knowledge of the physics of tables. Yet tables don’t seem to pose any problem at all for reductionism.
If tables and qualia have all these things in common, then where does the actual difference lie, the difference that explains why there seems to be a Hard Problem in one case and not in the other?
it’s common knowledge that people with a few days of job experience are much better at doing jobs than people who have spent months reading about the job.
But is that because people who only learn about jobs indirectly are lacking certain key pieces of factual knowledge? The problem raised by Mary’s Room isn’t ‘Explain why Mary intuitively seems to get better at completing various tasks’; it’s ‘Explain why Mary intuitively seems to learn new factual knowledge’. This is made clearer by the Marianna example. Your analogy only helps us give a physicalistic explanation of the former, not the latter.