Empirical philosophy and inversions
A regular installment. Index is here.
This post is in large part a linkpost for an excellent talk on experimental philosophy, given by Ned Block (don’t be put off by the title): https://www.youtube.com/watch?v=6lHHxcxurhQ . Apologies to those who dislike videos, but (especially at 1.5x or 2x speed) it’s faster and more fun than reading a bunch of his papers, I swear.
Here’s an example: in one experiment Block talks about, researchers stick electrodes to people’s heads, then subtly show them a geometrical shape while they’re doing another task. In the post-experiment report, some participants say they noticed the shape, and their electrode data can then be reviewed to see what brain activity was necessary for noticing the shape even though they didn’t know they were supposed to notice it. It turns out, your brain doesn’t need to be very active for you to be able to recall seeing the shape.
Block uses results like this to defend his thesis about the richness of conscious perception, and how early in the brain’s perceptual systems activity can be experienced consciously. But this forms an interesting contrast with a deflationary view of consciousness.
Our agent of deflation is Marvin Minsky. Here’s a video of him being deflationist. He has a favorite point: people associate consciousness with lots of tasks, like being able to remember smells, or being able to imagine applying verbs to nouns, et cetera, but this grouping is a human-made category, and thinking about these things as a group can get in the way of understanding them. The stuff we call conscious activity can, he says, be broken up into lots of sub-processes like smelling and abstract-verb-imagining that each have strong internal coherence but not much overlap with one another.
Which brings us back to Ned Block and consciousness of perception. It’s possible to look at the several experiments Block talks about, not as different probes of a unified consciousness, but as probes of several functions of the brain that fall under the umbrella of consciousness.
Another of Block’s examples is presenting different images to each eye, and using eye-tracking to determine which image the subject is experiencing. It’s natural and effortless for us to think that this sort of consciousness is the same thing as the consciousness of remembering the shape from the first example. It takes a weird, effortful inversion of perspective for me to think about what it would be like if the brain functions underlying the two experiments had very little overlap.
This is the reason I linked to Dennett’s article on the intentional stance earlier: Minsky’s view can be thought of as delineating a “conscious stance” as separate from a “process stance.” On this view, consciousness is just a convenient way to predict mental goings-on without looking too closely. And so from this view, the kinds of brain activity the empirical philosophers are trying to pin down (where you store representations of things you see, how fast you put those representations into long-term memory, where you figure out which eye’s signals are dominant in determining your representation of the visual field, et cetera) are actually at a level of description below consciousness.
You may already be grumbling about the distinction between hard and easy problems of consciousness. These grumblings are fair, and we will get to that later. I just thought this was too much fun to not share.