I haven’t thought it through very carefully, but my hunch is that you’re not quite getting the crux of the issue.
For one thing, there’s a little compatibilism joke I like:
“I don’t see why the chess engine is going through all the effort of evaluating 400,000 possible move sequences. It’s running a deterministic algorithm! The answer is predetermined!!”
Hahaha. The moral, I think, is that there are two stories:
“the chess engine moved the knight because that was the result of a deterministic process involving a bunch of interacting transistors on a chip”
“the chess engine moved the knight because it evaluated 400,000 move sequences and found that moving the knight was best”
These sound like different stories, but they are in fact the same story told at two different levels of abstraction.
So that’s one hint: we can get tripped up by failing to appreciate that stories at multiple levels of abstraction can all be “true” simultaneously.
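To make the joke concrete, here’s a minimal sketch of a deterministic toy “engine” (purely hypothetical: a made-up one-player game, not minimax, not any real chess program) for which both stories are literally true at once:

```python
# A toy, fully deterministic "engine" for a made-up one-player game.
# Story 1 is true of it: its output is a pure function of its input.
# Story 2 is also true: it picks its move *by* evaluating every move sequence.

def legal_moves(state):
    return [1, 2, 3]                 # toy move set

def apply_move(state, move):
    return state + move              # toy transition rule

def evaluate(state):
    return -abs(state - 10)          # toy heuristic: states near 10 are good

def best_move(state, depth=4):
    """Exhaustively search every move sequence to `depth`; return the best first move."""
    def search(s, d):
        if d == 0:
            return evaluate(s)
        return max(search(apply_move(s, m), d - 1) for m in legal_moves(s))
    return max(legal_moves(state), key=lambda m: search(apply_move(state, m), depth - 1))

print(best_move(0))  # deterministic: same input, same output, every run
```

Both descriptions of `best_move` are accurate; they just describe the same computation at different levels of abstraction.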
Then another related hint is a thing I wrote here about the hard problem of consciousness:
All my internal models are simplified entities, containing their essential behavior and properties, but not usually capturing the nuts-and-bolts of how they work in the real world. (In a programming analogy, you could say that we’re modeling the [global neuronal workspace’s] API & documentation, not its implementation.) Thus, my attention schema does not involve neurons or synapses or GNWs or anything like that, even if, in reality, that’s what it’s modeling.
So I think it’s a slightly wrong starting point to think along the lines of “the processes in our brains to which we have conscious access are a subset of all the processes in our brains”. Instead I think we should be saying, “when we introspect, we’re not directly seeing any of the processes in our brains; instead, the things we perceive are certain states of certain models that abstract over some of the processes in our brains”—just as “a dog barking” is an abstract model that represents some complicated statistical predictions about incoming photons and sound waves and so on.
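To illustrate the API-versus-implementation analogy in the quoted passage, here’s a small hypothetical sketch (the class and method names are invented for illustration, not a claim about any real cognitive architecture):

```python
# Hypothetical sketch of "modeling the API, not the implementation".
# The self-model only ever sees the abstract interface, never the internals.

class GlobalWorkspace:
    """Stand-in for the messy implementation (neurons, synapses, etc.)."""
    def __init__(self):
        self._low_level_state = [0.13, 0.87, 0.42]   # fake internal detail

    def current_contents(self):
        # The public "API": a clean summary that hides the implementation.
        return "attending to: a dog barking"

class SelfModel:
    """Stand-in for the attention schema: it models the workspace via its API only."""
    def __init__(self, workspace):
        self.workspace = workspace

    def introspect(self):
        # Introspection reads abstract model states; `_low_level_state` is invisible here.
        return self.workspace.current_contents()

print(SelfModel(GlobalWorkspace()).introspect())
```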
I don’t really have a conclusion here, and I don’t mean for this to be a strong disagreement with what you wrote, just maybe a tweak in the framing or something. :-)
If you read the original post I quoted and linked at the top, Eliezer is interested in explaining why the idea of a debate between “free will” and “determinism” arises in the first place, on the level of cognitive processes. I can see now that I should have explained this in more detail!
So even if compatibilism is true, why, at the level of gears-level cognitive mechanisms, does it feel to many people like it makes sense to argue about whether or not we have free will?
Yes I understood that. I think my comment is a relevant thing to keep in mind when thinking about that question.
Like, we can only perceive our introspective world through the lens of abstract models of that world that our brains build. So we should be thinking: What do those models look like?
There’s a thing that Dan Dennett calls (I think) “sophisticated naive physics”, where you take “intuitive physics” as an object of study and determine its ontology, rules, etc. We understand that the ontology and relationships and affordances of “intuitive physics” may be quite different from anything in actual physics, but we can still study it. By the same token, I’m proposing that the question of “why do people have the intuitions they have about free will” should be studied through a lens of “sophisticated naive introspection”, i.e. determining the ontology, relationships, and affordances of the abstract model space that we use when we introspect, while accepting that those things may be quite different from anything actually happening in the brain. Your OP, by contrast, seems to take a different perspective, namely that introspection provides an accurate view of a subset of the things in the brain.
I do like your framing of “sophisticated naive introspection,” and I think it makes sense as an object of study. That seems like a nice generalization of the program of study that Eliezer is calling for in his post.
However, your distinction between an “abstract model of the world that our brains build” and an “accurate view”, i.e. the idea that there’s a categorical difference between introspection and what we might call “actual psychology”, seems wrong to me.
Everything we use to study the brain, including naive introspection, is an abstract model of the world. Psychology refines, and sometimes directly contradicts, the findings of naive introspection; the same is true of the relationship between naive and scientific physics. However, in both cases, intuition and reality line up pretty nicely, at least for some subset of situations. For that limited set of use cases, the difference between the naive and scientific approaches to the discipline isn’t worth worrying about in practical terms.
One of the unstated assumptions of my argument here is that the abstract psychological model afforded by naive introspection is, in this case, adequate to at least make some headway on this problem. It wouldn’t be nearly adequate to resolve it—I haven’t even really scratched the surface. But that wasn’t my goal here. I just wanted to give the best quick account I could of why “free will” and “determinism” seem to make psychological sense, based on my own intuitions and thoughts and given a short investment of time.
Fair enough :)