I mean, if I don’t have a consistent algorithm (Turing-computable or not), then I won’t feel like I have free will, at least not in the sense that I do right now. Unpredictability here is equivalent to randomness; and if I’m making random decisions, then I won’t feel like my decisions match up to any internal narrative. The more I notice that my decisions are random, unpredictable in foresight, the more I will feel that I have no free will (more precisely, that I have no will at all, rather than that my will is not free). But I’m not sure it’s even coherent to have those sorts of feelings without implementing some kind of consistent algorithm in the background (I’m not sure it isn’t coherent, either, but it certainly feels like there’s a potential problem there).
Not to mention, even if I do not implement any consistent algorithm, it does not follow that “I” (whatever on earth “I” could mean in such a case) am able to determine my own decisions. Unpredictable decisions do not in any way support the idea that I have free will, or that whatever is determining my decisions is itself a mental entity with free will.
I suspect that you are using “free will” in a way that’s very different from how many people use the term or have intuitions about it. Many people, when discussing free will, emphasize unpredictability as a basic component. Have you read Scott’s piece? He discusses why that makes sense in some detail. Maybe I should ask: what do you think free will would look or feel like?
I think that the initial knee-jerk intuitions most people have about free will are incoherent. It wouldn’t look like anything if they were actually true, because they are logically incoherent; there is no way the world could be such that a conscious entity decides its own actions, unless, perhaps, cyclic causality is involved. (A quick example of a problem, though perhaps not an unsolvable one: if I determine all of my own actions, and my state of mind determines my actions (this must be true in order for me to feel like I have free will, regardless of whether I actually have it under some definition), then I must determine my own state of mind; and how did I determine the very first state I was in?)

However, that’s a very different question from why people, including me, feel like we have free will. The feeling of free will is pretty closely linked to our ability to conceive of counterfactuals in which we make all sorts of different decisions, and then to decide between those decisions. Regardless of how overdetermined (read: predictable) our decision is, we feel like we could, if we just wanted to, make a different decision. The key is that we can’t actually make ourselves want to make a different decision from the one we do in fact want to make. We can imagine wanting something different, because we don’t yet know which choice we will want to make, but at no point can we actually change which decision we want to make in a way that does not regress to our wanting to do so.
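To make the counterfactual point concrete, here is a minimal Python sketch of my own (nothing here is from the original comment; the options and the preference scores are arbitrary assumptions): a fully deterministic, hence perfectly predictable, decision procedure that still enumerates counterfactual options and picks the one it wants most.

```python
# Toy illustration (assumed example, not from the original discussion):
# a fully deterministic agent that nonetheless "considers" counterfactuals.

def preference(option: str) -> float:
    """A fixed 'what I want' function; the agent cannot choose to want
    differently without that choice regressing to another want."""
    scores = {"stay home": 0.2, "go for a walk": 0.9, "work late": 0.5}
    return scores.get(option, 0.0)

def decide(options: list[str]) -> str:
    # The agent imagines each counterfactual...
    imagined = {opt: preference(opt) for opt in options}
    # ...then picks the one it wants most. Perfectly predictable from
    # the outside, yet the alternatives were genuinely weighed.
    return max(imagined, key=imagined.get)

print(decide(["stay home", "go for a walk", "work late"]))
# Prints "go for a walk" every time: predictable, but still a decision
# reached by comparing counterfactuals.
```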
I also hold, though I don’t remember this being explored on Less Wrong, that the existence of a consistent internal narrative is key for free will. Without it we would feel that we were, at the very least, not in complete control of our decisions: we would decide to do one thing but then do another, or remember doing things without being able to understand why. To the extent that these phenomena actually happen in real life, this seems to hold (and it certainly seems to hold in fantasy, where mind control is often depicted as feeling like this).
I should also note that while I do not hold what I understand to be the standard Compatibilist conception of free will, Compatibilism certainly also holds that unpredictability is not a requirement for free will. This is not a new idea, and at least part of my understanding does fall within standard Compatibilism as I understand it. My views are also derived from the free will subsequence here on LW. These ideas are debatable, but they are certainly not all that special. Perhaps I was assuming too little inferential distance; I didn’t attempt to derive the entire argument here, nor did I give a link to the sequence which formed my beliefs; I think I assumed that many people would notice the connection, but perhaps not.
You’re strawmanning. An entity controlling its own behaviour is so non-contradictory that there is a branch of engineering dedicated to it: cybernetics.
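For what it’s worth, the cybernetics point can be made concrete with a minimal feedback-control sketch (a hypothetical toy thermostat of my own, not any real control library): a loop whose observation of its own state determines its own future behaviour, with no contradiction involved.

```python
# Minimal feedback-control sketch (a hypothetical thermostat; the
# dynamics and constants are made-up assumptions): the system's
# observation of its own state steers its own future behaviour.

def step(temperature: float, heater_on: bool) -> float:
    """Toy plant dynamics: the room warms while heating, cools otherwise."""
    return temperature + (0.5 if heater_on else -0.3)

def run(setpoint: float = 20.0, temperature: float = 15.0, steps: int = 40) -> float:
    for _ in range(steps):
        heater_on = temperature < setpoint  # self-observation drives self-control
        temperature = step(temperature, heater_on)
    return temperature

print(f"temperature after regulation: {run():.1f}")
# Oscillates near the 20.0 setpoint: a system controlling its own
# behaviour, with no contradiction anywhere.
```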
There’s plenty of evidence that people rationalise decisions after the event, so you would have a feeling of a consistent narrative under any circumstances.