As I was putting the “free will” question to myself, I decided to re-frame it as “would an AI have free will?” Answer: obviously not; it’s an optimization process. Then I thought: an AI is different from a trivial arithmetic solver in that the AI’s search strategy is not fully determined by the goal. What would an AI be like whose strategy was wholly undetermined? It would thrash around randomly. So, the insight: the uncertainty in our strategy is another name for our ignorance of the search domain. At one end, zero information, total randomness. At the other, full information, determinism. In the middle, a “free” (meaning: ignorant) choice of search strategies, which corresponds to the feeling of free will.
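To make that spectrum concrete, here is a minimal toy sketch in Python, assuming a made-up list of candidate strategies with estimated values and a single “knowledge” parameter (all names and numbers are hypothetical, purely for illustration): at zero knowledge the agent picks a strategy uniformly at random, at full knowledge it deterministically picks the strategy it knows to be best, and in between its knowledge narrows the field without settling the choice.

```python
import random

def choose_strategy(strategies, knowledge, rng=random):
    """Pick a search strategy given partial knowledge of the domain.

    strategies: list of (name, estimated_value) pairs -- hypothetical scores.
    knowledge:  float in [0, 1]; 0 = total ignorance, 1 = full information.
    """
    if knowledge <= 0.0:
        # Zero information: the choice is pure noise ("thrashing around randomly").
        return rng.choice(strategies)[0]
    if knowledge >= 1.0:
        # Full information: the best strategy is simply known; no real choice remains.
        return max(strategies, key=lambda s: s[1])[0]
    # Partial information: the agent can rule out the options it recognizes as
    # bogus, but must still choose among the survivors -- the "free" (ignorant) choice.
    ranked = sorted(strategies, key=lambda s: s[1], reverse=True)
    survivors = ranked[: max(1, round(len(ranked) * (1.0 - knowledge)))]
    return rng.choice(survivors)[0]

if __name__ == "__main__":
    options = [("breadth-first", 0.2), ("greedy", 0.5), ("simulated-annealing", 0.8)]
    for k in (0.0, 0.5, 1.0):
        print(k, choose_strategy(options, k))
```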
Interesting corollary: more knowledgeable people must be less free. To them, strategies we might try are obviously useless.
Larry Niven plays with this idea in Protector… the idea being that if you’re really smart, the right solution presents itself so rapidly that you simply don’t have any choices.
I suspect this is nonsense in any practical sense. Sure, any increase in intelligence will force you to close off some options which you now realize are bogus, but it will likely also make you aware of options you weren’t previously able to recognize.
In my own experience, increased understanding leads to a net gain of options. Perhaps the curve is hyperbolic, but if so I live on the ascending slope.
Do you feel any less free because it never occurs to you to bash your head against a wall, or slit your throat with a steak knife?
I certainly don’t; it would be a terrible inconvenience to have to go through all the really stupid options of things I could do at any given moment before arriving at the reasonable ones.
How much more so, then, for a superintelligence; it does not have to wonder about the stupid questions we humans often ask, but instead can focus on the really interesting decisions that remain to be made. (If you imagine that the space of possible decisions is finite, perhaps it could run out eventually… but my sense is that no intelligence small enough to fit in our universe can run out of possible decisions in our universe.)
“Do you know, can you comprehend, what freedom it gives you if you have no choice? Do you know what it means to be able to choose so swiftly and surely that to all intents and purposes you have no choice? The choice that you make, your decision, is based on such positive knowledge that the second alternative may as well not exist.”
-- Rafael Lefort, “The Teachers of Gurdjieff”, ch. XIV
When you have a purpose, you must act to achieve it. If you do not, you did not have that purpose.
If you are driving a car, you are not free to do anything you like with the steering wheel. You must use it to direct the car along your intended route.
You are only faced with choosing when you do not know the right choice. When you do know, you no longer have that choice. You cannot make your choice and have it still.
It does occasionally occur to me to kill myself, and in my really bad periods I do experience myself as prevented from choosing an eminently desirable path by my own earlier precommitments. But that’s neither here nor there.
Leaving the particulars aside… if there exists some question Q such that intelligence I1 finds Q difficult to answer and I2 finds Q easy to answer because I2 is a superintelligence with respect to I1, then I2 may well at some point consider Q, answer Q, and then move on to the next thing. Or, of course, it might never do so, depending on the relevance of Q to anything that occurs to I2… as you say, the space of possible decisions is enormous.
I fail to see what follows from this. Can you unpack your thinking a bit, here?