Do you feel any less free because it never occurs to you to bash your head against a wall, or slit your throat with a steak knife?
I certainly don’t; it would be a terrible inconvenience to have to go through all the really stupid options of things I could do at any given moment before arriving at the reasonable ones.
How much more so, then, for a superintelligence; it does not have to wonder about the stupid questions we humans often ask, but instead can focus on the really interesting decisions that remain to be made. (If you imagine that the space of possible decisions is finite, perhaps it could run out eventually… but my sense is that no intelligence small enough to fit in our universe can run out of possible decisions in our universe.)
“Do you know, can you comprehend, what freedom it gives you if you have no choice? Do you know what it means to be able to choose so swiftly and surely that to all intents and purposes you have no choice? The choice that you make, your decision, is based on such positive knowledge that the second alternative may as well not exist.”
-- Rafael Lefort, “The Teachers of Gurdjieff”, ch. XIV
When you have a purpose, you must act to achieve it. If you do not, you did not have that purpose.
If you are driving a car, you are not free to do anything you like with the steering wheel. You must use it to direct the car along your intended route.
You are only faced with choosing when you do not know the right choice. When you do know, you no longer have that choice. You cannot make your choice and have it still.
It does occasionally occur to me to kill myself, and in my really bad periods I do experience myself as prevented from choosing an eminently desirable path by my own earlier precommitments. But that’s neither here nor there.
Leaving the particulars aside… if there exists some question Q such that intelligence I1 finds Q difficult to answer and I2 finds Q easy to answer because I2 is a superintelligence with respect to I1, then I2 may well at some point consider Q, answer Q, and then move on to the next thing. Or, of course, it might never do so, depending on the relevance of Q to anything that occurs to I2… as you say, the space of possible decisions is enormous.
I fail to see what follows from this. Can you unpack your thinking a bit, here?