Free Will as Unsolvability by Rivals

Nadia wanted to solve Alonzo. To reduce him to a canonical, analytic representation, sufficient to reconfigure him at will. If there was a potential Alonzo within potential-Alonzo-space, say, who was utterly devoted to Nadia, who would dote on her and die for her, an Alonzo-solution would make its generation trivial.
from True Names, by Cory Doctorow and Benjamin Rosenbaum
Warning: this post tends toward the character of mainstream philosophy, in that it relies on the author’s intuitions to draw inferences about the nature of reality.
If you are dealing with an intelligence vastly more or less intelligent than yourself, there is no contest. One of you can play the other like tic-tac-toe. The stupid party’s values are simply irrelevant to the final outcome.
If you are dealing with an intelligence extremely close to your own—say, two humans within about five IQ points of each other—then both parties’ values will significantly affect the outcome.
If you are dealing with an intelligence moderately more or less intelligent than yourself, such as a world-class politician or an average eight-year-old child, respectively, then the weaker intelligence might be able to slightly affect the outcome.
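To make the lopsided case concrete, here is a toy sketch of what "playing the other like tic-tac-toe" means: an agent that can perfectly simulate its rival's decision procedure can force its preferred outcome no matter which values the rival holds. Everything in the block—the game, the payoff table, and both policies—is my own invention for illustration, not anything from the original post.

```python
import itertools

MOVES = ["a", "b"]

def outcome(solver_move, weak_move):
    # A hypothetical payoff table, constructed so that for every
    # possible weak_move, some solver_move yields "solver_wins".
    table = {
        ("a", "a"): "solver_wins",
        ("a", "b"): "weak_wins",
        ("b", "a"): "draw",
        ("b", "b"): "solver_wins",
    }
    return table[(solver_move, weak_move)]

def weak_policy(utility):
    # The weaker agent picks the move that looks best under its own
    # utility function, naively assuming the solver plays "a".
    return max(MOVES, key=lambda m: utility[outcome("a", m)])

def solver_policy(weak_utility):
    # The solver has "solved" its rival: it simulates weak_policy
    # exactly, then best-responds to the predicted move.
    predicted = weak_policy(weak_utility)
    for m in MOVES:
        if outcome(m, predicted) == "solver_wins":
            return m

# Sweep every possible set of values the weaker agent could hold.
for prefs in itertools.permutations([0, 1, 2]):
    weak_utility = dict(zip(["solver_wins", "draw", "weak_wins"], prefs))
    s = solver_policy(weak_utility)
    w = weak_policy(weak_utility)
    print(prefs, "->", outcome(s, w))
```

Every line prints `solver_wins`: sweeping the weaker agent's values changes which move it makes, but never the result. That is the sense in which its values are "simply irrelevant to the final outcome."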
If we formalize free will as the fact that what we want to do has a causal effect on what we actually do, then perhaps we can characterize the sensation of free will—the desire to loudly assert in political arguments that we have free will—as a belief that our values will have a causal effect on the eventual outcome of reality.
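One way to sharpen that belief (my gloss, borrowing Pearl-style intervention notation; the post itself stays informal): let $V_A$ be agent $A$'s values and $W$ the eventual outcome of reality. The felt claim is that intervening on the values shifts the distribution over outcomes,

$$\exists\, v, v' :\quad P\big(W \mid \operatorname{do}(V_A = v)\big) \;\neq\; P\big(W \mid \operatorname{do}(V_A = v')\big),$$

and facing a solver is exactly the regime where the rival's policy makes $W$ nearly constant in $V_A$, so the inequality collapses.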
This matches the sense that facing a terrifyingly powerful intelligence, one that can solve us completely, strips away our free will. That sense, in turn, probably explains the common misconception that free will is incompatible with reductionism: knowing that an explanation of us exists feels like having the explanation already be known by someone. We don’t want to be understood.
It matches the sense that a person’s free will can be denied by forcing them into a straitjacket and tossing them in a padded cell. It matches the assumption that not having free will would feel like sitting at the wheel of a vehicle that was running on autopilot and refusing manual commands.
In general, we can distinguish three successive stages at which free will can be cut off:
The creature can be constructed non-heuristically to begin with; that is, it lacks a utility function.
The creature can control insufficient resources to be in a winnable state; that is, it is physically helpless.
The creature can be outsmarted; that is, it has a vastly superior opponent.
Probably the last two, and possibly all three, cannot remain cleanly separated under close scrutiny. But the model has such a deep psychological appeal that I think it must be useful somehow, if only as an intermediate step in easing lay folk into compatibilism, or in predicting and manipulating the vast majority of humans who believe or alieve it.
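As a summary, here is a deliberately crude schematic, mine rather than the post's, that treats the three stages as successive checks on whether a creature's values can still reach the outcome. The Creature fields, the VASTLY threshold, and the stage labels are all invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical gap, echoing the post's talk of IQ points; "vastly" is a
# soft notion, so this sharp constant is a deliberate simplification.
VASTLY = 30

@dataclass
class Creature:
    has_utility_function: bool  # stage 1: was it built to want anything?
    resources: int              # stage 2: can it reach a winnable state?
    intelligence: int           # stage 3: can it avoid being solved?

def free_will_cutoff(creature: Creature,
                     needed_resources: int,
                     rival_intelligence: int) -> Optional[str]:
    """Return the first stage that severs the causal link from the
    creature's values to the outcome, or None if the link survives."""
    if not creature.has_utility_function:
        return "stage 1: constructed non-heuristically; no values at all"
    if creature.resources < needed_resources:
        return "stage 2: physically helpless; no winnable state in reach"
    if rival_intelligence > creature.intelligence + VASTLY:
        return "stage 3: outsmarted; a vastly superior rival solves it"
    return None  # values still causally reach the outcome

human = Creature(has_utility_function=True, resources=10, intelligence=100)
print(free_will_cutoff(human, needed_resources=5, rival_intelligence=105))
# -> None: a near-peer rival leaves the outcome partly up to our values.
print(free_will_cutoff(human, needed_resources=5, rival_intelligence=1000))
# -> stage 3: outsmarted; a vastly superior rival solves it
```

The crude thresholds are also where the schematic breaks down, in line with the post's own caveat: under close scrutiny, resources and intelligence trade off, so stages 2 and 3 (and possibly 1) blur into each other.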