I define free will in an unusual way, which I find to be the most general concept serving a number of related purposes. It is only defined with respect to a given level of intelligence. For example, something whose behaviour can be completely predicted by an intelligence has no free will with respect to that intelligence, since it is pointless to talk about what it 'could' do; there is only one known thing that is possible. Also, if a system does not, before choosing an output, perform computations about the properties of various possible outputs that are too complex to predict, I would not say that it has free will, just that its output is uncertain. In some ways, this is the same as the definition of 'possible', but the distinction is useful for thinking about my own free will and for discussing others' free will with them, including in decision theory contexts.