You said:

Without a measure, you become incapable of making any decisions, because the past ceases to be predictive of the future

Measure doesn’t help if each action has all possible consequences: you’d just end up with the consequences of all actions having the same measure! Measure helps with managing (reasoning about) infinite collections of consequences, but there still must be non-trivial and “mathematically crisp” dependence between actions and consequences.
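As a toy illustration of that point (all outcomes, utilities, and weights below are made up, not anything from the discussion): if the measure over consequences is the same no matter which action is taken, every action gets the same expected utility, so the measure by itself cannot pick an action; what does the work is the dependence of the measure on the action.

```python
# Toy example: an action-independent measure over consequences gives every
# action the same expected utility, so it cannot guide a choice by itself.

outcomes = ["ball_on_floor", "ball_in_hand", "ball_vanishes"]
utility = {"ball_on_floor": 1.0, "ball_in_hand": 0.0, "ball_vanishes": -1.0}

def expected_utility(measure):
    return sum(measure[o] * utility[o] for o in outcomes)

# "Each action has all possible consequences" with the same measure:
same_for_every_action = {"ball_on_floor": 1/3, "ball_in_hand": 1/3, "ball_vanishes": 1/3}
# Whatever action we consider, the number below is identical.
print(expected_utility(same_for_every_action))

# The useful case: the measure *depends* on the action.
measure_given_action = {
    "drop_ball": {"ball_on_floor": 0.99, "ball_in_hand": 0.005, "ball_vanishes": 0.005},
    "hold_ball": {"ball_on_floor": 0.005, "ball_in_hand": 0.99, "ball_vanishes": 0.005},
}
best = max(measure_given_action, key=lambda a: expected_utility(measure_given_action[a]))
print(best)  # drop_ball -- the action/consequence dependence is doing the work
```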
No, it could help because the measure could be attached to world-histories, so there is a measure for “(drop ball) leads to (ball falls downwards)”, which is effectively the kind of thing our laws of physics do for us.
There is also a set of world-histories satisfying (drop ball) which is distinct from the set of world-histories satisfying NOT(drop ball). Of course, by throwing this piece of world model out the window, and allowing only measures to compensate for its absence, you do make measures indispensable. The problem with what you were saying is the connotation that measure is somehow the magical world-modeling juice, which it’s not. (That is, I don’t necessarily disagree, but I don’t want this particular solution of using measure to be seen as directly answering the question of predictability, since it can be understood as a curiosity-stopping mysterious answer by someone insufficiently careful.)
I don’t see what the problem is with using measures over world histories as a solution to the problem of predictability.
If certain histories have relatively very high measure, then you can use that fact to derive useful predictions about the future from a knowledge of the present.
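A rough sketch of how that could work, with invented histories and weights: attach the measure to whole world-histories, condition on what is true of the present (here, the action taken), and read off how much of the remaining measure falls on each future.

```python
# Toy sketch: predict the future by conditioning a measure over
# world-histories on a present fact.

# Each history is (action, what_happens_next); the measure weights it.
histories = {
    ("drop_ball", "falls_down"): 0.98,
    ("drop_ball", "floats_up"):  0.02,
    ("hold_ball", "stays_put"):  0.99,
    ("hold_ball", "falls_down"): 0.01,
}

def predict(present_action):
    """Conditional measure over futures, given what is true of the present."""
    consistent = {h: w for h, w in histories.items() if h[0] == present_action}
    total = sum(consistent.values())
    return {future: w / total for (_, future), w in consistent.items()}

print(predict("drop_ball"))  # falls_down ~0.98, floats_up ~0.02
```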
It’s not a generally valid solution (there are solutions that don’t use measures), though it’s a great solution for most purposes. It’s just that using measures is not a necessary condition for consequentialist decision-making, and I found that thinking in terms of measures is misleading for the purposes of understanding the nature of control.
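One toy sketch of what a measure-free alternative could look like (the names are invented, only meant to show that no weights are required): model the world as the set of histories consistent with each action, and choose an action for which every consistent history satisfies the goal.

```python
# Toy sketch of a measure-free world model: just the set of histories
# consistent with each action, with no weights attached.

consistent_histories = {
    "drop_ball": {("drop_ball", "falls_down")},
    "hold_ball": {("hold_ball", "stays_put"), ("hold_ball", "slips_and_falls")},
}

def goal(history):
    _, outcome = history
    return outcome == "falls_down"

def guaranteed_actions():
    """Actions whose every consistent history satisfies the goal -- no measure used."""
    return [a for a, hs in consistent_histories.items() if all(goal(h) for h in hs)]

print(guaranteed_actions())  # ['drop_ball']
```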
Ah, I see, sufficient but not necessary.