It seems to me that you’re changing the subject, or maybe making inferential jumps that are too long for me.
The information to determine which events are possible actions is absent from your model. You can’t calculate it within your setting, only postulate.
If the overarching goal of this post was finding ways to represent human preference (did you imply that? I can’t tell), then I don’t understand how it brings us closer to that goal.
The Hofstadter’s Law of Inferential Distance: What you are saying is always harder to understand than you expect, even when you take into account Hofstadter’s Law of Inferential Distance.
Of course this post is only a small side-note, and it says nothing about which events mean what. Human preference is a preference, so even without details the discussion of preference-in-general has some implications for human preference, which the last paragraph of the post alluded to, with regard to picking priors for Bayesian math.