Hmm, maybe the way to fix this is to have each agent in the parliament believe that future experiments will validate its position. More precisely, the agent’s own predictions condition on its value system being correct. Then the parliament would vote to expend resources on information about the value system.
Is it possible to enforce that? It seems like specifying a bottom line to me.
It would be specifying a bottom line if each sub-agent could look at any result and say afterwards that this result supports its position. That’s not what I’m suggesting. I’m saying that each sub-agent should make predictions as if its own value system is correct, rather than having each sub-agent use the same set of predictions generated by the super-agent.
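To make that concrete, here is a minimal sketch of the mechanism I have in mind (the agent names, numbers, and voting rule are purely illustrative assumptions, not a worked-out formulation): each sub-agent scores a proposed experiment while conditioning on its own value system being correct, and the parliament then votes on whether to pay for the information.

```python
# Sketch only: each sub-agent evaluates an experiment under the assumption
# that its OWN value system is correct, rather than using shared predictions
# from the super-agent. All names and numbers below are illustrative.

from dataclasses import dataclass

@dataclass
class SubAgent:
    name: str
    # Probability this agent assigns to the experiment vindicating its values,
    # conditional on its own value system being correct (hence > 0.5 here).
    p_vindicated: float
    # Utility of the parliament adopting its values vs. not.
    u_win: float = 1.0
    u_lose: float = 0.0

    def expected_value_of_experiment(self, cost: float) -> float:
        """Expected utility of running the experiment, by this agent's own lights."""
        return self.p_vindicated * self.u_win + (1 - self.p_vindicated) * self.u_lose - cost

    def votes_to_run(self, cost: float, status_quo: float = 0.5) -> bool:
        # Vote yes if the experiment beats doing nothing (an assumed status-quo value).
        return self.expected_value_of_experiment(cost) > status_quo

def parliament_decides(agents: list[SubAgent], cost: float) -> bool:
    """Simple majority vote on whether to expend resources on the experiment."""
    yes_votes = sum(agent.votes_to_run(cost) for agent in agents)
    return yes_votes > len(agents) / 2

if __name__ == "__main__":
    parliament = [
        SubAgent("hedonist", p_vindicated=0.8),
        SubAgent("preference-utilitarian", p_vindicated=0.7),
        SubAgent("deontologist", p_vindicated=0.9),
    ]
    # Each faction expects the evidence to favor it, so the parliament as a
    # whole is willing to pay for information about the value systems.
    print(parliament_decides(parliament, cost=0.1))  # True
```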
Quick dive into the concrete: I think that legalization of marijuana would be a good thing … but that evaluation is based on my current state of knowledge, including several places where my knowledge is ambiguous. By Bayes' Rule, I can't possibly have a nonzero expectation for the change in my evaluation based on the discovery of new data.
Am I misunderstanding the situation you hypothesize?
You can have a nonzero expectation for the change in someone else’s evaluation, which is what I was talking about. The super-agent and the sub-agent have different beliefs.
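To spell out the distinction (a sketch, with $V$ standing for the evaluation and $D$ for the new data): under your own distribution $P$, the law of total expectation forces the expected change in your evaluation to be zero,

$$
\mathbb{E}_{D \sim P}\big[\,\mathbb{E}_P[V \mid D]\,\big] = \mathbb{E}_P[V]
\quad\Longrightarrow\quad
\mathbb{E}_{D \sim P}\big[\,\mathbb{E}_P[V \mid D] - \mathbb{E}_P[V]\,\big] = 0,
$$

but for a super-agent with distribution $P$ considering a sub-agent with a different distribution $Q$, there is no such constraint:

$$
\mathbb{E}_{D \sim P}\big[\,\mathbb{E}_Q[V \mid D]\,\big] \neq \mathbb{E}_Q[V] \text{ in general.}
$$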
I see—that is sensible.