Is it possible to enforce that? It seems like specifying a bottom line to me.
It would be specifying a bottom line if each sub-agent could look at any result and say afterwards that this result supports its position. That’s not what I’m suggesting. I’m saying that each sub-agent should make predictions as if its own value system is correct, rather than having each sub-agent use the same set of predictions generated by the super-agent.
Quick dive into the concrete: I think that legalization of marijuana would be a good thing … but that evaluation is based on my current state of knowledge, including several places where my knowledge is ambiguous. By Bayes' Rule, I can't possibly have a nonzero expectation for the change in my evaluation based on the discovery of new data.
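(A minimal formalization of that point, in notation the thread itself doesn't use: let H be "legalization is good", let E range over the possible new data, and let P stand for my current beliefs, treating my evaluation as the probability I assign to H.)

$$\mathbb{E}_{E \sim P}\big[P(H \mid E)\big] \;=\; \sum_{e} P(e)\,P(H \mid e) \;=\; \sum_{e} P(H, e) \;=\; P(H)$$

So, averaged over the data I expect to see, my post-update evaluation already equals my current one; the expected change is zero.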
Am I misunderstanding the situation you hypothesize?
You can have a nonzero expectation for the change in someone else’s evaluation, which is what I was talking about. The super-agent and the sub-agent have different beliefs.
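(Continuing the same notation, with the added assumption that P stands for the sub-agent's beliefs and Q for the super-agent's: the expectation of the sub-agent's updated evaluation is now taken under Q, not P.)

$$\mathbb{E}_{E \sim Q}\big[P(H \mid E)\big] \;=\; \sum_{e} Q(e)\,P(H \mid e)$$

This need not equal P(H) once Q(e) differs from P(e); the identity above relied on the weights and the conditional coming from the same distribution.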
I see—that is sensible.