Conservation of expected moral evidence, clarified

You know that when you title a post with “clarified”, you’re just asking for the gods to smite you down, but let’s try...

There has been some confusion about the concept of “conservation of expected moral evidence” that I touched upon in my posts here and here. The fault for the confusion is mine, so this is a brief note to try and explain it better.

The canonical example is that of a child who wants to steal a cookie. The child gets its morality mainly from its parents, and strongly suspects that if it asks, its parents will indeed confirm that stealing cookies is wrong. So it decides not to ask, and happily steals the cookie.

I argued that this behaviour showed a lack of “conservation of expected moral evidence”: if the child knows what the answer would be, then that should be equivalent to actually asking. Some people got this immediately; others were confused, since the agents I defined seemed Bayesian, and Bayesian agents already obey conservation of expected evidence, so how could they violate that principle?

The answer is… both groups are right. The child can be modelled as a Bayesian agent reaching sensible conclusions. If it values “I don’t steal the cookie” at 0, “I steal the cookie without being told not to” at 1, and “I steal the cookie after being told not to” at −1, then its behaviour is rational, and those values are perfectly acceptable utility values over possible universes. So the child (like many value-loading agents) is a Bayesian agent with the usual properties.
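
To make this concrete, here’s a minimal sketch of the child as a plain expected-utility maximiser; the 0.99 credence is a made-up stand-in for “strongly suspects”:

```python
# The cookie-stealing child as a straightforward expected-utility maximiser.

P_TOLD_NOT_TO = 0.99  # credence that asking yields "stealing cookies is wrong"

def utility(steal, told_not_to):
    """The child's values over outcomes, as given in the text."""
    if not steal:
        return 0.0                            # "I don't steal the cookie"
    return -1.0 if told_not_to else 1.0       # penalty only if explicitly told

def expected_utility(ask, steal):
    if not ask:
        return utility(steal, told_not_to=False)   # never told, so no penalty
    return (P_TOLD_NOT_TO * utility(steal, told_not_to=True)
            + (1 - P_TOLD_NOT_TO) * utility(steal, told_not_to=False))

policies = {(ask, steal): expected_utility(ask, steal)
            for ask in (False, True) for steal in (False, True)}
print(max(policies, key=policies.get))  # (False, True): don't ask, and steal
```

Not asking and stealing scores 1, while asking first scores −0.98 in expectation, so the maximiser happily steals the cookie unasked.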

But we are adding extra structure to the universe. Based on our understanding of what value loading should be, we are decreeing that the child’s behaviour is incorrect. Though it doesn’t violate expected utility maximisation, it violates any sensible meaning of value loading. Our idea of value loading is that, in a sense, values should be independent of many contingent things. There is nothing intrinsically wrong with “stealing cookies is wrong iff the Milky Way contains an even number of pulsars”, but it violates our conception of what values should be. Similarly for “stealing cookies is wrong iff I ask about it”.

But let’s dig a bit deeper… Classical conservation of expected evidence fails in many cases. For instance, I can certainly influence the variable X = “what Stuart will do in the next ten seconds” (or at least, my decision theory is built on the assumption that I can influence it). My decisions change X’s expected value quite dramatically. What I can’t influence are facts that are not contingent on my actions. For instance, I can’t change my expectation of the number of pulsars the galaxy contained last year. Were I super-powerful, I could change my expectation of the number of pulsars it will contain next year, by building or destroying pulsars, for instance.
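
For contrast, here’s a toy check (numbers entirely invented) that the classical law does hold for something outside the agent’s control, like last year’s pulsar count: whatever the survey reports, the expectation of the posterior estimate equals the prior estimate.

```python
# Conservation of expected evidence for a variable the agent cannot influence:
# the number of pulsars the galaxy contained last year.

prior = {1: 0.5, 2: 0.5}                           # P(N pulsars last year)
likelihood = {("few", 1): 0.8, ("many", 1): 0.2,   # P(survey result | N)
              ("few", 2): 0.3, ("many", 2): 0.7}

def posterior(result):
    joint = {n: prior[n] * likelihood[(result, n)] for n in prior}
    z = sum(joint.values())
    return {n: p / z for n, p in joint.items()}

# Marginal probability of each survey result.
p_result = {r: sum(prior[n] * likelihood[(r, n)] for n in prior)
            for r in ("few", "many")}

prior_estimate = sum(n * p for n, p in prior.items())
expected_posterior_estimate = sum(
    p_result[r] * sum(n * q for n, q in posterior(r).items())
    for r in ("few", "many"))

# Both print 1.5 (up to rounding): the expected posterior equals the prior.
print(prior_estimate, expected_posterior_estimate)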

So conservation of expected evidence only applies to things that are independent of the agent’s decisions. When I say we need “conservation of expected moral evidence”, I’m saying that the agent should treat its (expected) morality as independent of its decisions. The child failed to do this in the example above, and that’s the problem.
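
One way to picture the fix, as a sketch reusing the made-up 0.99 credence from above: the moral cost of stealing should depend on what the answer would be, not on whether the question was actually asked.

```python
# A child that respects conservation of expected moral evidence: asking
# changes nothing in expectation, because the expected answer already counts.

P_WRONG = 0.99  # credence that the parents' answer would be "it's wrong"

def moral_utility(steal):
    if not steal:
        return 0.0
    # The penalty applies at the expected strength of the unasked answer.
    return P_WRONG * -1.0 + (1 - P_WRONG) * 1.0

print(moral_utility(True))   # -0.98: stealing already looks bad, unasked
print(moral_utility(False))  #  0.0: so this child doesn't steal the cookie
```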

So conservation of expected moral evidence is something that would be automatically true if morality were something real and objective, and is also a desideratum when constructing general moral systems in practice.