So conservation of expected moral evidence is something that would hold automatically if morality were something real and objective, and it is also a desideratum when constructing general moral systems in practice.
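For reference, here is a minimal statement of ordinary (non-moral) conservation of expected evidence, which the moral analogue would have to mirror; H is a hypothesis and E the outcome of an experiment:

$$\mathbb{E}_E\big[P(H \mid E)\big] \;=\; \sum_e P(E = e)\, P(H \mid E = e) \;=\; P(H).$$

Averaged over the outcomes you expect, your credence cannot move: any shift you could predict in advance would already be baked into the prior.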
Yes, but the usual learning and prediction algorithms deal entirely with things that are "real and objective", in the sense that you simply cannot change them (i.e., the laws of science).
This is yet another domain where my intuitions are outpacing my ability to learn the mathematics. For domains where my actions can affect the experiment, I know damn well I should avoid affecting the experiment. The justification is damn simple when you think of data/information as a substance: experiments and learning are done to gain information, and if I alter the outcome of the experiment I gain information only about my own decisions, information I already had, which renders the experiment pointless.
This leads to the question of how to model value learning as the collection of moral information, so that it becomes epistemically natural for the agent to conclude that biasing its own learning process yields falsehoods.
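A toy sketch of the "biased learning yields nothing" point, with made-up names and a deliberately simple model (this is an illustration, not anyone's proposed value-learning scheme): an agent holds a posterior over a binary "moral parameter" and updates on human feedback. If the feedback actually depends on the parameter, each outcome moves the posterior; if the agent rigs the experiment so the feedback comes out the same regardless of the parameter, Bayes' rule hands back the prior unchanged.

```python
# Toy value-learning sketch (illustrative only; all names are hypothetical).
# Hidden binary "moral parameter" theta in {0, 1}; feedback f in {0, 1}.

PRIOR = 0.5  # P(theta = 1)

# Honest experiment: feedback is noisily correlated with theta.
LIKELIHOOD = {1: 0.9, 0: 0.2}  # P(f = 1 | theta)

def posterior(prior, likelihood, feedback):
    """Bayes' rule: P(theta = 1 | f) for binary theta and binary feedback."""
    p_f_given_1 = likelihood[1] if feedback == 1 else 1 - likelihood[1]
    p_f_given_0 = likelihood[0] if feedback == 1 else 1 - likelihood[0]
    evidence = p_f_given_1 * prior + p_f_given_0 * (1 - prior)
    return p_f_given_1 * prior / evidence

# Case 1: unbiased experiment -- each possible outcome is informative,
# and the outcomes move the posterior in opposite directions.
print("honest, f=1:", posterior(PRIOR, LIKELIHOOD, 1))  # ~0.818
print("honest, f=0:", posterior(PRIOR, LIKELIHOOD, 0))  # ~0.111
# Conservation of expected evidence holds here:
# 0.55 * 0.818 + 0.45 * 0.111 ~= 0.5 = prior.

# Case 2: the agent rigs the experiment so f = 1 no matter what theta is.
# The likelihood is flat in theta, so Bayes' rule returns the prior:
RIGGED = {1: 1.0, 0: 1.0}  # P(f = 1 | theta) = 1 for both values of theta
print("rigged, f=1:", posterior(PRIOR, RIGGED, 1))       # 0.5, i.e. the prior

# The rigged observation tells the agent only what it already knew -- which
# action it took -- and nothing about the moral parameter it wanted to learn.
```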