Really interesting, but I’m a bit confused about something. Unless I misunderstand, you’re claiming this has the property of conservation of moral evidence… But near as I can tell, it doesn’t.
Conservation of moral evidence would imply that if the agent expected that tomorrow it would transition from v to w, then right now it would already be acting on w rather than v (except that it would be indifferent as to whether or not it actually transitions to w). But what you have here, if I understood you correctly, will act on v until the moment it transitions to w, even though it knew in advance that it was going to transition to w.
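To pin down the property I have in mind (this is just my own notation, by analogy with conservation of expected evidence for beliefs, not anything taken from the post):

```latex
% Conservation of expected moral evidence, stated by analogy with
% conservation of expected evidence for beliefs (E[posterior] = prior).
% Here u_t is the value function the agent acts on at time t, and the
% expectation is over the agent's current beliefs about how its values
% will change.
\[
  u_t \;=\; \mathbb{E}_t\!\left[ u_{t+1} \right]
\]
% In particular, if the agent is certain it will transition from v to w
% tomorrow, then E_t[u_{t+1}] = w, so it should already be acting on w today.
```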
Indeed! An ideal moral reasoner could not predict the changes to their moral system.
I couldn’t guarantee that, but instead I got a weaker condition: an agent that didn’t care about the changes to their moral system.
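In toy form, glossing over how the indifference is actually achieved and over any normalization issues, the contrast is roughly the following (an illustrative sketch only; E and transition_dist are my own stand-ins, not the actual construction):

```python
# Toy contrast between the two conditions. E(u, a) stands for the expected
# utility of action a under value function u; transition_dist is a list of
# (probability, value_function) pairs giving the agent's credences over
# which value function it will hold tomorrow.

def act_conserving_moral_evidence(actions, transition_dist, E):
    """The 'ideal' agent: already acts today on its expected future values."""
    def mixed(a):
        return sum(p * E(u, a) for p, u in transition_dist)
    return max(actions, key=mixed)

def act_indifferent(actions, current_u, E):
    """The weaker condition: acts on its current value function until the
    transition actually happens. The 'doesn't care about the change' part
    (no incentive to cause or prevent the transition) needs extra machinery
    that isn't shown here."""
    return max(actions, key=lambda a: E(current_u, a))
```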
Ah, alright.
Actually, come to think of it, even specifying the desired behavior would be tricky. Say the agent assigned a probability of 1/2 to the proposition that tomorrow it would transition from v to w, or held some other mixed hypothesis regarding possible future transitions: what rules should an ideal moral-learning reasoner follow today?
I’m not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one of them is unbounded? Yeah, on reflection, I’m not sure what the Right Way is for a “conserves expected moral evidence” agent. There are some special cases that seem to be well specified, but I’m not sure how I’d want it to behave in the general case.
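For the bounded case, the kind of thing I’m imagining (just one possible scheme, range-normalizing each candidate over the outcomes under consideration; I’m not claiming it’s the Right Way, and it visibly breaks once a utility function is unbounded) would look roughly like this:

```python
# One possible way to "mix over normalized versions of v and w":
# affinely rescale each candidate utility function to [0, 1] over the
# outcomes under consideration, then take the probability-weighted mixture.
# Only well defined when each candidate is bounded on those outcomes,
# which is exactly where the unbounded case causes trouble.

def normalize(u, outcomes):
    """Rescale u so its minimum over `outcomes` is 0 and its maximum is 1."""
    values = [u(o) for o in outcomes]
    lo, hi = min(values), max(values)
    if hi == lo:
        return lambda o: 0.0  # u is indifferent over these outcomes
    return lambda o: (u(o) - lo) / (hi - lo)

def mixed_utility(candidates, outcomes):
    """candidates: list of (probability, utility_function) pairs for the
    value functions the agent might hold tomorrow. Returns the mixture of
    their normalized versions, which the agent could act on today."""
    normalized = [(p, normalize(u, outcomes)) for p, u in candidates]
    return lambda o: sum(p * nu(o) for p, nu in normalized)

# Example: probability 1/2 of keeping v, probability 1/2 of moving to w.
v = lambda o: o["apples"]
w = lambda o: o["oranges"]
outcomes = [{"apples": 1, "oranges": 0}, {"apples": 0, "oranges": 1}]
u_today = mixed_utility([(0.5, v), (0.5, w)], outcomes)
best = max(outcomes, key=u_today)  # both outcomes score equally here
```

But whether that is the behavior one should actually want is exactly the part I’m unsure about.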