Actually, come to think of it, even specifying the desired behavior would be tricky. Like, if the agent assigned probability 1/2 to the proposition that tomorrow it'd transition from v to w, or held some other mixed hypothesis about possible future transitions, what rules should an ideal moral-learning reasoner follow today?
I'm not even sure what it should be doing. Mix over normalized versions of v and w? What if at least one is unbounded? Yeah, on reflection, I'm not sure what the Right Way is for a "conserves expected moral evidence" agent. There are some special cases that seem well specified, but I'm not sure how I'd want it to behave in the general case.
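To make the special-case part concrete: if both v and w are bounded and defined over the same finite set of outcomes, one candidate rule is to rescale each to [0, 1] and take the 50/50 mixture. Here's a minimal sketch of just that case, offered as one possible reading rather than as the Right Way; all names and toy numbers below are hypothetical:

```python
# A minimal sketch of "mix over normalized versions of v and w" for the
# special case where both value functions are bounded and defined over
# the same finite set of outcomes. All names here are hypothetical.

def normalize(values):
    """Rescale a bounded value function (dict: outcome -> value) to [0, 1]."""
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:
        # A constant value function: every outcome is equally preferred.
        return {o: 0.0 for o in values}
    return {o: (x - lo) / (hi - lo) for o, x in values.items()}

def mixed_value(v, w, p_transition=0.5):
    """One candidate for today's values given probability p_transition of
    transitioning from v to w tomorrow: a pointwise mixture of the
    normalized versions."""
    v_n, w_n = normalize(v), normalize(w)
    return {o: (1 - p_transition) * v_n[o] + p_transition * w_n[o] for o in v}

# Toy example: two value functions over three outcomes.
v = {"A": 0.0, "B": 5.0, "C": 10.0}
w = {"A": 3.0, "B": -1.0, "C": 0.0}
print(mixed_value(v, w))  # {'A': 0.5, 'B': 0.25, 'C': 0.625}
```

The point of the sketch is mostly to show where it breaks: the rescaling step needs a min and a max over outcomes, which is exactly what an unbounded v or w fails to supply.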
Ah, alright.