No, I don’t intend “experience-subject” to pick out a specific time. (It’s not obvious to me whether a variant of your system that worked that way would be better or worse than your system as it is.) I’m using that term rather than “agent” because—as I think you point out in the OP—what matters for moral relevance is having experiences rather than performing actions.
So, anyway, I think I now agree that your system does indeed do approximately what you say it does, and many of my previous criticisms do not in fact apply to it; my apologies for the many misunderstandings.
The fact that it’s lavishly uncomputable is a problem for using it in practice, of course :-).
I have some other concerns, but haven’t given the matter enough thought to be confident about how much they matter. For instance: if the fundamental thing we are considering probability distributions over is programs specifying a universe and an experience-subject within that universe, then it seems like maybe physically bigger experience-subjects get treated as more important because they’re “easier to locate”, and that seems pretty silly. But (1) I think this effect may be fairly small, and (2) perhaps physically bigger experience-subjects should on average matter more because size probably correlates with some sort of depth-of-experience?
The fact that it’s lavishly uncomputable is a problem for using it in practice, of course :-).
Yep. To be fair, though, I suspect any ethical system that respects agents’ arbitrary preferences would also be incomputable. As a silly example, consider an agent whose terminal values are, “If Turing machine T halts, I want nothing more than to jump up and down. However, if it doesn’t halt, then it is of the utmost importance to me that I never jump up and down and instead sit down and frown.” Then any ethical system that cares about those preferences is incomputable.
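To make the uncomputability concrete, here is a minimal Python sketch of what that agent’s utility function would have to look like (the names halts and utility are mine, purely for illustration). Evaluating it requires deciding whether T halts, which no program can do for arbitrary T, so any ethical system that aggregates this preference inherits the same problem.

```python
def halts(T) -> bool:
    """Return True iff Turing machine T halts on empty input.
    No correct, always-terminating implementation of this exists
    (the halting problem); this stub is only here so the example runs."""
    raise NotImplementedError("undecidable for arbitrary T")


def utility(jumped_up_and_down: bool, T) -> int:
    """The example agent's terminal values: it wants to jump up and
    down iff T halts, and to sit down and frown iff T doesn't."""
    if halts(T):
        return 1 if jumped_up_and_down else 0
    return 0 if jumped_up_and_down else 1
```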
Now this is a pretty silly example, but I wouldn’t be surprised if there were more realistic ones. For one, it’s important to respect other agents’ moral preferences, and I wouldn’t be surprised if their ideal moral-preferences-on-infinite-reflection would be incomputable. It seems to me that moral philosophers act as some approximation of, “Find the simplest model of morality that mostly agrees with my moral intuitions”. If the models they consider include incomputable ones, or ones that reference arbitrary Turing machines that may or may not halt, then the moral value of the world according to them would in fact be incomputable, so any ethical system that cares about preferences-given-infinite-reflection would also be incomputable.
I have some other concerns, but haven’t given the matter enough thought to be confident about how much they matter. For instance: if the fundamental thing we are considering probability distributions over is programs specifying a universe and an experience-subject within that universe, then it seems like maybe physically bigger experience-subjects get treated as more important because they’re “easier to locate”, and that seems pretty silly. But (1) I think this effect may be fairly small, and (2) perhaps physically bigger experience-subjects should on average matter more because size probably correlates with some sort of depth-of-experience?
I’m not that worried about agents that are physically bigger, but it’s true that some agents, or agent-situation descriptions, may be easier to pick out (in terms of having a short description length) than others. Maybe there’s something really special about an agent that makes it easy to pin down.
I’m not entirely sure whether this would be a bug or a feature. But if it’s a bug, I think it could be dealt with by just choosing the right prior over agent-situations. Specifically, for any description of an environment with a finite set of agents A, the probability of ending up as a∈A, conditioned only on being one of the agents in that environment, should be constant across all a∈A. This way, the prior isn’t biased in favor of the agents that are easy to pick out.
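Here is a minimal sketch of that reweighting (Python, with made-up names, and assuming the prior is handed to us as a finite table of probabilities over (environment, agent) pairs rather than as an actual prior over programs). Each environment keeps its total prior mass, but that mass is split evenly among the environment’s agents, so an agent being easy to describe no longer earns it extra weight.

```python
from collections import defaultdict

def uniformize_over_agents(prior):
    """prior: dict mapping (environment, agent) pairs to probabilities,
    e.g. derived from description lengths. Returns a new prior with the
    same total probability for each environment, but with that
    probability divided equally among the agents appearing in it."""
    env_mass = defaultdict(float)   # total prior mass of each environment
    env_agents = defaultdict(set)   # agents that appear in each environment
    for (env, agent), p in prior.items():
        env_mass[env] += p
        env_agents[env].add(agent)

    return {
        (env, agent): env_mass[env] / len(env_agents[env])
        for (env, agent) in prior
    }

# e.g. one environment with two agents, one of which is easier to describe:
prior = {("E", "simple agent"): 0.3, ("E", "complicated agent"): 0.1}
print(uniformize_over_agents(prior))
# {('E', 'simple agent'): 0.2, ('E', 'complicated agent'): 0.2}
```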