I’m not sure how this system avoids infinitarian paralysis. For all actions with finite consequences in an infinite universe (whether in space, time, distribution, or anything else), the change in the expected value resulting from those actions is zero. Actions that may have infinite consequences thus become the only ones that can matter under this theory in an infinite universe.
You could perhaps drag in more exotic forms of arithmetic such as surreal numbers or hyperreals, but then you need to rebuild measure theory and probability from the ground up in that basis. You will likely also need to adopt some unusual axioms such as some analogue of the Axiom of Determinacy to ensure that every distribution of satisfactions has an expected value.
I’m also not sure how this differs from Average Utilitarianism with a bounded utility function.
I’m not sure how this system avoids infinitarian paralysis. For all actions with finite consequences in an infinite universe (whether in space, time, distribution, or anything else), the change in the expected value resulting from those actions is zero.
The causal change from your actions is zero. However, there are still logical connections between your actions and the actions of other agents in very similar circumstances. And you can still consider these logical connections to affect the total expected value of life satisfaction.
It’s true, though, that my ethical system would fail to resolve infinitarian paralysis for someone using causal decision theory. I should have noted it requires a different decision theory. Thanks for drawing this to my attention.
As an example of the system working, imagine you are in a position to do great good for the world, for example by creating friendly AI, and you’re considering whether to do it. If you do decide to do it, that logically implies that any other agent sufficiently similar to you, in sufficiently similar circumstances, would also do it. Thus, if you decide to do it, the expected life satisfaction of an agent in circumstances of the form, “In a world with someone very similar to JBlack who has the ability to make awesome safe AI”, is higher. And the prior probability of ending up in such a world is non-zero. So, by deciding to make the safe AI, you can acausally increase the total moral value of the universe.
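To make this a little more concrete, here is a minimal sketch, in Python, of the kind of comparison involved. The prior, the satisfaction estimates, the situation class, and the assumption that all sufficiently similar agents decide the same way are purely illustrative inventions for the example, not part of the system’s formal specification.

```python
# Illustrative sketch only: the prior, the satisfaction estimates, and the
# situation class are made up for the example.
# Moral value is taken to be the prior-weighted expectation of life satisfaction
# over agent-situations, conditional on your decision. The decision matters
# acausally because all sufficiently similar agents in this situation class are
# assumed to decide the same way you do.

PRIOR_OF_SITUATION = 1e-12    # prior probability of "a world with someone very
                              # similar to you who can build safe AI"
SAT_IF_BUILD = 0.9            # estimated satisfaction in such worlds if the AI is built
SAT_IF_NOT_BUILD = 0.4        # estimated satisfaction in such worlds if it is not
OTHER_SITUATIONS_VALUE = 0.5  # expected satisfaction from all other situations,
                              # unaffected by this decision

def expected_moral_value(decide_to_build: bool) -> float:
    """Prior-weighted expected life satisfaction, conditional on the decision."""
    sat = SAT_IF_BUILD if decide_to_build else SAT_IF_NOT_BUILD
    return OTHER_SITUATIONS_VALUE + PRIOR_OF_SITUATION * sat

# The difference is tiny but strictly positive, so the comparison is not paralyzed:
print(expected_moral_value(True) - expected_moral_value(False))  # ~5e-13
```

The absolute numbers are meaningless; the point is only that the two options differ by a non-zero amount, because the decision shifts the conditional expected satisfaction of a situation class that has non-zero prior probability.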
I’m also not sure how this differs from Average Utilitarianism with a bounded utility function.
The average life satisfaction is undefined in a universe with infinitely many agents of varying life satisfaction: the value you get depends on the order in which you enumerate the agents, so there is no well-defined average to take. Thus, it suffers from infinitarian paralysis. If my system were used by a causal-decision-theoretic agent, it would also result in infinitarian paralysis, so for such an agent my system would be similar to average utilitarianism with a bounded utility function. But for agents with decision theories that consider acausal effects, it seems rather different.
Does this clear things up?
Yes, that does clear up both of my questions. Thank you!
Presumably the evaluation is not just some sort of average-over-actual-lifespan of some satisfaction rating for the usual reason that (say) annihilating the universe without warning may leave average satisfaction higher than allowing it to continue to exist, even if every agent within it would counterfactually have been extremely dissatisfied if they had known that you were going to do it. This might happen if your estimate of the current average satisfaction was 79% and your predictions of the future were that the average satisfaction over the next trillion years would be only 78.9%.
I’m not sure what your idea of the evaluation actually is though, and how it avoids making it morally right (and perhaps even imperative) to destroy the universe in such situations.
Presumably the evaluation is not just some sort of average-over-actual-lifespan of some satisfaction rating for the usual reason that (say) annihilating the universe without warning may leave average satisfaction higher than allowing it to continue to exist, even if every agent within it would counterfactually have been extremely dissatisfied if they had known that you were going to do it. This might happen if your estimate of the current average satisfaction was 79% and your predictions of the future were that the average satisfaction over the next trillion years would be only 78.9%.
This is a good thing to ask about; I don’t think I provided enough detail on it in the writeup.
I’ll clarify my measure of satisfaction. First off, note that it’s not the same as just asking agents, “How satisfied are you with your life?” and using those answers. As you pointed out, you could then morally get away with killing everyone (at least if you do it in secret).
Instead, calculate satisfaction as follows. Imagine hypothetically telling an agent everything significant about the universe, then giving them infinite processing power and infinite time to think, and asking them, “Overall, how satisfied are you with that universe and your place in it?” Their answer is the measure of their satisfaction with the universe.
So, imagine if someone was considering killing everyone in the universe (without them knowing in advance). Well, then consider what would happen if you calculated satisfaction as above. When the universe is described to the agents, they would note that they and everyone they care about would be killed. Agents usually very much dislike this idea, so they would probably rate their overall satisfaction with the course of the universe as low. So my ethical system would be unlikely to recommend such an action.
Now, my ethical system doesn’t strictly prohibit destroying the universe to avoid low life satisfaction in future agents. For example, suppose it’s determined that the future will be filled with very unsatisfied lives. Then it’s in principle possible for the system to justify destroying the universe to avoid this. However, destroying the universe would drastically reduce the satisfaction with the universe of the agents that do already exist, which would decrease the moral value of the world. This would come at a high moral cost, which would make my moral system reluctant to recommend an action that results in such destruction.
That said, it’s possible that the agents who currently exist, and thus would need to be killed, make up only a very small proportion of the agents in the universe. In that case the overall expected value of life satisfaction might not change by much if all the present agents were killed. So the ethical system, as stated, may be willing to do such things in extreme circumstances, despite the moral cost.
I’m not really sure whether this is a bug or a feature. Suppose you see that future agents will be unsatisfied with their lives, and the only way you can stop it is by ruining the lives of the agents that currently exist. And you see that the agents currently alive make up only a very small proportion of the agents that have ever existed. And suppose you have the option of destroying the universe. I’m not really sure what the morally best thing to do is in this situation.
Also, note that this verdict is not unique to my ethical system. Average utilitarianism, in a finite world, acts the same way. If you predict average life satisfaction in the future will be low, then average consequentialism could also recommend killing everyone currently alive.
And other aggregate consequentialist theories sometimes run into problematic(?) behavior related to killing people. For example, classical utilitarianism can recommend secretly killing all the unhappy people in the world, and then getting everyone else to forget about them, in order to decrease total unhappiness.
I’ve thought of a modification to the ethical system that potentially avoids this issue. Personally, though, I prefer the ethical system as stated. I can describe my modification if you’re interested.
I think the key idea of my ethical system is to, in an infinite universe, think about prior probabilities of situations rather than total numbers, proportions, or limits of proportions of them. And I think this idea can be adapted for use in other infinite ethical systems.
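In symbols (this is just my own shorthand for the idea, not notation from the writeup): if $P(s)$ is the prior probability of finding yourself in agent-situation $s$, and $\operatorname{sat}(s \mid d)$ is the extrapolated life satisfaction of an agent in $s$ given that agents relevantly similar to you choose decision $d$, then decisions are ranked by

$$V(d) \;=\; \mathbb{E}_{s \sim P}\big[\operatorname{sat}(s \mid d)\big] \;=\; \sum_{s} P(s)\,\operatorname{sat}(s \mid d)$$

(or the corresponding integral for a continuous space of situations). Because this is an expectation over a prior rather than a sum or average over infinitely many actual agents, it stays well defined in an infinite universe, and a decision can change it whenever it is logically linked to situations of non-zero prior probability.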
Right, I suspected the evaluation might be something like that. It does have the difficulty of being counterfactual and so possibly not even meaningful in many cases, but I do like the fact that it’s based on agent-situations rather than individual agent-actions.
On the other hand, evaluations from the point of view of agents that are sapient beings might be ethically completely dominated by those of 10^12 times as many agents that are ants, and I have no idea how such counterfactual evaluations might be applied to them at all.
Right, I suspected the evaluation might be something like that. It does have the difficulty of being counterfactual and so possibly not even meaningful in many cases.
Interesting. Could you elaborate?
I suppose counterfactuals can be tricky to reason about, but I’ll provide a little more detail on what I had in mind. Imagine making a simulation of an agent that is a fully faithful representation of its mind, but running that simulation in a modified environment that both gives it access to infinite computational resources and makes it ask, and answer, the question, “How desirable is that universe?” This isn’t fully specified; maybe the agent would give different answers depending on how the question is phrased or what its environment is. However, it at least doesn’t sound meaningless to me.
Basically, the counterfactual is supposed to be a way of asking for the agent’s coherent extrapolated volition, except the coherent part doesn’t really apply because it only involves a single agent.
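As a rough sketch of the procedure, written out as code (purely hypothetical: none of the functions named below exist, and a real mind obviously can’t literally be given unbounded compute; this is just the thought experiment made explicit):

```python
# Hypothetical sketch of the extrapolated-satisfaction counterfactual.
# Every function named here is a placeholder for a step of the thought
# experiment, not anything implementable.

def extrapolated_satisfaction(agent, universe) -> float:
    """What the agent would answer if fully informed and given unlimited time to reflect."""
    mind = faithful_simulation_of(agent)        # fully faithful model of the agent's mind
    mind.receive(description_of(universe))      # told everything significant about the universe
    mind.grant_unbounded_compute_and_time()     # the idealizing modification to its environment
    # The answer is taken to be a rating of the universe and the agent's place
    # in it, normalized (say) to the interval [0, 1].
    return mind.answer(
        "Overall, how satisfied are you with this universe and your place in it?"
    )
```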
On the other hand, evaluations from the point of view of agents that are sapient beings might be ethically completely dominated by those of 10^12 times as many agents that are ants, and I have no idea how such counterfactual evaluations might be applied to them at all.
Another good thing to ask. I should have made it clear, but I intended that only agents with actual preferences be asked for their satisfaction with the universe. If ants don’t actually have preferences, then they would not be included in the deliberation.
Now, there’s the problem that some agents might not be able to even conceive of the possible world in question. For example, maybe ants can understand simple aspects of the world like, “I’m hungry”, but be unable to understand things about the broader state of the universe. I don’t think this is a major problem, though. If an agent can’t even conceive of something, then I don’t think it would be reasonable to say it has preferences about it. So you can only query them on the desirability of the things they can conceive of.
It might be tricky precisely defining what counts as a preference, but I suppose that’s a problem with all ethical systems that care about preferences.
I’m certain that ants do in fact have preferences, even if they can’t comprehend the concept of preferences in abstract or apply them to counterfactual worlds. They have revealed preferences to quite an extent, as does pretty much everything I think of as an agent.
They might not be communicable, numerically expressible, or even consistent, which is part of the problem. When you’re doing the extrapolated satisfaction, how much of what you get reflects the actual agent and how much the choice of extrapolation procedure?
I’m certain that ants do in fact have preferences, even if they can’t comprehend the concept of preferences in abstract or apply them to counterfactual worlds. They have revealed preferences to quite an extent, as does pretty much everything I think of as an agent.
I think the question of whether insects have preferences is morally pretty important, so I’m interested in hearing what made you think they do have them.
I looked online for “do insects have preferences?”, and I saw articles saying they did. I couldn’t really figure out why they thought they did have them, though.
For example, I read that insects have a preference for eating green leaves over red ones. But I’m not really sure how people could have known this. If you see ants go to green leaves instead of red leaves when they’re hungry, this doesn’t seem like it would necessarily be due to any actual preferences. For example, maybe the ant just executes something like the following code:
if near_green_leaf() and is_hungry():
    go_to_green_leaf()
elif near_red_leaf() and is_hungry():
    go_to_red_leaf()
else:
    ...
That doesn’t really look like actual preferences to me. But I suppose this to some extent comes down to how you want to define what counts as a preference. I took preferences to actually be orderings between possible worlds indicating which one is more desirable. Did you have some other idea of what counts as preferences?
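To make what I mean by that explicit, here’s a toy illustration of a preference as an ordering over possible worlds, as opposed to the hard-coded stimulus-response rule above. The class and the scoring rule are invented purely for the example.

```python
# Toy illustration: a preference as an ordering over possible worlds.
# PossibleWorld and the scoring rule are invented purely for this example.

from dataclasses import dataclass

@dataclass
class PossibleWorld:
    colony_survives: bool
    ants_are_fed: bool

def score(world: PossibleWorld) -> float:
    """A made-up rule that induces an ordering over whole worlds."""
    return 1.0 * world.colony_survives + 0.5 * world.ants_are_fed

def prefers(world_a: PossibleWorld, world_b: PossibleWorld) -> bool:
    """True if world_a is ranked above world_b under this ordering."""
    return score(world_a) > score(world_b)
```

Having a preference, in the sense I mean, is having something like this ordering over worlds; the snippet above, by contrast, only reacts to immediate stimuli and doesn’t rank anything.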
They might not be communicable, numerically expressible, or even consistent, which is part of the problem. When you’re doing the extrapolated satisfaction, how much of what you get reflects the actual agent and how much the choice of extrapolation procedure?
I agree that to some extent their extrapolated satisfactions will come down to the specifics of the extrapolation procedure.
I don’t want us to get too distracted here, though. I don’t have a rigorous, non-arbitrary specification of what an agent’s extrapolated preferences are. However, that isn’t the problem I was trying to solve, nor is it a problem specific to my ethical system. My system is intended to provide a method of coming to reasonable moral conclusions in an infinite universe, and it seems to me that it does so. But I’m very interested in any other thoughts you have on it with respect to whether it correctly handles moral recommendations in infinite worlds. Does it seem reasonable to you? I’d like to make an actual post about this, with the clarifications we made included.