A final thing is that I’m puzzled by your claim that selfish and average utility altruist agents won’t care about Doom, and so it won’t affect their decisions. Won’t average utilitarians worry about the negative utility (pain, distress) of agents who are going to face Doom, and consider actions which will mitigate that pain? Won’t selfish agents worry about facing Doom themselves, and engage in survivalist “prepping” (or if that’s going to be no use at all, party like there’s no tomorrow)?
I was simplifying when I said “didn’t care”. And if there’s negative utility around, things are different (I was envisaging the doomsday scenario as something along the lines of painless universal sterility). But let’s go with your model, and say that doomsday will be something painful (slow civilization collapse, say). How will average and total altruists act?
Well, an average altruist would not accept an increase in the risk of doom in exchange for other gains. The doom is very bad, and would mean a small population, so the average badness is large. Gains in the case where doom doesn’t happen would be averaged over a very large population, and so would count for much less. The average altruist is willing to sacrifice a lot to avoid doom (but note that this argument needs doom to mean both a small population AND bad stuff).
What about the total altruist? Well, they still don’t like the doom. But for them, the benefits in the “no doom” scenario are not diluted. They would be willing to accept a slight increase in the risk of doom in exchange for some benefit to a lot of people in the no-doom situation. They would turn on the reactor that could provide limitless free energy to the whole future of the human species, even if there were a small risk of catastrophic meltdown.
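To make the contrast concrete, here is a toy expected-utility calculation for that reactor decision. It is only a minimal sketch: the probabilities, population sizes, and per-person utilities are made-up numbers chosen for illustration, not anything taken from the actual argument.

```python
# Toy model of the reactor decision (all numbers are illustrative assumptions):
# turning on the reactor adds a small extra probability of a painful doom,
# in exchange for a per-person benefit to everyone in the (vastly more
# populous) no-doom future.

P_DOOM = 0.10        # baseline probability of doom (assumed)
EXTRA_RISK = 0.01    # extra doom probability from turning on the reactor (assumed)

N_DOOM = 1e9         # population living through a painful collapse (assumed)
N_NO_DOOM = 1e15     # total future population if doom is avoided (assumed)

U_DOOM = -100.0      # per-person utility under a painful doom (assumed)
U_BASE = 1.0         # per-person utility in the no-doom world (assumed)
BENEFIT = 0.5        # per-person gain from limitless free energy (assumed)


def expected_average(extra_risk=0.0, benefit=0.0):
    """Expected *average* (per-person) utility; population sizes drop out."""
    p = P_DOOM + extra_risk
    return p * U_DOOM + (1 - p) * (U_BASE + benefit)


def expected_total(extra_risk=0.0, benefit=0.0):
    """Expected *total* utility, summed over whoever exists in each world."""
    p = P_DOOM + extra_risk
    return p * N_DOOM * U_DOOM + (1 - p) * N_NO_DOOM * (U_BASE + benefit)


avg_change = expected_average(EXTRA_RISK, BENEFIT) - expected_average()
tot_change = expected_total(EXTRA_RISK, BENEFIT) - expected_total()

print(f"average altruist, change in expected utility: {avg_change:+.3f}")  # negative: refuses
print(f"total altruist, change in expected utility:   {tot_change:+.3e}")  # hugely positive: accepts
```

With these particular numbers the average altruist’s expected utility falls when the reactor is switched on (the slight extra chance of a very bad per-person outcome outweighs the per-person benefit), while the total altruist’s rises enormously, because the benefit is multiplied by the vast no-doom population.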
So the fact that these two would reason differently is not unexpected. But what I’m trying to get at is that there is no single, simple “doomsday argument” for ADT. There are many different scenarios (you need to specify the situation, the probabilities, the agents’ kinds of altruism, and the decisions they face), and in some of them something that resembles the classical doomsday argument pops up, while in others it doesn’t.