Long ago, in a book on evolutionary biology (I forget which one it was) there was the excellent quote “fitness is what appears to be maximized when what is really being maximized is gene survival” together with an analysis of the peculiar genetic system of the Hymenoptera which predisposes them to evolve eusociality.
The author first presented a classical analysis by a previous author, which used the concept of inclusive fitness and, via a series of logical steps that obviously took a great deal of intelligence to work out (and nontrivial mental effort even to follow), managed to stretch fitness to cover the case. Oh, but there was an error in the last step that nobody had spotted, so the answer came out wrong.
The newer author then presented his own analysis, discarding the concept of fitness and just talking directly about gene survival. Not only did it give the right answer, but the logic was so simple and transparent you could easily verify the answer was right.
I think there’s a parallel here. You’re obviously putting a lot of intelligence and hard work into trying to analyze these cases in terms of things like selfishness and altruism… but the difficulty evaporates if you discard those concepts and just talk directly about utility.
I want to upvote this for the excellent anecdote, but the comment seems to go off the rails at the end. “Selfishness w.r.t. copies” and “Altruism w.r.t. copies”, here, are two different utility functions that an agent could have. What do you mean by “talking directly about utility”?
I think that the selfishness and altruism concepts are well captured by utility here. All that is needed for, say, the second model is that the dead guy derives utility from the survivor betting that they’re in a single-person universe.
Altruism was the easiest way to do this, but there are other ways—maybe the money will be given to a charity to prevent the death of hypothetical agents in thought experiments or something (but only if there is a death). Or you could cast it in evolutionary terms (the pair share their genes, and there won’t be enough food for two, and the agents are direct gene-maximisers).
The point is that I’m using a clear utility, and using selfishness or altruism as a shorthand for describing it.
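To make that concrete, here is a minimal sketch of the point that “selfish w.r.t. copies” and “altruistic w.r.t. copies” are just two utility functions plugged into the same expected-value calculation. The scenario, prior, and even-odds payoffs below are invented for illustration and are not meant to be the exact setup discussed above.

```python
# Assumed toy scenario: a fair coin decides between a one-agent world and a
# two-agent world; in the two-agent world one copy is killed at random.  The
# survivor may bet "I am in the one-agent world" at even odds (+1 / -1).

P_SINGLE = 0.5  # assumed prior over worlds

def selfish(my_payoff, copy_payoff):
    # Utility counts only this agent's own payoff.
    return my_payoff

def altruistic(my_payoff, copy_payoff):
    # Utility counts the copy's payoff as if it were this agent's own.
    return my_payoff + copy_payoff

def expected_utility_if_survivor_bets(utility):
    # One-agent world: I exist alone, bet, and win (+1); there is no copy.
    eu = P_SINGLE * utility(+1, 0)
    # Two-agent world, I survive (prob 1/2 given two agents): I bet and lose.
    eu += (1 - P_SINGLE) * 0.5 * utility(-1, 0)
    # Two-agent world, I am the one killed: my copy bets and loses,
    # and only the altruistic utility registers that loss.
    eu += (1 - P_SINGLE) * 0.5 * utility(0, -1)
    return eu

print(expected_utility_if_survivor_bets(selfish))     # 0.25 -> takes the bet
print(expected_utility_if_survivor_bets(altruistic))  # 0.0  -> indifferent
```

Same outcomes, same probabilities; the two “models” differ only in which utility function gets passed in.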