Looking for an evolutionarily stable strategy might be an interesting idea.
But the point is not to wonder what would be ideal if your utility were evolutionarily stable; it is to decide what to do with your current utility in these specific situations.
Sorry, by “delta” I meant change, difference, or adjustment.
The reason to investigate evolutionarily stable strategies is to look at the space of workable, self-consistent, winningish strategies. I know my utility function is pretty irrational—even insane. For example, I (try to) change my explicit values when I hear sufficiently strong arguments against my current explicit values. Explaining that is possible for a utilitarian, but it takes some gymnastics, and the upshot of the gymnastics is that utility functions become horrendously complicated and therefore mostly useless.
My bet is that there isn’t actually much room for choice in the space of workable, self-consistent, winningish strategies. That will force most of the consequentialists, whether they ultimately care about particular genes or memes, paperclips or brass copper kettles, to act identically with respect to these puzzles, in order to survive and reproduce to steer the world toward their various goals.
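For concreteness, here is a minimal sketch of what "evolutionarily stable" means formally, using the standard Maynard Smith conditions on a toy symmetric game. The `is_ess` helper and the Hawk-Dove payoffs are purely illustrative assumptions of mine, not anything taken from the puzzles under discussion:

```python
# Minimal sketch (illustrative only): the standard Maynard Smith conditions
# for an evolutionarily stable strategy in a symmetric two-player game.
# payoff[i][j] is the payoff to a player using pure strategy i against j.

def is_ess(payoff, s):
    """Return True if pure strategy s is evolutionarily stable."""
    for t in range(len(payoff)):
        if t == s:
            continue
        # Condition 1: s does strictly better against s than the mutant t does.
        if payoff[s][s] > payoff[t][s]:
            continue
        # Condition 2: if tied against s, s must beat t in a t-population.
        if payoff[s][s] == payoff[t][s] and payoff[s][t] > payoff[t][t]:
            continue
        return False
    return True

# Hypothetical Hawk-Dove game (value 2, cost of fighting 4):
# neither pure strategy is stable here, which is the sense in which
# the space of workable strategies can be narrow.
hawk_dove = [[-1, 2],
             [0, 1]]
print(is_ess(hawk_dove, 0), is_ess(hawk_dove, 1))  # False False
```

The point of the toy example is only that the set of stable strategies in a given setting can be very small, which is the sense of "not much room for choice" above.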
I’m unsure. For a lone agent in the world, who can get copied and uncopied, I think my approach here is the correct one. For multiple competing agents, this becomes a trade/competition issue, and I don’t have a good grasp of that.
Don’t know what a delta is, sorry :-)
Delta: http://en.wikipedia.org/wiki/Delta_encoding