“Rational agents incorporate the benefits to others into their utility functions.”
as a section header may have thrown me off there.
That aside, I do understand what you’re saying, and I did notice the original contrast between the 1% gain and the 1% risk. Though I’d note it doesn’t follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; a destroyed universe would probably carry negative utility, i.e. a loss of greater than 100%, so that’s not an even bet.
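To make that arithmetic concrete, here is a minimal sketch with purely illustrative numbers (the baseline utility, the 1% gain, the 1% risk, and the sample negative value are my own assumptions, not figures from the post), showing how a negative post-destruction utility turns the trade from roughly break-even into a clear loss:

```python
# Illustrative sketch (hypothetical numbers): expected utility of a gamble
# that gives a 1% relative gain with 99% probability and destroys the
# universe with 1% probability.

U = 100.0          # current utility, arbitrary units
gain = 0.01        # 1% increase in utility if the gamble pays off
p_destroy = 0.01   # 1% chance the universe is destroyed

def expected_utility(destruction_utility):
    """Expected utility of accepting the gamble, given the utility we
    assign to the post-destruction state."""
    return (1 - p_destroy) * U * (1 + gain) + p_destroy * destruction_utility

# If the post-destruction null state is worth exactly zero, the gamble is
# roughly break-even with refusing (99.99 vs. 100):
print(expected_utility(0))       # 99.99

# If destruction carries negative utility (a loss of more than 100% of
# what we have), the gamble is clearly worse than refusing:
print(expected_utility(-1000))   # 89.99
```

The exact numbers don’t matter; the point is only that pushing the destruction term below zero can only make the bet worse than the even-odds framing suggests.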
The whole “arational” point is my mistake; the paragraph in question:
“But maybe they’re just not as rational as you...”
reads very much like it is using “selfish” in the strict rather than the holistic-utility sense, and that is what I was responding to. I was focusing on that section specifically and did not reread the whole post, so I got the wrong idea. My point on evolution remains, and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail. But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
“and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail”
That’s why what I wrote in that section was:
“it’s not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.”
You wrote:
“But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.”
I am supposing that. That’s why it’s in the title of the post. I don’t mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that’s the implication.