And the negative-utility argument still makes the "+1% gain for a 1% chance of destruction" argument fail.
That’s why what I wrote in that section was:
it’s not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign negative utility to the null state after universe-destruction.
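To make the arithmetic behind that explicit (a minimal sketch; the normalization to a single utility unit is my illustrative assumption, not part of the original):

$$\mathbb{E}[U_{\text{accept}}] = 0.99 \cdot U(\text{gain}) + 0.01 \cdot U(\text{destruction})$$

If the post-destruction null state is worth exactly zero, this reduces to $0.99 \cdot U(\text{gain}) > 0$, so an expected-value maximizer accepts the gamble every time it is offered. The gamble is only rejected when $U(\text{destruction}) < -99 \cdot U(\text{gain})$, i.e. when the null state is assigned sufficiently negative utility.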
You wrote:
But this doesn’t matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.
I am supposing exactly that. That’s why it’s in the title of the post. I don’t mean that I am certain this is how things will turn out; I mean that the post argues that rational behavior leads to these consequences. If that implies the only way to avoid the destruction of life is to cultivate a particular bias, then that is the implication.