If I understand correctly, this argument also appeared in Eliezer Yudkowsky’s post “The Lifespan Dilemma”, which itself credits one of Wei Dai’s comments here. The argument given in The Lifespan Dilemma is essentially identical to the argument in Beckstead and Thomas’ paper.
I think Eliezer’s post and Wei Dai’s comment (and the early part of Beckstead and Thomas) are just direct intuitive arguments against Recklessness.
This post (and the later part of Beckstead and Thomas) argue that Recklessness is not merely intuitively unappealing, but that it requires violating pretty weak dominance principles. You have to believe that there is a set of lotteries A_i, each individually better than some lottery X, whose mixture is nevertheless not at least as good as X.
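To see the shape of that claim, here is a toy sketch of my own (not the exact construction from this post or from Beckstead and Thomas, just an illustration of how individually attractive gambles can mix into something that is mostly nothing), assuming Recklessness means that for any fixed positive probability there is a prize large enough to make the gamble preferred to a sure thing:

```latex
\[
X = 1 \text{ with certainty}, \qquad
A_i = \begin{cases} u_i & \text{with probability } 2^{-i}, \\ 0 & \text{otherwise,} \end{cases}
\]
% Recklessness lets us pick each u_i so large that A_i is preferred to X.
with each $u_i$ chosen so large that Recklessness yields $A_i \succ X$.
Mixing the $A_i$ with weights $2^{-i}$ gives a lottery $M$ with
\[
\Pr[M > 0] \;=\; \sum_{i \ge 1} 2^{-i} \cdot 2^{-i}
           \;=\; \sum_{i \ge 1} 4^{-i}
           \;=\; \tfrac{1}{3},
\]
so $M$ pays nothing with probability $2/3$, while $X$ pays off for sure.
```

Anyone who judges M worse than the sure thing X then exhibits exactly the forbidden pattern: lotteries A_i each better than X whose mixture is not at least as good as X. The actual arguments here and in the paper are tighter than this toy version, since they aim to force that judgment rather than merely invite it.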
Someone who already bought the intuitive argument against Recklessness doesn’t need to read these posts; they are for someone who already bit the bullet on the lifespan dilemma and wants more bullets.