Did you ever read Lara Buchak’s book? Seems related.
Also, I’m not really intuition-pumped by the repeated mugging example. It seems similar to a mugging where Omega only shows up once, but asks you for a recurring payment.
A related issue might be asking whether UDT-ish agents who use a computable approximation to the Solomonoff prior are reflectively stable: will they want to “lock out” certain hypotheses that involve lots of computation (e.g., universes that simulate you by searching over simple universes for agents who endorse Solomonoff induction)? And probably the answer is going to be “it depends,” and you can do verbal argumentation for either option.
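(To make the “lock out” idea concrete, here is a toy sketch, not anything from the post: a compute-bounded stand-in for the Solomonoff prior that zeroes out hypotheses whose simulation cost exceeds a budget and renormalizes the usual 2^-length weights over what remains. All names and numbers are made up for illustration.)

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    program_length: int  # description length in bits
    compute_cost: int    # e.g. steps needed to simulate it so far

def bounded_prior(hypotheses, compute_budget):
    """Assign 2^-length weights, but 'lock out' (zero-weight) any
    hypothesis costing more than compute_budget to evaluate."""
    weights = {
        h.name: (2.0 ** -h.program_length
                 if h.compute_cost <= compute_budget else 0.0)
        for h in hypotheses
    }
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()} if total > 0 else weights

# A cheap "ordinary world" hypothesis survives; an expensive
# "universe that searches for agents like you" gets locked out.
hyps = [
    Hypothesis("ordinary world", program_length=10, compute_cost=1_000),
    Hypothesis("simulator searching for you", program_length=12, compute_cost=10**9),
]
print(bounded_prior(hyps, compute_budget=10**6))
```

The reflective-stability question is then whether the agent, reasoning under this truncated prior, would endorse keeping the lock-out rule or pay to remove it.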
Yeah, I expect the Lizard World argument to be the more persuasive argument for a similar point. I’m thinking about reorganizing the post to make it more prominent.