I think this case is essentially the same as the original one, and this similarity is the topic of the post.
It looks like in the original case (and so in this one) you should give the $100 if you are an AI running on human preference, and most likely if you are a human too, unless human preference gets “updated” (corrupted) by the reflectively inconsistent human brain, so that once you learn the new fact, the new preference says you shouldn’t give the $100, because the probability of the alternative has dropped through the floor (in your representation).
Where is the best place to read an explanation of why giving the $100 is what you “should” do? (Or could someone please summarize the rationale?)
You can read the first thread and the post for a short description of the theoretical reasons for giving up the $100 (expected utility, reflective consistency), with more in the comments.
As I noted, I’m not sure it’s what you really should do as a human, but it looks like it. I have changed my mind about this conclusion a couple of times since the problem statement: first believing that you should give up the $100, because that is what UDT suggested; then that you shouldn’t, remembering that the human brain probably does erase the counterfactual preference; now I’m back to being unsure about what goes on in the human brain, but trusting the normative theory as the better standard for decisions in the meantime.
Reading through the comments of that post, I understood this to be the gist of the argument for why you would give up the $100:
Before knowing the outcome of the coin flip, you would have taken the wager to pay $100 for a 50% chance to win $10000. Alternatively, if Omega had asked you to “precommit” $100 in case you lost, you would still agree; it’s nearly exactly the same thing. (Technically it’s an even better wager; see the expected-value sketch after this summary.) What if Omega asks you to precommit a witless future self? You would like to precommit your future self.
So you, your current self, while trying to decide whether to pay Omega or not, have decided that you would actually like to precommit a future self to paying the $100. How do you do that? By being that future person in the present and committing your current self to pay the $100. Indeed you lost, but being consistent with “being a payer” is what you decided you wanted.
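To make the expected-value comparison above concrete, here is a minimal sketch (not from the original discussion) evaluating the two policies, “pay the $100 if you lose” versus “refuse”, from the standpoint of an agent who has not yet seen the coin. It assumes a fair coin, a $10000 prize that Omega awards only to an agent who would pay on losing, and utility linear in dollars; the function name and parameters are illustrative, not part of the original problem statement.

```python
def expected_value(pays_when_losing: bool,
                   prize: float = 10_000.0,
                   cost: float = 100.0,
                   p_heads: float = 0.5) -> float:
    """Expected dollars, evaluated before the coin flip, for an agent whose
    policy is either to pay the $100 on losing or to refuse."""
    # Heads branch: Omega pays the prize only to agents who would pay on tails.
    win_branch = p_heads * (prize if pays_when_losing else 0.0)
    # Tails branch: the committed payer hands over $100; the refuser pays nothing.
    lose_branch = (1.0 - p_heads) * (-cost if pays_when_losing else 0.0)
    return win_branch + lose_branch


if __name__ == "__main__":
    print("committed payer:", expected_value(True))    # 0.5*10000 - 0.5*100 = 4950
    print("refuser:        ", expected_value(False))   # 0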