Well, let’s try a simple example. Suppose you have two competing theories of how to produce purple paint:
Add red paint into the vial before the blue paint and then mix them together.
Add blue paint into the vial before the red paint and then mix them together.
Both theories work in practice. And yet, they are incompatible with each other. Philosophers write papers about the conundrum, and soon two assumptions are coined: the red-first assumption (RFA) and the red-second assumption (RSA).
Now, you observe that there are compelling arguments in favor of both theories. Does it mean that it’s an argument in favor of RSA+RFA, adding red both the first and the second time, even though the result is visibly not purple?
Of course not! It means that something is subtly wrong with both theories, namely that they assume the order in which we add the paint is relevant at all. What is required is that both the blue and the red ingredients are accounted for and present in the resulting mix.
Do you see the similarity between this example and the SIA+EDT case?
Suppose you have two competing theories of how to produce purple paint
If producing purple paint here = satisfying ex ante optimality, I just reject the premise that that’s my goal in the first place. I’m trying to make decisions that are optimal with respect to my normative standards (including EDT) and my understanding of the way the world is (including anthropic updating, to the extent I find the independent arguments for updating compelling) — at least insofar as I regard myself as “making decisions.”[1]
Even setting that aside, your example seems very disanalogous because SIA and EDT are just not in themselves attempts to do the same thing (“produce purple paint”). SIA is epistemic, while EDT is decision-theoretic.

[1] E.g. insofar as I’m truly committed to a policy that was optimal from my past (ex ante) perspective, I’m not making a decision now.
The point of the analogy is that just as there are different ways to account for the fact that red paint is required in the mix (either by adding it first or by adding it second), there are different ways to account for the fact that, say, Sleeping Beauty awakens twice on Tails and only once on Heads.
One way is to modify the probabilities, saying that the probability of awakening on Tails is twice that of awakening on Heads; that’s what SIA does. The other is to modify the utilities, saying that the reward for correctly guessing Tails is twice as large as for Heads under a per-awakening betting rule; that’s what EDT does, if I understand correctly. Both ways produce the same product P(Tails)U(Tails), which defines the betting odds. But if you modify both the utilities and the probabilities, you obviously get the wrong result.
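To make the double-counting concrete, here is a minimal arithmetic sketch (my own illustration, not from the post; it assumes a bet that pays 1 per correct per-awakening guess, and the function name is just for exposition):

```python
# Per-awakening bet on the coin in Sleeping Beauty: a correct guess pays 1.
# On Tails there are two awakenings, on Heads only one.

def betting_evs(p_tails, payoff_tails, payoff_heads=1):
    """Expected payoff of always guessing Tails vs. always guessing Heads."""
    p_heads = 1 - p_tails
    return p_tails * payoff_tails, p_heads * payoff_heads

# SIA: modify only the probabilities (P(Tails) = 2/3), utilities untouched.
print(betting_evs(p_tails=2/3, payoff_tails=1))  # ≈ (0.67, 0.33) -> 2:1 odds
# EDT-style counting: keep P(Tails) = 1/2, but a Tails guess is rewarded at both awakenings.
print(betting_evs(p_tails=1/2, payoff_tails=2))  # (1.0, 0.5) -> 2:1 odds
# Both modifications at once: the extra Tails awakening is counted twice.
print(betting_evs(p_tails=2/3, payoff_tails=2))  # ≈ (1.33, 0.33) -> 4:1 odds
```

Either modification alone reproduces the 2:1 odds that match the extra Tails awakening; applying both inflates the product P(Tails)U(Tails) to 4:1.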
Now, you are free to bite the bullet and say that it has never been about getting the correct betting odds in the first place. For some reason, people bite all kinds of ridiculous bullets specifically in anthropic reasoning, so I hoped that re-framing the issue as a recipe for purple paint might snap you out of it, which apparently didn’t happen.
But usually, when people find themselves in a situation where only one of two theories can be true, despite there being compelling reasons to believe both of them, they treat it as a reason to re-examine those reasons, because at least one of the theories is clearly wrong.
And yeah, SIA is wrong. Clearly wrong. It’s so obviously wrong that even according to Carlsmith, who defends it in a series of posts, it implies telekinesis, and its main appeal is that at least it’s not as bad as SSA. As I’ve previously commented on this topic:
A common way people tend to justify SIA and all its ridiculousness is by pointing at SSA’s ridiculousness and claiming that it’s even more ridiculous. Frankly, I’m quite tired of this kind of anthropical whataboutism. It seems to be some kind of weird selective blindness. In no other sphere of knowledge would people accept this as valid reasoning. But in anthropics, somehow, it works?
The fact that SSA is occasionally stupid doesn’t justify SIA’s occasional stupidity. Both are obviously wrong in general, even though sometimes both may produce the correct result.
Now, you are free to bite the bullet and say that it has never been about getting the correct betting odds in the first place. For some reason, people bite all kinds of ridiculous bullets specifically in anthropic reasoning, so I hoped that re-framing the issue as a recipe for purple paint might snap you out of it, which apparently didn’t happen.
By what standard do you judge some betting odds as “correct” here? If it’s ex ante optimality, I don’t see the motivation for that (as discussed in the post), and I’m unconvinced by just calling the verdict a “ridiculous bullet.” If it’s about matching the frequency of awakenings, I just don’t see why the decision should only count N once here — and there doesn’t seem to be a principled epistemology that guarantees you’ll count N exactly once if you use EDT, as I note in “Aside: Non-anthropically updating EDT sometimes ‘fails’ these cases.”
I gave independent epistemic arguments for anthropic updating at the end of the post, which you haven’t addressed, so I’m unconvinced by your insistence that SIA (and I presume you also mean to include max-RC-SSA?) is clearly wrong.
This subsection is another example of “two wrongs make a right” reasoning. You point out some problems with EDT that are unrelated to anthropic updating and then conclude that the fact that EDT with anthropic updating has similar problems is therefore okay. This doesn’t make sense. If a theory has a flaw, we need to fix the flaw, not treat it as a license to add more flaws to the theory.
I gave independent epistemic arguments for anthropic updating at the end of the post, which you haven’t addressed
I’m sorry, but I don’t see any substance in your argument to address. This step renders the whole chain of reasoning meaningless:
What is P(w1,i|w1;I(Ω)), i.e., assuming I exist in the given world, how likely am I to be in a given index? Min-RC-SSA would say, “‘I’ am just guaranteed to be in whichever index corresponds to the person ‘I’ am.” This view has some merit (see, e.g., here and Builes (2020)). But it’s not obvious we should endorse it — I think a plausible alternative is that “I” am defined by some first-person perspective.[19] And this perspective, absent any other information, is just as likely to be each of the indices of observers in the world. On this alternative view, P(w1,i|w1;I(Ω)) = 1/n(Ow1).
You are saying that there is a view (1) that has some merit, but it’s not obvious that it is true, so… you just assume view (2) instead. Why? Why would you do that? What’s the argument that you should assume that? You don’t give any. You just make an ungrounded assumption and carry on with your reasoning from there.
By what standard do you judge some betting odds as “correct” here? If it’s ex ante optimality, I don’t see the motivation for that (as discussed in the post), and I’m unconvinced by just calling the verdict a “ridiculous bullet.” If it’s about matching the frequency of awakenings, I just don’t see why the decision should only count N once here — and there doesn’t seem to be a principled epistemology that guarantees you’ll count N exactly once if you use EDT, as I note in “Aside: Non-anthropically updating EDT sometimes ‘fails’ these cases.”
The same as always. Correct betting odds systematically lead to winning.
The motivation is that you don’t need to invent extraordinary ways to wiggle out of being Dutch-booked, of course.
Do you systematically use this kind of reasoning with regard to betting odds? If so, what are your reasons to endorse EDT in the first place?
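To illustrate “correct betting odds systematically lead to winning” in the per-awakening setup above, here is a quick simulation sketch (again my own illustration; it assumes a bettor who backs Tails at every awakening, gaining gain_on_tails on each Tails awakening and losing loss_on_heads on the single Heads awakening):

```python
import random

def avg_profit_per_toss(gain_on_tails, loss_on_heads, n_tosses=100_000, seed=0):
    """Average profit per coin toss for a bettor who backs Tails at every awakening."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_tosses):
        if rng.random() < 0.5:
            total += 2 * gain_on_tails   # Tails: two awakenings, two winning bets
        else:
            total -= loss_on_heads       # Heads: one awakening, one losing bet
    return total / n_tosses

print(avg_profit_per_toss(1, 2))  # ≈ 0: 2:1 odds are the break-even odds
print(avg_profit_per_toss(1, 4))  # ≈ -1: a bettor accepting 4:1 odds loses systematically
```

Betting at the 2:1 odds that either SIA alone or EDT alone recommends breaks even in the long run, while a bettor who accepts the 4:1 odds produced by modifying probabilities and utilities at once loses money in expectation on every toss.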