Throughout your comment you’ve been using the phrase “thirders odds”, apparently meaning odds of 1:2, without specifying whether they are per awakening or per experiment. This is an underspecified and confusing category, which we should taboo.
Yeah, that was sloppy language, though I do like to think more in terms of bets than you do. One of my ways of thinking about these sorts of issues is in terms of “fair bets”: each person thinks a bet whose payoffs align with their assumptions about utility is “fair”, and a bet whose payoffs align with different assumptions about utility is “unfair”. Edit: to be clear, a “fair” bet for a person is one where the payoffs are such that the betting odds at which they break even match the probabilities that that person would assign.
I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made:
I critique thirdism not for making different bets (as the first part of the post explains, the bets are the same) but for its utilities not actually behaving like utilities: constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that its probabilities are not behaving like probabilities, because they are not sound probabilities, as explained in the previous post.
Wait, are you claiming that a thirder Sleeping Beauty is supposed to always decline the initial per-experiment bet at 1:1 odds, made before the coin is tossed? This is wrong: both halfers and thirders are neutral towards such bets, though they arrive at that neutrality by different reasoning.
OK, I was also being sloppy in the parts you are responding to.
Scenario 1: bet about a coin toss, nothing depending on the outcome (so payoff equal per coin toss outcome)
1:1
Scenario 2: bet about a Sleeping Beauty coin toss, payoff equal per awakening
2:1
Scenario 3: bet about a Sleeping Beauty coin toss, payoff equal per coin toss outcome
1:1
It doesn’t matter if it’s agreed to before or after the experiment, as long as the payoffs work out that way. Betting within the experiment is one way for the payoffs to more naturally line up on a per-awakening basis, but it’s only relevant (to bet choices) to the extent that it affects the payoffs.
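These break-even odds are easy to verify by simulation; a minimal sketch (the function and parameter names are mine, assuming a unit stake per bet):

```python
import random

def break_even_profit(payout, per_awakening, n=100_000):
    """Average per-experiment profit from betting on Heads with a unit stake.

    `payout` is the net win on Heads. With per_awakening=True the bet is
    settled at every awakening (twice on Tails); otherwise once per experiment.
    """
    total = 0.0
    for _ in range(n):
        heads = random.random() < 0.5
        settlements = 1 if (heads or not per_awakening) else 2
        total += settlements * (payout if heads else -1)
    return total / n

print(break_even_profit(payout=2, per_awakening=True))   # Scenario 2: ~0 at 2:1
print(break_even_profit(payout=1, per_awakening=False))  # Scenario 3: ~0 at 1:1
```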
Now, the conventional Thirder position (as I understand it) consistently applies equal utilities per awakening when considered from a position within the experiment.
I don’t actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well.
As I see it, Thirders will only regret a bet (in the sense of considering it a bad choice to enter into ex ante given their current utilities) if you do some kind of bait and switch where you don’t make it clear what the payoffs were going to be up front.
But what I’m pointing at is that thirdism naturally fails to develop an optimal strategy for the per-experiment bet in the Technicolor problem, falsely assuming that it’s isomorphic to regular Sleeping Beauty.
Speculation: have you actually asked Thirders and Halfers to solve the problem (while making the reward structure clear)? Note that if you don’t make clear what the reward structure is, Thirders are more likely to misunderstand the question asked if, as in this case, the reward structure is “fair” from the Halfer perspective and “unfair” from the Thirder perspective.
Technicolor and Rare Event problems highlight the issue that I explain in Utility Instability under Thirdism: in order to make optimal bets, thirders need to constantly keep track of not only probability changes but also utility changes, because their model keeps shifting both of them back and forth, and this can be very confusing. Halfers, on the other hand, just need to keep track of probability changes, because their utilities are stable. Basically, thirdism is strictly more complicated without any benefits, and we can discard it on the grounds of Occam’s razor, if we haven’t already discarded it because of its theoretical unsoundness, explained in the previous post.
A Halfer has to discount their utility based on how many of them there are; a Thirder doesn’t. It seems to me, contrary to your perspective, that Thirder utility is more stable.
The Halfer model correctly highlights the rule for determining which cases these are and how to develop the correct betting strategy. The Thirder model just keeps answering 1⁄3 like a broken clock.
… and in my hasty reading and response I misread the conditions of the experiment (it’s a “Halfer” reward structure again). (As I’ve mentioned before in a comment on another of your posts, I think Sleeping Beauty is unusually ambiguous, so both Halfer and Thirder perspectives are viable. But I lean toward the general perspectives of Thirders on other problems, e.g. SIA seems much more sensible to me than SSA (edit: in most situations), so Thirderism seems more intuitive to me.)
Thirders can adapt to different reward structures but need to actually notice what the reward structure is!
What do you feel is still unresolved?
The things mentioned in this comment chain. Which actually doesn’t feel like all that much; it feels like there are maybe one or two differences in philosophical assumptions creating this disagreement (though maybe we aren’t getting at the key assumptions).
Edited to add: The criterion I mainly use to evaluate probability/utility splits is typical reward structure—you should assign probabilities/utilities such that a typical reward structure seems “fair”, so you don’t wind up having to adjust for different utilities when the rewards have the typical structure (you do have to adjust if the reward structure is atypical, and thus seems “unfair”).
This results in me agreeing with SIA in a lot of cases. An example of an exception is Boltzmann brains. A typical reward structure would give no reward for correctly believing that you are a Boltzmann brain. So you should always bet in realistic bets as if you aren’t a Boltzmann brain, and for this to be “fair”, I set P=0 instead of SIA’s U=0. I find people believing silly things about Boltzmann brains, like taking it to be evidence against a theory if that theory proposes that there exist a lot of Boltzmann brains. I think more acceptance of setting P=0 instead of U=0 here would cut that nonsense off. To be clear, normal SIA does handle this case fine (a theory predicting Boltzmann brains is not evidence against it), but setting P=0 would make it more obvious to people’s intuitions.
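In expected-value terms the two conventions prescribe identical behaviour; a minimal sketch of the arithmetic, for a bet whose payoff depends on whether you are a Boltzmann brain:

$$E[\text{bet}] = P(\text{BB}) \cdot U(\text{BB}) + P(\neg\text{BB}) \cdot U(\neg\text{BB})$$

The first term vanishes whether you zero out P(BB) or U(BB), so both choices recommend the same realistic bets; they differ only in which factor absorbs the discount.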
In the case of Sleeping Beauty, the situation is so highly artificial and stripped of context that it’s unclear what a typical reward structure would even be, which is why I consider the problem ambiguous.
One of my ways of thinking about these sorts of issues is in terms of “fair bets”
Well, as you may see, it’s also not helpful. Halfers and thirders disagree on which bets they consider “fair” but still agree on which bets to make, whether they call them fair or not. The extra category of a “fair bet” just adds another semantic disagreement between halfers and thirders. Once we specify whether we are talking about a per-experiment or a per-awakening bet, both theories are supposed to agree on the odds.
I don’t actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well.
Thirders tend to agree with halfers that P(Heads|Sunday) = P(Heads|Wednesday) = 1⁄2. Likewise, because they make the same bets as the halfers, they have to agree with them on utilities. So it means that thirder utilities go back and forth, which is weird and confusing behavior.
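To spell out the back-and-forth with a minimal worked example (assuming a per-experiment bet on Heads at 1:1 odds, which both sides agree is neutral): neutrality requires

$$P(\text{Heads}) \cdot U(\text{win}) = P(\text{Tails}) \cdot U(\text{lose})$$

On Sunday the thirder has P(Heads) = 1⁄2, so U(win) = U(lose). During the experiment their P(Heads) drops to 1⁄3, so the very same bet stays neutral only if U(win) = 2·U(lose), or equivalently if the Tails loss is halved per awakening. On Wednesday the probabilities return to 1⁄2 and the utilities have to snap back to equality.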
A Halfer has to discount their utility based on how many of them there are; a Thirder doesn’t. It seems to me, contrary to your perspective, that Thirder utility is more stable.
You mean how many awakenings? That if there were not two awakenings on Tails but, for instance, ten, halfers would have to think that U(Heads) has to be ten times as much as U(Tails) for a utility-neutral per-awakening bet?
Sure, but that’s completely normal behavior. It’s fine to have different utility estimates for different problems and different payout schemes; such things always happen. Sleeping Beauty with ten awakenings on Tails is a different problem than Sleeping Beauty with only two, so there is no reason to expect the utilities of the events to be the same. The point is that as long as we have specified the experiment and a betting scheme, the utilities have to be stable.
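Concretely, for a per-awakening bet on Heads with stake x and payout y in the ten-awakening variant, a sketch of the bookkeeping on both sides:

$$\text{Halfer (per experiment):}\quad \tfrac{1}{2}\,y - \tfrac{1}{2}\cdot 10x = 0 \;\Rightarrow\; y = 10x$$

$$\text{Thirder (per awakening):}\quad \tfrac{1}{11}\,y - \tfrac{10}{11}\,x = 0 \;\Rightarrow\; y = 10x$$

Both land on the same 10:1 odds; the factor of ten sits in the utilities for the halfer and in the probabilities for the thirder.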
And thirder utilities are modified during the experiment. They are not just specified by a betting scheme; they go back and forth based on the knowledge state of the participant, behaving the way probabilities are supposed to behave. And that’s because they are partially probabilities: the result of an incorrect factorization of E(X).
Speculation: have you actually asked Thirders and Halfers to solve the problem (while making the reward structure clear)?
I’m asking it right in the post, explicitly stating that the bet is per experiment and recommending that readers think about the question more. What did you yourself answer?
My initial statement that the thirder model confuses them about this per-experiment bet is based on the fact that a pro-thirder paper which introduced the Technicolor Sleeping Beauty problem totally fails to understand why the halfer scoring rule updates in it. I may be putting too much weight on the views of Rachael Briggs in particular, but the paper apparently was peer reviewed and so on, so it seems to be decent evidence.
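For anyone who wants to check the per-experiment bet themselves before reading on, here is a minimal simulation sketch, assuming the Briggs-style setup as I understand it (the two possible awakening days are painted red and blue in random order, and the 1:1 bet on Tails is counted once per experiment); the function and strategy names are mine:

```python
import random

def technicolor(strategy, n=100_000):
    """Average per-experiment profit from a 1:1 per-experiment bet on Tails.

    The two possible awakening days are painted red and blue in random
    order. `strategy(color)` returns True to take the bet; taking it at
    any awakening commits Beauty for the whole experiment.
    """
    total = 0
    for _ in range(n):
        heads = random.random() < 0.5
        colors = random.sample(["red", "blue"], 2)  # Monday, Tuesday colors
        awakenings = colors[:1] if heads else colors
        if any(strategy(c) for c in awakenings):    # the bet settles once
            total += -1 if heads else 1
    return total / n

print(technicolor(lambda color: True))            # always bet: ~0.0
print(technicolor(lambda color: color == "red"))  # bet only on red: ~0.25
```

Always betting is neutral, while precommitting to bet only in a red room wins on every Tails experiment but loses on only half of the Heads ones.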
… and in my hasty reading and response I misread the conditions of the experiment
Well, I guess that answers my question.
Thirders can adapt to different reward structures but need to actually notice what the reward structure is!
Probably, but I’ve yet to see one actually derive the correct answer on their own, rather than post hoc, after it was already spoiled or after consulting the correct model. I suppose I should have asked the question beforehand and only then published the answer; oh well. Maybe I can still do it and ask people nicely not to look.
The criterion I mainly use to evaluate probability/utility splits is typical reward structure
Well, if every other thirder reasons like this, that would indeed explain the issue.
You can’t base the definition of probability on your intuitions about fairness. Or, rather, you can, but then you risk contradicting the math. Probability is a mathematical concept with very specific properties. In my previous post I talk about it specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.
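For reference, the specific properties in question are Kolmogorov’s axioms: a probability function over a sample space must satisfy

$$P(A) \ge 0, \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \ \text{for mutually exclusive } A_i$$

and the argument of the previous post is that thirder per-awakening “probabilities” fail to fit into such a space.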
My reasoning explicitly puts instrumental rationality ahead of epistemic rationality. I hold this view precisely to the degree to which I do in fact think it is helpful.
The extra category of a “fair bet” just adds another semantic disagreement between halfers and thirders.
It’s just a criterion by which to assess disagreements, not adding something more complicated to a model.
Regarding your remarks on these particular experiments:
If someone considers some particular reward structure typical, then they’ll by default guess that a proposed experiment has that structure.
This reasonably can be expected to apply to halfers or thirders.
If you convince me that the halfer reward structure is typical, I’ll go halfer (as previously stated, since I favour the typical reward structure). To the extent that it’s not what I would guess by default, that’s precisely because I don’t intuitively feel that it’s typical, and feel instead that you are presenting a weird, atypical reward structure!
And thirder utilities are modified during the experiment. They are not just specified by a betting scheme; they go back and forth based on the knowledge state of the participant, behaving the way probabilities are supposed to behave. And that’s because they are partially probabilities: the result of an incorrect factorization of E(X).
Probability is a mathematical concept with very specific properties. In my previous post I talk about it specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.
I’ve previously shown that some of your previous posts incorrectly model the Thirder perspective, but I haven’t carefully reviewed and critiqued all of your posts. Can you specify exactly what model of the Thirder viewpoint you are referencing here? (That will not only help me critique it but also help me determine what exactly you mean by the utilities changing in the first place, i.e. whether you count Thirders weighting a possibility branch’s total utility more highly when it contains more awakenings as a “modification”. I would not consider that a “modification”.)