I think that Boltzmann brains in particular are probably very low measure though, at least if you use Solomonoff induction. If you think that weighting observer moments within a Universe by their description complexity is crazy (which I kind of feel), then you need to come up with a different measure on observer moments, but I expect that if we find a satisfying measure, Boltzmann brains will be low measure in that too.
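As a toy sketch of what I mean (all bit counts are invented; the real quantities would be uncomputable description complexities): under a Solomonoff-style weighting, an observer moment's weight is 2^-(bits to specify the universe + bits to locate the moment within it), and a Boltzmann brain's problem is the locating term:

```python
# Toy model of complexity-weighted measure over observer moments.
# All bit counts below are invented for illustration; the real
# quantities would be (uncomputable) description complexities.

def measure(universe_bits: int, locate_bits: int) -> float:
    """Weight of an observer moment: 2^-(bits to describe the
    universe + bits to locate the moment inside it)."""
    return 2.0 ** -(universe_bits + locate_bits)

# An ordinary evolved observer: the universe's laws cost some bits,
# but the observer sits somewhere structured and easy to point at.
ordinary = measure(universe_bits=50, locate_bits=10)

# A Boltzmann brain under the same laws: picking out one specific
# thermal fluctuation in a vast equilibrium sea costs far more bits.
boltzmann = measure(universe_bits=50, locate_bits=300)

print(boltzmann < ordinary)  # True: the Boltzmann brain gets tiny measure
```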
I agree that there’s no real answer to “where you are”; you are a superposition of beings across the multiverse, sure. But I think probabilities are kind of real: if you make up some definition of which beings are sufficiently similar to you that you consider them “you”, then you can have a probability distribution over where those beings are, and it’s a fair equivalent rephrasing to say “I’m in this type of situation with this probability”. (This is what I do in the post. It’s very unclear, though, why you’d ever want to estimate that; that’s why I say that probabilities are cursed.)
I think expected utilities are still reasonable. When you make a decision, you can estimate which beings’ decisions correlate with this one and what the impact of each of their decisions is, and calculate the sum of all that. I think it’s fair to call this sum expected utility. It’s possible that you don’t want to optimize for the direct sum but for something determined by “coalition dynamics”; I don’t understand the details well enough to really have an opinion.
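A minimal sketch of that sum, with made-up copies and weights (the “measure” of each correlated copy and its impacts are purely illustrative):

```python
# Toy "expected utility as a sum over correlated deciders".
# Each entry: (measure of that copy's world, impact of its decision
# under each action). All values are invented.

def decision_value(copies, action):
    """Measure-weighted sum of impacts over all beings whose
    decisions correlate with this one."""
    return sum(weight * impacts[action] for weight, impacts in copies)

copies = [
    (0.70, {"cooperate": 10.0, "defect": 2.0}),  # high-measure world
    (0.29, {"cooperate": 1.0,  "defect": 5.0}),  # smaller world
    (0.01, {"cooperate": 0.0,  "defect": 0.0}),  # tiny sim, ~no impact
]

best = max(("cooperate", "defect"), key=lambda a: decision_value(copies, a))
print(best)  # "cooperate": 7.29 vs 2.85
```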
(My guess is we don’t have a real disagreement here and it’s just a question of phrasing, but tell me if you think we disagree in a deeper way.)
Hmmm, uncertain if we disagree. You keep saying that these concepts are cursed and yet phrasing your claims in terms of them anyway (e.g. “probably very low measure”), which suggests that there’s some aspect of my response you don’t fully believe.
In particular, in order for your definition of “what beings are sufficiently similar to you” to not be cursed, you have to be making claims not just about the beings themselves (since many Boltzmann brains are identical to your brain) but rather about the universes that they’re in. But this is kinda what I mean by coalitional dynamics: a bunch of different copies of you become more central parts of the “coalition” of your identity based on e.g. the types of impact that they’re able to have on the world around them. I think describing this as a metric of similarity is going to be pretty confusing/misleading.
> you can estimate which beings’ decisions correlate with this one and what the impact of each of their decisions is, and calculate the sum of all that
You still need a prior over worlds to calculate impacts, which is the cursed part.
Hm, probably we disagree on something. I’m very confused about how to mesh epistemic uncertainty with these “distribution over different Universes” types of probability. When I say “Boltzmann brains are probably very low measure”, I mean “I think Boltzmann brains are very low measure, but this is a confusing topic, there might be considerations I haven’t thought of, and I might be totally mistaken”. I think this epistemic uncertainty is distinct from the type of “objective probabilities” I talk about in my post, and I don’t really know how to use language without referring to degrees of my epistemic uncertainty.
> You still need a prior over worlds to calculate impacts, which is the cursed part.
Maybe we have some deeper disagreement here. It feels plausible to me that there is a measure of “realness” in the Multiverse that is an objective fact about the world, and we might be able to figure it out. When I say probabilities are cursed, I just mean that even if an objective prior over worlds and moments exists (like the Solomonoff prior), your probabilities of where you are are still hackable by simulations, so you shouldn’t rely on raw probabilities for decision-making, like the people using the Oracle do. Meanwhile, expected values are not hackable in the same way: if they recreate you in a tiny simulation, you don’t care about that, and if they recreate you in a big simulation or promise you things in the outside world (like in my other post), then that’s not hacking your decision-making but a fair deal, and you should in fact let that influence your decisions.
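A toy version of that asymmetry (copy counts and weights invented): a simulator spins up many tiny copies of you, which wrecks a naive “where am I” probability computed by counting indistinguishable copies, but barely moves the measure-weighted sum that actually drives decisions:

```python
# Toy illustration: copy-counting probabilities are hackable by
# cheap simulations; measure-weighted expected values mostly aren't.
# All numbers invented.

real_copies = [(0.9, 100.0)]          # (measure, impact of your decision)
sim_copies  = [(1e-6, 0.001)] * 1000  # a thousand tiny simulated copies

all_copies = real_copies + sim_copies

# Naive probability of "being" a simulated copy, counting copies:
p_simulated = len(sim_copies) / len(all_copies)

# Decision-relevant sum: measure-weighted impact, dominated by the
# high-measure copy no matter how many tiny simulations exist.
value = sum(m * i for m, i in all_copies)

print(round(p_simulated, 3))  # ~0.999: the probability is hacked
print(round(value, 3))        # ~90.0: the expected value barely moves
```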
Is your position that the problem is deeper than this: there is no objective prior over worlds, it’s just a thing like ethics that we choose for ourselves, and then later we can bargain and trade with other beings who have a different prior of realness?
> I think this epistemic uncertainty is distinct from the type of “objective probabilities” I talk about in my post, and I don’t really know how to use language without referring to degrees of my epistemic uncertainty.
The part I was gesturing at wasn’t the “probably” but the “low measure” part.
> Is your position that the problem is deeper than this: there is no objective prior over worlds, it’s just a thing like ethics that we choose for ourselves, and then later we can bargain and trade with other beings who have a different prior of realness?
Yes, that’s a good summary of my position, except that I think that, like with ethics, there will be a bunch of highly suggestive logical/mathematical facts which make it much more intuitive to choose some priors over others. So the choice of prior will be somewhat arbitrary, but not totally arbitrary.
I don’t think this is a fully satisfactory position yet; it hasn’t really dissolved the confusion about why subjective anticipation feels so real, but it feels directionally correct.
This part IMO is a crux: I don’t truly believe an objective measure/magical reality fluid can exist in the multiverse, if we allow the concept to be sufficiently general, and that ruins both probability and expected value/utility theory in the process.
Heck, in the most general cases, I don’t believe any coherent measure exists at all, which basically ruins probability and expected utility theory at the same time.
> Maybe we have some deeper disagreement here. It feels plausible to me that there is a measure of “realness” in the Multiverse that is an objective fact about the world, and we might be able to figure it out.
I like your poem on Twitter.