In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things.
Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes “actually exist” is meaningless, though; it’s all just models.
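(To make “percentage of future-mes” concrete, here is one possible formalization; it assumes the relevant measure is a branch weight such as the Born weight $|\alpha_i|^2$ rather than a naive count of branches: if future continuation $i$ has weight $w_i$, then my anticipation of experience $E$ is $\Pr(E) = \sum_{i\,:\,E \text{ occurs in } i} w_i \,\big/\, \sum_i w_i$.)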
It’s to come up with the best model of reality that includes the experiences I’m having right now.
Why? You’ll end up with many models which fit the data, some of which are simpler, but why is any one of those the “best”?
Experiences are physical processes in the world.
Disagree that this statement is cognitively meaningful.
Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes “actually exist” is meaningless, though; it’s all just models.
So the idea is that you’re taking a percentage of the yous that exist across all possible models consistent with the data? Why? And how? I can sort of understand the idea that claims about the external world are meaningless in so far as they don’t constrain expectations. But now this thing we’ve been calling expectations is being identified with a structure inside the models whose whole purpose is to constrain our expectations. It seems circular to me.
It didn’t have to be the case that the best way to predict experiences was to construct models of an external world. There are other algorithms that could have turned out to be useful for this. Some people even think it didn’t turn out this way, and that quantum mechanics is a good example.
Why? You’ll end up with many models which fit the data, some of which are simpler, but why is any one of those the “best”?
I care about finding the truth. I think I have experiences, and my job is to use this fact to find more truths. The easiest hypotheses for me to think of that incorporate my data make ontological claims. My priors tell me that something like simplicity is a virtue, and if we’re talking about ontological claims, that means simpler ontological claims are more likely. I manage to build up a really large and intricate system of ontological claims that I hold to be true. At the margins, some ontological distinctions feel odd to maintain, but by and large it feels pretty natural.
Now, suppose I came to realize that ontological claims were in fact meaningless. Then I wouldn’t give up my goal of finding truths; I would just look elsewhere, maybe at logical, mathematical, or even moral truths. These truths don’t seem adequate to explain my data, but maybe I’m wrong. They are also just as suspicious to me as ontological truths. I might also look for new kinds of truths. I think it’s definitely worth it to try and look at the world (sorry) non-ontologically.
The “how” is Solomonoff induction; the “why” is that it has historically been useful for prediction.
I don’t believe the programs used in Solomonoff induction are “models of an external world”; they’re just models.
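Here is a toy sketch of the kind of machinery I mean: a resource-bounded stand-in for Solomonoff induction (which is itself uncomputable), where the handful of hypotheses and their description lengths are made up purely for illustration. Each “program” is weighted by 2^−length, programs contradicted by the observed bits are dropped, and the survivors’ predictions are mixed. Nothing in the machinery marks any part of a program as “an external world”; the programs are just predictors.

```python
# Toy, resource-bounded stand-in for Solomonoff induction.
# Each "program" is an ordinary function giving the n-th predicted bit, and
# length_bits is a hand-assigned description length -- both are assumptions
# for illustration; real Solomonoff induction enumerates every program for a
# universal Turing machine and is uncomputable.
from typing import Callable, List, Tuple

Hypothesis = Tuple[str, int, Callable[[int], int]]  # (name, length in bits, n-th bit)

HYPOTHESES: List[Hypothesis] = [
    ("all zeros",        3, lambda n: 0),
    ("all ones",         3, lambda n: 1),
    ("alternate 0101",   5, lambda n: n % 2),
    ("ones after bit 3", 8, lambda n: 0 if n < 3 else 1),
]

def predict_next(observed: List[int]) -> float:
    """Posterior probability that the next bit is 1: mix the hypotheses,
    weighting each by 2**(-length) and discarding any that contradict the
    observed bits."""
    total = 0.0
    next_is_one = 0.0
    for _name, length_bits, bit_fn in HYPOTHESES:
        if all(bit_fn(i) == b for i, b in enumerate(observed)):
            w = 2.0 ** (-length_bits)  # the simplicity prior
            total += w
            next_is_one += w * bit_fn(len(observed))
    return next_is_one / total if total else 0.5

# After seeing 0, 0, 0 the short "all zeros" program dominates, but the longer
# "ones after bit 3" program keeps a little weight, so P(next bit = 1) is ~0.03.
print(predict_next([0, 0, 0]))
```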
Re simplicity: you’re conflating a mathematical treatment of simplicity, which justifies the prior and under which ontological claims aren’t simple, with a folk understanding of simplicity under which they are. Or at least you’re promoting the folk understanding over the mathematical one.
If you understand how Solomonoff works, are you willing to defend the folk understanding over that?
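(For reference, the mathematical treatment I’m pointing at is, roughly, the Solomonoff prior: a string $x$ gets prior weight $M(x) = \sum_{p\,:\,U(p)=x*} 2^{-\ell(p)}$, summing over programs $p$ whose output begins with $x$ on a universal machine $U$, where $\ell(p)$ is the program’s length in bits. “Simple” here means “generated by a short program”, which needn’t coincide with “posits few or familiar ontological objects”.)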