I think of explanations as being prior to predictions. The goal of (epistemic) rationality, for me, is not to accurately predict what future experiences I will have. It’s to come up with the best model of reality that includes the experiences I’m having right now.
I’ve even lately come to be skeptical of the notion of anticipated experience. In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things. There are substitute notions that play the role of beliefs about future experiences, but they aren’t the same thing.
Experiences are physical processes in the world. If you have beliefs about your future experiences, then you must either have beliefs about physical processes, or you have to be a dualist. If you don’t have beliefs about experiences, but instead just “anticipate” them, then I don’t know what that amounts to.
There is an odd question that I think about sometimes - why does “exist” refer to actual real world existence, as opposed to something like “exists according to Max Tegmark”, or “exists inside the observable universe”? I’m taking for granted that there is a property of existence—the question is, how do we manage to pick it out? There are a couple ideas off the top of my head that could answer this question. Reference magnetism is the idea that some properties are so special that they magically cause our words to be about them. Existence is pretty special - it’s a property that everything has. Alternatively, maybe “exist” acquires its meaning somehow through our interaction with the physical world—existent things are those things that I can causally interact with. This option has the “benefit” of ruling out the possibility of causally isolated parallel worlds.
In Many Worlds, there is no such thing as “what I will experience”, there are just future people descended from me who experience different things.
Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes “actually exist” is meaningless, though; it’s all just models.
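For concreteness, a minimal sketch of the kind of estimate I mean (a toy illustration only; the Branch type, its weight field, and the anticipated() function are hypothetical names, and the weights are assumed to be something like Born-rule weights or model posteriors):

    # Toy sketch: anticipated experience as the weighted fraction of future-mes.
    # Weights are assumed, e.g. squared amplitudes or model posteriors.
    from dataclasses import dataclass

    @dataclass
    class Branch:
        weight: float      # assumed branch weight (e.g. squared amplitude)
        experience: str    # what the future-me in this branch experiences

    def anticipated(branches: list[Branch], experience: str) -> float:
        """Total normalized weight of branches whose future-me has `experience`."""
        total = sum(b.weight for b in branches)
        hit = sum(b.weight for b in branches if b.experience == experience)
        return hit / total if total else 0.0

    # Two descendants of me: one sees the detector click, one doesn't.
    branches = [Branch(0.7, "click"), Branch(0.3, "no click")]
    print(anticipated(branches, "click"))  # 0.7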
It’s to come up with the best model of reality that includes the experiences I’m having right now.
Why? You’ll end up with many models which fit the data, some of which are simpler, but why is any one of those the “best”?
Experiences are physical processes in the world.
Disagree that this statement is cognitively meaningful.
Anticipated experience is just my estimate for the percentage of future-mes with said experience. Whether any of those future-mes “actually exist” is meaningless, though; it’s all just models.
So the idea is that you’re taking a percentage of the yous that exist across all possible models consistent with the data? Why? And how? I can sort of understand the idea that claims about the external world are meaningless in so far as they don’t constrain expectations. But now this thing we’ve been calling expectations is being identified with a structure inside the models whose whole purpose is to constrain our expectations. It seems circular to me.
It didn’t have to be this way: that the best way to predict experiences was by constructing models of an external world. There are other algorithms that could have turned out to be useful for this. Some people even think that it didn’t turn out this way, and that quantum mechanics is a good example.
Why? You’ll end up with many models which fit the data, some of which are simpler, but why is any one of those the “best”?
I care about finding the truth. I think I have experiences, and my job is to use this fact to find more truths. The easiest hypotheses for me to think of that incorporate my data make ontological claims. My priors tell me that something like simplicity is a virtue, and if we’re talking about ontological claims, that means simpler ontological claims are more likely. I manage to build up a really large and intricate system of ontological claims that I hold to be true. At the margins, some ontological distinctions feel odd to maintain, but by and large it feels pretty natural.
Now, suppose I came to realize that ontological claims were in fact meaningless. Then I wouldn’t give up my goal of finding truths; I would just look elsewhere, maybe at logical, or mathematical, or maybe even moral truths. These truths don’t seem adequate to explain my data, but maybe I’m wrong. They are also just as suspicious to me as ontological truths. I might also look for new kinds of truths. I think it’s definitely worth it to try to look at the world (sorry) non-ontologically.
The how is Solomonoff induction; the why is that it’s historically been useful for prediction.
I don’t believe programs used in Solomonoff are “models of an external world”, they’re just models.
Re simplicity, you’re conflating the mathematical treatment of simplicity, which justifies the prior and on which ontological claims aren’t simple, with a folk understanding of simplicity on which they are. Or at least you’re promoting the folk understanding over the mathematical understanding.
If you understand how Solomonoff works, are you willing to defend the folk understanding over that?
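For concreteness, here is a toy sketch of the mathematical treatment of simplicity being pointed at above. It is not Solomonoff induction proper: there is no universal machine, and the “programs” are restricted to a made-up family that just repeats a fixed bit pattern. Only the shape of the math carries over: weight each hypothesis by 2^(-length), discard the ones inconsistent with the data, and predict the next bit by weighted vote.

    # Toy stand-in for a Solomonoff-style predictor (illustration only).
    # Hypotheses: "repeat this bit pattern forever", prior weight 2^(-pattern length).
    from itertools import product

    def pattern_output(pattern: str, n: int) -> str:
        """First n bits produced by the toy 'program' that repeats `pattern`."""
        return (pattern * (n // len(pattern) + 1))[:n]

    def predict_next(data: str, max_len: int = 8) -> dict:
        """Posterior-weighted vote over the next bit."""
        votes = {"0": 0.0, "1": 0.0}
        for length in range(1, max_len + 1):
            for bits in product("01", repeat=length):
                pattern = "".join(bits)
                if pattern_output(pattern, len(data)) != data:
                    continue  # inconsistent with observations, weight drops to zero
                next_bit = pattern_output(pattern, len(data) + 1)[-1]
                votes[next_bit] += 2.0 ** (-length)
        total = sum(votes.values())
        return {bit: w / total for bit, w in votes.items()} if total else votes

    print(predict_next("010101"))  # mostly predicts "0"; the short pattern "01" dominates

On this toy version, the shortest pattern consistent with the data carries most of the weight, which is the sense in which “simpler is more likely” falls out of the 2^(-length) prior rather than out of any folk judgment about ontology.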