I don’t really know Solomonoff induction or MWI on a formal level, but… If I know that the universe seems to obey rule X everywhere, and I know what my local environment is like and how applying rule X to that local environment would affect it, isn’t that enough? Why would I need to include in my model a copy of the entire wavefunction that made up the universe, if having a model of my local environment is enough to predict how my local environment behaves? In other words, I don’t need to spend a lot of effort selecting the subjective me, because my model is small enough to mostly only include the subjective me in the first place.
(I acknowledge that I don’t know these topics well, and might just be talking nonsense.)
I don’t really know Solomonoff induction or MWI on a formal level
You know more about it than most of the people talking about it: you know you don’t know it. They don’t. That is the chief difference. (I also don’t know it all that well, but at least when I see the argument that it favours something, I can check whether it favours an iterator over all possible worlds even more.)
If I know that the universe seems to obey rule X everywhere, and I know what my local environment is like and how applying rule X to that local environment would affect it, isn’t that enough?
Formally, there’s no distinction between the rules you know and the environment. You are to construct the shortest self-contained piece of code that predicts the experiment, and that code has to include any local environment data as well (toy sketch below).
If you follow this approach to its logical end, you get the Copenhagen Interpretation in its shut-up-and-calculate form: you don’t need to predict the outcomes that you’ll never see. So you are on the right track.
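To make the formal setup concrete, here is a toy illustration (entirely mine, with made-up names and numbers; real Solomonoff induction is uncomputable and nothing this simple). A hypothesis is a complete program, the rules plus the encoded starting data, weighted by 2^-length, so every bit of environment you encode costs probability mass:

```python
from fractions import Fraction

def make_hypothesis(rules_src, environment):
    """Bundle rules and initial data into one self-contained 'program'
    and weight it by its total length, Solomonoff-prior style."""
    program = rules_src + ";" + repr(environment)
    return program, Fraction(1, 2 ** len(program))

rules_src = "s = [x + 1 for x in s]"   # stand-in for 'rule X'
local_env = [0, 1, 2]                  # just the local environment
whole_universe = list(range(100))      # far more initial data, same rules

_, w_local = make_hypothesis(rules_src, local_env)
_, w_global = make_hypothesis(rules_src, whole_universe)
print(w_local > w_global)  # True: encoding less environment data is cheaper
```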
It doesn’t take any extra code to predict all the outcomes that you’ll never see, just extra space and time, and those are not the quantity being minimized. In fact, predicting all the outcomes you’ll never see is exactly the sort of wasteful space/time usage that programmers engage in when they want to minimize code length: it’s hard to write code that tells your processor to abandon threads of computation once they’re no longer relevant.
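A toy sketch of what I mean (made-up branching dynamics, not a real quantum simulation): computing every branch keeps the program tiny and only burns memory and time, while abandoning branches is the part that needs extra code:

```python
def evolve_all(branches, steps):
    """Every branch splits in two each step. The code stays tiny;
    space and time grow as 2**steps."""
    for _ in range(steps):
        branches = [b + [bit] for b in branches for bit in (0, 1)]
    return branches

def evolve_pruned(branches, steps, observed):
    """Same dynamics, plus a criterion and a filter for dropping
    branches that no longer match observation: strictly more code."""
    for i in range(steps):
        branches = [b + [bit] for b in branches for bit in (0, 1)]
        branches = [b for b in branches if b[i] == observed[i]]
    return branches

print(len(evolve_all([[]], 10)))               # 1024 branches, short program
print(len(evolve_pruned([[]], 10, [0] * 10)))  # 1 branch, longer program
```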
You missed the point: if you calculated all the outcomes, you need code for picking the outcome you do see out of the outcomes you didn’t. It does take extra code to predict the outcome you did see if you actually calculated the extra outcomes you didn’t, and then it’s hard to tell which would require less code: neither piece of code is a subset of the other, and the difference likely depends on the encoding of programs.
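To illustrate the trade-off (a hypothetical sketch, not a claim about any actual encoding of programs): the single-history predictor carries its selection rule inline, while the compute-everything predictor needs an index into the branches afterwards, and neither program is a subset of the other:

```python
def predict_one(steps, choose):
    """Copenhagen-flavoured: collapse to a single outcome at each step.
    The selection code lives inside the loop."""
    history = []
    for i in range(steps):
        history.append(choose(i))
    return history

def predict_all_then_pick(steps, index):
    """MWI-flavoured: pure dynamics enumerate every history, but extra
    code (and bits for the index) is needed to pick out 'your' branch."""
    histories = [[]]
    for _ in range(steps):
        histories = [h + [b] for h in histories for b in (0, 1)]
    return histories[index]

# Both predict the same observed history; which program is shorter
# depends on how the rules, the selection step, and the index are encoded.
assert predict_one(3, lambda i: 1) == predict_all_then_pick(3, 0b111)
```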