The problem of locating “the subjective you” seems to me to have two parts: first, to locate a world, and second, to locate an observer in that world. For the first part, see the grandparent; the second part seems to me to be the same across interpretations.
The point is that the code of a theory has to produce output matching your personal subjective input. The objective view doesn’t suffice (and if you drop that requirement, you are back to square one, because you could just iterate over all physical theories). The CI has that as part of the theory; MWI doesn’t, so you need extra code.
The complexity argument for MWI that was presented doesn’t favour MWI; it favours iterating over all possible physical theories, because that key requirement was omitted.
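To make the accounting concrete, here is a toy sketch of my own (the bit counts are invented placeholders, not anything from the original argument): under Solomonoff-style reasoning, a hypothesis is a program whose output must match your subjective observations, so its length includes any observer-locating code, and comparing theories by their dynamics alone leaves that term out.

```python
# Toy sketch of the complexity accounting at issue. The bit counts below are
# made-up placeholders; the point is only that a fair comparison must include
# the observer-locating code, not just the dynamics.

def total_description_length(dynamics_bits: int, locator_bits: int) -> int:
    """Length of a candidate program: physics code plus the code needed to
    pick out the subjective observer whose input the program must reproduce."""
    return dynamics_bits + locator_bits

# Hypothetical numbers purely for illustration.
ci_total = total_description_length(dynamics_bits=1000, locator_bits=0)
# CI: collapse already singles out one observer's data stream.
mwi_total = total_description_length(dynamics_bits=900, locator_bits=200)
# MWI: simpler dynamics, but extra code to locate "you" in the wavefunction.

# Comparing only dynamics_bits (900 < 1000) is the step objected to above;
# the comparison that matters is between the totals.
print(ci_total, mwi_total)
```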
And my original point is not that MWI is false, or that it has higher or equal complexity. My point is that the argument is flawed. I don’t care whether MWI is true or false; I am using the argument for MWI as an example of the kind of sloppiness SI should try to avoid (hopefully, without this kind of sloppiness, they will also be far less sure that AIs are so dangerous).