I assumed at least the laws themselves don’t change in PBR, so we still need some things to be deterministic in addition to the indeterministic ones. Not sure if it requires additional ontology, but it still seems to result in a more complex theory?
Perspectives in MWI can be derived.
I guess “actions behaving according to the wavefunction” in PBR do replace “wavefunction”, but wouldn’t the laws of behavior then become more complex, to include that translation from wavefunction to actions?
I don’t see what you mean. What are the deterministic quantum laws?
That would just be the projection postulate that everything else uses.
The results of the laws are indeterministic, but the laws themselves are kinda not—you always get the same probabilities. So I figured you would need additional complexity to distinguish between the deterministic and indeterministic parts of the description of the universe.
Forgive my ignorance, but why do we need the projection postulate in MWI?
Because if you are trying to calculate the probabilities of future events, you need to treat anything you have already observed as probability 1. Or, equivalently, discard anything unobserved.
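A minimal sketch of what treating the observed as probability 1 amounts to, in a toy two-qubit model (the state and qubit labels here are illustrative assumptions, not anything from the discussion): zero out the branches inconsistent with what you have observed, renormalize, and only then compute probabilities for future events.

```python
import numpy as np

# Toy entangled two-qubit state: (|00> + |01> + |11>) / sqrt(3).
# Index convention: high bit = qubit 1, low bit = qubit 2.
psi = np.zeros(4, dtype=complex)
psi[[0b00, 0b01, 0b11]] = 1 / np.sqrt(3)

# Unconditional Born probability that qubit 2 reads 1:
# sum of |amplitude|^2 over basis states whose low bit is 1.
p_q2_is_1 = sum(abs(psi[i]) ** 2 for i in range(4) if i & 1)

# Now suppose qubit 1 has already been observed to be 0.
# "Projection": discard the branches inconsistent with the
# observation and renormalize -- the observed fact now has
# probability 1 in the updated state.
proj = psi.copy()
proj[[0b10, 0b11]] = 0
proj /= np.linalg.norm(proj)

p_q2_is_1_given_q1_is_0 = sum(abs(proj[i]) ** 2 for i in range(4) if i & 1)

print(round(p_q2_is_1, 4))                # 0.6667 (i.e. 2/3)
print(round(p_q2_is_1_given_q1_is_0, 4))  # 0.5
```

Without the projection step, the already-observed branches keep contributing to the sum and the predicted probabilities for future events come out wrong.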
It sounds useful, but I don’t see any reason to include the way I treat anything in the ontology. That the wavefunction is nearly zero in all regions where Born statistics fails is just a consequence, not a postulate. Similarly, you can derive that following Bayes’ rule will result in the largest amount of measure for states where you know something. Whether you want this or not is a purely ethical question, and ethics today is as arbitrary as it was yesterday. You might as well only track uncertainty about the wavefunction and not a specific decoherence path, and decide to minimize worst-case ignorance or something.
You would need a postulate only if you want there to be some fundamental point-knowledge, but there are no point-states in reality—everything is just amplitudes.
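The “nearly zero where Born statistics fails” claim can be illustrated numerically (a sketch, under the assumption that the relevant measure is total squared amplitude): for n repeated measurements of a qubit whose outcome-1 weight is p, the combined weight of all branches whose observed frequency deviates from p by more than some tolerance shrinks as n grows.

```python
from math import comb

def weight_of_deviant_branches(p: float, n: int, tol: float) -> float:
    """Total squared amplitude (Born weight) of the branches of n
    identical qubit measurements whose observed frequency of outcome 1
    differs from p by more than tol."""
    total = 0.0
    for k in range(n + 1):
        if abs(k / n - p) > tol:
            # comb(n, k) branches share this outcome count; each
            # carries squared amplitude p^k * (1-p)^(n-k).
            total += comb(n, k) * p ** k * (1 - p) ** (n - k)
    return total

for n in (10, 100, 1000):
    print(n, weight_of_deviant_branches(0.3, n, 0.1))
```

As n grows, the branches where Born statistics visibly fails carry a vanishing share of the total measure, which is the “consequence, not postulate” point above.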
I didn’t say it had anything to do with ontology, and in MWI it doesn’t. In MWI, you disregard results from other branches that you haven’t observed in order to predict future probabilities correctly, but you don’t regard them as nonexistent.
Under subjective interpretations like RQM, there aren’t deterministic and indeterministic parts of the universe. You can use Schrödinger’s equation to model a part of the universe, and that will work until it stops being isolated—until it interacts with something not in the model. All models are ultimately indeterministic because they are always based on incomplete information. And the process by which this becomes apparent, by which the limited model is invalidated, isn’t anything special.
Observer O1 can model system A just fine until it interacts with system B, which they don’t know anything about. If observer O2 is more knowledgeable, they might be able to model the AB interaction using the SWE.
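A toy version of the O1/O2 situation (Hamiltonians, initial states, and coupling strength all chosen purely for illustration): O1 models qubit A in isolation, and that model matches the true reduced state of A exactly while the A–B coupling is zero, then diverges once the coupling is switched on.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def evolve(H, psi, t):
    """Exact unitary evolution exp(-iHt)|psi> via diagonalization."""
    w, v = np.linalg.eigh(H)
    return v @ (np.exp(-1j * w * t) * (v.conj().T @ psi))

def reduced_A(psi):
    """Partial trace over B for a two-qubit state vector (A is the first qubit)."""
    m = psi.reshape(2, 2)
    return m @ m.conj().T

def model_error(g, t=1.0):
    """Trace distance between O1's isolated model of A and the true
    reduced state of A when A and B are coupled with strength g."""
    a0 = np.array([1, 0], dtype=complex)               # A starts in |0>
    b0 = np.array([1, 1], dtype=complex) / np.sqrt(2)  # B starts in |+>
    # O1's model: A evolves alone under H_A = sigma_x.
    a_pred = evolve(sx, a0, t)
    rho_pred = np.outer(a_pred, a_pred.conj())
    # Reality: joint evolution with coupling g * sz (x) sz.
    H = np.kron(sx, I2) + np.kron(I2, sx) + g * np.kron(sz, sz)
    psi = evolve(H, np.kron(a0, b0), t)
    rho_true = reduced_A(psi)
    return 0.5 * np.abs(np.linalg.eigvalsh(rho_pred - rho_true)).sum()

print(model_error(0.0))  # ~0: the isolated model is exact while g = 0
print(model_error(1.0))  # > 0: the model breaks once A couples to B
```

The point is that nothing special happens at the moment the model fails: it is the same unitary dynamics throughout, and O2, who includes B in the model, sees no breakdown at all.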