Logical uncertainty still gets probabilities, just as it always has. Only indexical uncertainty gets pushed into the realm of values.
(At least for now, while I am thinking of the multiverse as Tegmark 4. I am very open to the possibility that I will eventually come to believe that even logically inconsistent universes exist, and then they would get the same fate as indexical uncertainty.)
In one model I considered, I took Tegmark 4 as the level weighted according to my values, and called the set of different counterfactual universes that other agents might care about "Tegmark 5". This was mainly for a piece of fiction, where it served as a social convention among agents with very different values of this type, but it is an interesting sketch of what the concept might look like.
These need not, by the way, be merely quantitatively different weights over the same set of universes. For example, we can imagine that humans and human-derived agents turn out to be Solomonoff-induction-like and value only things describable by Turing machines computing them causally, while some other agents value only the outputs of continuous functions.
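To make that contrast a bit more concrete, here is a rough sketch in my own notation (the symbols w, M, p are mine, not anything fixed by the framing above): a Solomonoff-style agent might weight each computable universe U by the programs that generate it on some universal Turing machine M,

$$w(U) = \sum_{p \,:\, M(p) = U} 2^{-|p|},$$

whereas the hypothetical continuous-function-valuing agents would put their measure over some space of continuous functions instead. The two kinds of agent don't just assign different numbers w(U) to the same universes; they disagree about which objects get a weight at all.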