I think some confusion goes away if you get back to thinking about the details of the process that motivates these questions (i.e. thinking and decision-making performed using a brain), instead of reifying the informal concepts perceived during that process (e.g. “the world”, “experience”, etc.). What you have when thinking or making decisions is a map, some kind of theory that might talk about events (e.g. specify/estimate their utility and probability). Decision-making and map-updating only have the map to work with. (When formalizing ideas that you are more or less able to reliably think about, semantic explanations can be used to capture and formulate the laws of that thinking, but they can be misleading when working on ideas that are too confused, since they then try to capture laws of thinking that are not there.)
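(To make the “only the map to work with” point concrete, here is a toy sketch; the `Event`, `Map` and `choose_action` names are purely illustrative, not anything from the post. Decision-making in this sketch consumes nothing except the probabilities and utilities the map itself assigns to events.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    description: str

class Map:
    """A theory that talks about events: it estimates their probability and utility."""
    def __init__(self, probability: dict[Event, float], utility: dict[Event, float]):
        self.probability = probability
        self.utility = utility

def choose_action(map_: Map, outcomes_of: dict[str, list[Event]]) -> str:
    """Pick the action whose outcome-events look best according to the map alone."""
    def expected_utility(action: str) -> float:
        return sum(map_.probability[e] * map_.utility[e] for e in outcomes_of[action])
    return max(outcomes_of, key=expected_utility)
```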
In this setting, “worlds” or programs that somehow describe them are unnecessary. Since different worlds containing an agent with the same map will receive the same actions and map updates from that agent, it’s not useful to distinguish (or introduce) them when considering the agent’s reasoning. (Separately, the mysterious “Kolmogorov complexity of worlds” is invoked without any clarity about what it means for a program to describe a world, so in avoiding its use we get rid of another mystery.)
If caring (probability) compares events in the agent’s map, anticipated simplicity reflects a fact about how the map is updated (the agent’s prior): “simple” events get more probability. This is probably caused by how evolution built the map-updating algorithms, killing off various anti-inductive priors that would give more probability to weird events that are unlike related events of high probability (i.e. believing that something will most certainly happen because it has never happened before, selected among such possibilities in some way). (When I point to evolution acting in simple worlds and selecting minds with simplicity-favoring priors, rather than to magic acting in weird worlds and selecting minds with simplicity-favoring priors, I’m using my own mind’s simplicity-favoring prior to select that explanation.)
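(A toy illustration of what “simplicity-favoring” vs. “anti-inductive” could mean for a prior over events, using description length as a crude stand-in for whatever complexity measure the map-updating actually uses; the function names and example events are mine.)

```python
def normalized(weights: dict[str, float]) -> dict[str, float]:
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

def simplicity_prior(descriptions: list[str]) -> dict[str, float]:
    # Shorter description = "simpler" event = more prior probability.
    return normalized({d: 2.0 ** (-len(d)) for d in descriptions})

def anti_inductive_prior(descriptions: list[str]) -> dict[str, float]:
    # The kind of prior evolution plausibly weeds out: weirder events get more mass.
    return normalized({d: 2.0 ** (+len(d)) for d in descriptions})

events = ["the sun rises tomorrow",
          "the sun rises tomorrow only if the date is prime"]
print(simplicity_prior(events))      # almost all mass on the short description
print(anti_inductive_prior(events))  # almost all mass on the contrived one
```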
From this point of view, I don’t feel like the issues discussed in the post point to mysteries that are not accounted for. “Mathematical multiverse” corresponds to the language of the agent’s map, perhaps only mentioning events (propositions) and not their probabilities/utilities (judgements maintained by a particular agent). “Reality fluid”, or the prior/caring about worlds of the multiverse, corresponds to the probabilities (or something like them) that the map assigns/estimates for the events. These are “subjective” in that different agents have different maps, and “objective” in that they are normative for how that agent thinks (they give an idealized map that the agent’s thinking aspires to understand). The (measureless) mathematical multiverse could also be “more objective” than priors, if descriptions of events could be interpreted between the maps of different agents, even when those agents assign them different degrees of caring (this is analogous to how the same propositions of a logical language can be shared by many theories with different axioms, which disagree about the truth of propositions but talk about the same propositions).
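(A trivial sketch of that last analogy, with made-up propositions and numbers: the shared list of propositions plays the role of the common language/measureless multiverse, and each agent’s probability assignment plays the role of its own prior/caring.)

```python
# Two maps share the same propositions (the "measureless" part),
# but assign them different probabilities (each agent's own "reality fluid").
propositions = ["the coin lands heads", "the coin is biased", "it rains tomorrow"]

agent_a = dict(zip(propositions, [0.5, 0.1, 0.3]))
agent_b = dict(zip(propositions, [0.9, 0.8, 0.3]))

for p in propositions:
    # The same proposition is interpretable in both maps; only the assigned degree differs.
    print(f"{p!r}: A says {agent_a[p]}, B says {agent_b[p]}")
```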