Nice dialogue.

It’s true that probability and importance are interchangeable in an expected-utility calculation, but if you weight A twice as much as B because you care twice as much about it, that implies the two have equal probabilities. So if you use a Solomonoff-style prior based on how much you care, that implies a uniform prior on the worlds themselves. Or maybe you’re saying expected utility is the sum of caring times value, with no probabilities involved. But in that case your probability is just how much you care.
If we were in a complex world, it’s plausible you could have a bigger impact on your values by choosing actions that correlate with actions in the much more important simpler world rather than actions that have good consequences in this world. Computing which those are would take a lot of effort, though, so in practice, you’d be doing the same sorts of things in the short run (i.e., working toward better futures).
What it means to exist is one area of metaphysics that still confuses me, but the Tegmark Level IV picture seems to make sense. In that case, rather than measure being unimportant, measure is all that matters, because our actions help determine which possible worlds have more and less measure.
Thanks!

I am saying that expected utility is the sum of caring times value, with no probabilities involved. If there are going to be any probabilities involved at all, they will come from logical uncertainty, which is a separate issue.
This can be thought of as saying that your probability is just how much you care, which is how I think about it. However, this has some philosophical consequences. It means that probabilities really are completely subjective. It also means that trying to talk about tautologies outside of the context of a person’s beliefs is completely a mind projection fallacy. This explains some issues that I have with anthropics, by dismissing them as ill formed questions.
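(One way to write down the bookkeeping being claimed here, in notation that is mine rather than anything from the thread:)

$$\mathrm{EU}(a) \;=\; \sum_{w} c(w)\, U_w(a)$$

where $c(w)$ is how much world $w$ is cared about and $U_w(a)$ is the value of action $a$ in world $w$. Any factorization $c(w) = p(w)\,m(w)$ into a “probability” and an “importance” leaves the ranking of actions unchanged, which is why one can either read $c$ as probability times importance or drop the probabilities altogether.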
If we were in a complex world, it’s plausible you could have a bigger impact on your values by choosing actions that correlate with actions in the much more important simpler world rather than actions that have good consequences in this world. Computing which those are would take a lot of effort, though, so in practice, you’d be doing the same sorts of things in the short run (i.e., working toward better futures).
This is plausible, but I do not think it is likely to be possible, unless there is a simulation of you in the simple universe, in which case you should assign some “probability” to being in that universe. As for just choosing actions which correlate with the actions of simpler agents, purely by the mechanism of you being similar to those agents: I do not think this will work, because you having the knowledge that your world is the more complex one is enough to make your decision procedures sufficiently different.
What it means to exist is one area of metaphysics that still confuses me, but the Tegmark Level IV picture seems to make sense. In that case, rather than measure being unimportant, measure is all that matters, because our actions help determine which possible worlds have more and less measure.
I might be wrong, but I believe that the Tegmark Level 4 universe does not by his definition come with a prior. When you say this, you are talking about Tegmark 4 with the Solomonoff prior, right?
This explains some issues that I have with anthropics, by dismissing them as ill formed questions.
I was going to ask how you handle anthropics, but then you answered it. Trippy stuff.
If probability is just degree of caring, why would we use Bayes’ rule to update? Or are you also proposing not to update?
I do not think this will work, because you having the knowledge that your world is the more complex one is enough to make your decision procedures sufficiently different.
It probably works sometimes for outputs that aren’t related to knowledge of how complex your world is. For instance: Consider a simulation of you at the neuronal level making some decision. It produces some output. Then there’s another simulation down to the molecular level. It produces some slightly more accurate output. Then another at the quantum level; it’s yet again slightly more accurate. If you’re the quantum-level one, your output correlates highly with the neuronal-level one, which is much simpler, so you care about it vastly more.
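(A toy version of this tradeoff, as a sketch; the caring weights and correlations below are made-up numbers, not anything from the discussion.)

```python
# Toy model: leverage on each simulation level = caring weight * correlation
# between my decision and that level's output. All numbers are illustrative.

levels = {
    # name: (caring weight, correlation of that level's output with mine)
    "neuronal":  (2.0 ** -10, 0.90),   # simplest description, cared about most
    "molecular": (2.0 ** -40, 0.99),
    "quantum":   (2.0 ** -80, 1.00),   # suppose "I" am the quantum-level simulation
}

for name, (caring, corr) in levels.items():
    print(f"{name:>9}: caring={caring:.1e}  corr={corr:.2f}  leverage={caring * corr:.1e}")

# The neuronal-level world dominates: its slightly weaker correlation is
# swamped by a caring weight tens of orders of magnitude larger.
```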
When you say this, you are talking about Tegmark 4 with the Solomonoff prior, right?
Not necessarily Solomonoff, especially since the multiverse of mathematical structures doesn’t conform to a Solomonoff distribution. The set of finite bitstrings is only countably infinite, but the set of mathematically possible universes is uncountably infinite, e.g., universes where some parameter is set to each possible real number. I just meant some measure over the universes. If you restrict Tegmark 4 to bitstring universes, then Solomonoff could work.
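(For concreteness, a hypothetical measure of the kind being gestured at, over universes indexed by a single real parameter $\theta$; this is my illustration, not a proposal from the thread.)

$$\mu(S) \;=\; \int_S \frac{1}{\sqrt{2\pi}}\, e^{-\theta^2/2}\, d\theta, \qquad S \subseteq \mathbb{R}$$

This is a perfectly good normalized measure over an uncountable family of universes, but it is not a Solomonoff-style sum of $2^{-\ell(p)}$ over finite programs $p$.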
If probability is just degree of caring, why would we use Bayes’ rule to update? Or are you also proposing not to update?
I do not update, but in the exact same sense that Updateless Decision Theory does not update. It adds up to Bayes’ rule normality.
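(A sketch of the “adds up to normality” point in the simplest possible setting, with made-up worlds, likelihoods, and utilities: in a vanilla problem with no copies or predictors, choosing the whole policy up front with no updating and updating by Bayes’ rule before acting recommend the same action for every observation.)

```python
from itertools import product

# Toy setup; all numbers are illustrative assumptions.
worlds = ["simple", "complex"]
caring = {"simple": 0.8, "complex": 0.2}            # plays the role of a prior
likelihood = {                                       # P(observation | world)
    "simple":  {"obs0": 0.9, "obs1": 0.1},
    "complex": {"obs0": 0.3, "obs1": 0.7},
}
utility = {                                          # U(action, world)
    ("A", "simple"): 10, ("A", "complex"): 0,
    ("B", "simple"): 2,  ("B", "complex"): 8,
}
observations = ["obs0", "obs1"]
actions = ["A", "B"]

def policy_value(policy):
    """Updateless value of a policy (a dict observation -> action)."""
    return sum(
        caring[w] * likelihood[w][o] * utility[(policy[o], w)]
        for w in worlds
        for o in observations
    )

# Best policy chosen up front, with no updating.
best_policy = max(
    (dict(zip(observations, choice)) for choice in product(actions, repeat=len(observations))),
    key=policy_value,
)

# Bayesian agent: after seeing o, weight worlds by caring * likelihood and act.
def bayes_action(o):
    return max(actions, key=lambda a: sum(caring[w] * likelihood[w][o] * utility[(a, w)]
                                          for w in worlds))

for o in observations:
    assert best_policy[o] == bayes_action(o)
    print(o, "->", best_policy[o])
```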
It probably works sometimes for outputs that aren’t related to knowledge of how complex your world is.
Again, plausible, but I am really not sure. Either way, I feel like we do not understand nearly enough to have this strategy of sacrificing our own world for other worlds be practical right now.
The set of finite bitstrings is only countably infinite, but the set of mathematically possible universes is uncountably infinite, e.g., universes where some parameter is set to each possible real number.
This is one of the beauties of my proposal: if we do not have to assign probabilities to possible universes, we don’t have to limit ourselves to an uncountable infinity. The collection of universes does not even have to be a set!
This is one of the beauties of my proposal: if we do not have to assign probabilities to possible universes, we don’t have to limit ourselves to an uncountable infinity.
Hmm. Seems like your caring-about measure should still sum to 1. If you’re just comparing two universes, all you need to know is their relative importance, but if you want to evaluate policies over the whole set of universes, you’re going to want a set of weights whose sum is bounded.
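(One way to see the problem, in notation of my own: with infinitely many worlds, policy comparisons only stay well defined if the caring weights are summable. The sum)

$$\sum_{n=1}^{\infty} c(w_n)\, U_{w_n}(\pi)$$

need not converge for any policy $\pi$ if, say, $c(w_n) = 1$ for every $n$, whereas a summable choice such as $c(w_n) = 2^{-n}$ (so the weights total $1$) keeps every policy comparison finite whenever the utilities are bounded.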
Great point.

However, even if the multiverse is infinite, or not even a set, I as a finite mind can only look at finite pieces of it. My caring function looks at a small piece of the multiverse, because it cannot comprehend the whole thing. This is sad. However, it does not feel arbitrary to me. The caring function has limitations of finiteness, or limitations of set theory, but those are MY limitations. There is a big difference to me between me having a limited caring function and thinking that the universe has a limited probability function built in.
I see. :) You can do that, and it’s psychologically plausible.
I’m old-school and still believe there’s some fact of the matter about what the multiverse is. Presumably this fact of the matter is representable analytically (though not necessarily by human minds). If we found a better mathematical way to capture this, presumably your limitations would expand and you would then care about more than you do now.
I’m old-school and still believe there’s some fact of the matter about what the multiverse is.
I enjoyed this sentence.
If we found a better mathematical way to capture this, presumably your limitations would expand and you would then care about more than you do now.
Only to a point! Suppose Coscott has a policy for accepting new math which applies a critical eye and only accepts things based on a consistent criterion. That is: if he would accept X upon inspection, then he would not have accepted not-X via his inspection.
Then consider the “completion” of Coscott’s views; that is, the (presumably uncomputable) system which is Coscott in the limit of being taught arbitrarily many new math techniques.
Now apply Tarski’s Undefinability. We can construct a more powerful math which Coscott could never accept.
Therefore, if Coscott is a sufficiently careful mathematician, then his math powers seem to have ultimate limits already, rather than mere current limits.
On the other hand, if there is no such limit, because Coscott could accept either X or not-X for some X depending on which is presented to him first, then there is hope! A “stronger” teacher could show Coscott the way to the more powerful math. Yet, Coscott is also at risk of being misled.
Here’s a riddle: according to Coscott, is there any way Coscott could be misled? Coscott has established that physical existence is meaningless to him; but is there mathematical truth beyond provability? Is there a branch of the multiverse in which the axiom of choice is true, and one where it is false? (It seems clear that there is a subset of the universe where the axiom of choice is true, and one where it is false, but I don’t think that’s what I mean...)

You can do that, and it’s psychologically plausible.

What do you mean by “psychologically plausible?”

I mean it’s a plausible way to describe how people actually feel and make decisions when acting.
However, even if the multiverse is infinite, or not even a set, I as a finite mind can only look at finite pieces of it. My caring function looks at a small piece of the multiverse, because it cannot comprehend the whole thing.
Your limitation does not inherently prohibit you from having a caring function that looks at the entire multiverse. The constraint is on how complex the pattern of evaluation of the features of the multiverse can be.
Fair enough. I was aware of that, but did not bother to write it out. Sorry.
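(A minimal illustration of the preceding exchange’s point: a finite rule can assign a weight to every member of an infinite family, so the real constraint is on the complexity of the rule, not on the size of its domain. The enumeration of universes here is hypothetical.)

```python
# A finite piece of code that assigns a caring weight to every universe in
# some hypothetical enumeration u_0, u_1, u_2, ... -- an infinite domain,
# evaluated by a very simple pattern.
def caring(n: int) -> float:
    return 2.0 ** -(n + 1)   # weights sum to 1 over all n >= 0

print(sum(caring(n) for n in range(50)))  # approaches 1.0
```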