Vladimir, many of these anthropic-sounding questions also translate directly into “What should I expect to see happen to me, in situations where there are a billion X-potentially-mes and one Y-potentially-me?” If X is a kind of me, I should almost certainly expect to see X; if not, I should expect to see Y. I cannot quite manage to bring myself to dispense with the question “What should I expect to see happen next?” or, even worse, “Why am I seeing something so orderly rather than chaotic?” Saying “I only care about people in orderly situations,” for example, does not cut it as an explanation; that question doesn’t seem like one I could answer with a utility function.
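A minimal way to spell out the arithmetic behind that expectation, assuming each potential-me gets equal weight (an assumption not made explicit above):

$$
P(\text{I see X} \mid \text{the X’s count as me}) = \frac{10^9}{10^9 + 1} \approx 1 - 10^{-9},
\qquad
P(\text{I see Y} \mid \text{the X’s don’t count as me}) = 1.
$$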
I currently think a subjective point of view should be assumed only for a single decision, with all the semantics preconfigured in the utility maximizer that makes that decision. No continuity of experience enters this picture: if the agent operates continuously, it’s just a sequence of utility-maximizer configurations, which are to be determined, from each of the decision points, to hold the best beliefs and, more generally, whatever cognitive features are best (if it’s a sequence, then certain kinds of cognitive rituals become efficient). So there is no future “me”; the future “me” is a decision point that needs to be determined according to the preferences of the current decision, and it may be that no future “me” is planned at all. This reduces expectation to a combination of probability and utility: you have uncertain knowledge about your future version, and value associated with its possible states. So you don’t plan to see something chaotic, because you don’t predict anything chaotic to happen.
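A rough way to put that reduction in symbols (my notation, not Vladimir’s): if $a$ is the action taken at the current decision point and $s$ ranges over possible configurations of the future decision point, the current maximizer simply picks

$$
a^{*} = \arg\max_{a} \sum_{s} P(s \mid a)\, U(s),
$$

where $P(s \mid a)$ encodes the uncertain knowledge about the future version and $U(s)$ the value placed on its possible states; no separate notion of “what I will experience” appears in the calculation.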
I have not been able to dissolve “the amount of reality-fluid” without also dissolving my belief that most people-weight, and most of my futures, lie in ordered universes; and without that belief I have no explanation for why I find myself in an ordered universe, and no expectation that my future will be ordered as well.
You predict the future to be ordered, and you are configured with the knowledge that the environment is ordered. An Occam’s-razor-like prior is expected to converge on the true distribution, whatever that is, and so, being a general predictor, you weight the possibilities accordingly.
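One standard way to formalize an “Occam’s-razor-like prior” is a Solomonoff-style mixture, offered here as an illustration rather than as what Vladimir necessarily has in mind: weight each hypothesis $h$ by its description length $\ell(h)$ and predict with

$$
P(x) \;\propto\; \sum_{h} 2^{-\ell(h)}\, P(x \mid h),
$$

whose predictions converge on those of the true distribution whenever the environment is computable, while concentrating most of the weight on simple, ordered hypotheses.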
In particular, I have not been able to dissolve reality-fluid into my utility function without concluding that, by virtue of caring only about copies of me who win the lottery, I could expect to win the lottery and actually see that happen.
You can’t actually see that result; you can only expect your future state to see it. If there is any point in preparing for winning or losing the lottery, and you only care about winning (that is, if you don’t win, nothing you’ve done will matter), then you’ll make preparations for the winning option regardless of your chances; that is, you’ll act as if you expect to win. If you include your own thoughts, your probability distribution and your utility function, in the domain of your decisions, you might as well reconfigure yourself to believe that you’ll most certainly win. That’s not a realistically plausible situation, and it changes the semantics of truth in your representation, which makes it counterintuitive, but it delivers the same win.
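A minimal sketch of why the preparation decision comes out the same regardless of the odds, assuming, as above, that the losing branch is worth nothing either way:

$$
\mathbb{E}[U \mid \text{prepare}] = p\,U(\text{win, prepared}) + (1-p)\cdot 0,
\qquad
\mathbb{E}[U \mid \text{don’t}] = p\,U(\text{win, unprepared}) + (1-p)\cdot 0,
$$

so preparing is the better option whenever $U(\text{win, prepared}) > U(\text{win, unprepared})$, for any $p > 0$; the winning probability drops out of the comparison entirely.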