You conflate two very different things here, as I see it.
First, there are the preferences for simpler physical laws or simpler mathematical constructions. I don’t doubt that they are real amongst humans; after all, there is an evolutionary advantage to using simpler models ceteris paribus, since they are easier to memorize and easier to reason about. Such evolved preferences probably contribute to a mathematician’s sense of elegance.
Second, there are preferences about the concrete evolutionarily relevant environment and the relevant agents in it. Naturally, this includes our fellow humans. Note here that we might also care about animals, uploads, AIs or aliens because of our evolved preferences and intuitions regarding humans. Of course, we don’t care about aliens because of a direct evolutionary reason. Rather, we simply execute the adaptations that underlie our intuitions. For instance, we might disprefer animal suffering because it is similar enough to human suffering.
This second level has very little to do with the complexity of the underlying physics. Monkeys have no conception of cellular automata; you could run them on cellular automata of vastly differing complexity and they wouldn’t care. They care about the kind of simplicity that is relevant to their day-to-day environment. Humans also care about this kind of simplicity; it’s just that they can generalize this preference to more abstract domains.
(On a somewhat unrelated note, you mentioned bacteria. I think your point is a red herring; you can build agents around an assumption about the underlying physics, but that doesn’t mean the agent itself has any conception of the underlying physics, or even that the agent is consequentialist in any sense.)
So, what I’m trying to get at: you might prefer simple physics and you might care about people, but it makes little sense to care less about people because they run on uglier physics. People are not physics; they are really high-level constructs, and a vast range of different universes could contain (more or less identical) instances of people whom I care about, or even simulations of those people.
If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less “measure” to them. But you rejected this kind of reasoning in your post, and I can’t exactly come to grips with the “physics racism” that seems to logically follow from that.
Suppose I wanted to be fair to all, i.e., to avoid “physics racism” and care about everyone equally: how would I go about that? It seems that I can only care about dynamical processes, since I can’t influence static objects, and to a first approximation dynamical processes are equivalent to computations (ignoring uncomputable things for now). But how do I care about all computations equally, if there’s an infinite number of them? The most obvious answer is to use the uniform distribution: take an appropriate universal Turing machine, and divide my “care” in half between programs (input tapes) that start with 0 and those that start with 1, then divide my “care” in half again based on the second bit, and so on. With some further filling in of the details (how does one translate this idea into a formal utility function?), it seems plausible that it could “add up to normality” (i.e., be roughly equivalent to the continuous version of Solomonoff Induction).
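To make the “divide my care in half” scheme a bit more explicit (just a sketch in standard notation; the symbols $U$, $p$, and $x$ are my own labels, not anything from the post): let $U$ be the chosen universal (monotone) machine and $p$ a finite bit-string of length $|p|$. Halving the care once per bit assigns the set of input tapes beginning with $p$ a care of $2^{-|p|}$, so the total care flowing to an outcome $x$ is

$$M(x) \;=\; \sum_{p \text{ minimal}\,:\; U(p) \text{ begins with } x} 2^{-|p|},$$

which is (a version of) the continuous universal a priori semimeasure, hence the rough equivalence to the continuous version of Solomonoff Induction.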
It sounds like this solution is (a) a version of Solomonoff Induction, and (b) similarly suffering from the arbitrary language problem—depending on which language you use to code up the programs. Right?
To clarify my point: I meant that Solomonoff induction can justify caring less about some agents (and I’m largely aware of the scheme you described), but simultaneously rejecting Solomonoff induction and caring less about agents running on more complex physics is not justified.
I think I understood your point, but maybe didn’t make my own clear. What I’m saying is that to recover “normality” you don’t have to care about some agents less, but can instead care about everyone equally, and just consider that there are more copies of some than others. I.e., in the continuous version of Solomonoff Induction, programs are infinite binary strings, and you could say there are more copies of simple/lawful universes because a bigger fraction of all possible infinite binary strings compute them. And this may be more palatable for some than saying that some universes have more magical reality fluid than others or that we should care about some agents more than others.
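To put a rough number on “bigger fraction” (my own illustration, using the same uniform measure as in the sketch above): if the shortest program computing a given universe is $k$ bits long, the set of infinite binary strings beginning with that program has measure $2^{-k}$. A universe with a 100-bit minimal description then occupies a $2^{-100}$ slice of all possible tapes, while one needing 1000 bits occupies only a $2^{-1000}$ slice, so caring about all tapes equally automatically weights the simpler universe more heavily.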
I agree with this, but I am not sure if you are trying to make this argument within my hypothesis that existence is meaningless. I use the same justification within my system, but I would not use phrases like “there are more copies,” because there is no such measure besides the one that I assign.
Yeah, I think what I said isn’t strictly within your system. In your system, where does “the measure that I assign” come from? I mean, if I were already a UDT agent, I would already have such a measure, but I’m not already a UDT agent, so I’d have to come up with a measure if I want to become a UDT agent (assuming that’s the right thing to do). But what do I base it on, and why? BTW, have you read my post http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/ where option 4 is similar to your system? I wasn’t sure option 4 was the right answer back then, and I’m still in the same basic position now.
Well, in mind space there will be many agents basing their measures on different things. For me, it is based on my intuition about “caring about everyone equally” and on looking at programs as infinite binary strings as you describe. That does not feel like a satisfactory answer to me, but it seems just as good as any answer I have seen to the question “Where does your utility function come from?”
I have read that post, and of course, I agree with your reasons to prefer 4.
I address this “physics racism” concern here:
http://lesswrong.com/lw/jn2/preferences_without_existence/aj4w