I’ve heard that point of view from several people. It’s a natural extension of LW-style beliefs, but I’m not sure I buy it yet. There are several lines of attack, the most obvious one is trying to argue that coinflips still behave as coinflips even when the person betting on them is really stupid and always bets on heads. But we’ve already explored that line a little bit, so I’m gonna try a different one:
Are you saying that evolution has equipped our minds with a measure of caring about all possible worlds according to simplicity? If yes, can you guess which of our ancestor organisms were already equipped with that measure, and which ones weren’t? Monkeys, fishes, bacteria?
(As an alternative, there could be some kind of law of nature saying that all minds must care about possible worlds according to simplicity. But I’m not sure how that could be true, given that you can build a UDT agent with any measure of caring.)
Evolution has equipped the minds in the worlds where thinking in terms of simplicity works to think according to simplicity. (As well as in worlds where thinking in terms of simplicity works up to a certain point in time, after which the rules become complex.)
In some sense, even bacteria are equipped with that: they work under the assumption that chemistry does not change over time.
I do not see the point of your question yet.
You conflate two very different things here, as I see it.
First, there are the preferences for simpler physical laws or simpler mathematical constructions. I don’t doubt that they are real amongst humans; after all, there is an evolutionary advantage to using simpler models ceteris paribus, since they are easier to memorize and easier to reason about. Such evolved preferences probably contribute to a mathematician’s sense of elegance.
Second, there are preferences about the concrete evolutionarily relevant environment and the relevant agents in it. Naturally, this includes our fellow humans. Note here that we might also care about animals, uploads, AIs or aliens because of our evolved preferences and intuitions regarding humans. Of course, we don’t care about aliens for any direct evolutionary reason. Rather, we simply execute the adaptations that underlie our intuitions. For instance, we might disprefer animal suffering because it is similar enough to human suffering.
This second level has very little to do with the complexity of the underlying physics. Monkeys have no conception of cellular automata; you could run them on cellular automata of vastly differing complexity and they wouldn’t care. They care about the kind of simplicity that is relevant to their day-to-day environment. Humans also care about this kind of simplicity; it’s just that they can generalize this preference to more abstract domains.
(On a somewhat unrelated note, you mentioned bacteria. I think your point is a red herring; you can build agents with an assumption about the underlying physics baked in, but that doesn’t mean that the agent itself necessarily has any conception of the underlying physics, or even that the agent is consequentialist in any sense.)
So, what I’m trying to get at: you might prefer simple physics and you might care about people, but it makes little sense to care less about people because they run on uglier physics. People are not physics; they are really high-level constructs, and a vast range of different universes could contain (more or less identical) instances of people whom I care about, or even simulations of those people.
If I assume Solomonoff induction, then it is in a way reasonable to care less about people running on convoluted physics, because then I would have to assign less “measure” to them. But you rejected this kind of reasoning in your post, and I can’t exactly come to grips with the “physics racism” that seems to logically follow from that.
Suppose I wanted to be fair to all, i.e., avoid “physics racism” and care about everyone equally; how would I go about that? It seems that I can only care about dynamical processes, since I can’t influence static objects, and to a first approximation dynamical processes are equivalent to computations (i.e., ignoring uncomputable things for now). But how do I care about all computations equally, if there’s an infinite number of them? The most obvious answer is to use the uniform distribution: take an appropriate universal Turing machine, and divide my “care” in half between programs (input tapes) that start with 0 and those that start with 1, then divide my “care” in half again based on the second bit, and so on. With some further filling in of the details (how does one translate this idea into a formal utility function?), it seems plausible it could “add up to normality” (i.e., be roughly equivalent to the continuous version of Solomonoff Induction).
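A minimal sketch of this “divide the care in half per bit” construction, assuming the weight of a finite prefix is read as the fraction of infinite input tapes that extend it (the step from weights to a formal utility function is left open, as above):

```python
from itertools import product

def care(prefix: str) -> float:
    """Fraction of total 'care' allocated to programs whose tape starts with this prefix."""
    return 2.0 ** (-len(prefix))

# Sanity check: at every depth the care splits evenly between 0- and 1-branches
# and still sums to 1 over all prefixes of that length.
for depth in range(1, 5):
    total = sum(care("".join(bits)) for bits in product("01", repeat=depth))
    assert abs(total - 1.0) < 1e-12

print(care("0"), care("01"), care("0110"))  # 0.5 0.25 0.0625
```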
It sounds like this solution is (a) a version of Solomonoff Induction, and (b) similarly suffering from the arbitrary language problem—depending on which language you use to code up the programs. Right?
To clarify my point: I meant that Solomonoff induction can justify caring less about some agents (and I’m largely aware of the scheme you described), but simultaneously rejecting Solomonoff induction and still caring less about agents running on more complex physics is not justified.
I think I understood your point, but maybe didn’t make my own clear. What I’m saying is that to recover “normality” you don’t have to care about some agents less, but can instead care about everyone equally, and just consider that there are more copies of some than others. I.e., in the continuous version of Solomonoff Induction, programs are infinite binary strings, and you could say there are more copies of simple/lawful universes because a bigger fraction of all possible infinite binary strings compute them. And this may be more palatable for some than saying that some universes have more magical reality fluid than others or that we should care about some agents more than others.
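To make the “more copies” reading concrete, here is a rough Monte Carlo sketch; the two bit strings are hypothetical stand-ins for a short and a long universe-program, not anything specific from the discussion:

```python
import random

def random_prefix(length: int) -> str:
    """First `length` bits of a uniformly random infinite tape."""
    return "".join(random.choice("01") for _ in range(length))

def prefix_frequency(program: str, trials: int = 200_000) -> float:
    """Fraction of random tapes whose opening bits spell out the given program."""
    return sum(random_prefix(len(program)) == program for _ in range(trials)) / trials

simple = prefix_frequency("101")         # ~ 2**-3 = 0.125
complex_ = prefix_frequency("10111010")  # ~ 2**-8 ≈ 0.0039
print(simple / complex_)                 # roughly 32: "more copies", not more care per copy
```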
I agree with this, but I am not sure if you are trying to make this argument within my hypothesis that existence is meaningless. I use the same justification within my system, but I would not use phrases like “there are more copies,” because there is no such measure besides the one that I assign.
Yeah, I think what I said isn’t strictly within your system. In your system, where does “the measure that I assign” come from? I mean, if I was already a UDT agent, I would already have such a measure, but I’m not already a UDT agent, so I’d have to come up with a measure if I want to become a UDT agent (assuming that’s the right thing to do). But what do I base it on, and why? BTW, have you read my post http://lesswrong.com/lw/1iy/what_are_probabilities_anyway/ where option 4 is similar to your system? I wasn’t sure option 4 was the right answer back then, and I’m still in the same basic position now.
Well, in mind space, there will be many agents basing their measures on different things. For me, it is based on my intuition about “caring about everyone equally,” and on looking at programs as infinite binary strings as you describe. That does not feel like a satisfactory answer to me, but it seems just as good as any answer I have seen to the question “Where does your utility function come from?”
I have read that post, and of course, I agree with your reasons to prefer 4.
I address this “physics racism” concern here:
http://lesswrong.com/lw/jn2/preferences_without_existence/aj4w
It seems to me that bacteria are adapted to their environment, not a mix of all possible environments based on simplicity. You can view evolution as a learning process that absorbs knowledge about the world and updates a “prior” to a “posterior”. (Shalizi has a nice post connecting Bayesian updating with replicator dynamics, it’s only slightly relevant here, but still very interesting.) Even if the prior was simplicity-based at the start, once evolution has observed the first few bits of a sequence, there’s no more reason for it to create a mind that starts from the prior all over again. Using the posterior instead would probably make the mind much more efficient.
So if you say your preferences are simplicity-based, I don’t understand how you got such preferences.
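As a rough sketch of the prior-to-posterior updating being described: the multiplicative form below is what makes one step of replicator dynamics look like Bayesian conditioning in the Shalizi post; the numbers are made up purely for illustration.

```python
def update(weights, multipliers):
    """One multiplicative update step: new(i) is proportional to weights(i) * multipliers(i)."""
    raw = {k: weights[k] * multipliers[k] for k in weights}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

prior = {"H1": 0.5, "H2": 0.5}
likelihood = {"H1": 0.9, "H2": 0.3}   # Bayesian update on one observation
fitness = {"H1": 0.9, "H2": 0.3}      # replicator step with the same numbers

print(update(prior, likelihood))      # {'H1': 0.75, 'H2': 0.25}
print(update(prior, fitness))         # identical: replication acts like conditioning
```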
Would you still have this complaint if I instead said that I care about the subset of universes which contain me following simplicity-based preferences?
To me, at least as I see it now, there is no difference between saying that I care about all universes weighted by simplicity and saying that I care about all universes containing me weighted by simplicity, since my actions do not change the universes that do not contain me.
In the post you described two universes that don’t contain you, and said you cared about the simpler one more. Or am I missing something?
I am saying that my decision procedure is independent of those preferences, so there is no evolutionary disadvantage to having them. Does that address your issue?