Yeah, this whole line of reasoning fails if you can get to 3^^^3 utilons without creating ~3^^^3 sentients to distribute them among.
Overall I’m having a really surprising amount of difficulty thinking up an example where you have a lot of causal importance but no anthropic counter-evidence.
I’m not sure what you mean. If you use an anthropic theory like what Eliezer is using here (e.g. SSA, UDASSA) then an amount of causal importance that is large compared to the rest of your reference class implies that there are few similar members of the reference class, which is anthropic counter-evidence, so of course it would be impossible to think of such an example. Even if nonsentients can contribute to utility, if I can create 3^^^3 utilons using nonsentients, then some other people probably can too, so I don’t have a lot of causal importance compared to them.
Anyway, does “anthropic” even really have anything to do with qualia? The way people talk about it, it clearly does, but I’m not sure it even shows up in the definition—a non-sentient optimizer could totally make anthropic updates.
This is the contrapositive of the grandparent. I was saying that if we assume that the reference class is sentients, then nonsentients need to reason using different rules, i.e. a different reference class. You are saying that if nonsentients should reason using the same rules, then the reference class cannot comprise only sentients. I actually agree with the latter much more strongly, and I only brought up the former because it seemed similar to the argument you were trying to remember.
There are really two separate questions here, that of how to reason anthropically and that of how magic reality-fluid is distributed. Confusing these is common, since the same sort of considerations affect both of them and since they are both badly understood, though I would say that due to UDT/ADT, we now understand the former much better, while acknowledging the possibility of unknown unknowns. (Our current state of knowledge where we confuse these actually feels a lot like people who have never learnt to separate the descriptive and the normative.)
The way Eliezer presented things in the post, it is not entirely clear which of the two he meant to be responsible for the leverage penalty. It seems like he meant for it to be an epistemic consideration due to anthropic reasoning, but this seems obviously wrong given UDT. In the Tegmark IV model that he describes, the leverage penalty is caused by reality-fluid, but it seems like he only intended that as an analogy. The reality-fluid account seems a lot more probable to me, though, and it is possible that Eliezer would express uncertainty as to whether the leverage penalty is actually caused by reality-fluid, so that it is a bit more than an analogy. There is also a third, mathematically equivalent possibility where the leverage penalty is about values, and we just care less about individual people when there are more of them, but Eliezer obviously does not hold that view.
I’m not sure what you mean. If you use an anthropic theory like what Eliezer is using here (e.g. SSA, UDASSA)
A comment: it is not clear to me that Eliezer is intending to use SSA or UDASSA here. The “magic reality fluid” measure looks more like SIA, but with a prior based on Levin complexity rather than Kolmogorov complexity—see my comment here. Or—in an equivalent formulation—he’s using Kolmogorov + SSA but with an extremely broad “reference class” (the class of all causal nodes, most of which aren’t observers in any anthropic sense). This is still not UDASSA.
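To spell out that equivalence, here is a minimal sketch of the measure this formulation seems to assign (the notation N(p), for the number of causal nodes / execution steps of p, is mine):

$$w(s \mid p) \;=\; \frac{2^{-\#p}}{N(p)} \;=\; 2^{-(\#p + \log_2 N(p))}$$

so each node of p costs roughly #p + log_2 N(p) bits, a Levin-style penalty rather than the bare Kolmogorov cost #p, and summing over the N(p) nodes of p recovers the usual weight 2^-#p for the whole program.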
To get something like UDASSA, we shouldn’t distribute the weight 2^-#p of each program p uniformly among its execution steps. Instead we should consider using another program q to pick out an execution step or a sequence of steps (i.e. a sub-program s) from p, and then give the combination of q,p a weight 2^-(#p+#q). This means each sub-program s will get a total prior weight of Sum {p, q: q(p) = s & s is a sub-program of p} 2^-(#p + #q).
When updating on your evidence E, consider the class S(E) of all sub-programs which correspond to an AI program having that evidence, and normalize. The posterior probability you are in a particular universe p’ then becomes proportional to Sum {q: q(p’) is a sub-program of p’ and a member of S(E)} 2^-(#p’ + #q).
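To check that I’m parsing this correctly, here is a toy sketch of that weighting in Python. The miniature “universes”, “selectors” and bit-lengths (p1, p2, q_first, q_second and so on) are invented purely for illustration; a real version would enumerate programs for a universal machine rather than use a hand-written dictionary.

```python
from collections import defaultdict

# Toy "universe programs" p: a description length #p (in bits) and the
# list of sub-programs (execution steps) each one contains.
universes = {
    "p1": {"len": 10, "subprograms": ["s_rock", "s_observer_A"]},
    "p2": {"len": 12, "subprograms": ["s_observer_A", "s_observer_B"]},
}

def first_step(p_name):
    # selector: the first execution step of p
    return universes[p_name]["subprograms"][0]

def second_step(p_name):
    # selector: the second execution step of p, if there is one
    subs = universes[p_name]["subprograms"]
    return subs[1] if len(subs) > 1 else None

# Toy "selector programs" q: a description length #q and a function that
# picks a sub-program out of a given universe.
selectors = {
    "q_first": {"len": 3, "pick": first_step},
    "q_second": {"len": 4, "pick": second_step},
}

# Prior weight of each sub-program s:
#   Sum over {p, q : q(p) = s} of 2^-(#p + #q)
prior = defaultdict(float)
for p_name, p in universes.items():
    for q in selectors.values():
        s = q["pick"](p_name)
        if s is not None:
            prior[s] += 2.0 ** -(p["len"] + q["len"])

def posterior_over_universes(S_E):
    """Posterior over universes p', proportional to
    Sum over {q : q(p') in S(E)} of 2^-(#p' + #q)."""
    unnormalized = {
        p_name: sum(
            2.0 ** -(p["len"] + q["len"])
            for q in selectors.values()
            if q["pick"](p_name) in S_E
        )
        for p_name, p in universes.items()
    }
    Z = sum(unnormalized.values()) or 1.0
    return {name: w / Z for name, w in unnormalized.items()}

print(dict(prior))
print(posterior_over_universes({"s_observer_A"}))  # evidence: "I am observer A"
```

In this toy model the q programs play the role of the reference class: the (p, q) combinations that most cheaply pick out an evidence-compatible sub-program dominate the posterior.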
This looks rather different to what I discussed in my other comment, and it may handle anthropic problems a bit better. I can’t see that there is any shift either towards very big universes (no presumptuous philosopher) or towards dense computronium universes, where we are simulations. There does appear to be a Great Filter or “Doomsday” shift, since it is still a form of SSA, but this is mitigated by the consideration that we may be part of a reference class (program q) which preferentially selects pre-AI biological observers, as opposed to any old observers.
I agree with this; the ‘e.g.’ was meant to point toward the most similar theories that have names, not to pin down exactly what Eliezer is doing here. I thought that it would be better to refer to the class of similar theories here, since there is enough uncertainty that we don’t really have the details.