Thank you for bringing attention to this issue— I think it’s an under-appreciated problem. I agree with you that the “force” measure is untenable, and the “pattern” view, while better, probably can’t work either.
Count-based measures seem to fail because they rely on drawing hard boundaries between minds. There are also going to be cases where it’s not even clear whether a system counts as a mind at all, and on the “count” view we would be forced to make a definitive yes-or-no call even in those borderline cases.
Mass/energy-based measures seem better because they allow you to treat anthropic measure as the continuous variable that it is, but I also don’t think they can be the answer. In particular, they seem to imply that more efficient implementations of a mind (in terms of component size or power consumption or whatever) would have lower measure than less efficient ones, even if they have all the same experiences.
This is debatable, but it strikes me that anthropic measure and “degree of consciousness” are closely related concepts. Fundamentally, for a system to have any anthropic measure at all, it needs to be able to count as an “observer” or an “experiencer,” which seems pretty close to saying that it’s conscious on some level.
If we equate consciousness with a kind of information processing, then anthropic measure could be a function of “information throughput” or something like that. If System A can “process” more bits of information per unit time than System B, then it can have more experiences than System B, and arguably should be given more anthropic measure. In other words, if you identify “yourself” with the set of experiences you’re having in a given moment, then it’s more likely that those experiences are being realized in a system with more computing power, and hence more capacity for experience, than in a system with less compute. Note that, on this view, the information being processed doesn’t have to be compressed or deduplicated in any way; systems running the same computation on many threads in parallel would still have more measure than single-threaded systems, ceteris paribus.
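For illustration, here’s a toy sketch of that claim (the numbers, the `anthropic_measure` function, and the linear formula are made-up assumptions, not a worked-out proposal):

```python
# Toy sketch of the "computational" view of anthropic measure:
# measure scales with raw information throughput, and redundant
# parallel copies of the same computation all count.
# The numbers and the linear formula are illustrative assumptions only.

def anthropic_measure(bits_per_second: float, parallel_copies: int) -> float:
    """Measure proportional to total bits processed per unit time,
    with no deduplication of identical parallel threads."""
    return bits_per_second * parallel_copies

system_a = anthropic_measure(bits_per_second=1e9, parallel_copies=4)  # four redundant threads
system_b = anthropic_measure(bits_per_second=1e9, parallel_copies=1)  # single thread

print(system_a / system_b)  # 4.0 -- A gets 4x the measure despite having identical experiences
```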
There’s a lot that needs to be fleshed out with this “computational theory of anthropic measure,” but it seems like the truth has to be something in this general direction.
Update: I don’t think I agree with this anymore, after listening to what Vanessa Kosoy said about anthropics and infra-Bayesianism during her recent AXRP interview. Her basic claim is that the notion of “number of copies” of an agent, which I take to be closely related to anthropic measure, is sort of incoherent and not definable in the general case. Instead you’re just supposed to ask: given some hypothesis H, what is the probability that the computation corresponding to my experience is running somewhere, anywhere?
If we assume that you start out with full Knightian uncertainty over which of the two brains you are, then infra-Bayesianism would (I think) tell you to act as if you’re the brain whose future you believe to have the lowest expected utility, since that way you avoid the worst possible outcome in expectation.
sort of incoherent and not definable in the general case
Why? Solomonoff inducting, producing an estimate of the measure of my existence (the rate of occurrence of the experience I’m currently having) across all possible universe-generators weighted inversely to their complexity, seems totally coherent to me. (It’s about 0.1^10^10^10^10.)
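(For what it’s worth, one standard way to cash out “weighted inversely to their complexity” is a Solomonoff-style prior over universe-generating programs; schematically, and glossing over the hard question of how an experience gets located inside a generated universe, something like the following:)

```latex
% Schematic Solomonoff-style measure for "my current experience" x:
% sum over programs p for a universal machine U whose generated universe
% contains x, each weighted by 2^{-|p|}. How "contains x" is defined is
% the hard (and here unspecified) part.
M(x) \;=\; \sum_{p \,:\, U(p)\ \text{contains}\ x} 2^{-|p|}
```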
infra-Bayesianism would (I think) tell you to act as if you’re the brain whose future you believe to have the lowest expected utility
I haven’t listened to that one yet, but … wasn’t it a bit hard to swallow as a decision rule? What if all of the worlds with the lowest EU are completely bizarre (like Boltzmann brains, or worlds that have somehow fallen under the rule of fantastical devils with literally no supporters)? This seems to make expected utility even more prone to not converging under sustained thought than the longtermist cluelessness we were already dealing with.
What if all of the worlds with the lowest EU are completely bizarre (like Boltzmann brains, or worlds that have somehow fallen under the rule of fantastical devils with literally no supporters)?
I’ll address your points in reverse order. The Boltzmann brain issue is addressed in infra-Bayesian physicalism with a “fairness” condition that excludes from the EU calculation worlds where you are run with fake memories, or where the history of your actions is inconsistent with what your policy says you would actually do; Vanessa talks about this in AXRP episode 14. The “worlds that have somehow fallen under the rule of fantastical devils” scenario is only a problem if that world is actually assigned high measure by one of the sa-measures (fancy affine-transformed probability distributions) in your prior. The maximin rule is only used to select the sa-measure in your convex set with the lowest EU, and then you maximize EU given that distribution; you don’t pick the literal worst conceivable world.
Notably, if you don’t like the maximin rule, it’s been shown in Section 4 of this post that infra-Bayesian logic still works with optimism in the face of Knightian uncertainty, it’s just that you don’t get worst-case guarantees anymore. I’d suspect that you could also get away with something like “maximize 10th percentile EU” to get more tempered risk-averse behavior.
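To make those decision rules concrete, here’s a minimal toy sketch (a finite set of distributions stands in for the convex credal set, and the affine/sa-measure machinery is ignored; all names and numbers are made up, so this is an illustration of the general idea, not Vanessa’s actual formalism):

```python
# Toy sketch: decision rules over a credal set (a finite set of candidate
# probability distributions), ignoring the affine "sa-measure" machinery.
# All names and numbers are illustrative assumptions.
import numpy as np

def expected_utility(dist: np.ndarray, utilities: np.ndarray) -> float:
    return float(dist @ utilities)

def maximin_choice(policies: dict[str, np.ndarray], credal_set: list[np.ndarray]) -> str:
    """Pick the policy whose worst-case EU over the credal set is highest."""
    return max(policies, key=lambda name: min(expected_utility(d, policies[name]) for d in credal_set))

def percentile_choice(policies: dict[str, np.ndarray], credal_set: list[np.ndarray], q: float = 10.0) -> str:
    """A more tempered variant: rank policies by the q-th percentile of EU across the credal set."""
    return max(policies, key=lambda name: np.percentile([expected_utility(d, policies[name]) for d in credal_set], q))

# Two outcomes, two candidate distributions (Knightian uncertainty between them),
# and two policies described by the utility they would realize in each outcome.
credal_set = [np.array([0.9, 0.1]), np.array([0.2, 0.8])]
policies = {"cautious": np.array([5.0, 4.0]), "bold": np.array([10.0, 0.0])}

print(maximin_choice(policies, credal_set))     # "cautious": its worst-case EU (4.2) beats bold's (2.0)
print(percentile_choice(policies, credal_set))  # with only two distributions, the 10th percentile is near the min, so also "cautious"
```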
Solomonoff inducting, producing an estimate of the measure of my existence (the rate of occurrence of the experience I’m currently having) across all possible universe-generators weighted inversely to their complexity, seems totally coherent to me. (It’s about 0.1^10^10^10^10.)
I’m not sure I follow your argument. I thought your view was that minds implemented in more places, perhaps with more matter/energy, have more anthropic measure? The Kolmogorov complexity of the mind seems like an orthogonal issue.
Maybe you’re already familiar with it, but I think Stuart Armstrong’s Anthropic Decision Theory paper (along with some of his LW posts on anthropics) does a good job of “deflating” anthropic probabilities and shifting the focus to your values and decision theory.