I’ve always maintained that in order to solve this issue we must first answer a prior question: what does it even mean to say that a physical system implements a particular algorithm? Does it make sense to say that an algorithm is only approximately implemented? And what if the algorithm is something very chaotic, such as prime-checking, where approximation is not possible?
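To make the question concrete, here is one common (and contested) formalization from the philosophy-of-computation literature, roughly in the style of the commuting-diagram criterion: a physical system implements an abstract machine if some mapping from physical states to computational states commutes with both dynamics. This is only an illustrative sketch, not an answer the original discussion endorses, and all the names in it are hypothetical:

```python
def implements(phys_states, phys_step, comp_step, mapping) -> bool:
    """True iff mapping[phys_step(s)] == comp_step(mapping[s]) for every
    physical state s, i.e. the state mapping commutes with the dynamics."""
    return all(mapping[phys_step(s)] == comp_step(mapping[s])
               for s in phys_states)

# Toy example: a 4-state physical cycle implementing a 2-state parity flipper.
phys = [0, 1, 2, 3]
step = lambda s: (s + 1) % 4        # physical dynamics: advance the cycle
flip = lambda b: 1 - b              # abstract dynamics: flip a bit
to_bit = {0: 0, 1: 1, 2: 0, 3: 1}   # candidate physical-to-abstract mapping

print(implements(phys, step, flip, to_bit))  # True
```

Note that criteria like this are exactly where the approximation worry bites: a noisy physical system will fail the check for *some* state, and the triviality objection (too many mappings qualify) looms over any purely structural definition.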
An algorithm should be a box that you can feed any input into, but in the real, causal world there is no such choice; any impression that you “could” input anything into your pocket calculator comes from the counterfactuals your brain can consider, purely because it has some uncertainty about the world. (An omniscient being could not make any choice at all! Assuming complete omniscience is possible, which I don’t think it is, but let us imagine the universe as an omniscient being or something.)
This leads me to believe that “anthropic binding” cannot be some kind of metaphysical primitive, since for it to be well-defined it needs to be considered by an embedded agent! Indeed, I claimed that recognizing algorithms “in the wild” requires the use of counterfactuals, and omniscient beings (such as “the universe”) cannot use counterfactuals. Therefore I do not see how there could be a “correct” answer to the problem of anthropic binding.
Hmm. Are you getting at something like: how can there possibly be an objective way of associating an experiential reference class with a system of matter, when the reference class is an algorithm, algorithms only exist as abstractions, and there are various reasons the multiverse can’t be an abstraction-considerer, so that anthropic binding couldn’t be a real metaphysical effect and must just be a construct of agents?
There are some accounts of anthropic binding that allow for it to just be a construct.
I removed this from the post because it was very speculative, conflicted with some other stuff, and I wanted the post to be fairly evergreen; but it was kind of interesting, so here are some doubts I had about whether I should really dismiss the force theory:
I’m not completely certain that sort of self-reference is coherent as a utility function. That’s one of the assumptions we could consider throwing out to escape the problem: the assumption that utility functions should be able to refer to “I”, rather than being restricted to talking about the state of the physical world. If they couldn’t have an “I” in the utility function, then it seems like their expected probability of being one or the other should no longer factor into their decisions. IIRC a similar thing happens in some variants of the Sleeping Beauty problem: the beauty has a credence about which day’s beauty she is, but if she’s able to report any probability she chooses, as a deliberate bet, she bets according to a policy designed to maximize some final total across all days, one which totally ignores her estimates about which day it is. Similarly, our agents, shorn of “I”, would cooperate in service of whatever entities their theory of cosmological measure says are most important. It would all boil down to cosmological measure, which is also full of weird open problems, though perhaps fewer.
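The betting variant can be sketched numerically. Assume the usual setup (heads: woken once; tails: woken twice) and, as a hypothetical payoff structure, a Brier penalty on the credence in tails reported at each awakening. Because every awakening is subjectively identical, the policy minimizing the expected total penalty is a single fixed report; which-day estimates never enter:

```python
def expected_total_penalty(q: float) -> float:
    """Expected total Brier penalty for always reporting credence q in tails.
    Heads world (prob 1/2): one awakening; tails world (prob 1/2): two."""
    heads = 0.5 * 1 * (q - 0.0) ** 2
    tails = 0.5 * 2 * (q - 1.0) ** 2
    return heads + tails

# Grid-search for the optimal fixed report.
best_q = min((i / 1000 for i in range(1001)), key=expected_total_penalty)
print(best_q)  # 0.667, i.e. ~2/3: a fixed policy, independent of "which day"
```

The optimum lands at 2/3 (the “thirder” bet) under this scoring rule, but the point relevant here is structural: the winning policy is day-blind, just as an agent shorn of “I” would act on totals rather than on self-locating estimates.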
Yes, I am arguing against the ontological realism of anthropic binding. Beyond that, I feel like there ought to be some way of comparing physical systems and having a (subjective) measure of how similar they are, though I don’t know how to formalize it.
For example, it is clear that I can relate to a dolphin, even though I am not a dolphin. This suggests the dolphin and I probably share some similar subsystems, and therefore, if I care about the anthropic measure of my subsystems, I should care about dolphins too.