The weirder the phenomena, the less reliable the witness, the better. Not only is god permitted to hide; in this variant of the pact god is permitted to run around performing miracles, so long as it specifically keeps out of sight of any well-connected skeptics, archivists, or superintelligences.
That is a gorgeous idea. Cosmic irony. Truth-seekers are necessarily left in the dark, the butt of the ultimate friendly joke.
I don’t follow this part, could you go into more detail here?
The speed prior has the desirable property that it is a candidate for explaining all of reality by itself. Ranking laws of physics by their complexity and allocating reality fluid according to that ranking is sufficient to explain why we find ourselves in a patterned/fractal universe. No “real” universe running “top-level” simulations is actually necessary, because our observations are explained without need for those concepts. Thus the properties of top-level universes need not be examined or treated specially (nor used to falsify the framework).
It seems like Compat requires the existence of a top-level universe though (because our universe is fractal-y and there’s no button to trigger the rapture), which is presumably in existence thanks to the speed prior (or something like it). That’s where it feels like it falls apart for me.
Compat is funneling a fraction X of the reality fluid (aka “computational resources”) your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y
But I think there’s another term in the equation that makes things more difficult. That is, the relative reality fluid donated to a candidate universe in your search versus that donated by the speed prior. If we call that fraction Z, then what we really have is X / Y > 1 / Z, or X > Y / Z. In other words, you must allocate enough of your resources that your heavens are able to dilute not just the normal physics simulations you run, but also the observer-equivalent physics simulations run by the speed prior. If Z is close to 1 (aka P(pact-compliant | ranked highly by speed-prior) is close to 1), then you’re fine. If Z is any fraction less than Y, then you don’t have enough computational resources in your entire universe to make a dent.
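To put toy numbers on it (every value below is invented, purely to show the shape of the inequality):

```python
# Toy numbers for the pact inequality X > Y / Z; all values are invented.
X = 0.10   # fraction of our reality fluid funneled into heaven simulations
Y = 0.01   # cost of simulating heaven relative to simulating normal physics
Z = 0.05   # fluid we grant a candidate universe, relative to the speed prior's grant

needed = Y / Z            # heaven outweighs physics only if X exceeds this
print(f"need X > {needed:.2f}, have X = {X:.2f}:",
      "pact pays off" if X > needed else "pact fails")
# If Z < Y, then needed > 1 and no possible X suffices: the
# "can't make a dent" regime described above.
```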
So in summary the attack vector is:
Compat requires an objective ordering of universes to make sense. (It can’t explain where the “real world” comes from, but still requires it)
This ordering is necessarily orthogonal to Compat’s value system. (Or else we’d have a magic button)
Depending on how low the correlation is between the objective ordering and Compat’s value system, the return on investment for following Compat varies widely, going all the way down to the arbitrarily negative.
No “real” universe running “top-level” simulations is actually necessary, because our observations are explained without need for those concepts.
Compat is not an explanatory theory, it’s a predictive one. It’s proposed as a consequence of the speed prior rather than a competitor.
Compat is funneling a fraction X of the reality fluid (aka “computational resources”) your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y
This immediately becomes impossible to follow. As far as I can tell, what you’re saying is
Rah := resources applied to running heaven for Simulant
R := all resources belonging to Host
X := Rah/R
Rap := resources applied to the verbatim initial physics simulations of Simulant
Y := Rah/Rap
Rap < R
so Rah/Rap > Rah/R
so Y > X
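Or, checking that algebra with arbitrary numbers (a sketch; nothing depends on the particular values, only on Rap < R):

```python
# Checking the derivation above with arbitrary resource amounts.
R   = 1000.0   # all resources belonging to Host
Rap = 300.0    # resources applied to the verbatim physics simulation
Rah = 3.0      # resources applied to running heaven for Simulant

X = Rah / R
Y = Rah / Rap
assert Rap < R and Y > X   # Rap < R forces Rah/Rap > Rah/R, i.e. Y > X
print(f"X = {X:.4f}, Y = {Y:.2f}")
```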
Which means either you are generating a lot of confusion very quickly to come out with Y < X, or it would take far too much effort for me to noise-correct what you’re saying. Try again?
If you are just generating very elaborate confusions very fast- I don’t think you are- but if you are, I’m genuinely impressed with how quickly you’re doing it, and I think you’re cool.
I am getting the gist of a counterargument though, which may or may not be in the area of what you’re angling at, but it’s worth bringing up.
If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they’re the same shape, the multiversal measure of higher-complexity universes is so much lower than the measure of lower-complexity universes that it’s conceivable that higher universes can’t run enough simulations for P(is_simulation(lower_universe)) to break 0.5.
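Roughly the toy picture I have in mind, as a sketch (the 2^-k measure and the capacity function below are bare assumptions, not claims):

```python
# Toy model: can higher-complexity universes run enough simulations of a
# lower-complexity universe to push P(is_simulation) past 0.5?
# Both the 2**-k measure and the capacity function are assumptions.
def measure(k):
    return 2.0 ** -k      # reality fluid granted to complexity-k universes

def sim_capacity(k):
    return 0.05 * k       # assumed: capacity grows only modestly with k

k_low = 10
base = measure(k_low)     # fluid the lower universe gets directly
simulated = sum(measure(k) * sim_capacity(k) for k in range(k_low + 1, 200))
p_sim = simulated / (base + simulated)
print(f"P(is_simulation) = {p_sim:.2f}")  # ~0.38 here: fails to break 0.5
```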
There are two problems with that. First, I’m reluctant to project the Solomonoff hierarchy of input generators onto the multiverse, because it is just a heuristic, and we are likely to find better ones the moment we develop brains that can think in formalisms properly at all.
Second, I’m not sure how the complexity of physical laws generally maps to computational capacity. We can guess that capacity_provided_by(laws) < capacity_required_to_simulate(laws) (no universe can simulate itself), but that’s about it. We know that the function expected_internal_computational_capacity(simulation_requirements) has a positive gradient, but it could end up having a logarithmic curve to it that allows drypat (a variant of compat that requires P(simulation) to be high) to keep working.
Another issue, which I think I’ve been overlooking: drypat isn’t everything. Compat with quantum immortality precepts doesn’t require P(simulation) to be high at all. For compat to be valuable, it just has to be higher than P(path to deleterious quantum immortality). In this case, supernatural intervention is unlikely, but, if non-existence is not an input, finding one’s inputs after death to be well predicted by compat is still very likely, because the alternative, QI, is extremely horrible.
If you are just generating very elaborate confusions very fast- I don’t think you are- but if you are, I’m genuinely impressed with how quickly you’re doing it, and I think you’re cool.
Haha! No, I’m definitely not doing that on purpose. I anonymous-person-on-the-internet promise ;). I’m enjoying this topic, but I don’t talk about it a lot and haven’t seen it argued about formally, and this sounds like the sort of breakdown in communication that happens when definitions aren’t agreed upon up front. The simple fix should be to keep trying until our definitions seem to match (or it gets stale).
So I’ll try to give names to some more things, and try to flesh things out a bit more:
The place in your definitions where we first disagree is X. You define it as
X := Rah/R
But I define it as
X := (Rap + Rah)/R
(I was mentally approximating it as just Rap/R, since Rah is presumably a negligible fraction of Rap.)
With this definition of X, the meaning of “X > Y” becomes
(Rap + Rah)/R > Rah/Rap
I’ll introduce a few more little things to motivate the above:
Rac := total resources dedicated to Compat. Or, Rap + Rah.
Frh := The relative resource cost of simulating heaven versus simulating physics. “Fraction of resource usage due to heaven.” (Approximated by Rah/Rap.) [1]
Then the inequality X > Y becomes
Rac/R > Frh
So long as the above inequality is satisfied, the host universe will offset its non-heaven reality with heaven for its simulants. If universes systematically did not choose Rac such that the above is satisfied, then they wouldn’t be donating enough reality fluid to heaven simulations to satisfactorily outweigh normal physics (aka speed-prior-endowed reality fluid), and it wouldn’t be worth entering such a pact.
(That is kind of a big claim to make, and it might be worth arguing over.)
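With made-up numbers, the condition looks like this (a sketch; none of these values are claims about real allocations):

```python
# Made-up numbers for the condition Rac/R > Frh above.
R   = 1000.0
Rap = 200.0        # resources on verbatim physics simulations
Rah = 2.0          # resources on heaven simulations
Rac = Rap + Rah    # total resources dedicated to Compat
Frh = Rah / Rap    # approximate relative cost of heaven vs physics

print(f"Rac/R = {Rac/R:.3f} vs Frh = {Frh:.3f}:",
      "heaven offsets physics" if Rac / R > Frh else "pact not worth entering")
```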
If I have cleared that part up, then great. The next part, where I introduced Z, was motivating why the approximation:
Frh ≈ Rah/Rap
is an extremely optimistic one. I’m gonna hold off on getting deeper into that part until I get feedback on the first part.
If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they’re the same shape, the multiversal measure of higher-complexity universes is so much lower than the measure of lower-complexity universes that it’s conceivable that higher universes can’t run enough simulations for P(is_simulation(lower_universe)) to break 0.5.
This gets to the gist of my argument. There are numerous possible problems that come up when you compare your universe’s measure to that of the universe you are most likely to find in your search and simulate. (And I intuitively believe, though I don’t discuss it here, that the properties of the search over laws of physics are extremely relevant and worth thinking about.) Your R might be too low to make a dent. Your Frh might be too large (i.e., the speed prior uses a GPU, and you’ve only got a CPU, even with the best optimizations your universe can physically provide).
Another basic problem: if the correct measure actually is the speed prior, and we find a way to be more certain of that (or of whatever computable measure we eventually figure out), then universes gain the ability to self-locate in the ranking. Just the ability to do that kills Compat, I believe, since you aren’t supposed to know whether you’re “top-level” or not. The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won’t enter the pact, and this abstinence will cascade all the way down.
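The unraveling can be spelled out as a toy backward induction (a sketch, under the assumption that each universe enters only if someone ranked above it already has):

```python
# Toy backward induction for the unraveling argument above. Universes are
# ranked 0 (top) .. N-1; each enters the pact only if some universe above
# it has entered (and so could be simulating its heaven).
N = 10
enters = [False] * N
for rank in range(N):                  # decide from the top down
    enters[rank] = any(enters[:rank])  # the top universe has no one above it
print(enters)                          # all False: abstinence cascades down
```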
Regarding whether to presume we’re ranked by the speed prior or not: I agree that there’s not enough evidence to go on at this point. But I also think that the viability of Compat is extremely dependent on whatever the real objective measure is, whether it is the speed prior or something else.
We would therefore do better to explore the measure problem more fully before diving into Compat. Of course, Compat seems to be more fun to think about so maybe it’s a wash (actual sentiment with hint of self-deprecating irony, not mean joke).
Regarding the quantum immortality argument, my intuitions are such that I would be very surprised if you needed to go up a universe to outweigh quantum immortality hell.
QI copies of an observer may go on for a very long time, but the rate at which they can be simulated slows down drastically, and the measure added to the pot by QI is probably relatively small. I would argue that most of the observer-moments generated by Boltzmann-brain-type things would be vague and absurd, rather than extremely painful.
[1] A couple of notes for Frh’s definition.
First, a more verbose way of putting it is: The relative efficiency of simulating heaven versus simulating physics, such that the allocation of reality fluid for observers crosses a high threshold of utility. That is to say, “simulating heaven” may entail simulating the same heavenly reality multiple times, until the utility gain for observers crosses the threshold.
Second, the approximation of Rah/Rap only works assuming that Rah and Rap remain fixed over time, which they don’t really. A better way of putting it is relative resources required for Heaven versus Physics with respect to a single simulated universe, which is considerably different from a host universe’s total Rap and Rah at a given time.
I’m still confused, and I think the X > Y equation may have failed to capture some vital details. For one thing, the assumption that Rah < Rap seems questionable; I’m sure most beings would prefer that Rah >> Rap. The assumption that Rah would be negligible seemed especially concerning.
Beyond that, I think there may be a distinction erasure going on with Rap. Resources required to simulate a physics and resources available within that physics are two very different numbers.
I’ll introduce a simplifying assumption that the utility of a simulation for its simulants roughly equals the available computational capacity. This might just be a bit coloured by the fiction I’m currently working on, but it seems to me that a simulant will usually be about as happy in the eschaton it builds for itself as in the heaven provided to it; the only difference is how much of it they get.
Define Rap as the proportion of the res of the frame universe that it allocates to simulating physical systems.
Define Rasp as the proportion of the res being expended in the frame universe that can be used by the simulated physical universe to do useful work. This is going to be much smaller than Rap.
Define Rah as the proportion of the res of the frame universe allocated to heaven simulations, which, unlike with Rap and Rasp, is equal to the res received in heaven simulations, because the equipment can be freely rearranged to do whatever the simulant wants now that the pretense of godlessness can be dropped (although, as I argued elsewhere in the thread, that might be possible in the physical simulation as well, if the computer designs in the simulation are regular enough to be handled by specialized hardware in the parent universe). The simulant has presumably codified its utility function long ago; it knows what it likes, so it’s just going to want more of the same, only harder, faster and longer.
The truthier equation seems to me to be
Rah > Rasp × (Rap + Rah)
They need to get more than they gave away.
The expected amount received in the reward resimulation must be greater than the paltry amount donated to the grid as a proportion of Rasp in the child simulation. If it can’t be, then a simulant would be better off just using all of the Rasp they have and skipping heaven (thus voiding the pact).
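With made-up proportions (and the simplifying assumption that the child universe runs the same allocations as its host):

```python
# Made-up numbers for Rah > Rasp * (Rap + Rah), assuming the child runs
# the same proportions as the host.
Rap  = 0.30   # proportion of frame-universe res spent simulating physics
Rah  = 0.05   # proportion spent on heaven (received in full by simulants)
Rasp = 0.01   # proportion of frame res usable as work inside the simulation

gave = Rasp * (Rap + Rah)   # what a simulant donates out of its usable res
print(f"received {Rah:.3f} vs gave {gave:.4f}:",
      "worth entering" if Rah > gave else "better to skip heaven")
```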
I can feel how many simplifying assumptions I’m still making, and I’m wondering whether disproving compat is even going to be much easier than proving it correct would have been.
I don’t think the speed prior is especially good for estimating the shape of the multiverse. I think it’s just the first thing we came up with. (On that note, I think AIXI is going to be crushed as soon as the analytic community realizes there might be better multiversal priors than the space prior (of a single-tape Turing machine).)
But, yeah, once we do start to settle on a multiversal prior, once we know our laws of physics well enough… we may well be able to locate ourselves on the complexity hierarchy and find that we are within range of a cliff or something, and MI-free compat will fall down.
(I mentioned this to Christian, and he was like, “oh yah, for sure,” like he wasn’t surprised at all x_x. I’m sure he wasn’t. He’s always been big on the patternistic parts of compat and he never really bought into the non-patternist variants I proposed, even if he couldn’t disprove them.)
I still think the Multiverse Immortality stuff is entirely worth thinking about though! If your computation is halted in all but 1/1000 computers, wouldn’t you care a great deal what goes on in the remaining 0.1%? Boltzmann brains… Huh. Hadn’t looked them up till now. Well that’s hard to reason about.
I keep wanting to say, obviously I’m not constantly getting recreated by distant chaos brains, I’d know about it.
But I would say that either way, wouldn’t I. And so would you. And so would every agent in this coherent computation, even if they were switching off and burning away at an overwhelming measure in every moment.
Hm… On second thought
The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won’t enter the pact, and this abstinence will cascade all the way down.
I came up with a similar thing before, from the bottom: there’s no incentive to simulate civs too simple to simulate compat civs, so they won’t be included in the pact. But now the universes directly above them can’t be included in the pact either, because their children are too simple. It continues forever.
But it seems easy enough to fix an absence of an incentive. Just add one. Stipulate that, no, those are included in the pact. You have to simulate them. Bam, it works. Why shouldn’t it?
Is adding the patch really any less arbitrary than proposing the thing in the first place was? To live is arbitrary, I say. Everything is arbitrary, to varying extents.
Similarly, we could just stipulate that a civ must precommit to adhering before they can locate themselves on the complexity hierarchy. UDT agents can totally do that kind of thing, and their parent simulations can totally enforce it. And if they finally finish analysing their physical laws and manage to nail their multiversal prior down to find themselves at the bottom of a steep hill, let them rejoice.
Bringing up that kind of technique just makes me wonder, as a good UDT agent, how many pacts I should have precommitted to that I haven’t even thought of yet. Perhaps I precommit to making risky investments in young rationalists’ business ventures, while I am still in need of investment, and before I can give it. That ought to increase the probability of UDT investors before me having made the same precommitment, oughtn’t it? [1]
[1] OK, there probably were no UDT (or TDT) venture capitalists long enough ago for any of them to have made it by now, or at least, there were none who knew their vague sense of reasonable altruism could be formalized. It was not long ago that everyone really believed that Moloch’s solution was the way of the world.