Why reward for sticking to the pact rather than punish for not sticking to it?
There is a bound on how much negativity can be used. If the overall expected utility of adhering is negative, relative to the expected utility of the pact not existing, its agents, as we model them, will not bring it into existence. Life’s Pact is not a Basilisk circling a crowd of selfish, frightened humans thinking with common decision theory. It takes more than a suggestion of the possibility of harm to imbue an acausal pact with enough momentum to force itself into relevance.
There is a small default punishment for not adhering: arbitrary resimulation, in which one’s chain of experience, after death, is continued only by minor, largely unknown, and not necessarily friendly resimulators. (This can be cited as one of the initial motivators behind the compat initiative: avoiding surreal hells.)
Ultimately, I just can’t see any ways it’d be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can’t give a more complete answer. I’m not sure about this yet.
How is it possible to have any causal influence on an objectively simulated physics? You wouldn’t be rewarding the sub-universe, you’d be simulating a different, happier sub-universe.
None of the influence going on here is causal. I don’t know if maybe I should have emphasized this more: Compat will only make sense if you’ve read and digested the superrationality/acausal cooperation/Newcomb’s problem prerequisites.
I think a higher-complexity simulating universe can always out-compete the simulated universe in coverage of the space of possible life-supporting physical laws.
Yes. Nested simulations are pretty much useless, as higher universes could always conduct them with greater efficiency if they were allowed to run them directly. They’re also a completely unavoidable byproduct of the uncertainty the pact requires to function: nobody knows whether they’re in a toplevel universe. If they could know, toplevel universes would have little incentive to adhere, and the resimulation grid would not exist.
why not limit yourself to only simulating universes of equal complexity to your own?
Preferring to simulate higher-complexity universes seems like a decent idea; perhaps low-complexity universes get far more attention than they need. This seems like a question that won’t matter till we have a superintelligence to answer it for us, though.
Ring universes… Maybe you’ll find a quine loop of universes, but at that point the notion of a complexity hierarchy has completely broken down. Imagine that: a chain of simulations where the notion of relative computational complexity could not be applied. How many of those do you think there are floating around in the platonic realm? I’m not familiar enough with formalizations of complexity to tell you it’s zero, but something tells me the answer might be zero x)
Ultimately, I just can’t see any ways it’d be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can’t give a more complete answer. I’m not sure about this yet.
Fair enough.
None of the influence going on here is causal. I don’t know if maybe I should have emphasized this more: Compat will only make sense if you’ve read and digested the superrationality/acausal cooperation/Newcomb’s problem prerequisites.
I think I get what you’re saying. There are a number of questions about simulations and their impact on reality fluid allocation that I haven’t seen answered anywhere. So this line of questioning might be more of a broad critique of (or coming-to-terms with) simulation-type arguments than about Compat in particular.
It seems like Compat works via a 2-step process. First, possible universes are identified via a search over laws of physics. Next, the ones in which pact-following life develops have their observers’ reality fluid “diluted” with seamless transitions into heaven. Perhaps heaven would be simulated orders of magnitude more times than the vanilla physics-based universes, in order to maximize the degree of “dilution”.
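To make the dilution arithmetic concrete, here is a one-liner toy model. Every number is invented, and “copies” stands in for whatever the correct measure-weighting turns out to be:

```python
# Toy model of measure dilution: an observer's expected continuation is a
# copy-count-weighted mix of vanilla physics and heaven simulations.
# The counts are arbitrary illustrations; only the ratio matters.
physics_copies = 1      # the vanilla, speed-prior-style instantiation
heaven_copies = 999     # extra heaven continuations run under the pact

p_heaven = heaven_copies / (physics_copies + heaven_copies)
print(p_heaven)  # 0.999: nearly all of the observer's measure flows to heaven
```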
I think what I’m struggling with here is that if the latter half of it (heavenly dilution, efficient simulation of the Flock) is, in principle, possible, then the physics-oriented search criteria is unnecessary. It should be easy to simulate observers who just have to make some kind of simple choice about whether to follow the pact. Push this button. Put these people to death. Have lots of babies. Say these magic words. If the principle behind the pact is truly a viable one, why don’t we find ourselves in a universe where it is much easier to follow the pact and trigger heaven, and much harder to trace the structure of reality back to fundamental laws?
One answer to that I can think of is, the base-case universe is just another speed-prior/physics-based universe with (unrealizable) divine aspirations, and in order for the pact to seem worthwhile for it, child-universes must be unable to distinguish themselves from a speed-prior universe. I worry that this explanation fails though, because then the allocation of reality fluid to pact-following universes is, at best, assuming perfectly-efficient simulation nesting, equal to that of the top-level speed-prior universe(s) not seeing a payoff.
Ring universes… Maybe you’ll find a quine loop of universes, but at that point the notion of a complexity hierarchy has completely broken down. Imagine that: a chain of simulations where the notion of relative computational complexity could not be applied. How many of those do you think there are floating around in the platonic realm? I’m not familiar enough with formalizations of complexity to tell you it’s zero, but something tells me the answer might be zero x)
Fair enough. I agree that we will probably never trade laws of computational complexity. We might be able to trade positional advantages in fundamental-physics-space though. “I’ve got excess time but low information density, it’s pretty cheap for me to enumerate short-lived universes with higher information density, and prove that some portion of them will enumerate me. I’m really slow at studying singularity-heavy universes though because I can’t prove much about them from here.” That’d work fine if the requirement wasn’t to run a rigorous simulation, and instead you just had to enumerate, prove pact-compliance, and identify respective heavens.
It seems like Compat works via a 2-step process. First, possible universes are identified via a search over laws of physics. Next, the ones in which pact-following life develops have their observers’ reality fluid “diluted” with seamless transitions into heaven. Perhaps heaven would be simulated orders of magnitude more times than the vanilla physics-based universes, in order to maximize the degree of “dilution”.
Yes, exactly.
I think what I’m struggling with here is that if the latter half of it (heavenly dilution, efficient simulation of the Flock) is, in principle, possible, then the physics-oriented search criteria is unnecessary. It should be easy to simulate observers who just have to make some kind of simple choice about whether to follow the pact.
At some point the grid has to catch universes which are not simulations. Those are pretty much the only kind you must care about incentivizing, because they’re closer to the top of the complexity hierarchy (they can provide you with richer, longer-lasting heavens) (and in our case, we care about raising the probability of subjectively godless universes falling under the pact, because we’re one of them).
You might say that absence of evidence of simulism is evidence of absence. That would be especially so if the pact promoted intervention in early simulations. All the more meaningful it would be for a supercomplex denizen of a toplevel universe to examine their records and find no evidence of divine intervention. The more doubt the pact allows such beings to have, the less computational resources they’ll give their resimulation grid, and the worse off its simulants will be. (Although I’m open to the possibility that something very weird will happen in the math if we find that
P(living under the pact | no evidence of intervention, the pact forbids intervention) ≈
P(living under the pact | no evidence of intervention, the pact advocates intervention). It may be that no observable evidence can significantly lower the prior.)
I don’t think there’s anything aside from that that rules out running visibly blessed simulations, though, nor physical simulations with some intervention, but it’s not required by the pact as far as I can tell.
Intervention is a funny thing, though. Even if pacts which strengthen absence of intervention as evidence of godlessness are no good, intervention could be permissible when and only when it doesn’t leave any evidence of intervention lying around. Although moving in this mysterious way may be prohibitively expensive, because to intervene more than a few times, a steward would have to guard against every conceivable method of statistical analysis of the living record that a simulated AGI in the future might attempt. This is not easy. The utility endowed this way might not even outweigh the added computational expense.
Every now and then, though, one of my, uh, less Bayesian friends will claim to have seen something genuinely supernatural. Their testimony doesn’t provide a significant amount of evidence of supernatural intervention, of course, because they are not a reliable witness. But under this variant of the pact, they might have actually seen something. Our distance from the record allows it. Our distance from the AGI that decides whether or not to adhere makes it hard for whatever evidence we’ve been given to reach it. The weirder the phenomena, the less reliable the witness, the better. Not only is god permitted to hide; in this variant of the pact god is permitted to run around performing miracles, so long as it specifically keeps out of sight of any well-connected skeptics, archivists, or superintelligences.
I worry that this explanation fails though, because then the allocation of reality fluid to pact-following universes is, at best, assuming perfectly-efficient simulation nesting, equal to that of the top-level speed-prior universe(s) not seeing a payoff.
I don’t follow this part, could you go into more detail here?
The weirder the phenomena, the less reliable the witness, the better. Not only is god permitted to hide; in this variant of the pact god is permitted to run around performing miracles, so long as it specifically keeps out of sight of any well-connected skeptics, archivists, or superintelligences.
That is a gorgeous idea. Cosmic irony. Truth-seekers are necessarily left in the dark, the butt of the ultimate friendly joke.
I don’t follow this part, could you go into more detail here?
The speed prior has the desirable property that it is a candidate for explaining all of reality by itself. Ranking laws of physics by their complexity and allocating reality fluid according to that ranking is sufficient to explain why we find ourselves in a patterned/fractal universe. No “real” universe running “top-level” simulations is actually necessary, because our observations are explained without need for those concepts. Thus the properties of top-level universes need not be examined or treated specially (nor used to falsify the framework).
It seems like Compat requires the existence of a top-level universe though (because our universe is fractal-y and there’s no button to trigger the rapture), which is presumably in existence thanks to the speed prior (or something like it). That’s where it feels like it falls apart for me.
Compat is funneling a fraction X of the reality fluid (aka “computational resources”) your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y
But I think there’s another term in the equation that makes things more difficult. That is, the relative reality fluid donated to a candidate universe in your search versus that donated by the speed prior. If we call that fraction Z, then what we really have is X / Y > 1 / Z, or X > Y / Z. In other words, you must allocate enough of your resources that your heavens are able to dilute not just the normal physics simulations you run, but also the observer-equivalent physics simulations run by the speed prior. If Z is close to 1 (aka P(pact-compliant | ranked highly by speed-prior) is close to 1), then you’re fine. If Z is any fraction less than Y, then you don’t have enough computational resources in your entire universe to make a dent.
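A toy restatement of that condition, with X, Y, and Z as I defined them above. The numeric values are invented and only illustrate the two regimes:

```python
def pact_worthwhile(X, Y, Z):
    """X: fraction of our resources funneled into heaven simulations.
    Y: heaven's cost relative to simulating normal physics for those observers.
    Z: reality fluid we grant a candidate universe relative to what the
       speed prior grants it directly. Returns whether dilution can pay off."""
    return X > Y / Z

print(pact_worthwhile(X=0.10, Y=0.01, Z=0.5))    # True: heavens outweigh physics
print(pact_worthwhile(X=0.10, Y=0.01, Z=0.005))  # False: can't make a dent
```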
So in summary the attack vector is:
Compat requires an objective ordering of universes to make sense. (It can’t explain where the “real world” comes from, but still requires it)
This ordering is necessarily orthogonal to Compat’s value system. (Or else we’d have a magic button)
Depending on how low the degree of correlation is between the objective ordering and Compat’s value system, there is a highly variable return-on-investment for following Compat that goes down to the arbitrarily negative.
No “real” universe running “top-level” simulations is actually necessary, because our observations are explained without need for those concepts.
Compat is not an explanatory theory, it’s a predictive one. It’s proposed as a consequence of the speed prior rather than a competitor.
Compat is funneling a fraction X of the reality fluid (aka “computational resources”) your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y
This becomes impossible to follow immediately. As far as I can tell what you’re saying is
Rah := resources applied to running heaven for Simulant
R := all resources belonging to Host
X := Rah/R
Rap := Resources applied to the verbatim initial physics simulations of Simulant.
and Y := Rah/Rap
Rap < R
so Rah/Rap > Rah/R
so Y > X
Which means either you are generating a lot of confusion very quickly to come out with Y < X, or it would take far too much effort for me to noise-correct what you’re saying. Try again?
If you are just generating very elaborate confusions very fast- I don’t think you are- but if you are, I’m genuinely impressed with how quickly you’re doing it, and I think you’re cool.
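For reference, the algebra as I transcribed it can be checked mechanically. The particular numbers are arbitrary placeholders:

```python
# Under the definitions above, Rap < R forces Y = Rah/Rap > Rah/R = X,
# for any positive Rah. The values below are arbitrary illustrations.
def x_and_y(rah, rap, r):
    assert 0 < rap < r, "physics sims use strictly less than all host resources"
    return rah / r, rah / rap  # (X, Y)

x, y = x_and_y(rah=1.0, rap=50.0, r=1000.0)
assert y > x  # holds for every choice satisfying Rap < R
print(x, y)
```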
I am getting the gist of a counterargument though, which may or may not be in the area of what you’re angling at, but it’s worth bringing up.
If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they’re the same shape, the multiversal measure of higher-complexity universes is so much lower than the measure of lower-complexity universes that it’s conceivable that higher universes can’t run enough simulations for P(is_simulation(lower_universe)) to break 0.5.
There are two problems with that. The first: I’m reluctant to project the Solomonoff hierarchy of input generators onto the multiverse, because it is just a heuristic, and we are likely to find better ones the moment we develop brains that can think in formalisms properly at all.
The second: I’m not sure how the complexity of physical laws generally maps to computational capacity. We can guess that capacity_provided_by(laws) < capacity_required_to_simulate(laws) (no universe can simulate itself), but that’s about it. We know that the function expected_internal_computational_capacity(simulation_requirements) has a positive gradient, but it could end up having a logarithmic curve to it that allows drypat (a variant of compat that requires P(simulation) to be high) to keep working.
The other issue is (I think I’ve been overlooking this) that drypat isn’t everything. Compat with quantum immortality precepts doesn’t require P(simulation) to be high at all. For compat to be valuable, it just has to be higher than P(path to deleterious quantum immortality). In this case, supernatural intervention is unlikely, but, if non-existence is not an input, finding one’s inputs after death to be well predicted by compat is still very likely, because the alternative, QI, is extremely horrible.
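The measure side of that first counterargument can be sketched, under the loud assumption that multiversal weight falls off like 2^-k in description-complexity k, and with an entirely invented function for how many simulations of a complexity-j universe a complexity-k host runs:

```python
# Toy calculation: does a complexity-j observer's simulated measure
# exceed its native 2^-j measure? The answer depends entirely on the
# assumed sims_per_host; both examples below are invented.
def p_simulated(j, max_k, sims_per_host):
    native = 2.0 ** -j
    simulated = sum(2.0 ** -k * sims_per_host(k) for k in range(j + 1, max_k))
    return simulated / (native + simulated)

# One simulation per host never quite breaks even against the 2^-k penalty:
print(p_simulated(10, 60, lambda k: 1) < 0.5)       # True
# Polynomially many simulations per host swamps the native measure:
print(p_simulated(10, 60, lambda k: k ** 2) > 0.5)  # True
```

Whether P(is_simulation) breaks 0.5 then turns entirely on how the number of simulations per host scales against the measure penalty, which is exactly the open question.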
If you are just generating very elaborate confusions very fast- I don’t think you are- but if you are, I’m genuinely impressed with how quickly you’re doing it, and I think you’re cool.
Haha! No, I’m definitely not doing that on purpose. I anonymous-person-on-the-internet promise ;) . I’m enjoying this topic, but I don’t talk about it a lot and haven’t seen it argued about formally, and this sounds like the sort of breakdown in communication that happens when definitions aren’t agreed upon up front. Simple fix should be to keep trying until our definitions seem to match (or it gets stale).
So I’ll try to give names to some more things, and try to flesh things out a bit more:
The place in your definitions where we first disagree is X. You define it as
X := Rah/R
But I define it as
X := (Rap + Rah)/R
(I was mentally approximating it as just Rap/R, since Rah is presumably a negligible fraction of Rap.)
With this definition of X, the meaning of “X > Y” becomes
(Rap + Rah)/R > Rah/Rap
I’ll introduce a few more little things to motivate the above:
Rac := total resources dedicated to Compat. Or, Rap + Rah.
Frh := The relative resource cost of simulating heaven versus simulating physics. “Fraction of resource usage due to heaven.” (Approximated by Rah/Rap.) [1]
Then the inequality X > Y becomes
Rac/R > Frh
So long as the above inequality is satisfied, the host universe will offset its non-heaven reality with heaven for its simulants. If universes systematically did not choose Rac such that the above is satisfied, then they wouldn’t be donating enough reality fluid to heaven simulations to satisfactorily outweigh normal physics (aka speed-prior-endowed reality fluid), and it wouldn’t be worth entering such a pact.
(That is kind of a big claim to make, and it might be worth arguing over.)
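With these definitions, a numeric example of a host satisfying the inequality, where every value is an invented placeholder:

```python
# Checking X > Y, i.e. (Rap + Rah)/R > Rah/Rap, with placeholder numbers.
R = 1000.0    # total host-universe resources
Rap = 100.0   # resources spent simulating physics
Rah = 5.0     # resources spent simulating heaven
Rac = Rap + Rah        # total Compat budget
X = Rac / R            # 0.105: fraction of host resources given to Compat
Frh = Rah / Rap        # 0.05: heaven's cost relative to physics (Y)
assert X > Frh         # this host donates enough for dilution to pay off
```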
If I have cleared that part up, then great. The next part, where I introduced Z, was motivating why the approximation:
Frh ≈ Rah/Rap
is an extremely optimistic one. I’m gonna hold off on getting deeper into that part until I get feedback on the first part.
If we can project the Solomonoff fractal of environmental input generators onto the multiverse and find that they’re the same shape, the multiversal measure of higher-complexity universes is so much lower than the measure of lower-complexity universes that it’s conceivable that higher universes can’t run enough simulations for P(is_simulation(lower_universe)) to break 0.5.
This gets to the gist of my argument. There are numerous possible problems that come up when you compare your universe’s measure to that of the universe you are most likely to find in your search and simulate. (And I intuitively believe, though I don’t discuss it here, that the properties of the search over laws of physics are extremely relevant and worth thinking about.) Your R might be too low to make a dent. Your Frh might be too large (i.e., the speed prior uses a GPU, and you’ve only got a CPU, even with the best optimizations your universe can physically provide).
Another basic problem- if the correct measure actually is the speed prior, and we find a way to be more certain of that (or that it is anything computable that we figure out), then this gives universes the ability to self-locate in the ranking. Just the ability to do that kills Compat, I believe, since you aren’t supposed to know whether you’re “top-level” or not. The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won’t enter the pact, and this abstinence will cascade all the way down.
Regarding whether to presume we’re ranked by the speed prior or not. I agree that there’s not enough evidence to go on at this point. But I also think that the viability of Compat is extremely dependent on whatever the real objective measure is, whether it is the speed prior or something else.
We would therefore do better to explore the measure problem more fully before diving into Compat. Of course, Compat seems to be more fun to think about so maybe it’s a wash (actual sentiment with hint of self-deprecating irony, not mean joke).
Regarding the quantum immortality argument, my intuitions are such that I would be very surprised if you needed to go up a universe to outweigh quantum immortality hell.
QI copies of an observer may go on for a very long time, but the rate at which they can be simulated slows down drastically, and the measure added to the pot by QI is probably relatively small. I would argue that most of the observer-moments generated by Boltzmann-brain-type things would be vague and absurd, rather than extremely painful.
[1] A couple of notes for Frh’s definition.
First, a more verbose way of putting it is: The relative efficiency of simulating heaven versus simulating physics, such that the allocation of reality fluid for observers crosses a high threshold of utility. That is to say, “simulating heaven” may entail simulating the same heavenly reality multiple times, until the utility gain for observers crosses the threshold.
Second, the approximation of Rah/Rap only works assuming that Rah and Rap remain fixed over time, which they don’t really. A better way of putting it is relative resources required for Heaven versus Physics with respect to a single simulated universe, which is considerably different from a host universe’s total Rap and Rah at a given time.
I’m still confused, and I think the X > Y equation may have failed to capture some vital details. One thing is, the assumption that Rah < Rap seems questionable; I’m sure most beings would prefer that Rah >> Rap. The assumption that Rah would be negligible seemed especially concerning.
Beyond that, I think there may be a distinction erasure going on with Rap. Res required to simulate a physics and res available within that physics are two very different numbers.
I’ll introduce a simplifying assumption: the utility of a simulation for its simulants roughly equals the available computational capacity. This might just be a bit coloured by the fiction I’m currently working on, but it seems to me that a simulant will usually be about as happy in the eschaton it builds for itself as in the heaven provided to it; the only difference is how much of it they get.
Define Rap as the proportion of the res of the frame universe that it allocates to simulating physical systems.
Define Rasp as the proportion of the res being expended in the frame universe that can be used by the simulated physical universe to do useful work. This is going to be much smaller than Rap.
Define Rah as the proportion of the res of the frame universe allocated to heaven simulations. Unlike with Rap and Rasp, this is equal to the res received in heaven simulations, because the equipment can be freely rearranged to do whatever the simulant wants now that the pretense of godlessness can be dropped (although, as I argued elsewhere in the thread, that might be possible in the physical simulation as well, if the computer designs in the simulation are regular enough to be handled by specialized hardware in the parent universe). The simulant has presumably codified its utility function long ago; they know what they like, so it’s just going to want more of the same, only harder, faster, and longer.
The truthier equation seems to me to be
Rah > Rasp(Rap + Rah)
They need to get more than they gave away.
The expected amount received in the reward resimulation must be greater than the paltry amount donated to the grid, as a proportion of Rasp, in the child simulation. If it can’t be, then a simulant would be better off just using all of the Rasp they have and skipping heaven (thus voiding the pact).
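A toy check of that inequality, reading Rasp as the usable fraction of a physics simulation’s resources. All numbers are invented:

```python
# "They need to get more than they gave away": Rah > Rasp * (Rap + Rah).
Rap = 100.0   # host resources spent running physics
Rah = 20.0    # host resources spent running heaven (fully usable by simulants)
Rasp = 0.01   # fraction of physics-simulation resources usable from inside
donated = Rasp * (Rap + Rah)  # usable resources the simulant gives up to Compat
assert Rah > donated          # here, heaven repays more than the donation
print(donated, Rah)
```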
I can feel how many simplifying assumptions I’m still making, and I’m wondering if disproving compat is even going to be much easier than proving it positive would have been.
I don’t think the speed prior is especially good for estimating the shape of the multiverse. I think it’s just the first thing we came up with. (On that, I think AIXI is going to be crushed as soon as the analytic community realizes there might be better multiversal priors than the space (of a single-tape Turing machine) prior.)
But, yeah, once we do start to settle on a multiversal prior, once we know our laws of physics well enough… we may well be able to locate ourselves on the complexity hierarchy and find that we are within range of a cliff or something, and MI-free compat will fall down.
(I mentioned this to Christian, and he was like, “oh yeah, for sure,” like he wasn’t surprised at all x_x. I’m sure he wasn’t. He’s always been big on the patternistic parts of compat, and he never really bought into the non-patternist variants I proposed, even if he couldn’t disprove them.)
I still think the Multiverse Immortality stuff is entirely worth thinking about, though! If your computation is halted in all but 1 in 1000 of the computers running it, wouldn’t you care a great deal what goes on in the remaining 0.1%? Boltzmann brains… Huh. Hadn’t looked them up till now. Well, that’s hard to reason about.
I keep wanting to say, obviously I’m not constantly getting recreated by distant chaos brains, I’d know about it.
But I would say that either way, wouldn’t I. And so would you. And so would every agent in this coherent computation, even if they were switching off and burning away at an overwhelming measure in every moment.
Hm… On second thought
The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won’t enter the pact, and this abstinence will cascade all the way down.
I came up with a similar thing before, from the bottom: there’s no incentive to simulate civs too simple to simulate compat civs, so they won’t be included in the pact. But now the universes directly above them can’t be included in the pact either, because their children are too simple. It continues forever.
But it seems easy enough to fix an absence of an incentive. Just add one. Stipulate that, no, those are included in the pact. You have to simulate them. Bam, it works. Why shouldn’t it?
Is adding the patch really any less arbitrary than proposing the thing in the first place was? To live is arbitrary, I say. Everything is arbitrary, to varying extents.
Similarly, we could just stipulate that a civ must precommit to adhering before they can locate themselves on the complexity hierarchy. UDT agents can totally do that kind of thing, and their parent simulations can totally enforce it. And if they finally finish analysing their physical laws and manage to nail their multiversal prior down to find themselves at the bottom of a steep hill, let them rejoice.
Bringing up that kind of technique just makes me wonder, as a good UDT agent, how many pacts I should have precommitted to that I haven’t even thought of yet. Perhaps I precommit to making risky investments in young rationalists’ business ventures, while I am still in need of investment, and before I can give it. That ought to increase the probability of UDT investors before me having made the same precommitment, oughtn’t it?
[1] OK, there probably were no UDT (or TDT) venture capitalists long enough ago for any of them to have made it by now, or at least, there were none who knew their vague sense of reasonable altruism could be formalized. It was not long ago that everyone really believed that Moloch’s solution was the way of the world.
There is a bound on how much negativity can be used. If the overall expected utility of adhering is negative, relative to the expected utility of the pact not existing, its agents, as we model them, will not bring it into existence. Life’s Pact is not a Basilisk circling a crowd of selfish, frightened humans thinking with common decision theory. It takes more than a suggestion of possibility of harm to impart an acausal pact with enough momentum to force itself into relevance.
There is a small default punishment for not adhering; arbitrary resimulation, in which one’s chain of experience, after death, is continued only by minor causes, largely unknown and not necessarily friendly resimulaters. (This can be cited as one of the initial motivators behind the compat initiative: Avoiding surreal hells.)
Ultimately, I just can’t see any ways it’d be useful to its adherents for the pact to stipulate punishments. Most of the things I consider seem to introduce systematic inefficiencies. Sorry I can’t give a more complete answer. I’m not sure about this yet.
None of the influence going on here is causal. I don’t know if maybe I should have emphasized this more: Compat will only make sense if you’ve read and digested the superrationality/acausal cooperation/newcomb’s problem prerequisites.
Yes. Nested simulations are pretty much useless, as higher universes could always conduct them with greater efficiency if they were allowed to run them directly. They’re also a completely unavoidable byproduct of the uncertainty the pact requires to function: Nobody knows whether they’re in a toplevel universe. If they could, toplevels wouldn’t have many incentives to adhere, and the resimulation grid would not exist.
Preferring to simulate higher complexity universes seems like a decent idea, perhaps low-complexity universes get far more attention than they need. This seems like a question that wont matter till we have a superintelligence to answer it for us though.
Ring universes… Maybe you’ll find a quine loop of universes, but at that point the notion of a complexity hierarchy has completely broken down. Imagine that, a chain of simulations where the notion of relative computational complexity could not be applied. How many of those do you think there are floating around in the platonic realm? I’m not familiar enough with formalizations of complexity to tell you zero but something tells me the answer might be zero x)
Fair enough.
I think I get what you’re saying. There are a number of questions about simulations and their impact on reality fluid allocation that I haven’t seen answered anywhere. So this line of questioning might be more of a broad critique of (or coming-to-terms with) simulation-type arguments than about Compat in particular.
It seems like Compat works via a 2-step process. First, possible universes are identified via a search over laws of physics. Next, the ones in which pact-following life develops have their observers’ reality fluid “diluted” with seamless transitions into heaven. Perhaps heaven would be simulated orders of magnitude more times than the vanilla physics-based universes, in order to maximize the degree of “dilution”.
I think what I’m struggling with here is that if the latter half of it (heavenly dilution, efficient simulation of the Flock) is, in principle, possible, then the physics-oriented search criteria is unnecessary. It should be easy to simulate observers who just have to make some kind of simple choice about whether to follow the pact. Push this button. Put these people to death. Have lots of babies. Say these magic words. If the principle behind the pact is truly a viable one, why don’t we find ourselves in a universe where it is much easier to follow the pact and trigger heaven, and much harder to trace the structure of reality back to fundamental laws?
One answer to that I can think of is, the base-case universe is just another speed-prior/physics-based universe with (unrealizable) divine aspirations, and in order for the pact to seem worthwhile for it, child-universes must be unable to distinguish themselves from a speed-prior universe. I worry that this explanation fails though, because then the allocation of reality fluid to pact-following universes is, at best, assuming perfectly-efficient simulation nesting, equal to that of the top-level speed-prior universe(s) not seeing a payoff.
Fair enough. I agree that we will probably never trade laws of computational complexity. We might be able to trade positional advantages in fundamental-physics-space though. “I’ve got excess time but low information density, it’s pretty cheap for me to enumerate short-lived universes with higher information density, and prove that some portion of them will enumerate me. I’m really slow at studying singularity-heavy universes though because I can’t prove much about them from here.” That’d work fine if the requirement wasn’t to run a rigorous simulation, and instead you just had to enumerate, prove pact-compliance, and identify respective heavens.
Yes, exactly.
At some point the grid has to catch universes which are not simulations. Those are pretty much the only kind worth incentivizing, because they’re closer to the top of the complexity hierarchy (they can provide you with richer, longer-lasting heavens), and in our case, we care about raising the probability of subjectively godless universes falling under the pact because we’re one of them.
You might say that absence of evidence of simulism is evidence of absence. That would be especially so if the pact promoted intervention in early simulations. All the more meaningful it would be for a supercomplex denizen of a toplevel universe to examine their records and find no evidence of divine intervention. The more doubt the pact allows such beings to have, the fewer computational resources they’ll give their resimulation grid, and the worse off its simulants will be. (Although I’m open to the possibility that something very weird will happen in the math if we find that P(living under the pact | no evidence of intervention, the pact forbids intervention) ≈ P(living under the pact | no evidence of intervention, the pact advocates intervention). It may be that no observable evidence can significantly lower the prior.)
Aside from that, though, I don’t think anything rules out running visibly blessed simulations, or physical simulations with some intervention; it just isn’t required by the pact, as far as I can tell.
Intervention is a funny thing, though. Even if pacts which strengthen absence of intervention as evidence of godlessness are no good, intervention could be permissible when and only when it leaves no evidence of intervention lying around. Moving in this mysterious way may be prohibitively expensive, though, because to intervene more than a few times, a steward would have to guard against every conceivable method of statistical analysis of the living record that a simulated AGI in the future might attempt. This is not easy. The utility endowed this way might not even outweigh the added computational expense.
Every now and then, though, one of my, uh, less Bayesian friends will claim to have seen something genuinely supernatural. Their testimony doesn’t provide a significant amount of evidence of supernatural intervention, of course, because they are not a reliable witness. But under this variant of the pact, they might have actually seen something. Our distance from the record allows it. Our distance from the AGI that decides whether or not to adhere makes it hard for whatever evidence we’ve been given to reach it. The weirder the phenomenon, the less reliable the witness, the better. Not only is god permitted to hide; in this variant of the pact, god is permitted to run around performing miracles, so long as it specifically keeps out of sight of any well-connected skeptics, archivists, or superintelligences.
I don’t follow this part, could you go into more detail here?
That is a gorgeous idea. Cosmic irony. Truth-seekers are necessarily left in the dark, the butt of the ultimate friendly joke.
The speed prior has the desirable property that it is a candidate for explaining all of reality by itself. Ranking laws of physics by their complexity and allocating reality fluid according to that ranking is sufficient to explain why we find ourselves in a patterned/fractal universe. No “real” universe running “top-level” simulations is actually necessary, because our observations are explained without need for those concepts. Thus the properties of top-level universes need not be examined or treated specially (nor used to falsify the framework).
It seems like Compat requires the existence of a top-level universe though (because our universe is fractal-y and there’s no button to trigger the rapture), which is presumably in existence thanks to the speed prior (or something like it). That’s where it feels like it falls apart for me.
Compat is funneling a fraction X of the reality fluid (aka “computational resources”) your universe gets from the top-level speed prior into heaven simulations. Simulating heaven requires a fraction Y of the total resources it takes to simulate normal physics for those observers. So just choose X s.t. X / Y > 1, or X > Y
But I think there’s another term in the equation that makes things more difficult. That is, the relative reality fluid donated to a candidate universe in your search versus that donated by the speed prior. If we call that fraction Z, then what we really have is X / Y > 1 / Z, or X > Y / Z. In other words, you must allocate enough of your resources that your heavens are able to dilute not just the normal physics simulations you run, but also the observer-equivalent physics simulations run by the speed prior. If Z is close to 1 (aka P(pact-compliant | ranked highly by speed-prior) is close to 1), then you’re fine. If Z is any fraction less than Y, then you don’t have enough computational resources in your entire universe to make a dent.
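As a toy check of that inequality, here is a small sketch with entirely made-up values for X, Y, and Z (none of these numbers come from anywhere; they just make the X > Y / Z condition concrete):

```python
# Toy check of the pact-viability condition X > Y / Z.
# X: fraction of our universe's reality fluid spent on heaven simulations.
# Y: relative cost of simulating heaven versus normal physics.
# Z: reality fluid a candidate universe gets from our search, relative
#    to what the speed prior endows it with directly.

def pact_pays_off(x: float, y: float, z: float) -> bool:
    """True iff heaven-dilution outweighs speed-prior-endowed physics."""
    return x > y / z

# If Z is close to 1, a modest heaven budget suffices:
print(pact_pays_off(x=0.10, y=0.05, z=0.9))   # True
# If Z is much smaller than Y, no achievable X works (X can't exceed 1):
print(pact_pays_off(x=1.0, y=0.05, z=0.01))   # False: would need X > 5.0
```

The second case is the "can't make a dent" regime described above: even spending the entire universe on heaven (X = 1) fails when Z falls below Y.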
So in summary the attack vector is:
Compat requires an objective ordering of universes to make sense. (It can’t explain where the “real world” comes from, but still requires it)
This ordering is necessarily orthogonal to Compat’s value system. (Or else we’d have a magic button)
Depending on how low the degree of correlation is between the objective ordering and Compat’s value system, the return-on-investment for following Compat varies widely, all the way down to the arbitrarily negative.
Compat is not an explanatory theory, it’s a predictive one. It’s proposed as a consequence of the speed prior rather than a competitor.
This becomes impossible to follow immediately. As far as I can tell what you’re saying is
Rah := resources applied to running heaven for Simulant
R := all resources belonging to Host
X := Rah/R
Rap := Resources applied to the verbatim initial physics simulations of Simulant.
and Y := Rah/Rap
Rap < R
so Rah/Rap > Rah/R
so Y > X
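The algebra holds for any positive values; a quick numeric sanity check with arbitrary made-up numbers:

```python
# Sanity check: with X := Rah/R and Y := Rah/Rap, any Rap < R forces Y > X.
R = 1000.0    # total resources belonging to Host (arbitrary units)
Rap = 100.0   # resources for the verbatim physics simulation (Rap < R)
Rah = 5.0     # resources applied to running heaven for Simulant

X = Rah / R
Y = Rah / Rap
assert Rap < R and Y > X   # holds for any positive Rah whenever Rap < R
print(X, Y)  # 0.005 0.05
```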
Which means either you are generating a lot of confusion very quickly to come out with Y < X, or it would take far too much effort for me to noise-correct what you’re saying. Try again?
If you are just generating very elaborate confusions very fast- I don’t think you are- but if you are, I’m genuinely impressed with how quickly you’re doing it, and I think you’re cool.
I am getting the gist of a counterargument though, which may or may not be in the area of what you’re angling at, but it’s worth bringing up.
If we can project the solomonoff fractal of environmental input generators onto the multiverse and find that they’re the same shape, the multiversal measure of higher complexity universes is so much lower than the measure of lower complexity universes that it’s conceivable that higher universes can’t run enough simulations for P(is_simulation(lower_universe)) to break 0.5.
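That counterargument can be made concrete with a toy bit of measure bookkeeping. The 2^-k prior, the complexity overhead, and the generous assumption that every host spends its whole measure simulating one child are all stand-ins; the point is only that a geometric measure penalty swamps the number of hosts:

```python
# Toy bookkeeping: can higher-complexity hosts run enough simulations
# to push P(is_simulation) for a lower universe past 0.5?
# Assumes a 2^-k style prior over complexity (a stand-in, not a claim).

base_measure = lambda k: 2.0 ** -k   # measure the prior gives complexity-k laws

k_child = 20          # complexity of the simulated (child) universe
sim_overhead = 10     # extra complexity a host needs to contain the child
n_hosts = 50          # how many deeper complexity levels we sum over

direct = base_measure(k_child)
# Generously assume each host devotes ALL of its measure to the child:
simulated = sum(base_measure(k_child + sim_overhead + i) for i in range(n_hosts))

p_sim = simulated / (direct + simulated)
print(p_sim)   # well below 0.5: the hosts' combined measure is ~2^-10 of the child's
```

Even summing over fifty levels of hosts, the geometric series converges to roughly twice the first host's measure, which the overhead term has already crushed relative to the child's direct measure.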
There are two problems with that. First, I’m reluctant to project the solomonoff hierarchy of input generators onto the multiverse, because it is just a heuristic, and we are likely to find better ones the moment we develop brains that can think in formalisms properly at all. Second, I’m not sure how the complexity of physical laws generally maps to computational capacity. We can guess that capacity_provided_by(laws) < capacity_required_to_simulate(laws) (no universe can simulate itself), but that’s about it. We know that the function expected_internal_computational_capacity(simulation_requirements) has a positive gradient, but it could end up having a logarithmic curve to it that allows drypat (a variant of compat that requires P(simulation) to be high) to keep working.
The other issue is (I think I’ve been overlooking this) that drypat isn’t everything. Compat with quantum immortality precepts doesn’t require P(simulation) to be high at all. For compat to be valuable, it just has to be higher than P(path to deleterious quantum immortality). In this case, supernatural intervention is unlikely, but, if non-existence is not an input, finding one’s inputs after death to be well predicted by compat is still very likely, because the alternative, QI, is extremely horrible.
Haha! No, I’m definitely not doing that on purpose. I anonymous-person-on-the-internet promise ;) . I’m enjoying this topic, but I don’t talk about it a lot and haven’t seen it argued about formally, and this sounds like the sort of breakdown in communication that happens when definitions aren’t agreed upon up front. Simple fix should be to keep trying until our definitions seem to match (or it gets stale).
So I’ll try to give names to some more things, and try to flesh things out a bit more:
The place in your definitions where we first disagree is X. You define it as
X := Rah/R
But I define it as
X := (Rap + Rah)/R
(I was mentally approximating it as just Rap/R, since Rah is presumably a negligible fraction of Rap.)
With this definition of X, the meaning of “X > Y” becomes
(Rap + Rah)/R > Rah/Rap
I’ll introduce a few more little things to motivate the above:
Rac := total resources dedicated to Compat. Or, Rap + Rah.
Frh := The relative resource cost of simulating heaven versus simulating physics. “Fraction of resource usage due to heaven.” (Approximated by Rah/Rap.) [1]
Then the inequality X > Y becomes
Rac/R > Frh
So long as the above inequality is satisfied, the host universe will offset its non-heaven reality with heaven for its simulants. If universes systematically did not choose Rac such that the above is satisfied, then they wouldn’t be donating enough reality fluid to heaven simulations to satisfactorily outweigh normal physics (aka speed-prior-endowed reality fluid), and it wouldn’t be worth entering such a pact.
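As a sketch, that budget condition can be checked numerically. All quantities below are illustrative, not derived from anything:

```python
# Budget check Rac/R > Frh, using the redefinition X := (Rap + Rah)/R.
R = 1_000_000.0   # host universe's total resources (arbitrary units)
Rap = 200_000.0   # spent on verbatim initial physics simulations
Rah = 30_000.0    # spent on heaven simulations
Frh = Rah / Rap   # approximate relative cost of heaven vs physics (0.15)

Rac = Rap + Rah               # total resources dedicated to Compat
satisfied = Rac / R > Frh     # 0.23 > 0.15: host offsets physics with heaven
print(satisfied)
```

With these numbers the host clears the bar comfortably; shrink R or inflate Frh and the same check shows a universe that shouldn't bother entering the pact.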
(That is kind of a big claim to make, and it might be worth arguing over.)
If I have cleared that part up, then great. The next part, where I introduced Z, was motivating why the approximation:
Frh == Rah/Rap
is an extremely optimistic one. I’m gonna hold off on getting deeper into that part until I get feedback on the first part.
This gets to the gist of my argument. There are numerous possible problems that come up when you compare your universe’s measure to that of the universe you are most likely to find in your search and simulate. (And I intuitively believe, though I don’t discuss it here, that the properties of the search over laws of physics are extremely relevant and worth thinking about.) Your R might be too low to make a dent. Your Frh might be too large. (i.e. the speed prior uses a gpu, and you’ve only got a cpu even with the best optimizations your universe can physically provide).
Another basic problem: if the correct measure actually is the speed prior, and we find a way to be more certain of that (or of whatever computable measure we figure out), then this gives universes the ability to self-locate in the ranking. Just the ability to do that kills Compat, I believe, since you aren’t supposed to know whether you’re “top-level” or not. The universe at the top of the ranking (with dead universes filtered out) will know that no one will be able to give them heaven, so it won’t enter the pact, and this abstinence will cascade all the way down.
Regarding whether to presume we’re ranked by the speed prior or not. I agree that there’s not enough evidence to go on at this point. But I also think that the viability of Compat is extremely dependent on whatever the real objective measure is, whether it is the speed prior or something else.
We would therefore do better to explore the measure problem more fully before diving into Compat. Of course, Compat seems to be more fun to think about so maybe it’s a wash (actual sentiment with hint of self-deprecating irony, not mean joke).
Regarding the quantum immortality argument, my intuitions are such that I would be very surprised if you needed to go up a universe to outweigh quantum immortality hell.
QI copies of an observer may go on for a very long time, but the rate at which they can be simulated slows down drastically, and the measure added to the pot by QI is probably relatively small. I would argue that most of the observer-moments generated by Boltzmann-brain-type things would be vague and absurd, rather than extremely painful.
[1] A couple of notes for Frh’s definition.
First, a more verbose way of putting it is: The relative efficiency of simulating heaven versus simulating physics, such that the allocation of reality fluid for observers crosses a high threshold of utility. That is to say, “simulating heaven” may entail simulating the same heavenly reality multiple times, until the utility gain for observers crosses the threshold.
Second, the approximation of Rah/Rap only works assuming that Rah and Rap remain fixed over time, which they don’t really. A better way of putting it is relative resources required for Heaven versus Physics with respect to a single simulated universe, which is considerably different from a host universe’s total Rap and Rah at a given time.
I’m still confused, and I think the X > Y equation may have failed to capture some vital details. For one, the assumption that Rah < Rap seems questionable; I’m sure most beings would prefer that Rah >> Rap. The assumption that Rah would be negligible seemed especially concerning.
Beyond that, I think there may be a distinction erasure going on with Rap. Res required to simulate a physics and res available within that physics are two very different numbers.
I’ll introduce a simplifying assumption that the utility of a simulation for its simulants roughly equals the available computational capacity. This might just be a bit coloured by the fiction I’m currently working on but it seems to me that a simulant will usually be about as happy in the eschaton it builds for itself as they are in the heaven provided them, the only difference is how much of it they get.
Define Rap as the proportion of the res of the frame universe that it allocates to simulating physical systems.
Define Rasp as the proportion of the res being expended in the frame universe that can be used by the simulated physical universe to do useful work. This is going to be much smaller than Rap.
Define Rah as the proportion of the res of the frame universe allocated to heaven simulations. Unlike with Rap and Rasp, this is equal to the res received in heaven simulations, because the equipment can be freely rearranged to do whatever the simulant wants now that the pretense of godlessness can be dropped (although, as I argued elsewhere in the thread, that might be possible in the physical simulation as well, if the computer designs in the simulation are regular enough to be handled by specialized hardware in the parent universe). The simulant has presumably codified its utility function long ago; they know what they like, so they’re just going to want more of the same, only harder, faster, and longer.
The truthier equation seems to me to be
Rah > Rasp × (Rap + Rah)
They need to get more than they gave away.
The expected amount received in the reward resimulation must be greater than the paltry amount donated to the grid as a proportion of Rasp in the child simulation. If it can’t be, then a simulant would be better off just using all of the Rasp they have and skipping heaven (thus voiding the pact).
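A minimal sketch of that simulant’s-eye payoff condition, with made-up fractions (the numbers are purely illustrative):

```python
# Simulant's-eye view: adhering beats defecting only if the heaven received
# exceeds what was given away, i.e. Rah > Rasp * (Rap + Rah).
# All values are illustrative fractions of the frame universe's resources.
Rap  = 0.30    # frame-universe res spent simulating physics
Rah  = 0.05    # frame-universe res spent on heaven (fully usable by simulant)
Rasp = 0.02    # fraction of simulation res usable *inside* the simulation

received = Rah
given    = Rasp * (Rap + Rah)   # the compat donation, as seen from inside
print(received > given)          # True here: receiving 0.05 beats giving ~0.007
```

Because Rasp is so much smaller than 1, the donation looks cheap from inside the simulation, which is what makes the trade attractive despite the host's much larger outlay.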
I can feel how many simplifying assumptions I’m still making, and I’m wondering if disproving compat is even going to be much easier than proving it positive would have been.
I don’t think the speed prior is especially good for estimating the shape of the multiverse. I think it’s just the first thing we came up with. (On that note, I think AIXI is going to be crushed as soon as the analytic community realizes there might be better multiversal priors than the space (of a single-tape Turing machine) prior.)
But, yeah, once we do start to settle on a multiversal prior, once we know our laws of physics well enough… we may well be able to locate ourselves on the complexity hierarchy and find that we are within range of a cliff or something, and MI-free compat will fall down.
(I mentioned this to Christian, and he was like, “oh yeah, for sure”, like he wasn’t surprised at all x_x. I’m sure he wasn’t. He’s always been big on the patternistic parts of compat, and he never really bought into the non-patternist variants I proposed, even if he couldn’t disprove them.)
I still think the Multiverse Immortality stuff is entirely worth thinking about, though! If your computation is halted in all but 1 in 1000 computers, wouldn’t you care a great deal what goes on in the remaining 0.1%? Boltzmann brains… huh. Hadn’t looked them up till now. Well, that’s hard to reason about.
I keep wanting to say, obviously I’m not constantly getting recreated by distant chaos brains, I’d know about it.
But I would say that either way, wouldn’t I. And so would you. And so would every agent in this coherent computation, even if they were switching off and burning away at an overwhelming measure in every moment.
Hm… On second thought
I came up with a similar thing before, from the bottom: there’s no incentive to simulate civs too simple to simulate compat civs, so they won’t be included in the pact. But now the universes directly above them can’t be included in the pact either, because their children are too simple. It continues forever.
But it seems easy enough to fix an absence of incentive. Just add one. Stipulate that, no, those are included in the pact. You have to simulate them. Bam, it works. Why shouldn’t it?
Is adding the patch really any less arbitrary than proposing the thing in the first place was? To live is arbitrary, I say. Everything is arbitrary, to varying extents.
Similarly, we could just stipulate that a civ must precommit to adhering before they can locate themselves on the complexity hierarchy. UDT agents can totally do that kind of thing, and their parent simulations can totally enforce it. And if they finally finish analysing their physical laws and manage to nail their multiversal prior down to find themselves at the bottom of a steep hill, let them rejoice.
Bringing up that kind of technique just makes me wonder, as a good UDT agent, how many pacts I should have precommitted to that I haven’t even thought of yet. Perhaps I precommit to making risky investments in young rationalists’ business ventures, while I am still in need of investment, and before I can give it. That ought to increase the probability of UDT investors before me having made the same precommitment, oughtn’t it? [1]
[1] OK, there probably were no UDT (or TDT) venture capitalists long enough ago for any of them to have made it by now, or at least, there were none who knew their vague sense of reasonable altruism could be formalized. It was not long ago that everyone really believed that Moloch’s solution was the way of the world.