I’m still confused, and I think the X > Y equation may have failed to capture some vital details. For one thing, the assumption that Rah < Rap seems questionable; I’d expect most beings to prefer Rah >> Rap. The assumption that Rah would be negligible is especially concerning.
Beyond that, I think there may be a distinction erasure going on with Rap. Res required to simulate a physics and res available within that physics are two very different numbers.
I’ll introduce a simplifying assumption that the utility of a simulation for its simulants roughly equals the available computational capacity. This might just be a bit coloured by the fiction I’m currently working on but it seems to me that a simulant will usually be about as happy in the eschaton it builds for itself as they are in the heaven provided them, the only difference is how much of it they get.
Define Rap as the proportion of the res of the frame universe that it allocates to simulating physical systems.
Define Rasp as the proportion of the res being expended in the frame universe that can be used by the simulated physical universe to do useful work. This is going to be much smaller than Rap.
Define Rah as the proportion of the res of the frame universe allocated to heaven simulations, which, unlike with Rap and Rasp, is equal to the res received in heaven simulations, because the equipment can be freely rearranged to do whatever the simulant wants now that the pretense of godlessness can be dropped (although, as I argued elsewhere in the thread, that might be possible in the physical simulation as well, if the computer designs in the simulation are regular enough to be handled by specialized hardware in the parent universe). The simulant has presumably codified its utility function long ago; they know what they like, so they’re just going to want more of the same, only harder, faster and longer.
The truthier equation seems to me to be
Rah > Rasp(Rap + Rah)
They need to get more than they gave away.
The expected amount received in the reward resimulation must be greater than the paltry amount donated to the grid, which is a fraction of the Rasp available in the child simulation. If it can’t be, then a simulant would be better off just using all of the Rasp they have and skipping heaven (thus voiding the pact).
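To make the inequality concrete, here’s a quick sanity check in Python (the numbers are pure illustration, not estimates of anything):

```python
# Check the pact inequality: Rah > Rasp * (Rap + Rah).
# The simulant receives Rah (res allocated to its heaven by the frame
# universe) and gives away the fraction (Rap + Rah) of its own usable
# budget Rasp. All numbers below are made up for illustration.

def pact_worthwhile(rah, rap, rasp):
    received = rah
    given = rasp * (rap + rah)
    return received > given

# With Rasp much smaller than Rap, the pact pays off comfortably:
print(pact_worthwhile(rah=0.3, rap=0.5, rasp=0.01))   # True
# With a stingy heaven allocation, it doesn't:
print(pact_worthwhile(rah=0.001, rap=0.5, rasp=0.9))  # False
```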
I can feel how many simplifying assumptions I’m still making, and I’m wondering if disproving compat is even going to be much easier than proving it positive would have been.
I don’t think the speed prior is especially good for estimating the shape of the multiverse. I think it’s just the first thing we came up with. (On that note, I think AIXI is going to be crushed as soon as the analytic community realizes there might be better multiversal priors than the space (of a single-tape Turing machine) prior.)
But, yeah, once we do start to settle on a multiversal prior, once we know our laws of physics well enough… we may well be able to locate ourselves on the complexity hierarchy and find that we are within range of a cliff or something, and MI-free compat will fall down.
(I mentioned this to Christian, and he was like, “oh yah, for sure”, like he wasn’t surprised at all x_x. I’m sure he wasn’t. He’s always been big on the patternistic parts of compat, and he never really bought into the non-patternist variants I proposed, even if he couldn’t disprove them.)
I still think the Multiverse Immortality stuff is entirely worth thinking about, though! If your computation is halted in all but 1 in 1000 of the computers running it, wouldn’t you care a great deal what goes on in the remaining 0.1%? Boltzmann brains… huh. Hadn’t looked them up till now. Well, that’s hard to reason about.
I keep wanting to say, obviously I’m not constantly getting recreated by distant chaos brains, I’d know about it.
But I would say that either way, wouldn’t I? And so would you. And so would every agent in this coherent computation, even if they were switching off and burning away at an overwhelming measure in every moment.
Hm… On second thought
The universe at the top of the ranking (with dead universes filtered out) will know that no-one will be able to give them heaven, and so won’t enter the pact, and this abstinence will cascade all the way down.
I came up with a similar thing before, from the bottom: there’s no incentive to simulate civs too simple to simulate compat civs, so they won’t be included in the pact. But then the universes directly above them can’t be included in the pact either, because their children are too simple. It continues forever, all the way up.
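Both unravelings, from the top and from the bottom, can be sketched as a toy fixed-point computation over a finite chain of universes ranked by complexity (the chain, its length, and the join condition are all my own simplifying assumptions):

```python
# Toy model: universes 0..n-1, ranked from simplest (0) to most complex
# (n-1). A universe stays in the pact only if some pact member above it
# can grant it heaven AND some pact member below it is worth simulating.
def pact_members(n):
    joins = [True] * n
    changed = True
    while changed:
        changed = False
        for i in range(n):
            has_parent = any(joins[i + 1:])  # someone above to reward it
            has_child = any(joins[:i])       # someone below to simulate
            if joins[i] and not (has_parent and has_child):
                joins[i] = False
                changed = True
    return joins

print(pact_members(5))  # [False, False, False, False, False]
```

The topmost universe drops out because no one can reward it, the bottom because nothing beneath it qualifies, and each defection removes the reason for its neighbours to stay, so the whole chain empties out.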
But it seems easy enough to fix an absence of an incentive. Just add one. Stipulate that, no, those are included in the pact. You have to simulate them. Bam, it works. Why shouldn’t it?
Is adding the patch really any less arbitrary than proposing the thing in the first place was? To live is arbitrary, I say. Everything is arbitrary, to varying extents.
Similarly, we could just stipulate that a civ must precommit to adhering before they can locate themselves on the complexity hierarchy. UDT agents can totally do that kind of thing, and their parent simulations can totally enforce it. And if they finally finish analysing their physical laws and manage to nail their multiversal prior down to find themselves at the bottom of a steep hill, let them rejoice.
Bringing up that kind of technique just makes me wonder, as a good UDT agent, how many pacts I should have precommitted to that I haven’t even thought of yet. Perhaps I precommit to making risky investments in young rationalists’ business ventures, while I am still in need of investment, and before I can give it. That ought to increase the probability of UDT investors before me having made the same precommitment, oughtn’t it?
[1] OK, there probably were no UDT (or TDT) venture capitalists long enough ago for any of them to have made it by now, or at least, there were none who knew their vague sense of reasonable altruism could be formalized. It was not long ago that everyone really believed that Moloch’s solution was the way of the world.